By Mickaël D., DevOps Expert
Rather than writing a single comprehensive article on preparing for the certification, as I have done in the past, I am starting a series of articles to help you better understand the different topics you will be tested on (should you wish to take the exam).

In this series, we will discuss:
- Cluster Setup,
- Cluster Hardening,
- System Hardening,
- Microservices Vulnerabilities,
- Supply Chain Security,
- Monitoring, Logging, and Runtime Security.
This first article will focus on the first aspect, namely Cluster Setup, and will cover several points such as:
- Network Security Policies,
- CIS Benchmark to verify the security configuration level of each Kubernetes component,
- Securing Ingress Controllers,
- Metadata protection for Nodes,
- Using the Kubernetes UI and securing it,
- Verifying binaries before deployment.
What are Network Security Policies?
These are rules that specify how a Pod is allowed to communicate with network entities, identified by a combination of:
- Other Pods (note that a Pod cannot block traffic to itself),
- The Namespaces with which it can communicate,
- IP blocks (note that traffic to and from the Node hosting the Pod is always allowed, regardless of the IP block).
When defining a Security Policy for a Pod or Namespace, a Selector is used to specify which traffic is allowed to or from the Pod/Namespace associated with the selector.
PLEASE NOTE:
- By default, Pods accept traffic from any source.
- NetworkPolicies are cumulative and do not conflict with each other.
- There is only one prerequisite: a network (CNI) plugin that supports NetworkPolicy!
Some examples:
1. Limit traffic to an application
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: bookstore
```
2. Deny traffic outside the Kubernetes cluster
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-deny-external-egress
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  - to:
    - namespaceSelector: {}
```
3. Deny all non-whitelisted traffic to a namespace
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  ingress: []
```
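A quick way to check that such policies behave as expected is to start a temporary Pod and try to reach the protected workload. This is only a sketch: the Service name `api-service` and the labels are hypothetical and must be adapted to your cluster.

```bash
# Temporary pod WITHOUT the app=bookstore label: the request should time out
kubectl run test-denied --rm -it --restart=Never --image=busybox -- \
  wget -qO- --timeout=2 http://api-service

# Temporary pod WITH the app=bookstore label: the request should succeed
kubectl run test-allowed --rm -it --restart=Never --image=busybox \
  --labels="app=bookstore" -- wget -qO- --timeout=2 http://api-service
```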
CIS Benchmarks
The CIS (Center for Internet Security) is a non-profit consortium that provides guides and tools for securing IT environments. It is similar to the ANSSI in France.
By default, a system can be vulnerable to many types of attacks, and it is important to secure it as much as possible. Hardening refers to securing the configuration and operation of a system.
Two examples:
- In the case of a server on which USB ports are available and their use has not been planned, they must be disabled in order to prevent any type of attack via this vector.
- Which users have access to the system and can log in as root? If they make changes that impact the operation of services, it may be impossible to identify the author of the changes.
That is why best practices recommend disabling the root account and logging in with your own account, then elevating your privileges (using sudo).
Other examples: only the services and file systems that are actually needed for the server's operation should be enabled, only the ports that are truly necessary should be opened, and the firewall should therefore be configured as finely as possible (see the sketch below).
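A minimal sketch of those last two points, assuming an Ubuntu/Debian-style server with OpenSSH and ufw installed (commands and paths may differ on your distribution):

```bash
# Disable direct root login over SSH
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart sshd

# Deny all incoming traffic by default, then open only the ports actually needed
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw enable
```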
The CIS website offers numerous benchmarks and hardening frameworks for:
- Linux, Windows, and OSX operating systems, as well as iOS and Android mobile operating systems,
- AWS, Azure, Google Cloud platforms,
- Network equipment from Check Point, Cisco, Juniper, and Palo Alto,
- As well as middleware such as Tomcat, Docker, and Kubernetes.
CIS Benchmark applied to Kubernetes: kube-bench
Kube-bench is an application written in Go, relying heavily on the CIS security recommendations, with a highly flexible configuration thanks to YAML files. It can be used in several different ways:
- By running it from a container image,
- By installing the downloaded binary,
- By compiling the sources,
- By running it inside a Kubernetes or OpenShift cluster, typically as a Job (see the commands below).
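As a quick illustration (a sketch based on the aquasec/kube-bench image and the job.yaml manifest shipped in the kube-bench repository; adjust paths and versions to your setup):

```bash
# Run the worker-node checks from a standalone container
docker run --rm --pid=host \
  -v /etc:/etc:ro -v /var:/var:ro \
  aquasec/kube-bench:latest run --targets node

# Or run it as a Job inside the cluster and read the results from the Pod logs
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs job/kube-bench
```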
Kube-bench runs the checks defined in YAML control files; these checks can target either master or worker nodes, whatever the Kubernetes version. Here is an excerpt from such a file:
```yaml
id: 1.2
text: "Scheduler"
checks:
  - id: 1.2.1
    text: "Ensure that the --profiling argument is set to false (Scored)"
    audit: "ps -ef | grep kube-scheduler | grep -v grep"
    tests:
      bin_op: or
      test_items:
        - flag: "--profiling"
          set: true
        - flag: "--some-other-flag"
          set: false
    remediation: "Edit the /etc/kubernetes/config file on the master node and set the KUBE_ALLOW_PRIV parameter to '--allow-privileged=false'"
    scored: true
```
Once kube-bench has performed the tests, the results are reported (and can be written to a file), and each check is given one of the following four states:
- PASS: the check completed successfully.
- FAIL: the check failed; the remediation describes how to correct the configuration so that the next run passes.
- WARN: the check requires special attention; see the remediation for more information. This is not necessarily an error.
- INFO: an informational message only.
Using and securing the Kubernetes UI
The aim here is to learn how to install the Kubernetes Dashboard and secure it as effectively as possible.
The Kubernetes Dashboard is deployed using the following command:
`kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml`, and it can then be accessed with `kubectl proxy --address 0.0.0.0`. It is important to note that:
- Kubernetes supports four authentication modes (detailed below),
- These are handled by the Kubernetes API,
- The Dashboard only acts as a proxy and forwards all authentication-related information to the Kubernetes API.
Attacks via the Kubernetes front-end are numerous and have long been the primary attack vector.
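For reference, with `kubectl proxy` running, the Dashboard is usually reachable at the URL below (assuming it was deployed in the kubernetes-dashboard namespace, as in the manifest above):

```bash
kubectl proxy
# Then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```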
Authentication
As mentioned earlier, the four supported authentication modes are the following:
- Bearer Token,
- Username/Password,
- Kubeconfig,
- Authorization Header (supported since version 1.6 and has the highest priority).
Bearer Tokens
To configure them as effectively as possible, you need to be extremely familiar with the concepts of Service Account, Role, Cluster Role, and the permissions that can be assigned to them.
ServiceAccount
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
```
ClusterRoleBinding
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
```
Once the two objects have been created, we can obtain the Bearer Token using the following command: `kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"`
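Note that on recent Kubernetes versions (1.24 and later), ServiceAccounts no longer get a long-lived token Secret by default, so the command above may return nothing. In that case, a token can be requested explicitly (adapt the namespace and ServiceAccount name to your setup):

```bash
kubectl -n kubernetes-dashboard create token admin-user
```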
Username/Password
Disabled by default, because it requires the Kubernetes API server itself to be configured for basic authentication and attribute-based access control (ABAC) rather than role-based access control (RBAC).
Kubeconfig
With this authentication mode, only the options enabled by the Dashboard's authentication-mode flag are supported in the kubeconfig file; otherwise, an error will be displayed in the Dashboard.
Authorization Header
With this method, the Bearer Token must be supplied in an Authorization header on every request sent to the Dashboard.
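A sketch of what this looks like, assuming the Dashboard is exposed at https://dashboard.example.com (a hypothetical URL) and that $TOKEN holds the Bearer Token obtained earlier:

```bash
# The -k flag skips certificate verification, useful with a self-signed certificate
curl -k -H "Authorization: Bearer $TOKEN" https://dashboard.example.com/
```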
Pre-installation binary verification
This simply involves comparing the hash values published for the Kubernetes binaries with the hashes of the files you actually downloaded to your server, in case the download was intercepted by an attacker.
Even if the so-called "hacker" modifies only one file in the archive, any modification to an archive also changes its hash value.
To do this, once the download is complete, you can run `shasum -a 512 kubernetes.tar.gz` and compare the value obtained with the one displayed on the download page.
For example, by downloading:
kubeadm version 1.21.0 with hash value 7bdaf0d58f0d286538376bc40b50d7e3ab60a3fe7a0709194f53f1605129550f
I should obtain the same value once the binary has been downloaded (this time using the command `shasum -a 256 kubeadm`).
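As an illustration, here is how this check is typically scripted (a sketch following the pattern documented on kubernetes.io; adapt the version and architecture to your case):

```bash
# Download the kubeadm binary and its published SHA-256 checksum
curl -LO "https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubeadm"
curl -LO "https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubeadm.sha256"

# Verify that the two values match
echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
```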
Securing Ingress Controllers
First of all: what is an Ingress Controller?
It is a Kubernetes object that manages external access to the services in a cluster (typically HTTP traffic) and can also provide load-balancing features. Strictly speaking, this definition describes the Ingress resource; the Ingress controller is the component that enforces those rules.
The topics covered in this section will be:
- Creating TLS certificates,
- Creating secrets (incorporating TLS certificates) in Kubernetes,
- The configuration of Ingress Controllers incorporating secrets.
Creating a TLS certificate
Assuming you have already created your Ingress Controller and that it is accessible via HTTP, the next step is to secure its content using HTTPS...and therefore create a self-signed TLS certificate:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
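Since the hostname will later be declared in the Ingress (test-secure-ingress.com in the example further down), the certificate's subject can be set to match it directly. This is only a variant of the command above:

```bash
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
  -days 365 -nodes -subj "/CN=test-secure-ingress.com"
```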
Creating the Secret
Once the key/certificate pair has been created, and unless you have already deployed cert-manager in Kubernetes, you must create a Secret in order to subsequently integrate the TLS certificate into the Ingress controller:
kubectl create secret tls tls-secure-ingress --cert=cert.pem --key=key.pem
Configuring the Ingress Controller
Once the TLS certificate and secret have been created, the final step is to integrate the secret into the Ingress Controller.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - test-secure-ingress.com
    secretName: tls-secure-ingress
  rules:
  - host: test-secure-ingress.com
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
```
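To verify that the certificate is actually being served, a request like the following can be used (a sketch; <INGRESS_IP> is a placeholder for the external IP of your ingress controller):

```bash
curl -kv --resolve test-secure-ingress.com:443:<INGRESS_IP> \
  https://test-secure-ingress.com/service1
```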
Node metadata protection
Node metadata... what is it?
Virtual machines (nodes) hosted in the cloud may need to reach the cloud provider's metadata endpoint for various reasons. However, allowing every workload to access it is not recommended, as the metadata service can expose sensitive information such as node credentials.
To improve security at this level, Network Policies must be applied so that only designated resources within the Kubernetes cluster can contact the metadata endpoint.
All the information you may need on this subject can be found in the official documentation on Network Policies.
Network Policies
The principle behind Network Policies is to manage Pod-to-Pod, Pod-to-Service, and External-to-Service communication in the same way as a firewall. Administrators are responsible for identifying the resources affected using labels, namespaces, or IP addresses.
The official documentation gives the following example:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
```
Here, we are mainly interested in the possibility of blocking access to Metadata Endpoints using Network Policies for one or more Pods (see below):
Deny traffic to metadata
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32
```
Allow traffic to metadata
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-allow
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32
```
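To check the behaviour, a Pod can be started with and without the role=metadata-accessor label and made to query the metadata endpoint (a sketch; the exact paths exposed behind 169.254.169.254 depend on your cloud provider):

```bash
# Without the label: the request should time out
kubectl run md-denied --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sS --max-time 3 http://169.254.169.254/

# With the label: the request should be allowed by the policy
kubectl run md-allowed --rm -it --restart=Never --image=curlimages/curl \
  --labels="role=metadata-accessor" -- curl -sS --max-time 3 http://169.254.169.254/
```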
The next article will address all issues related to cluster hardening, namely:
- Restrictions on access to the Kubernetes API,
- The use of Role-Based Access Control to minimize exposure,
- Fine-tuning ServiceAccounts,
- The frequency of Kubernetes updates.
