Kubernetes has become the de facto standard for deploying and operating containerized applications in both private and public clouds. All major public clouds offer a managed Kubernetes service (Google GKE, Azure AKS, AWS EKS).
Kubernetes is a very powerful framework, but it is also complex and has a steep learning curve. The objective of this article is to help you climb that curve with easy examples of how to start taming Kubernetes.
NOTE 1: The step-by-step installation of a Kubernetes cluster can vary slightly between Kubernetes versions and host OSes. If you are new to Kubernetes, we recommend playing with the cluster deployed in the free Kubernetes classroom ( https://training.play-with-kubernetes.com/ ) and avoiding installing a cluster until you gain some experience. You can test the examples provided in this article in the free Kubernetes classroom.
Let’s start by explaining the fundamentals of a Kubernetes cluster. A cluster consists of one or more nodes. A node can be a physical machine or a virtual machine, and each node has a role. The most common roles are control-plane and worker. At least one control-plane node is mandatory.
Below we can see our lab has one control-plane node. The other 2 nodes show <none> as their role; those 2 nodes are the workers.
auben@dev-server-0:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
dev-server-0 Ready control-plane 59d v1.28.12
dev-server-1 Ready <none> 52d v1.28.12
dev-server-2 Ready <none> 52d v1.28.12
Let’s find more information about our nodes by using the “-owide” option. We can see all nodes are running Kubernetes v1.28.12 on Ubuntu 20.04.6 LTS, and all nodes use containerd as the container runtime.
auben@dev-server-0:~$ kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
dev-server-0 Ready control-plane 59d v1.28.12 192.168.150.110 <none> Ubuntu 20.04.6 LTS 5.4.0-169-generic containerd://1.7.19
dev-server-1 Ready <none> 52d v1.28.12 192.168.150.111 <none> Ubuntu 20.04.6 LTS 5.4.0-193-generic containerd://1.7.19
dev-server-2 Ready <none> 52d v1.28.12 192.168.150.112 <none> Ubuntu 20.04.6 LTS 5.4.0-169-generic containerd://1.7.19
Now let’s bring our attention to pods and containers:
- Kubernetes is a pod orchestrator. But what is a pod? A pod is a collection of one or more containers. Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
- Kubernetes needs a “container runtime” for the pods to run. Some popular options are containerd and CRI-O.
- Kubernetes needs a networking component to bring connectivity to the pods. Some popular options are Calico and Flannel.
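As a minimal sketch (for illustration only; this manifest is not used elsewhere in this article), a single pod can be declared directly in YAML without any higher-level resource around it:

```yaml
# Hypothetical minimal Pod manifest: one pod holding a single nginx container.
# The names "hello-pod" and "hello-container" are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello-container
    image: nginx
```

A bare pod like this is not self-healing; in practice pods are usually managed through a deployment, as we will do later in this article.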
There are several “kube-system” pods installed by default on all nodes. We can check the pods with the following command:
auben@dev-server-0:~$ kubectl get pods
No resources found in default namespace.
We don’t see any pod at all in the default namespace.
NOTE 2: A namespace is a mechanism to isolate resources in a Kubernetes cluster. It is good practice to create different namespaces to organize resources and to attach namespace-specific policies. For example, you can create a “development” namespace and a “production” namespace, then create a user who is denied access to resources in the “production” namespace but allowed access to resources in the “development” namespace.
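As a sketch of the note above, the “development” namespace could be created with a manifest like this (the namespace name is only an example):

```yaml
# Hypothetical manifest creating the "development" namespace from the note.
apiVersion: v1
kind: Namespace
metadata:
  name: development
```

After applying it with “kubectl apply -f”, resources can be created inside it by adding “-n development” to kubectl commands.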
Let’s find what namespaces exist by default in our cluster.
auben@dev-server-0:~$ kubectl get ns
NAME STATUS AGE
default Active 60d
kube-node-lease Active 60d
kube-public Active 60d
kube-system Active 60d
Let’s use the -n flag of the command “kubectl get pods” to check the pods in each namespace (the -A flag would list pods across all namespaces at once). Below we can see that right now only the “kube-system” namespace has pods.
auben@dev-server-0:~$ kubectl get pods -n default
No resources found in default namespace.
auben@dev-server-0:~$ kubectl get pods -n kube-node-lease
No resources found in kube-node-lease namespace.
auben@dev-server-0:~$ kubectl get pods -n kube-public
No resources found in kube-public namespace.
auben@dev-server-0:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-74d5f9d7bb-ct6w5 1/1 Running 20 (3h20m ago) 60d
calico-node-6n2fk 1/1 Running 0 60d
calico-node-8fgks 1/1 Running 0 53d
calico-node-zz9ww 1/1 Running 1 (20d ago) 53d
coredns-5dd5756b68-fjdbq 1/1 Running 0 60d
coredns-5dd5756b68-wk96r 1/1 Running 0 60d
etcd-dev-server-0 1/1 Running 0 60d
kube-apiserver-dev-server-0 1/1 Running 23 (3h20m ago) 60d
kube-controller-manager-dev-server-0 1/1 Running 655 (3h19m ago) 60d
kube-proxy-gn4c9 1/1 Running 0 60d
kube-proxy-hw6rv 1/1 Running 1 (20d ago) 53d
kube-proxy-srmjb 1/1 Running 0 53d
kube-scheduler-dev-server-0 1/1 Running 659 60d
Above we see there are lots of pods, but it is more useful to see which pods are deployed on which node. To do that we use the “-owide” flag and grep for the name of each node. Let’s begin with the 2 workers. We see each worker runs 2 pods: calico-node and kube-proxy.
auben@dev-server-0:~$ kubectl get pods -n kube-system -owide | grep dev-server-1
calico-node-zz9ww 1/1 Running 1 (20d ago) 53d 192.168.150.111 dev-server-1 <none> <none>
kube-proxy-hw6rv 1/1 Running 1 (20d ago) 53d 192.168.150.111 dev-server-1 <none> <none>
auben@dev-server-0:~$ kubectl get pods -n kube-system -owide | grep dev-server-2
calico-node-8fgks 1/1 Running 0 53d 192.168.150.112 dev-server-2 <none> <none>
kube-proxy-srmjb 1/1 Running 0 53d 192.168.150.112 dev-server-2 <none> <none>
Let’s see the pods in the control-plane node. We notice there are many more pods in a control-plane node than in a worker node.
auben@dev-server-0:~$ kubectl get pods -n kube-system -owide | grep dev-server-0
calico-kube-controllers-74d5f9d7bb-ct6w5 1/1 Running 20 (3h26m ago) 60d 10.244.32.65 dev-server-0 <none> <none>
calico-node-6n2fk 1/1 Running 0 60d 192.168.150.110 dev-server-0 <none> <none>
coredns-5dd5756b68-fjdbq 1/1 Running 0 60d 10.244.32.66 dev-server-0 <none> <none>
coredns-5dd5756b68-wk96r 1/1 Running 0 60d 10.244.32.67 dev-server-0 <none> <none>
etcd-dev-server-0 1/1 Running 0 60d 192.168.150.110 dev-server-0 <none> <none>
kube-apiserver-dev-server-0 1/1 Running 23 (3h27m ago) 60d 192.168.150.110 dev-server-0 <none> <none>
kube-controller-manager-dev-server-0 1/1 Running 655 (3h26m ago) 60d 192.168.150.110 dev-server-0 <none> <none>
kube-proxy-gn4c9 1/1 Running 0 60d 192.168.150.110 dev-server-0 <none> <none>
kube-scheduler-dev-server-0 1/1 Running 659 60d 192.168.150.110 dev-server-0 <none> <none>
Now let’s find out which networking component is installed in our lab. Below we can see our lab uses Calico.
auben@dev-server-0:~$ sudo ls -hal /etc/cni/net.d
total 16K
drwx------ 2 root root 4.0K Jul 22 02:31 .
drwxr-xr-x 3 root root 4.0K Jul 22 01:42 ..
-rw-r--r-- 1 root root 663 Jul 22 02:31 10-calico.conflist
-rw------- 1 root root 2.7K Sep 19 21:27 calico-kubeconfig
Below we can see Kubernetes uses the IP range “10.244.0.0/16” as the cluster CIDR and subnets it per node in order to give connectivity to the pods.
auben@dev-server-0:~$ kubectl cluster-info dump | grep cluster-cidr
"--cluster-cidr=10.244.0.0/16",
auben@dev-server-0:~$ kubectl cluster-info dump | grep "podCIDR\|Tunnel"
"projectcalico.org/IPv4IPIPTunnelAddr": "10.244.32.64",
"podCIDR": "10.244.0.0/24",
"podCIDRs": [
"projectcalico.org/IPv4IPIPTunnelAddr": "10.244.217.0",
"podCIDR": "10.244.1.0/24",
"podCIDRs": [
"projectcalico.org/IPv4IPIPTunnelAddr": "10.244.176.0",
"podCIDR": "10.244.2.0/24",
"podCIDRs": [
We just finished our first reconnaissance of our Kubernetes lab cluster. Now we are ready to deploy our first pod: an NGINX container. A container registry, external to the Kubernetes cluster, is needed to download the images. By default Kubernetes looks for an image on Docker Hub ( https://hub.docker.com ), which is a public and free container registry.
Kubernetes can use a declarative approach: you describe the desired configuration of a resource in a file, and Kubernetes tries to enforce that configuration at all times. The configuration files are written in YAML. Below we can see a file that will be used to create a deployment with one pod running the nginx container image.
auben@dev-server-0:~$ more nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
NOTE 3: In line 6 of the file “nginx-deployment.yaml” we see replicas: 1. The replica count is the number of instances of a pod that should be running at any time. Later in this article we will experiment with changing the number of replicas to 2.
Now we apply the file to create our deployment and its pod. At first you will see the status “ContainerCreating” for a minute or two while the image is downloaded from hub.docker.com. Then we should see the status “Running”.
auben@dev-server-0:~$ kubectl apply -f nginx-deployment.yaml
deployment.apps/my-nginx created
auben@dev-server-0:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-f4f77479-vhjf5 0/1 ContainerCreating 0 3s
auben@dev-server-0:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-f4f77479-vhjf5 1/1 Running 0 53s
Our nginx pod is running, but we can’t access it because it doesn’t have a service associated with it. Below we can see the only service created so far is the default service of the Kubernetes cluster.
auben@dev-server-0:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 60d
Now we create a YAML file and apply it to create a Kubernetes service for nginx. This service will select the nginx pods through the app=nginx label and publish the service on port 80.
auben@dev-server-0:~$ more nginx-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
auben@dev-server-0:~$ kubectl apply -f nginx-service.yaml
service/nginx-service created
auben@dev-server-0:~$ kubectl get services -owide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 60d <none>
nginx-service ClusterIP 10.98.94.39 <none> 80/TCP 9s app=nginx
Now we use curl against the service’s cluster IP to test the NGINX web page.
It works! But please notice we are using an internal IP to do the testing. What if we want to test it from another PC? There are several ways for Kubernetes to publish a service for external access. The simplest way is to use port-forwarding.
NOTE 4: Port-forwarding should only be used for testing, not for production. It is better to use a “LoadBalancer” service, and better still a Kubernetes “Ingress” resource.
Below we see we are using a host with IP 192.168.150.110.
auben@dev-server-0:~$ ip addr show ens160 | grep inet
inet 192.168.150.110/24 brd 192.168.150.255 scope global ens160
The --address="0.0.0.0" flag of the command below means listen on every IP of the host, which of course includes 192.168.150.110. Any packet that reaches port 8080 of the host will be forwarded to port 80 of the container.
auben@dev-server-0:~$ kubectl port-forward service/nginx-service 8080:80 --address="0.0.0.0"
Forwarding from 0.0.0.0:8080 -> 80
Handling connection for 8080
Now we can use any PC that has connectivity to our host 192.168.150.110. We browse to http://192.168.150.110:8080 and we get the “Welcome to nginx!” message.
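For completeness, here is a hedged sketch of the “LoadBalancer” alternative mentioned in the note above. It re-declares the nginx service with type LoadBalancer; the name “nginx-service-lb” is hypothetical and this manifest is not applied in our lab.

```yaml
# Hypothetical Service of type LoadBalancer for the same nginx pods.
# On a managed cloud cluster (GKE, AKS, EKS) the cloud provider provisions
# an external load balancer; on a bare-metal lab like ours the external IP
# would stay <pending> unless a provider such as MetalLB is installed.
kind: Service
apiVersion: v1
metadata:
  name: nginx-service-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```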
Earlier we said that the declarative approach enforces the configuration at all times. The YAML file of the NGINX deployment says “replicas: 1”. What will happen if we manually delete the only pod? Kubernetes will detect the deployment is out of compliance and will bring up another pod. Below we manually delete the pod my-nginx-f4f77479-vhjf5, and Kubernetes automatically creates another pod, my-nginx-f4f77479-phwtw, to comply with the declared state.
auben@dev-server-0:~$ kubectl delete pod/my-nginx-f4f77479-vhjf5
pod "my-nginx-f4f77479-vhjf5" deleted
auben@dev-server-0:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-f4f77479-phwtw 0/1 ContainerCreating 0 8s
auben@dev-server-0:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-f4f77479-phwtw 1/1 Running 0 12s
We can also check what just happened by looking at the events, as seen below.
auben@dev-server-0:~$ kubectl events
LAST SEEN TYPE REASON OBJECT MESSAGE
19s Normal Killing Pod/my-nginx-f4f77479-vhjf5 Stopping container nginx-container
19s Normal SuccessfulCreate ReplicaSet/my-nginx-f4f77479 Created pod: my-nginx-f4f77479-phwtw
18s Normal Scheduled Pod/my-nginx-f4f77479-phwtw Successfully assigned default/my-nginx-f4f77479-phwtw to dev-server-1
15s Normal Pulling Pod/my-nginx-f4f77479-phwtw Pulling image "nginx"
13s Normal Pulled Pod/my-nginx-f4f77479-phwtw Successfully pulled image "nginx" in 2.58s (2.58s including waiting)
12s Normal Created Pod/my-nginx-f4f77479-phwtw Created container nginx-container
11s Normal Started Pod/my-nginx-f4f77479-phwtw Started container nginx-container
Finally, we can also modify our deployment file to tell Kubernetes to keep 2 replicas instead of 1.
auben@dev-server-0:~$ more nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
We apply the changes and now we see Kubernetes has 2 NGINX pods deployed.
auben@dev-server-0:~$ kubectl apply -f nginx-deployment.yaml
deployment.apps/my-nginx configured
auben@dev-server-0:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-f4f77479-j4tst 0/1 ContainerCreating 0 4s
my-nginx-f4f77479-phwtw 1/1 Running 0 11m
auben@dev-server-0:~$ kubectl events
2m20s Normal ScalingReplicaSet Deployment/my-nginx Scaled up replica set my-nginx-f4f77479 to 2 from 1
2m19s Normal SuccessfulCreate ReplicaSet/my-nginx-f4f77479 Created pod: my-nginx-f4f77479-j4tst
2m17s Normal Scheduled Pod/my-nginx-f4f77479-j4tst Successfully assigned default/my-nginx-f4f77479-j4tst to dev-server-2
2m13s Normal Pulling Pod/my-nginx-f4f77479-j4tst Pulling image "nginx"
2m11s Normal Pulled Pod/my-nginx-f4f77479-j4tst Successfully pulled image "nginx" in 2.234s (2.234s including waiting)
2m10s Normal Started Pod/my-nginx-f4f77479-j4tst Started container nginx-container
2m10s Normal Created Pod/my-nginx-f4f77479-j4tst Created container nginx-container
This was a brief introduction to deploying and managing an application with Kubernetes, and it is only the tip of the iceberg. Kubernetes has many resources to collect accurate metrics of your application (for example CPU usage, memory usage, HTTP requests per second, etc.). Kubernetes can then leverage those metrics for autoscaling (Vertical Pod Autoscaler, Horizontal Pod Autoscaler). You can use tools like Helm and Kustomize to manage the Kubernetes manifest files of several clusters and to quickly deploy complex applications. The universe of Kubernetes is almost endless.
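As a sketch of the horizontal pod autoscaling just mentioned, the following manifest would scale our my-nginx deployment between 2 and 5 replicas based on CPU utilization. It assumes the metrics-server add-on is installed, which our lab has not shown, and the name “my-nginx-hpa” is only an example.

```yaml
# Hypothetical HorizontalPodAutoscaler for the my-nginx deployment.
# Requires the metrics-server add-on so CPU utilization can be measured.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-nginx
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```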
You can continue your Kubernetes journey with the following resources:
- https://kubernetes.io/docs/reference/kubectl/quick-reference/
- https://training.linuxfoundation.org/resources/?_sft_content_type=free-course&_sf_s=kubernetes