Kubernetes is the most popular container orchestrator because of its power and portability.
You can model your apps in Kubernetes and deploy the same specs on the desktop, in the datacentre and on a managed Kubernetes service in the cloud. Get a better deal from another cloud? Just deploy your apps there with no changes and edit your DNS records.
In this episode you’ll see how to deploy Kubernetes in lab VMs and run some simple applications. The application specs will include production concerns like healthchecks, resource restrictions and security settings.
Here it is on YouTube - ECS-O3: Containers in Production with Kubernetes
Kubernetes Pod configuration (links to resource allocation, health probes, affinity and volumes)
Kubernetes API docs (v1.18)
I’m running Linux VMs for the Kubernetes cluster using Vagrant.
You can set the VMs up with:
```
cd episodes/ecs-o3/vagrant

vagrant up
```
Initialize a new cluster from the control plane VM:
```
vagrant ssh control

sudo docker version

ls /usr/bin/kube*

sudo kubeadm init --pod-network-cidr="10.244.0.0/16" --service-cidr="10.96.0.0/12" --apiserver-advertise-address=$(cat /tmp/ip.txt)
```
Copy the `kubeadm join` command from the output - you'll need it to join the nodes to the cluster.
Set up kubectl:
```
mkdir ~/.kube

sudo cp /etc/kubernetes/admin.conf ~/.kube/config

sudo chmod +r ~/.kube/config
```
Confirm the cluster is up:
```
kubectl get nodes
```
The cluster isn’t ready because Kubernetes has a pluggable networking layer, and no network is deployed by default.
Deploy the Flannel network:
```
cd /ecs-o3

kubectl apply -f kube-flannel.yaml

kubectl -n kube-system wait --for=condition=ContainersReady pod -l k8s-app=kube-dns

kubectl get nodes

sudo docker ps
```
Nodes need a container runtime and kubeadm installed. The Vagrant VMs are ready to go.
Join the first node:
```
vagrant ssh node

sudo kubeadm join [full command from control]

exit
```
And the second node:
```
vagrant ssh node2

sudo kubeadm join [full command from control]

exit
```
Back on the control plane, check node status:
```
vagrant ssh control

kubectl get nodes -o wide
```
Kubernetes YAML is quite verbose, because of the multiple abstractions you use to model your apps (networking, compute and storage).
This demo app runs across three components and shows NASA’s Astronomy Picture of the Day. We’ll deploy the app to its own namespace.
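The real spec is in the apod/ folder; as a minimal sketch, a labelled namespace object looks like this (the label is the one the cleanup step later selects on):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apod
  labels:
    ecs: o3    # selecting on this label makes cleanup a one-liner later
```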
You can define multiple Kubernetes objects in each YAML file - how you set it up is your choice:
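Multiple objects in one file are separated with `---`. As a hedged sketch (the object names and image are assumptions - the real specs are in the apod/ folder), a Service and its Deployment could share a file like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: apod-web                  # assumed name for illustration
  namespace: apod
spec:
  type: NodePort                  # publishes the app on a port on every node
  ports:
    - port: 80
      nodePort: 30000             # the port you browse to later
  selector:
    app: apod-web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apod-web
  namespace: apod
spec:
  selector:
    matchLabels:
      app: apod-web               # links the Deployment's Pods to the Service
  template:
    metadata:
      labels:
        app: apod-web
    spec:
      containers:
        - name: web
          image: apod-web:sketch  # placeholder image name, not the real one
```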
Deploy the specs:
```
cd /ecs-o3/

kubectl apply -f apod/

kubectl get all -n apod
```
Wait for the web Pod to be ready:
```
kubectl wait --for=condition=ContainersReady pod -n apod -l app=apod-web

echo "$(cat /tmp/ip.txt):30000"
```
Browse to the app
Check the backend API logs:
```
kubectl logs -n apod -l app=apod-api

kubectl logs -n apod -l app=apod-log
```
The todo-list application specs add more production concerns like resource limits and health probes.
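The actual values are in the todo-list specs; as a sketch (the image name and probe path are assumptions), resource limits and health probes sit in the container spec like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: todo-web-sketch            # hypothetical name for illustration
  namespace: todo-list
spec:
  containers:
    - name: web
      image: todo-web:sketch       # placeholder image, not the real one
      resources:
        limits:                    # hard cap on compute for this container
          cpu: 500m
          memory: 256Mi
      livenessProbe:
        httpGet:
          path: /health            # assumed endpoint for the health check
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 30          # Kubernetes restarts the Pod if this fails
```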
The database components for the todo-list app are in db.yaml.
Deploy the todo-list app:
```
cd /ecs-o3

kubectl apply -f todo-list/

kubectl get all -n todo-list
```
The web.yaml Deployment spec has some more settings - it specifies affinity so the web Pods run on the same node as the database Pods.
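The exact spec is in todo-list/web.yaml; the required-affinity section looks roughly like this (the database Pod label is an assumption):

```yaml
# under the Pod template's spec in the Deployment
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: todo-db                    # assumed label on the database Pods
        topologyKey: kubernetes.io/hostname # "same node" = same hostname value
```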
Check the Pod locations:
```
kubectl get pods -n todo-list -o wide

echo "$(cat /tmp/ip.txt):30020"
```
Browse to the app and add an item
Required affinity is a hard rule which limits the orchestrator's ability to scale - if the rule can't be met, the Pod won't be scheduled. The update/web.yaml spec shifts to preferred affinity.
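In rough terms, preferred affinity wraps the same term with a weight, so the scheduler favours co-location but can still place Pods elsewhere (labels assumed as before):

```yaml
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                        # highest preference, but not mandatory
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: todo-db                 # assumed label on the database Pods
          topologyKey: kubernetes.io/hostname
```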
Deploy the update and watch it roll out:
```
kubectl apply -f todo-list/update/

kubectl get pods -n todo-list -o wide --watch
```
Removing a namespace will remove all the components, so you can easily clean the cluster.
Delete the namespaces:
```
kubectl get namespace --show-labels

kubectl delete ns -l ecs=o3
```
Or you can exit the SSH session and delete all the VMs.
Remove the VMs:
```
exit

vagrant destroy
```