Deploying Kubernetes On Ubuntu 20.04

So you’re finally ready to take your use of containers to the next level with the gold standard of container orchestration, but maybe, like me, you aren’t a fan of Red Hat Enterprise Linux / openSUSE environments. No worries: a few reasonably simple steps will get you to a working Kubernetes cluster deployment on Ubuntu in almost no time at all.

To get started, you need at least one clean Ubuntu 20.04 environment, and ideally at least two so you can separate your control plane and worker nodes. For a highly available environment, you should have at least three control plane nodes and at least two worker nodes. You don’t have to create all the additional nodes at once, as adding them later isn’t much of a chore. The instructions that follow assume you’re working with at least two nodes: one for your control plane and one for your workloads. It is very important that every node you intend to join to the same cluster has a unique hostname configured. Even if two nodes sit in different network search domains, their hostnames must still be unique within the cluster.
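
If you need to confirm or change a node’s hostname, hostnamectl does the job. The name k8s-cp-01 below is just an example; pick whatever unique naming scheme fits your environment:

hostnamectl                                # shows the current static hostname
sudo hostnamectl set-hostname k8s-cp-01    # example name only; it must be unique per node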

Planning Cluster Configuration

Before you start executing commands, you need to get all your ducks in a row with a few pieces of configuration information. The following is an example of the configuration used in the Azorian Solutions production Kubernetes cluster:

             Cluster Domain: k8s.azorian.solutions
     Control Plane Endpoint: cp.k8s.azorian.solutions
          Pod Networks CIDR: 10.255.0.0/16
      Service Networks CIDR: 10.10.10.0/23
            Kubelet Version: 1.22.0-00
            Kubeadm Version: 1.22.0-00
            Kubectl Version: 1.22.0-00

The first configuration item is the cluster domain. Think of this much like you would a network search domain, although it isn’t necessarily used in the same way; it becomes the cluster’s internal DNS domain (the dnsDomain value in the configuration file you’ll build later).

The next configuration item is the control plane endpoint. This is typically a domain name with an A record corresponding to the IP address of each control plane node that will operate in the cluster. Initially, this domain should contain only a single A record pointing to the IP address of the first control plane node, the one you intend to initialize the Kubernetes cluster on. If you miss this detail during setup, it can lead to painful results and may force you to start the entire process over again (unless you’re proficient enough with Kubernetes clusters to fix it in place).
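
Before going further, it’s worth confirming that the endpoint actually resolves to the first control plane node. The IP address below is hypothetical; in a lab without proper DNS, a hosts-file entry on every node is a workable stand-in, though real DNS records are preferable:

getent hosts cp.k8s.azorian.solutions    # should print the first control plane node's IP

# Lab-only alternative (hypothetical IP): add a static hosts entry on every node
echo '192.168.1.10 cp.k8s.azorian.solutions' | sudo tee -a /etc/hosts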

Moving on, the pod networks CIDR should be a fairly large private network allocation, as it will be carved up into smaller per-node blocks from which every pod deployed to the cluster gets its own IP address. Remember, this can easily equate to one address per deployed container depending on how you deploy your services, so don’t get stingy on address space; this is not something that is easily changed after the fact. The allocation you choose here doesn’t need to be easy to remember, since you likely won’t spend much (if any) time interacting directly with the IP addresses assigned from it. It’s also important to remember that addresses assigned from this network are ephemeral: pods are destroyed and recreated automatically from time to time, and their addresses change with them.

Next up is the service networks CIDR. Until you get familiar with exactly how Kubernetes works, I recommend starting with a /23 private network, as this makes it unlikely that you’ll run out of space as you deploy services to the cluster. Services in Kubernetes act as a gateway in front of your various pods (which contain one or more containers). You can think of a service much like a load balancer: it receives requests on specific IP addresses and ports and automatically routes them to the pods that have been linked to the service.
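
To make that concrete, here is a minimal sketch of a Service manifest. The name web and the app: web label are hypothetical and only illustrate how a service selects the pods that receive its traffic; the cluster IP the service receives is allocated from the service networks CIDR above.

apiVersion: v1
kind: Service
metadata:
  name: web               # hypothetical service name
spec:
  selector:
    app: web              # traffic is routed to pods carrying this label
  ports:
    - port: 80            # port exposed on the service's cluster IP
      targetPort: 8080    # port the containers in the selected pods listen on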

Finally, you have your Kubelet, Kubeadm, and Kubectl versions. You can select any of the available Kubernetes release versions here. For compatibility reasons, it is considered best practice to keep all three of these packages in sync on at least their major and minor versions. If you know what you’re doing, you can install different versions of each, but you may run into problems if there is a breaking change between two selected versions. Version 1.23 hasn’t been out long and I haven’t tested it, so for this tutorial we’ll stick with version 1.22.0-00, which is what I am running today.
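
Once the Kubernetes APT repository has been added to a node (a later step in this tutorial), you can list the exact version strings available to apt, which is handy when filling in the values above:

apt-cache madison kubeadm | head -n 5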

Preparing The Nodes

Let’s start by running a number of commands to get each node of the Kubernetes cluster prepared for deployment. Begin by opening up a command shell to each of the nodes with a non-root user that has super user (sudo) privileges.

First, you need to disable the use of swap partitions. Execute the following commands on each node:

sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab
sudo swapoff -a

Next, you need to update the OS to the latest packages for the current distribution. Execute the following commands on each node:

sudo apt update
sudo apt upgrade -y

Now it’s time to install some basic dependencies to support the remainder of the installation process. Execute the following command on each node:

sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release net-tools containerd
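
One caveat worth mentioning: depending on which containerd package you end up with, its CRI plugin may not default to the systemd cgroup driver, while the kubelet configuration later in this tutorial uses systemd. Assuming the stock containerd from the Ubuntu repositories, a common way to align the two is to generate a default configuration and enable SystemdCgroup before proceeding:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd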

Now you need to load the “overlay” and “br_netfilter” Linux kernel modules. Execute the following commands on each node:

sudo modprobe overlay
sudo modprobe br_netfilter

The previous commands only load kernel modules temporarily. You need to add a configuration file to ensure that these modules are loaded after each machine reboot. Execute the following command on each node:

echo 'overlay
br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf > /dev/null

Now you need to add a kernel configuration file to tweak three networking settings. Execute the following command on each node:

echo 'net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' | sudo tee /etc/sysctl.d/99-k8s.conf > /dev/null

The first of the three aforementioned settings instructs the kernel to accept and forward IP packets that are not intended for the host machine itself but for the pods running on it. The second and third settings control whether packets passing through a Linux network bridge are passed to iptables (and ip6tables) for processing.

Now you need to instruct the kernel to load the new kernel configuration file to apply the changes. Execute the following command on each node:

sudo sysctl --system
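
It doesn’t hurt to verify that the modules are loaded and the new values actually took effect before moving on; all three sysctl entries should report a value of 1:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables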

Now you need to install the GPG key for the Google APT repository that contains the Kubernetes packages you’ll need. Execute the following command on each node:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Now you need to add a configuration file for the APT package manager to make the system aware of the Kubernetes package repository hosted by Google. Execute the following command on each node:

echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list > /dev/null

Now you need to update APT’s package cache to download the latest package information from the Kubernetes repository. Execute the following command on each node:

sudo apt update

Now you need to set some temporary environment variables that will be used during the Kubernetes package installation process. Execute the following commands on each node, taking care to update each variable with the appropriate versions that you chose for Kubelet, Kubeadm, and Kubectl at the beginning of this tutorial:

kubelet_version=1.22.0-00
kubeadm_version=1.22.0-00
kubectl_version=1.22.0-00

Now you are finally ready to install the Kubernetes packages. Execute the following commands on each node:

KUBELET_EXTRA_ARGS='--feature-gates="AllAlpha=false,RunAsGroup=true"
  --container-runtime=remote --cgroup-driver=systemd
  --runtime-request-timeout=5m
  --container-runtime-endpoint="unix:///var/run/containerd/containerd.sock"'

sudo apt install -y --allow-change-held-packages \
  kubelet=$kubelet_version kubeadm=$kubeadm_version \
  kubectl=$kubectl_version kubernetes-cni
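
A note on the KUBELET_EXTRA_ARGS variable above: setting it in your shell doesn’t, by itself, reach the kubelet service. On Debian and Ubuntu, the kubeadm packages ship a systemd drop-in that reads /etc/default/kubelet, so one way to persist those flags is to write them there, flattened onto a single line. It’s also worth pinning the three packages so a routine apt upgrade doesn’t bump them unexpectedly. Both steps are optional and assume the stock package layout:

sudo tee /etc/default/kubelet > /dev/null <<'EOF'
KUBELET_EXTRA_ARGS="--feature-gates=AllAlpha=false,RunAsGroup=true --container-runtime=remote --cgroup-driver=systemd --runtime-request-timeout=5m --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock"
EOF

sudo apt-mark hold kubelet kubeadm kubectl

Don’t worry if the kubelet service restarts in a loop at this point; that is expected until kubeadm init or kubeadm join runs.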

Now there is one final dependency to install: calicoctl, the command-line tool for the Calico CNI, since this tutorial will be making use of Calico as the container networking interface (CNI). Execute the following commands on each node:

export calico_version=$(curl --silent "https://api.github.com/repos/projectcalico/calico/releases/latest" | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')

curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/81.0" -o kubectl-calico -L https://github.com/projectcalico/calico/releases/download/$calico_version/calicoctl-linux-amd64

sudo chown root:root kubectl-calico
sudo chmod +x kubectl-calico
sudo mv kubectl-calico /usr/local/bin/
sudo ln -s /usr/local/bin/kubectl-calico /usr/local/bin/calicoctl
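
A quick check confirms the binary is on the PATH and executable; the client version line is what matters here, since the cluster details won’t populate until the cluster is up:

calicoctl version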

Alright, that completes the node preparation phase of the deployment! I promise the next phase will be much quicker.

Initializing The Cluster

It’s finally time to initialize the Kubernetes cluster now that you have all of the nodes in a prepared state. If you are running the nodes in a virtualization environment like Proxmox or ESXi, you may want to take a quick snapshot of each node just in case something doesn’t go right with the initialization process. This gives you a relatively simple way to revert the coming changes and try again without having to rebuild the cluster nodes from scratch.

Let’s start by setting some additional temporary environment variables to assist with building a cluster configuration file. Execute the following commands on the first control plane node only! Take care to update each variable to the corresponding configuration value chosen at the beginning of this tutorial.

cluster_domain=k8s.azorian.solutions
control_plane_endpoint=cp.k8s.azorian.solutions
pod_networks_cidr=10.255.0.0/16
service_networks_cidr=10.10.10.0/23

Now you need to create a cluster initialization file in YAML format. Execute the following command on the first control plane node only!

echo 'kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
controlPlaneEndpoint: '"$control_plane_endpoint"':6443
networking:
  dnsDomain: '"$cluster_domain"'
  podSubnet: '"$pod_networks_cidr"'
  serviceSubnet: '"$service_networks_cidr"'
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
---
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1' | tee /tmp/kubeadm-config.yaml
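
Optionally, you can pre-pull the control plane images using the same configuration file so the init step itself spends less time downloading:

sudo kubeadm config images pull --config /tmp/kubeadm-config.yaml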

Now you can finally initialize the first node of the cluster! Execute the following command on the first control plane node only!

sudo kubeadm init --config /tmp/kubeadm-config.yaml --upload-certs

If all goes well with the cluster initialization, the command above should produce output containing two different commands beginning with “kubeadm join”; capture both for use a little later in this tutorial.
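
If you lose that output, you don’t have to start over; the worker join command can be regenerated at any time, and the control plane certificates can be re-uploaded to print a fresh value for the --certificate-key parameter:

sudo kubeadm token create --print-join-command
sudo kubeadm init phase upload-certs --upload-certs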

Now you need to copy the cluster’s admin kubeconfig file to the appropriate location in the current user’s home directory so that you can execute cluster administration commands as the current user without the use of super user (sudo) privileges. Execute the following commands on the first control plane node only!

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You now have a bare bones Kubernetes cluster! You aren’t done quite yet, though, as this cluster doesn’t have a container networking interface yet. Read on.
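
At this point it’s worth a quick look at the cluster’s state; the node will typically report a NotReady status until the CNI is installed in the next step:

kubectl get nodes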

Installing Calico CNI

This step should prove to be the easiest. Execute the following command on the first control plane node only!

curl https://docs.projectcalico.org/manifests/calico.yaml | kubectl apply -f -

Alright, now the cluster has a container network interface installed and is ready for use if you are only working with a single node. Otherwise, read on.
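
You can watch the Calico pods come up and confirm the node flips to a Ready status before moving on:

kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get nodes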

Initializing The Remaining Nodes

If you are setting up more than one node for your Kubernetes environment, then you need to execute the “kubeadm join” commands that you captured during cluster initialization in an earlier step. For each worker node you are going to join to the cluster, execute the shorter of the two commands (the one that does not contain the --control-plane parameter) on that node.

If you are setting up more than one control plane node for your Kubernetes environment, then you need to execute the longer of the two commands (the one that does contain the --control-plane parameter) on each remaining control plane node. Additionally, for ease of operation, also execute the following commands on each of the additional control plane nodes:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
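
Once the join commands have run, confirm from any control plane node that every node has registered and eventually reports a Ready status:

kubectl get nodes -o wide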

There is one final step to ensure that the Kubernetes cluster is ready for you to start deploying workloads. Each node that will be handling workloads needs to have a Kubernetes worker role label attached. Run the following command from a control plane node for each worker node in the cluster, taking care to substitute the “HOSTNAME-HERE” string with the node’s configured hostname. If you aren’t sure of a node’s hostname, you can determine it by executing the “hostname” command on the node in question.

kubectl label node HOSTNAME-HERE node-role.kubernetes.io/worker=worker

That’s it! You now have an operational Kubernetes cluster environment. I don’t want to toot your horn, but you’re kind of a big deal now.

I recommend you check out this post next to take your Kubernetes deployment to the next level.

Get in touch

If you would like to contact me, head over to my company website at https://azorian.solutions.