By Feng Ye, Software Engineering Manager & Virtink Project Maintainer at SmartX

As more enterprises and applications migrate to cloud native architecture, the demand for dynamic and flexible Kubernetes clusters grows. To achieve fault isolation, enterprise data centers typically require multiple independent Kubernetes clusters to support various services rather than deploying them on a shared cluster. Furthermore, because service loads change frequently, the scale of each Kubernetes cluster must change accordingly: a cluster has to grow as its workloads grow, and release computing resources when service loads decrease in order to reduce costs. These Kubernetes cluster operation and maintenance requirements can be referred to as Kubernetes as a service, or KaaS.

KaaS is an essential component of mainstream public cloud providers. It can quickly build production-ready and highly available Kubernetes clusters for users, as well as perform subsequent O&M tasks such as cluster expansion and contraction. In general, KaaS on the public cloud employs virtual machines as Kubernetes cluster nodes to maximize the utilization of physical resources.

However, to achieve KaaS in an enterprise's private data center, the IT team has to start from scratch, procuring and integrating multiple systems such as a virtualization platform, a Kubernetes management platform and user consoles. This process can consume a large amount of time and money. Moreover, traditional virtualization platforms bundle an excessive set of features to support broad virtualization scenarios, which is neither lightweight nor efficient when only Kubernetes clusters need to be supported.

To solve this problem, SmartX launched the Virtink project, an open-source, Cloud Hypervisor-based lightweight virtualization add-on for Kubernetes. We further launched knest, an open-source command line tool that allows users to easily create, operate and maintain Kubernetes clusters on Virtink with a single command. This tool provides the foundation of KaaS capabilities to private data centers in a more direct, simple, and efficient manner.

This article will walk you through the process of using knest to manage Kubernetes clusters step by step.

Create Host Kubernetes Clusters

As mentioned previously, Virtink is a Kubernetes-native virtualization platform, which means it should be deployed and run on a Kubernetes cluster. We refer to this Kubernetes cluster as the host Kubernetes cluster, and the Kubernetes clusters running on Virtink VMs as the guest Kubernetes clusters.

Notably, the nodes of a host Kubernetes cluster do not need to be physical servers; they can be VMs that support nested virtualization. If you already have a Kubernetes cluster on hand, you can use it as the host Kubernetes cluster, provided that each node supports KVM. If you do not have an available Kubernetes cluster, you can create a temporary one as the host cluster on your local machine using minikube. For instance:

minikube start --memory 6g --cni calico.yaml

Here, we use Calico instead of minikube's default CNI because Calico in the guest Kubernetes cluster requires the host cluster's CNI to allow IPIP packets from pods. Calico supports this, but the feature is disabled by default, so before starting minikube we need to download the manifest and add an environment variable to enable it:

curl -LO https://projectcalico.docs.tigera.io/manifests/calico.yaml
sed -i '/^            # Use Kubernetes API as the backing datastore.$/i \ \ \ \ \ \ \ \ \ \ \ \ - name: FELIX_ALLOWIPIPPACKETSFROMWORKLOADS\n              value: "true"' calico.yaml
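To see exactly what this sed command does, here is a self-contained sketch that applies the same insertion to a minimal stand-in excerpt of the manifest (the snippet and its indentation are assumed to match the real calico.yaml):

```shell
# Minimal stand-in for calico.yaml; the real manifest contains this
# comment line with the same 12-space indentation.
cat > snippet.yaml <<'EOF'
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
EOF

# Insert the Felix environment variable right before the comment line,
# exactly as the command above does to the full manifest.
sed -i '/^            # Use Kubernetes API as the backing datastore.$/i \ \ \ \ \ \ \ \ \ \ \ \ - name: FELIX_ALLOWIPIPPACKETSFROMWORKLOADS\n              value: "true"' snippet.yaml

cat snippet.yaml
```

After running this, the FELIX_ALLOWIPIPPACKETSFROMWORKLOADS variable appears in the container's env list ahead of the matched comment, with the indentation preserved by the escaped spaces.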

Use knest to Create Guest Kubernetes Clusters

First, if you have not installed knest yet, download the binary for your platform from the project's latest release and install it. For instance:

curl -LO https://github.com/smartxworks/knest/releases/download/v0.6.0/knest-linux-amd64
sudo install knest-linux-amd64 /usr/local/bin/knest
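The install command copies the binary into /usr/local/bin and marks it executable in one step, which is why no separate chmod is needed after the download. As a quick illustration, here is a sketch using a stand-in script (knest-stub and knest-demo are made-up names for demonstration only):

```shell
# Create a stand-in "binary" so we can demonstrate install(1) without
# downloading anything.
printf '#!/bin/sh\necho knest-stub\n' > knest-stub

# install copies the file and sets mode 0755 (rwxr-xr-x) in one step.
install -m 0755 knest-stub ./knest-demo

./knest-demo   # -> knest-stub
```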

Next, use knest to create a guest Kubernetes cluster:

knest create demo --pod-network-cidr 10.245.0.0/16 --service-cidr 10.112.0.0/12

Notably, the guest Kubernetes cluster's pod network CIDR and service CIDR must not overlap with those of the host Kubernetes cluster. If your host Kubernetes cluster is a minikube cluster, its default pod network CIDR is 10.244.0.0/16 and its default service CIDR is 10.96.0.0/12. Thus, in the command above, we choose adjacent, non-overlapping CIDRs.
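To double-check your choices, here is a small illustrative sketch (pure-shell IPv4 arithmetic; `overlap` and `ip2int` are helper names made up for this example) that tests whether two CIDR blocks overlap:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Two CIDR blocks overlap iff their network addresses match under the
# shorter of the two prefixes.
overlap() {
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/} p mask
  p=$(( p1 < p2 ? p1 : p2 ))
  mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  if [ $(( $(ip2int "$n1") & mask )) -eq $(( $(ip2int "$n2") & mask )) ]; then
    echo yes
  else
    echo no
  fi
}

overlap 10.245.0.0/16 10.244.0.0/16   # guest vs host pod network -> no
overlap 10.112.0.0/12 10.96.0.0/12    # guest vs host service CIDR -> no
```

Both checks print "no", confirming the CIDRs passed to knest create above are safe to use alongside minikube's defaults.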

Expand Guest Kubernetes Clusters

The guest Kubernetes cluster created above contains only one control plane node and no worker nodes. If your host Kubernetes cluster has sufficient computing resources, you can expand the guest cluster by adding more control plane nodes or worker nodes with the knest scale command.

If you use minikube as the host Kubernetes cluster and your local machine has at least 10G of available memory, you can expand the minikube cluster's memory to 10G and then add one worker node to the guest Kubernetes cluster using knest:

knest scale demo --worker-machine-count 1

Create Persistent Guest Kubernetes Clusters

The guest Kubernetes cluster created above uses ContainerRootfs storage, which takes a directory in a Docker image as the rootfs of the virtual machine. This lets it fully leverage Docker's tool chain to build VM images.

However, it lacks persistence: if the virtual machine is shut down and rebooted, its rootfs is rebuilt from the Docker image. In other words, files cannot be stored persistently in the rootfs while the virtual machine is running.

If you want each node in the guest Kubernetes cluster to be persistent, you can create the cluster with knest's --persistent flag. For example:

knest create demo --pod-network-cidr 10.245.0.0/16 --service-cidr 10.112.0.0/12 --persistent --host-cluster-cni calico --machine-addresses 10.244.0.100-10.244.0.110

Note that in a persistent setup, the guest Kubernetes cluster nodes require static IPs. Therefore, we recommend choosing a CNI that supports static IP assignment for the host Kubernetes cluster, such as Calico or Kube-OVN.

When using knest to create a persistent guest Kubernetes cluster, you need to specify the host Kubernetes cluster's CNI type through --host-cluster-cni. This allows knest to annotate each VM in the appropriate manner so that static IPs are correctly assigned through the CNI.

You also need to provide a sufficient number of static IPs through --machine-addresses to meet the guest Kubernetes cluster's static IP allocation needs.
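As a small sanity check, you can count how many static IPs a range like the one above provides (simple last-octet arithmetic, assuming the range is inclusive and stays within a single /24):

```shell
# The --machine-addresses value used above is a dash-separated range;
# count the addresses it provides (last-octet arithmetic, within one /24).
range="10.244.0.100-10.244.0.110"
start=${range%-*}    # 10.244.0.100
end=${range#*-}      # 10.244.0.110
lo=${start##*.}
hi=${end##*.}
count=$(( hi - lo + 1 ))
echo "static IPs available: ${count}"   # -> static IPs available: 11
```

Eleven addresses comfortably covers the single-control-plane cluster created above, with headroom for additional nodes added later via knest scale.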

In Summary

It is not an easy job to implement Kubernetes as a Service in an enterprise data center, as it requires the integration of multiple platforms, including a virtualization platform, a Kubernetes management and O&M platform, and user consoles. Built on Virtink, knest enables the creation, operation and maintenance of efficient, secure and low-overhead virtualized Kubernetes clusters with a single command, serving as an ideal building block for data center KaaS.
