By Feng Ye, Software Engineering Manager at SmartX

Today, we are excited to announce the Virtink project, an open-source and lightweight virtualization add-on for Kubernetes. 

Unlike the KubeVirt project, which supports legacy hardware emulation and desktop virtualization scenarios, Virtink is focused on running modern cloud-based virtualized workloads on Kubernetes. Virtink is built on top of the modern Cloud Hypervisor and can be installed on any Kubernetes cluster running on x86 or ARM CPUs with hardware-assisted virtualization enabled.

Moreover, we are also releasing the knest open-source project. knest is a command-line tool that supports Kubernetes-in-Kubernetes scenarios by leveraging the capabilities of Virtink. It enables DevOps teams to easily create virtualized Kubernetes clusters on an existing bare-metal Kubernetes cluster without a traditional virtualization platform.

Background

Since its birth, Kubernetes has brought revolutionary changes to the cloud native landscape. With powerful distributed scheduling, intuitive cluster resource abstractions and rich third-party extensions, it has become the de facto cloud native platform and is attracting ever more applications and middleware to migrate onto it.

However, in the earlier age of cloud computing and virtualization, many applications ran in virtual machines. While some applications have been modified or even rewritten for Kubernetes as part of the containerization trend, many still haven't been containerized and thus cannot leverage the benefits of Kubernetes.

Moreover, some newly developed system-level applications may need a kernel different from the host's and thus require a fully virtualized runtime environment. This leads to a difficult yet increasingly common situation where containers and virtual machines co-exist but cannot be co-managed, making operations hard and costly. Enabling Kubernetes to host both containerized and virtualized workloads is a viable solution.

As a pioneer in this field, the KubeVirt project has attracted a lot of attention over the years. KubeVirt is more focused on supporting traditional virtualization workloads. However, as it takes on more functionality from traditional virtualization platforms, its code base keeps growing in complexity, and the additional overhead of running each virtual machine is not negligible. Virtink, on the other hand, is entirely focused on modern cloud workloads. Built on the latest virtualization technology, Virtink provides users with a more lightweight and secure virtualization management engine for cloud workloads.

KubeVirt is Designed for Traditional Virtualization Workloads

In order to provide most capabilities of traditional virtualization platforms, KubeVirt chose libvirt and QEMU as the underlying hypervisor. KubeVirt needs to run libvirtd and launcher processes in each pod that holds a virtual machine. In our observation, each libvirtd process occupies more than 30MB of memory and each launcher process occupies about 80MB, so the additional memory overhead per virtual machine exceeds 110MB. With a large number of virtual machines, this overhead becomes quite significant.

Moreover, KubeVirt mostly uses the QCOW2 virtual disk image format. Packaging a QCOW2 image is a relatively slow and complicated process: it often takes tens of minutes to build one, and the workflow differs substantially from building container images.

Due to its non-negligible overhead and time-consuming disk image building process, KubeVirt is a relatively heavy virtualization solution and is more suitable for traditional virtualization workloads.

Virtink is Designed for Modern Cloud-based Virtualization Workloads

We have found that virtualization on Kubernetes is mostly used to run modern operating systems and server-side workloads. These workloads neither need desktop consoles like VNC nor rely on legacy hardware emulation. They do, however, demand lower virtualization overhead, higher security and faster boot speed. That is where Virtink comes in.

Virtink, short for “Virtualization in Kubernetes”, has the following design goals:

  1. Be totally Kubernetes-native and deployable on canonical Kubernetes clusters. It can be used, managed and upgraded via Kubernetes API.
  2. Use Cloud Hypervisor as the underlying virtual machine monitor (VMM) to support cloud workloads only. Reduce management overhead as much as possible and keep the code simple.
  3. Stay within the Kubernetes network and storage ecosystem as much as possible. Most CNI and CSI plugins should work with Virtink without further changes.
  4. Be a perfect choice to run fully isolated Kubernetes clusters in a host Kubernetes cluster.
  5. Provide a way to build VM images as easy and fast as docker build.
  6. Run on x86 and ARM machines.
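
To give a feel for the Kubernetes-native API, here is a sketch of what a minimal Virtink VirtualMachine manifest might look like. The API group, version, image names and field layout below are assumptions drawn from our reading of the project README and may differ between releases:

```yaml
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: ubuntu-vm
spec:
  instance:
    memory:
      size: 1Gi                                   # guest memory
    kernel:
      image: smartxworks/virtink-kernel-5.15.12   # image used for direct kernel boot
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
  volumes:
    - name: ubuntu
      containerRootfs:                            # a container image used as the VM rootfs
        image: smartxworks/virtink-container-rootfs-ubuntu
        size: 4Gi
```

Like any other Kubernetes resource, such a manifest would be applied with kubectl apply and managed through the standard API machinery.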

How We Made Virtink Lightweight

Using Lightweight Cloud Hypervisor

Cloud Hypervisor is an open-source KVM virtual machine monitor written in Rust. It mainly focuses on supporting modern cloud workloads, and its main advantages are security and light weight. Because Cloud Hypervisor emulates only a minimal set of hardware, it reduces memory overhead and minimizes the attack surface.

No Additional Process Overhead

Virtink does not need to run libvirtd or launcher processes for each VM, eliminating any long-running overhead beyond the VM process itself.

Before launching the VM, Virtink starts a “prerunner” process to set up the network and build the virtual machine configuration. Since this process exits before the VM is launched, it incurs no long-term memory or CPU overhead. Virtual machine status monitoring is handled by a daemon process running on each node, which watches the VM's running status via the Cloud Hypervisor API and issues management commands when necessary.
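
For reference, Cloud Hypervisor exposes its management API as REST endpoints over a local Unix socket, which is how a per-node daemon can poll VM state without any long-lived helper process. A sketch (the socket path here is an assumption; the vm.info and vm.pause endpoints are part of Cloud Hypervisor's documented API):

```shell
# Query the running state of a VM through Cloud Hypervisor's REST API
curl --unix-socket /var/run/virtink/ch.sock \
  -X GET http://localhost/api/v1/vm.info

# Issue a management command (e.g. pause) over the same socket
curl --unix-socket /var/run/virtink/ch.sock \
  -X PUT http://localhost/api/v1/vm.pause
```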

Supporting Container Images as the Virtual Machine rootfs

Cloud Hypervisor supports direct kernel boot. By specifying a kernel and a rootfs image, Virtink can launch the virtual machine right away. Rapid boot speed is not the only advantage: since no bootloader or UEFI partition is required, building a rootfs is much easier than building a bootable disk image.

Built on the direct kernel boot feature of Cloud Hypervisor, Virtink can take a container image as a rootfs and boot a virtual machine from it. This means the Docker toolchain can be fully utilized to build virtual machine images, remarkably accelerating their building and publishing. Shown below is an example of a Dockerfile used to build an Ubuntu rootfs.
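
A minimal sketch of such a Dockerfile (the base image tag and package choices are illustrative, not the exact recipe from the demo):

```dockerfile
# The resulting image filesystem is used directly as the VM rootfs;
# with direct kernel boot, no bootloader or UEFI partition is needed.
FROM ubuntu:22.04

RUN apt-get update && \
    apt-get install -y --no-install-recommends systemd openssh-server && \
    rm -rf /var/lib/apt/lists/*
```

The image builds with an ordinary docker build and can be pushed to any registry, just like a regular container image.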

A Demo

(https://asciinema.org/a/509484)

First-Class Support for Kubernetes in Kubernetes

Currently, most public cloud platforms provide direct support for Kubernetes, making it easy to create, operate and maintain Kubernetes clusters on them. Achieving similar capabilities in private data centers, however, requires considerable effort and budget. A virtualization platform that is both capable and Kubernetes-friendly is crucial to easily creating and operating dozens of Kubernetes clusters.

Virtink provides a simpler and more lightweight solution to this problem. Virtink is well suited to hosting virtual Kubernetes clusters. Moreover, thanks to the extremely low overhead of Virtink virtual machines, more virtual Kubernetes nodes can be packed onto the same hardware than with KubeVirt.

In addition, we have also developed knest, a command line tool that can help create nested Kubernetes clusters on an existing host Kubernetes cluster. With this tool, it is possible to create any number of nested Kubernetes clusters rapidly to achieve Kubernetes-as-a-service.
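
As an illustration, creating a nested cluster can be as simple as a single command. The command shape below is an assumption; consult the knest README for the current CLI:

```shell
# Create a nested Kubernetes cluster named "quickstart"
# on the current host cluster
knest create quickstart
```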

(https://asciinema.org/a/509497)

Roadmap

Now at v0.8, Virtink provides features such as CNI network and CSI storage integration, and runs on both x86 and ARM machines. Features like live migration, PCI device passthrough (SR-IOV NICs, GPUs, etc.), vCPU binding and virtual disk hot-plugging are all on the roadmap.

The current version of knest is v0.2, which supports creating, expanding and scaling nested Kubernetes clusters. Future versions will keep improving both usability and cluster operation features.

Both Virtink and knest are hosted on GitHub, licensed under Apache License 2.0. Feel free to give us any feedback on the projects' issue boards. PRs are also highly welcome.
