What Is KubeVirt? Bringing Virtual Machines to Kubernetes

Written By:
Founder & CTO
June 20, 2025

KubeVirt is a Kubernetes-native virtualization solution that allows developers and operators to run and manage virtual machines (VMs) side by side with containerized workloads on a shared Kubernetes infrastructure. It bridges the traditional world of virtual machines with the modern, fast-moving world of Kubernetes, effectively transforming Kubernetes into a multi-workload platform that can run both containerized applications and legacy virtual machines simultaneously.

By extending Kubernetes through Custom Resource Definitions (CRDs), KubeVirt allows developers to use the same Kubernetes API and tooling they're already familiar with to define, schedule, manage, and scale VMs. This makes it significantly easier to integrate VM-based applications into modern DevOps workflows without abandoning existing legacy workloads or requiring separate infrastructure for VMs and containers.

In an era where hybrid cloud, edge computing, and cloud-native infrastructure are becoming standard across industries, KubeVirt offers a powerful solution to unify and modernize application deployment and management without compromise.

Why Developers Should Care About KubeVirt
Unified Developer Experience: One API for All Workloads

KubeVirt empowers developers by bringing virtual machines into the Kubernetes control plane, allowing them to work with both VMs and containers using a single declarative interface. Instead of managing a virtual machine through a traditional hypervisor like VMware vSphere, developers can define VM specs in YAML and apply them via kubectl, just like they would for a pod, deployment, or service.

For example, developers can use GitOps tools like ArgoCD or Flux to deploy both containerized applications and virtual machines in a fully automated, version-controlled manner. This unification significantly reduces complexity and makes it easier to adopt infrastructure-as-code practices even for VM-based applications. Whether you're deploying a stateless microservice or a full Linux VM running Oracle DB, the operational model remains the same.
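As a rough sketch of what this looks like in practice, an Argo CD Application can point at a single Git path that holds VM and container manifests alike; the repository URL, path, and namespaces below are placeholders:

```yaml
# Hypothetical Argo CD Application syncing a Git path that contains both
# Deployment and KubeVirt VirtualMachine manifests. Repo URL, path, and
# namespaces are placeholders, not real endpoints.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hybrid-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra.git   # placeholder repository
    targetRevision: main
    path: environments/prod                         # holds VM + container manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: hybrid-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert out-of-band changes
```

Because the VM is just another manifest in the repo, it gets the same review, diff, and rollback workflow as every container workload.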

This also eliminates the need for context switching. Developers no longer have to learn new APIs or tools just to manage virtualized workloads. Everything happens through Kubernetes APIs and CLI tools like kubectl and virtctl, giving developers a consistent workflow and increasing productivity.

Incremental Modernization: Coexist with Legacy Without Rewriting

One of the biggest challenges companies face in modernization is dealing with legacy applications that cannot be easily containerized. These might be large, stateful monoliths built in older programming languages, or apps with OS-level dependencies that are incompatible with container runtimes.

KubeVirt allows these legacy applications to run in VMs within the same Kubernetes cluster that hosts your microservices. This means organizations can move to Kubernetes without having to rewrite or refactor all their existing applications. It supports a phased modernization strategy, enabling teams to move gradually rather than adopting a risky all-or-nothing approach.

For example, you might run a legacy VM that hosts a proprietary enterprise ERP system and expose it to services running in containers via Kubernetes Services or Ingress. Over time, you can containerize parts of that system, replacing or wrapping legacy components until you're ready to decommission the VM entirely.
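To sketch that wiring: KubeVirt runs each VM inside a launcher pod, and labels from the VM template propagate to that pod, so an ordinary Service can front the VM. The label and port below are hypothetical:

```yaml
# Hypothetical Service fronting a legacy VM. The VM's template metadata is
# assumed to carry the label app: legacy-erp, which propagates to the
# virt-launcher pod so the selector matches it like any other pod.
apiVersion: v1
kind: Service
metadata:
  name: legacy-erp
spec:
  selector:
    app: legacy-erp      # must match the labels in the VM's template metadata
  ports:
    - protocol: TCP
      port: 8080         # port exposed to other workloads in the cluster
      targetPort: 8080   # port the application listens on inside the VM
```

Containerized services then reach the legacy system at `legacy-erp:8080`, exactly as they would reach any other Service.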

This hybrid approach ensures business continuity while enabling digital transformation.

Live Migration and High Availability in Kubernetes

KubeVirt brings powerful enterprise-grade VM features like live migration, high availability, and node evacuation into the Kubernetes world. These features are critical for production environments where uptime is a key business requirement.

Live migration allows a running VM to move from one node to another without shutting down, minimizing application downtime during maintenance or when responding to infrastructure failures. This is particularly useful in Kubernetes clusters running in hybrid cloud setups, where nodes might need to be rebooted, drained, or upgraded.

Combined with Kubernetes scheduling, taints/tolerations, and readiness probes, KubeVirt supports automatic failover of VM workloads to healthy nodes, enabling self-healing VM-based applications. For example, if a physical node hosting a VM fails, Kubernetes can automatically reschedule that VM to another node in the cluster, just like it does with pods.

This feature gives Kubernetes-native systems VM-level fault tolerance, making it an attractive platform for mission-critical workloads.

Optimized Resource Utilization and Scheduling

One of the often-cited benefits of Kubernetes is its ability to maximize hardware usage through intelligent scheduling and resource allocation. KubeVirt brings these same benefits to VMs, enabling smarter scheduling decisions based on CPU, memory, and NUMA topology.

Unlike traditional virtualization platforms, which often rely on statically sized VMs that waste idle resources, KubeVirt can run VMs in Kubernetes pods alongside containers, allowing for dynamic resource allocation. It also supports features like HugePages, CPU pinning, and dedicated CPU cores, giving performance-critical VMs the hardware-level isolation they need.

This reduces overprovisioning and increases workload density, letting you run more applications on the same hardware while still meeting performance SLAs. Developers benefit from a more predictable environment that scales horizontally as needed, without manually allocating or deallocating VMs.

Performance and Security with QEMU + KVM

Under the hood, KubeVirt uses QEMU and KVM (Kernel-based Virtual Machine) to run virtual machines. These technologies are battle-tested and widely adopted in the industry, providing near-native performance and robust isolation.

KubeVirt integrates these with Kubernetes to run VMs in specialized pods, allowing for tight control over CPU, memory, and device access. This combination ensures high-performance execution and provides strong security boundaries, which is especially important for multi-tenant clusters and regulated industries.

From a security perspective, KubeVirt benefits from Kubernetes-native capabilities such as RBAC, Pod Security Admission (the successor to PodSecurityPolicies), AppArmor/SELinux, and network policies, all of which can be applied to VMs. This enables developers and platform teams to enforce consistent security policies across all workloads, regardless of whether they're running as containers or virtual machines.
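Because each VM runs inside a pod, a standard NetworkPolicy applies to it unchanged. A hypothetical policy restricting access to a database VM (the labels and port are illustrative):

```yaml
# Only pods labeled app: frontend may reach the VM on TCP 5432.
# kubevirt.io/domain is a label KubeVirt sets on the virt-launcher pod;
# the value legacy-db is a hypothetical VM name.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-vm-db
spec:
  podSelector:
    matchLabels:
      kubevirt.io/domain: legacy-db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 5432
```

The same policy language thus governs containers and VMs alike, with enforcement handled by the cluster's CNI plugin.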

How KubeVirt Works: The Developer’s Perspective
Installation and Architecture Overview

KubeVirt is installed into a Kubernetes cluster using an Operator. This operator handles the deployment of several critical components:

  • virt-api: Validates VM-related resources using Kubernetes’ API extensions.

  • virt-controller: Schedules and manages virtual machines by creating the associated pods and monitoring VM lifecycle events.

  • virt-handler: Runs on each node and manages the lifecycle of VMs on that node.

  • libvirt and qemu-kvm: Run inside specialized pods that host the virtual machines.

Once installed, developers can create virtual machines using Kubernetes Custom Resource Definitions (CRDs). A simple YAML file defines the virtual machine and its spec, just like a pod or a deployment. For example:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: containerdisk
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo

Developers can then start, stop, or migrate the VM using the virtctl CLI tool, which acts as an extension of kubectl for managing virtual machines. This makes it extremely easy to integrate VM lifecycle operations into existing CI/CD pipelines and automation scripts.

Real-World Developer Use Cases
Seamless Dev/Test Environments

With KubeVirt, developers can spin up full multi-VM environments alongside microservices locally using Minikube or Kind. This is extremely useful for developing and testing distributed systems that rely on combinations of VMs and containers.

For example, a developer building a hybrid application involving a Windows-based application server (running in a VM) and a modern frontend (running as a container) can now run both locally in a single cluster, test interactions, and ship with confidence.

Hybrid Cloud and Edge Computing

KubeVirt fits naturally into hybrid cloud and edge computing scenarios, where some applications might need to run close to the user on edge nodes in a VM, while others can run centrally in the cloud as containers. Developers can build apps that intelligently split functionality across compute platforms without having to switch orchestration tools or deployment pipelines.

GPU-Accelerated Workloads and AI/ML Pipelines

KubeVirt supports GPU passthrough, making it ideal for AI/ML workloads that depend on CUDA or other hardware acceleration libraries not well-supported in containers. Developers can run legacy training workloads inside VMs while orchestrating data ingestion, pre/post-processing, and model serving using containers, all in the same Kubernetes environment.
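Passthrough is expressed directly in the VM's device list. A fragment as a sketch, assuming a GPU device plugin on the node advertises the resource name below (the exact `deviceName` depends on your hardware and driver setup):

```yaml
# VirtualMachine template fragment (not a complete manifest) attaching a
# host GPU to the guest. The resource name is an example and varies by
# GPU model and device-plugin configuration.
spec:
  template:
    spec:
      domain:
        devices:
          gpus:
            - deviceName: nvidia.com/GP102GL_Tesla_P40   # example resource name
              name: gpu1
```

The scheduler then places the VM only on nodes that expose that resource, the same way it handles extended resources for pods.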

Advantages Over Traditional Virtualization

KubeVirt provides several significant advantages over traditional hypervisors and virtualization platforms:

  • Unified Infrastructure: Developers no longer need separate infrastructure or APIs for VMs and containers.

  • GitOps Friendly: Manage VM definitions in Git, track changes, and deploy declaratively.

  • Lower Overhead: No licensing costs (unlike VMware); leaner operational model.

  • Built-in High Availability: Automatically migrate or reschedule VMs during node failures.

  • Cloud-Native Integration: Seamlessly integrates with Kubernetes-native storage (like Ceph, Longhorn), networking (like Calico, Cilium), and observability tools (Prometheus, Grafana).
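As one concrete example of that storage integration, disk images can be imported onto cluster storage with a DataVolume, a resource provided by the Containerized Data Importer (CDI), a KubeVirt companion project. The image URL and sizes below are placeholders:

```yaml
# Hypothetical DataVolume (requires CDI to be installed) that downloads a
# disk image into a PVC on the cluster's storage backend, ready to attach
# to a VirtualMachine. The source URL is a placeholder.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-dv
spec:
  source:
    http:
      url: https://example.com/images/ubuntu.qcow2   # placeholder image URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```

The resulting PVC can be referenced from a VM's `volumes` section, so VM disks live on the same storage classes as container workloads.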

Getting Started with KubeVirt
Prerequisites
  • Kubernetes cluster (v1.20+)

  • 2+ worker nodes (for migration testing)

  • kubectl and virtctl installed

Installation Steps
  1. Install the KubeVirt operator:

kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/vX.Y.Z/kubevirt-operator.yaml

  2. Deploy the KubeVirt custom resource, which triggers the operator to roll out the remaining components:

kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/vX.Y.Z/kubevirt-cr.yaml

  3. Install the virtctl CLI and create your first virtual machine YAML.

  4. Start the VM using:

virtctl start ubuntu-vm

  5. Access the VM's serial console:

virtctl console ubuntu-vm

Final Thoughts: Future-Proof Infrastructure for Developers

KubeVirt redefines how developers interact with infrastructure. By embedding virtual machines into the Kubernetes ecosystem, it allows teams to move fast without leaving legacy workloads behind. It gives developers one consistent platform to deploy, scale, observe, and secure both containers and VMs using familiar tooling and practices.

For any development team looking to modernize without disruption, unify tooling, or embrace hybrid and edge computing, KubeVirt is not just a stopgap, it's a strategic enabler.
