SUSE Harvester

Open-Source Hyper-Converged Infrastructure built on Kubernetes

01

Overview

SUSE Harvester (officially rebranded as SUSE Virtualization starting with v1.4) is an open-source hyper-converged infrastructure (HCI) solution purpose-built for running virtual machines on Kubernetes. It combines KubeVirt for VM management, Longhorn for distributed storage, and an embedded RKE2 Kubernetes cluster into a single, turnkey platform. The latest release is v1.7 (January 2026), which adds ARM64 support and upgrades the base OS to SL Micro 6.1.

Harvester is positioned as a modern, cloud-native alternative to legacy virtualization platforms such as VMware vSphere and Proxmox VE. It is fully free and open source under the Apache 2.0 license, with optional commercial support from SUSE.

What It Does

  • Runs production VMs on bare-metal servers using KubeVirt
  • Provides distributed storage via Longhorn
  • Manages networking with Multus and bridge CNI
  • Offers a web-based UI for VM lifecycle management

Key Differentiators

  • No per-core or per-socket licensing fees
  • Kubernetes-native — VMs are CRDs managed by kubectl
  • First-class integration with SUSE Rancher
  • Unified infrastructure for VMs and containers

Key Concept

Harvester treats VMs as first-class Kubernetes resources. A virtual machine is a VirtualMachine custom resource managed by KubeVirt, which means you can manage your entire VM fleet with GitOps, Helm charts, or standard Kubernetes tooling.

02

Architecture

Harvester is installed as a bare-metal operating system image. Each node boots a customized SLE Micro-based OS (SL Micro 6.1 as of v1.7) with an embedded RKE2 Kubernetes cluster. All Harvester services run as Kubernetes workloads on top of this foundation.

┌──────────────────────────────────────────────────────────┐
│                    Harvester UI / API                    │
├──────────────────────────────────────────────────────────┤
│     KubeVirt     │    Longhorn    │      Harvester       │
│ (VM Management)  │   (Storage)    │       Network        │
│                  │                │      Controller      │
├──────────────────────────────────────────────────────────┤
│                Embedded RKE2 (Kubernetes)                │
│   ┌─────────┐   ┌─────────┐   ┌─────────┐                │
│   │  etcd   │   │  Canal  │   │ Multus  │                │
│   └─────────┘   └─────────┘   └─────────┘                │
├──────────────────────────────────────────────────────────┤
│              Harvester OS (SLE Micro-based)              │
├──────────────────────────────────────────────────────────┤
│                   Bare-Metal Hardware                    │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐                │
│  │  Node 1  │  │  Node 2  │  │  Node 3  │                │
│  │ CPU/RAM  │  │ CPU/RAM  │  │ CPU/RAM  │                │
│  │ NVMe/SSD │  │ NVMe/SSD │  │ NVMe/SSD │                │
│  └──────────┘  └──────────┘  └──────────┘                │
└──────────────────────────────────────────────────────────┘

Layer Breakdown

Harvester OS

An immutable, minimal Linux distribution based on SLE Micro. Designed for unattended operation with transactional updates and read-only root filesystem.

Embedded RKE2

A hardened, FIPS-compliant Kubernetes distribution (RKE2) ships embedded. Users never install or manage Kubernetes separately — it is part of the Harvester appliance.

KubeVirt

KubeVirt extends Kubernetes with VM management capabilities. It uses libvirt and QEMU/KVM under the hood to run full virtual machines as pods.

Longhorn

Longhorn provides replicated, distributed block storage. VM disks are Longhorn volumes with configurable replica counts for data redundancy.

Production Recommendation

Deploy a minimum of 3 nodes for production. This ensures etcd quorum, Longhorn storage replica distribution, and VM live migration capabilities during maintenance windows.

03

VM Management

Harvester provides a full VM lifecycle management experience through its web UI, API, and standard Kubernetes tooling. Virtual machines are represented as VirtualMachine custom resources managed by KubeVirt.

Creating Virtual Machines

VMs can be created from the Harvester UI, via kubectl, or through the Rancher UI. Each VM is defined by a VirtualMachine CRD that specifies CPU, memory, disks, networks, and cloud-init configuration.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu-vm
  namespace: default
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 4
        memory:
          guest: 8Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: ubuntu-root-pvc
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: changeme
              chpasswd: { expire: false }
              ssh_pwauth: true

VM Templates

Harvester supports VM templates that pre-define resource configurations, disk images, and cloud-init scripts. Templates can be versioned and shared across namespaces for standardized VM provisioning.
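As an indicative sketch, a template is itself a Kubernetes resource in the `harvesterhci.io` API group. The exact spec fields vary by release, so treat the names below (`defaultVersionId`, the `ubuntu-base` template) as hypothetical illustrations and use the UI or `kubectl explain` on your cluster for the authoritative schema:

```yaml
# Hypothetical sketch of a VM template resource — field names are indicative
# only; verify against your Harvester version's CRD schema.
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplate
metadata:
  name: ubuntu-base
  namespace: default
spec:
  description: Standard Ubuntu server template
  defaultVersionId: default/ubuntu-base-v1   # points at a template version resource
```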

Key VM Operations

Lifecycle

  • Start / Stop / Restart — Standard power operations
  • Pause / Unpause — Freeze VM execution in-place
  • Console access — VNC and serial console via UI
  • Edit — Hot-plug CPU/memory and NICs (v1.7+) where supported

Snapshots & Backups

  • VM Snapshots — Point-in-time snapshots of VM state + disks
  • VM Backups — Export to S3 or NFS backup targets
  • Restore — Create new VM from snapshot or backup
  • Clone — Full VM cloning from existing VMs

Live Migration

Live migration allows running VMs to move between Harvester nodes with zero downtime. This is essential for node maintenance, rolling upgrades, and workload rebalancing. Harvester triggers live migration automatically during upgrade operations.
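At the KubeVirt layer, a manual migration is just another custom resource; the Harvester UI's "Migrate" action drives the same mechanism. A minimal sketch (the VM name `ubuntu-vm` is an example):

```yaml
# Requests a live migration of a running VirtualMachineInstance; KubeVirt's
# scheduler picks the target node unless constrained by node selectors.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-ubuntu-vm
  namespace: default
spec:
  vmiName: ubuntu-vm   # name of the running VMI to move
```

Watching the resource's status (e.g. with `kubectl get vmim -w`) shows the migration phase progressing from Scheduling through Running to Succeeded.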

Limitation

VMs using PCI passthrough (GPU, SR-IOV) or hostPath volumes cannot be live migrated. Plan maintenance windows accordingly for these workloads.

04

Storage

Harvester uses Longhorn as its integrated distributed block storage engine. Every VM disk is backed by a Longhorn volume by default, providing replication, snapshots, and backup capabilities out of the box. Since v1.4, Harvester also supports third-party CSI drivers for VM data disks, enabling integration with external storage solutions for specific performance or infrastructure needs (note: VM backup currently works only with the Longhorn v1 data engine).

Storage Classes

Harvester ships with a default longhorn storage class. Custom storage classes can be created to control replica count, data locality, and reclaim policies.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harvester-longhorn-high-perf
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  dataLocality: "best-effort"
  diskSelector: "nvme"
reclaimPolicy: Delete
volumeBindingMode: Immediate

Volume Management

Volume Types

  • Image volumes — Imported from ISO/qcow2 images in the image library
  • Blank volumes — Empty volumes for data disks
  • Cloned volumes — Created from existing volume snapshots
  • Uploaded volumes — Directly uploaded via UI or API

Performance Tuning

  • Use NVMe/SSD disks for Longhorn storage nodes
  • Set dataLocality: best-effort to prefer local reads
  • Separate OS disk from Longhorn data disks
  • Consider the Longhorn V2 Data Engine (SPDK-based) for NVMe disks requiring lowest latency
  • Monitor IOPS per volume via Longhorn dashboard

Backup Targets

Longhorn supports exporting volume backups to external targets for disaster recovery. Harvester supports two backup target types:

  • S3-compatible storage — AWS S3, MinIO, or any S3-compatible endpoint
  • NFS — Network file system mount for on-premises backup
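Under the hood, the backup target is a Harvester setting whose value is a JSON document; it is normally configured through the UI (Settings → backup-target). The sketch below shows the general shape for an S3 target — treat the exact field names and the endpoint/bucket values as indicative and confirm them against your version's settings documentation:

```yaml
# Indicative sketch of the backup-target setting resource. Endpoint, bucket,
# and credential values are placeholders.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: backup-target
value: >-
  {"type":"s3","endpoint":"https://s3.example.com","bucketName":"harvester-backups",
   "bucketRegion":"us-east-1","accessKeyId":"<key>","secretAccessKey":"<secret>"}
```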

Best Practice

Configure a backup target before deploying production VMs. Longhorn volume backups are incremental after the first full backup, minimizing storage and network usage for ongoing protection.

05

Networking

Harvester networking is powered by Multus and the bridge CNI plugin, enabling VMs to attach to multiple networks including VLAN-tagged physical networks. Canal (Calico + Flannel) provides the default cluster network for Kubernetes pods.

Network Types

Default Management Network

The management network uses the Kubernetes pod network (masquerade mode). VMs get cluster-internal IPs and can reach external networks via NAT. Suitable for simple deployments and testing.

Production VLAN Networks

VLAN networks connect VMs directly to physical network segments via Multus and bridge CNI. VMs receive IPs from external DHCP or static assignment, appearing as native hosts on the VLAN.

Creating a VLAN Network

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100
  namespace: default
  annotations:
    network.harvesterhci.io/route: '{"mode":"auto"}'
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan100",
      "type": "bridge",
      "bridge": "mgmt-br",
      "vlan": 100,
      "ipam": {}
    }

Network Isolation

Network isolation is achieved through VLAN segmentation at the physical switch level. Different VM workloads can be placed on separate VLANs to enforce tenant or security-zone boundaries. Harvester supports multiple VLAN networks per VM for multi-homed configurations.

Load Balancer Integration

For Kubernetes clusters running on Harvester VMs, the Harvester Cloud Provider can provision load balancers backed by Harvester's DHCP-allocated IPs or a configured IP pool, enabling LoadBalancer-type services without an external LB appliance.
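From inside a guest cluster, this looks like a plain `LoadBalancer` Service, optionally steered by Harvester-specific annotations. The annotation key below is indicative — confirm the exact keys and values against the Harvester cloud provider documentation for your version:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # IPAM mode for the provisioned LB ("dhcp" or "pool") — indicative key,
    # verify in the cloud provider docs.
    cloudprovider.harvesterhci.io/ipam: dhcp
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```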

Network Planning

Ensure your physical switches have the required VLANs trunked to Harvester node ports. Each Harvester node needs at least one NIC for the management network. For production, use bonded NICs for redundancy and dedicate separate interfaces for VLAN traffic and storage replication.

06

Rancher Integration

Harvester integrates deeply with SUSE Rancher, providing a unified management plane for both virtualization infrastructure and Kubernetes clusters. Rancher can import Harvester clusters and use them as infrastructure providers for downstream K8s clusters.

Integration Capabilities

Harvester in Rancher

  • Import Harvester cluster into Rancher for centralized management
  • Manage VMs, images, and networks from the Rancher UI
  • Multi-cluster management across multiple Harvester clusters
  • RBAC and authentication integration via Rancher

Provisioning K8s on Harvester

  • Harvester Node Driver — Provision RKE2/K3s clusters on Harvester VMs
  • Automated VM creation for K8s control plane and workers
  • Cluster autoscaling backed by Harvester VM provisioning
  • Consistent infrastructure-as-code workflow

Harvester Cloud Provider

The Harvester Cloud Provider is a Kubernetes cloud controller manager (CCM) that runs inside guest K8s clusters provisioned on Harvester. It provides:

  • Load Balancer — Automatic provisioning of L4 load balancers for Service type LoadBalancer
  • Node metadata — Populates node topology labels and zone information
  • CSI integration — Harvester CSI driver allows guest clusters to provision persistent volumes backed by Harvester/Longhorn storage

Workflow

The typical production workflow is: Install Harvester on bare metal → Import into Rancher → Provision RKE2 clusters using the Harvester node driver → Deploy workloads on guest clusters with the Harvester cloud provider for LB and storage.

07

Backup & DR

Harvester provides VM-level backup and restore capabilities, plus the ability to back up the Harvester cluster configuration itself for disaster recovery scenarios.

VM Backups

VM backups capture the complete state of a virtual machine, including all attached volumes and metadata. Backups are stored on a pre-configured backup target (S3 or NFS) and can be used to restore VMs on the same or a different Harvester cluster.

  • On-demand backups — Triggered manually from the UI or API
  • Scheduled backups — Cron-based backup schedules per VM
  • Incremental — After the initial full backup, subsequent backups are incremental

Restore Procedures

Same Cluster

  • Restore VM from backup to the same Harvester cluster
  • Option to replace existing VM or create new
  • Restore specific volumes from a backup

Cross-Cluster

  • Point a second Harvester cluster to the same backup target
  • Restore VMs from backups created on the primary cluster
  • Use for site-level disaster recovery

Harvester Cluster Backup

Beyond VM-level backups, administrators should back up the Harvester cluster configuration including etcd snapshots, Kubernetes resource definitions, and Harvester settings. RKE2 provides automatic etcd snapshots that can be configured for external S3 storage.
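For reference, these are real RKE2 server options; on a standalone RKE2 cluster they live in `/etc/rancher/rke2/config.yaml`. On Harvester nodes the RKE2 configuration is managed by the appliance, so treat this as a sketch of what to configure (via the supported settings mechanism), not a file to hand-edit — endpoint and bucket names are placeholders:

```yaml
# RKE2 etcd snapshot options (standalone RKE2 syntax).
etcd-snapshot-schedule-cron: "0 */6 * * *"   # snapshot every 6 hours
etcd-snapshot-retention: 10                  # keep the 10 most recent snapshots
etcd-s3: true                                # also upload snapshots to S3
etcd-s3-endpoint: s3.example.com
etcd-s3-bucket: harvester-etcd-snapshots
etcd-s3-folder: prod-cluster
```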

Critical

Always test your restore procedure before relying on it for DR. Verify that backup targets are accessible from your DR site and that VM restore produces bootable, functional VMs with correct network configurations.

08

Upgrades

Harvester supports rolling upgrades that update the entire stack — OS, Kubernetes, KubeVirt, Longhorn, and Harvester components — with minimal disruption to running VMs.

Upgrade Process

  1. Pre-flight checks — Harvester validates cluster health, Longhorn volume replicas, and available disk space before starting
  2. Download upgrade image — New version ISO is uploaded or pulled from a configured repository
  3. Node-by-node rolling upgrade — Each node is cordoned, VMs are live migrated to other nodes, the node is upgraded, then uncordoned
  4. Post-upgrade validation — Cluster health checks confirm all components are running the new version
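The steps above can also be driven declaratively: the UI's upgrade button creates an Upgrade custom resource, which the upgrade controller reconciles. The shape below is indicative (resource names and the target version are examples; a matching Version resource must exist first):

```yaml
# Indicative sketch of starting an upgrade via the Upgrade CR.
apiVersion: harvesterhci.io/v1beta1
kind: Upgrade
metadata:
  name: hvst-upgrade-v1-7-0
  namespace: harvester-system
spec:
  version: v1.7.0
```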

Zero-Downtime Requirements

Prerequisites For Zero-Downtime Upgrades

  • Minimum 3 nodes — Must have capacity to absorb VMs during node drain
  • All VMs must be live-migratable (no PCI passthrough, no hostPath volumes)
  • Longhorn volumes must have healthy replicas on multiple nodes
  • Sufficient CPU/RAM headroom on remaining nodes during the upgrade window
  • Back up all VMs before starting an upgrade

Upgrade Troubleshooting

  • Stuck upgrades — Check the Upgrade custom resource status and Harvester controller logs
  • Failed VM migration — Identify non-migratable VMs and shut them down manually before retrying
  • Longhorn volume degraded — Wait for volume replica rebuilds to complete before proceeding to the next node
  • Rollback — Harvester does not support in-place rollback; restore from a pre-upgrade etcd snapshot if necessary

Warning

Skipping major versions is not supported. Upgrade sequentially through each minor version. Always read the release notes for breaking changes and required manual steps before starting an upgrade.

09

Comparison

How Harvester compares to other popular virtualization platforms:

| Feature | Harvester | Proxmox VE | VMware vSphere |
| --- | --- | --- | --- |
| License | Apache 2.0 (fully open source) | AGPLv3 (open source, paid support) | Proprietary (per-core subscription since Broadcom acquisition) |
| Cost | Free; optional SUSE support | Free; optional subscription | $$$ per-core subscription (72-core minimum), no more perpetual licenses |
| Hypervisor | KVM via KubeVirt (Type 1 on bare metal) | KVM + LXC (Type 1) | ESXi (Type 1) |
| Storage | Longhorn (distributed block) + third-party CSI | ZFS, Ceph, LVM, local | vSAN, NFS, iSCSI, FC |
| Networking | Multus + bridge CNI, VLANs | Linux bridge, OVS, VLANs | vDS, NSX-T, VLANs |
| K8s Integration | Native (is Kubernetes) | None built-in | vSphere Kubernetes Service (bundled in VCF/VVF) |
| Container Support | Native (Kubernetes pods + VMs) | LXC containers | vSphere Kubernetes Service (Kubernetes add-on) |
| Maturity | GA since 2021 (v1.0), growing ecosystem | Mature (15+ years) | Industry standard (20+ years) |
| Management UI | Harvester UI + Rancher | Proxmox web UI | vCenter Server |
| Live Migration | Yes (KubeVirt) | Yes | Yes (vMotion) |
Positioning

Harvester is best suited for organizations that want to converge VM and container workloads on a single Kubernetes-based platform, especially those already invested in the SUSE/Rancher ecosystem. For pure VM workloads with no Kubernetes requirement, Proxmox or vSphere may offer more mature tooling.

10

Licensing

Harvester is released under the Apache License 2.0, making it fully free and open source with no feature restrictions, no per-socket fees, and no node limits.

Free Community Edition

  • Full feature set with no artificial limitations
  • Community support via GitHub issues and Slack
  • All Harvester components are Apache 2.0 or similarly licensed
  • No phone-home, telemetry, or license key requirements

Paid SUSE Support

  • Commercial support subscriptions available from SUSE
  • 24/7 production support with SLA guarantees
  • Access to SUSE support engineers and knowledge base
  • Bundled support with Rancher Prime subscriptions

Key Advantage

Unlike VMware vSphere (which now requires per-core subscriptions with a 72-core minimum since the Broadcom acquisition), there are no per-core or per-socket licensing fees. A 100-node Harvester cluster costs the same in software licensing as a 3-node cluster: zero. This makes Harvester particularly attractive for organizations facing VMware cost increases of 200–350% under Broadcom's new licensing model.

11

Consultant's Checklist

Key items to verify when planning, deploying, or auditing a Harvester environment:

Pre-Deployment Planning

  • Minimum 3 nodes for production (HA + live migration)
  • Hardware: x86_64 (or ARM64 as of v1.7) CPUs with hardware virtualization; 8 GiB RAM minimum, with 32 GiB+ RAM and 16+ cores recommended for production
  • Dedicated NVMe/SSD disks for Longhorn storage
  • Network: VLANs trunked to node ports, bonded NICs
  • IPMI/BMC access for remote management
  • DNS entries for Harvester VIP

Day 1 Deployment

  • Configure backup target (S3/NFS) immediately
  • Set up VLAN networks for production VM traffic
  • Create VM templates for standard OS images
  • Import Harvester into Rancher for centralized management
  • Configure NTP on all nodes
  • Test VM live migration across all nodes

Day 2 Operations

  • Monitor Longhorn volume health and replica counts
  • Schedule regular VM backups with retention policies
  • Test VM restore from backup quarterly
  • Plan upgrade path — never skip minor versions
  • Monitor node resource utilization for capacity planning
  • Review Harvester release notes for security patches

Disaster Recovery

  • Document backup target configuration
  • Verify cross-cluster restore capability
  • Maintain etcd snapshot backups externally
  • Test full cluster rebuild from scratch
  • Keep ISO images for current and rollback versions
  • Document network configuration (VLANs, IPs, bonds)