Kubernetes on Mac: UTM vs Vagrant vs OrbStack — Which One Should You Actually Use?

The same production-grade cluster was built six different ways — three virtualization tools, each with a simple and a full HA topology. Here’s what was learned, what broke, and which tool actually makes sense for different use cases.

Why most “Kubernetes on Mac” tutorials are useless

Search for “Kubernetes homelab” and you’ll find hundreds of tutorials. Almost all of them use VirtualBox (which doesn’t work properly on Apple Silicon), or they use minikube/kind (which hides the complexity you actually need to learn). A few cover UTM. Almost none cover Vagrant on M-series Macs. And nobody is comparing them side by side with the same cluster architecture.

That’s the gap this post fills. The same Kubernetes cluster — both a simple setup and a full HA architecture with Vault PKI, etcd clustering, and HAProxy — was built across three different tools on the same Apple Silicon Mac (M4). Same architecture, same Ansible automation, different virtualization layer.

The goal isn’t to declare a “winner.” It’s to help pick the right tool for the right situation — whether that’s CKA exam prep, building a dev environment, or simulating production infrastructure.

UTM, Vagrant, and OrbStack — quick overview

UTM (QEMU)

UTM is a free, open-source virtualization app for macOS that uses QEMU under the hood and supports Apple’s Virtualization framework. It creates full virtual machines with separate kernels and their own network stacks. Out of the box, UTM provides a GUI for managing VMs, but it also exposes utmctl — a command-line tool for scripted VM creation and lifecycle management. Combined with cloud-init ISO images for automated provisioning, UTM VMs can be created entirely from the terminal. The VMs boot Ubuntu 24.04 ARM64 cloud images and connect via UTM’s shared network on the 192.168.64.0/24 subnet. Available at utm.app.
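The cloud-init side of that workflow can be sketched in a few lines of shell. This is a minimal NoCloud-style user-data/meta-data generator — the hostname, IP comment, and SSH key below are placeholders, not the repo’s actual values:

```shell
#!/bin/sh
# Minimal cloud-init files for one UTM VM (NoCloud datasource).
# Hostname and SSH key are illustrative placeholders.
set -eu

VM_NAME="master-1"
SSH_KEY="ssh-ed25519 AAAA...example"   # replace with a real public key

mkdir -p "cloud-init/${VM_NAME}"

# user-data: create a user and inject the SSH key
cat > "cloud-init/${VM_NAME}/user-data" <<EOF
#cloud-config
hostname: ${VM_NAME}
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ${SSH_KEY}
EOF

# meta-data: instance identity required by the NoCloud datasource
cat > "cloud-init/${VM_NAME}/meta-data" <<EOF
instance-id: ${VM_NAME}
local-hostname: ${VM_NAME}
EOF

echo "cloud-init files written for ${VM_NAME}"
```

On macOS, the two files can be packed into an ISO (for example with hdiutil makehybrid -iso -joliet) and attached to the VM before first boot; cloud-init picks them up automatically.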

Vagrant (+ QEMU provider & socket_vmnet)

Vagrant abstracts VM provisioning into a declarative Vagrantfile — define what machines are needed and Vagrant creates them. On Apple Silicon, VirtualBox isn’t an option. Instead, the vagrant-qemu plugin provides a QEMU-based provider that runs natively on ARM64. The challenge is networking: by default, QEMU on macOS can’t give VMs routable IPs accessible from the host. That’s where socket_vmnet comes in — a lightweight daemon (from the Lima project) that provides vmnet.framework support for QEMU. It runs as root but allows the QEMU process itself to stay unprivileged. With socket_vmnet, each Vagrant VM gets a real IP on the 192.168.105.0/24 subnet, accessible from the host and from other VMs. The tradeoff: each VM ends up with two network interfaces (one NAT for internet, one vmnet for inter-VM communication), and the Vagrantfile complexity is higher than expected.
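As a hedged sketch of what that declarative definition looks like, the script below writes a minimal multi-machine Vagrantfile for the vagrant-qemu provider. The box name, machine list, and provider option names are illustrative (check them against the plugin’s README); the real repo’s file is considerably more involved:

```shell
#!/bin/sh
# Generate a minimal multi-machine Vagrantfile for the QEMU provider.
# Box name and provider options are assumptions, not the repo's exact values.
set -eu

cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  # An ARM64 Ubuntu box for Apple Silicon (assumed name)
  config.vm.box = "perk/ubuntu-2204-arm64"

  %w[master-1 worker-1 worker-2].each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.provider "qemu" do |qe|
        qe.memory = "4G"
        qe.smp    = "2"
        # The second NIC (socket_vmnet, 192.168.105.0/24) is wired up
        # via provider-specific netdev options, omitted here for brevity.
      end
    end
  end
end
EOF

echo "Vagrantfile written; next: vagrant up --provider=qemu"
```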

OrbStack

OrbStack is a fast, lightweight alternative to Docker Desktop for macOS that also provides Linux virtual machines. Unlike UTM and Vagrant, OrbStack’s Linux machines share the host kernel — they’re not full VMs with separate kernels but rather lightweight environments running on top of macOS’s virtualization layer. The result is dramatically lower resource consumption: each machine uses only 1.3–3.0 GB of disk compared to the 20–40 GB that UTM and Vagrant VMs allocate. Creating a machine is as simple as orb create ubuntu noble machine-name and it appears in seconds. OrbStack has built-in single-node Kubernetes, but for these labs, the Linux machines feature is used to build multi-node clusters from scratch. Free for personal use. VMs connect on the 192.168.139.0/24 subnet.
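The create loop for the simple cluster’s six machines fits in a few lines of shell. This sketch (using the command form from this post) writes the commands to a plan file and only executes them when the orb CLI is actually present, so it is safe to run anywhere:

```shell
#!/bin/sh
# Plan (and, if OrbStack is installed, run) creation of the six
# simple-cluster machines. Command form follows the post.
set -eu

: > orb-plan.sh
for name in vault jump etcd-1 master-1 worker-1 worker-2; do
  echo "orb create ubuntu noble $name" >> orb-plan.sh
done

if command -v orb >/dev/null 2>&1; then
  sh orb-plan.sh    # machines appear in seconds
else
  echo "orb not found; plan written to orb-plan.sh"
fi
```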

Even the “simple” setup isn’t simple

Before diving into comparisons, one thing worth noting: the “simple” setup across all three tools isn’t a typical tutorial’s single-master-with-kubeadm approach. Every setup — simple and HA — includes a HashiCorp Vault server for PKI certificate management, a dedicated jump/bastion server, and a separate etcd node. Even the simple cluster runs 6 VMs: vault, jump, etcd-1, master-1, worker-1, and worker-2. The “simple” refers to a single control plane node (no HA), not a stripped-down architecture.

The HA setup scales to 11 VMs: everything in the simple setup, plus HAProxy for API server load balancing, two additional etcd nodes (3-node cluster), a second master, and a third worker. Kubernetes is installed the hard way — from raw binaries, no kubeadm. All TLS certificates are issued by Vault using a 3-tier CA hierarchy with separate CAs for Kubernetes, etcd, and the front proxy.
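To make the 3-tier CA hierarchy concrete, here is a hedged sketch of the issuance chain in Vault CLI terms: one root CA, one intermediate per trust domain (Kubernetes, etcd, front proxy), leaf certificates issued from the matching intermediate. Mount paths, role names, and TTLs are illustrative, not the repo’s exact values, and the commands are only recorded to a plan file so the sketch runs without a live Vault server:

```shell
#!/bin/sh
# Sketch of a 3-tier PKI chain with Vault's pki secrets engine.
# Paths/TTLs are illustrative; commands are recorded, not executed.
set -eu

: > pki-plan.txt
run() { echo "+ $*" >> pki-plan.txt; }   # swap for real execution against a live Vault

# Tier 1: the root CA
run vault secrets enable -path=pki_root pki
run vault write pki_root/root/generate/internal \
    common_name="Homelab Root CA" ttl=87600h

# Tier 2: one intermediate CA per trust domain
for domain in kubernetes etcd front-proxy; do
  run vault secrets enable -path="pki_${domain}" pki
  run vault write "pki_${domain}/intermediate/generate/internal" \
      common_name="${domain} Intermediate CA"
  # (signing the CSR with pki_root and set-signed omitted for brevity)
done

# Tier 3: leaf certs come from the matching intermediate, never the root
run vault write pki_kubernetes/issue/kube-apiserver \
    common_name=kube-apiserver ttl=720h

echo "plan written to pki-plan.txt"
```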

All six repos share the same Ansible automation structure and component versions: Kubernetes v1.32.0, etcd 3.5.12, containerd 1.7.24, and Calico CNI 3.28.0. The only thing that changes between them is the virtualization layer.

Simple cluster: 1 master + 2 workers (6 VMs)

Setup experience

UTM’s setup is automated through utmctl and cloud-init. A shell script creates each VM from an Ubuntu cloud image, attaches a cloud-init ISO for network configuration and SSH key injection, and boots it. It’s straightforward once the automation is in place, but building that automation (crafting cloud-init configs, generating ISOs, scripting utmctl commands) takes real effort upfront. The payoff: Ubuntu 24.04 images boot surprisingly fast on UTM.

Vagrant’s approach is declarative — everything lives in a Vagrantfile. Running vagrant up --provider=qemu spins up all VMs. But the initial setup has friction: installing the vagrant-qemu plugin, configuring socket_vmnet (which requires root), and dealing with the dual-network-interface design (NAT + vmnet per VM). The Vagrantfile itself is more complex than expected due to QEMU provider-specific configuration. Once set up, the day-to-day workflow is clean: vagrant up to create, vagrant destroy to tear down.

OrbStack is the easiest by far. orb create ubuntu noble vm-name and the machine exists. No ISO images, no cloud-init configs, no provider plugins. Machines are created in seconds, not minutes. The simplicity is almost disorienting if you’re used to the ceremony of full VM tooling.

Deployment time

All three tools were timed from a cold start (no pre-existing VMs) to a fully working Kubernetes cluster with all nodes in Ready state and Calico CNI installed:

UTM Simple: 5m 57s
OrbStack Simple: 5m 59s
Vagrant Simple: 6m 33s

UTM and OrbStack are nearly identical for the simple setup. Vagrant is about 10% slower, with vagrant up alone taking 2m 25s. The Vagrant timing breakdown reveals where the time goes: Vault setup (39s), K8s certificate issuance (28s), etcd deployment (24s), control plane (45s), and workers (1m 5s).

Networking

Each tool uses a different network subnet: UTM puts VMs on 192.168.64.0/24 via its shared network bridge, Vagrant uses 192.168.105.0/24 through socket_vmnet’s vmnet.framework integration, and OrbStack assigns IPs on 192.168.139.0/24. All three provide static IPs and host-to-VM connectivity.

Vagrant has a notable quirk: each VM gets two network interfaces (one NAT, one vmnet). This means the Vagrantfile and Ansible inventory need to be explicit about which interface to use for cluster communication. UTM and OrbStack each provide a single interface per VM, which is cleaner and closer to how production cloud VMs behave.
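One way to keep Ansible pointed at the right interface is to pin ansible_host to the vmnet address in the inventory. The generator below is a sketch — hostnames, IPs, and the node_ip_interface variable are illustrative, not the repo’s actual inventory:

```shell
#!/bin/sh
# Generate an INI inventory that pins each node to its vmnet IP
# (192.168.105.0/24), so Ansible never picks the NAT interface.
# All names, addresses, and variables are illustrative.
set -eu

cat > inventory.ini <<'EOF'
[masters]
master-1 ansible_host=192.168.105.11

[workers]
worker-1 ansible_host=192.168.105.21
worker-2 ansible_host=192.168.105.22

[etcd]
etcd-1 ansible_host=192.168.105.31

[all:vars]
ansible_user=vagrant
# Cluster traffic must use the vmnet NIC, not the NAT one
node_ip_interface=eth1
EOF

echo "inventory.ini written"
```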

Resource consumption

OrbStack is dramatically lighter on disk. The simple setup’s 6 machines use roughly 10.6 GB total (individual VMs range from 1.3 GB to 3.0 GB). UTM and Vagrant both use QEMU under the hood and allocate full virtual disks — 20–40 GB per VM. For RAM, UTM and Vagrant pre-allocate memory to each VM. OrbStack shares the host kernel and uses memory more dynamically. The practical difference: a laptop stays noticeably cooler running OrbStack compared to UTM.

HA cluster: 11 VMs, the hard way

This is where things get serious. No kubeadm. Kubernetes installed from raw binaries. A HashiCorp Vault server with a 3-tier PKI CA hierarchy issues all TLS certificates. A 3-node etcd cluster with mutual TLS. Two control plane masters behind an HAProxy load balancer. Three workers. A jump/bastion server. 11 VMs total, fully automated with Ansible.

Deployment time

UTM HA: 6m 13s
OrbStack HA: 7m 26s
Vagrant HA: 8m 10s

UTM is the fastest for HA — 6m 13s for a full 11-VM production-grade cluster. That’s surprisingly close to the simple setup’s time. The reason: UTM’s file copy operations between host and VMs are the fastest of all three tools, which matters when distributing binaries and certificates across 11 nodes.

OrbStack comes in at 7m 26s — about a minute slower than UTM despite being nearly tied in the simple setup. The bottleneck is file copy speed: OrbStack is the slowest of the three at transferring files, and when multiplied across 11 machines, it adds up.

Vagrant is slowest at 8m 10s. The timing breakdown: vagrant up takes 1m 42s, Vault setup (42s), K8s certs (36s), etcd + HAProxy (36s), control plane (2m 6s), workers (1m 35s), Calico CNI (31s).

Can your Mac handle 11 VMs?

The UTM HA setup allocates 38 GB of RAM across 11 VMs (2 GB for etcd nodes up to 6 GB for workers). Vagrant with QEMU is similar. Running either requires 48 GB+ total system RAM — a Mac with 32 GB or less will struggle.

OrbStack is different. Memory isn’t pre-allocated per VM. The 11 machines collectively use a fraction of what UTM/Vagrant require. The laptop stays noticeably cooler — running the full HA cluster on OrbStack doesn’t produce the heat that UTM does. For anyone on a 16 GB Mac, OrbStack is realistically the only option for HA.

Automation differences

All three use the same Ansible playbooks for Kubernetes deployment. The difference is VM creation and Ansible connectivity.

UTM automates via utmctl + cloud-init ISOs. A script creates 11 VMs, waits for boot, configures SSH ProxyJump through the bastion. All Ansible runs from the jump server.
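The ProxyJump part of that setup boils down to a small SSH config fragment. This sketch uses UTM’s 192.168.64.0/24 subnet; the bastion address is illustrative:

```shell
#!/bin/sh
# Write an SSH config fragment that routes cluster SSH through the bastion.
# The jump host's address is illustrative.
set -eu

cat > homelab_ssh_config <<'EOF'
Host jump
    HostName 192.168.64.10
    User ubuntu

# Every other cluster node is reached via the bastion
Host 192.168.64.* !192.168.64.10
    User ubuntu
    ProxyJump jump
EOF

echo "fragment written; include it from ~/.ssh/config"
```

With this in place, ssh 192.168.64.21 transparently hops through the jump server, and Ansible inherits the same behavior via its SSH connection plugin.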

Vagrant uses a multi-machine Vagrantfile with the QEMU provider, defining 11 VMs with dual interfaces (NAT + vmnet via socket_vmnet). vagrant up creates everything, then Ansible deploys K8s.

OrbStack uses orb create commands in a shell script. No ISOs, no Vagrantfile, no cloud-init. Machines appear in seconds. The script is the shortest and most readable of the three.

An interesting point: both UTM and Vagrant use QEMU under the hood, but the experience is quite different. UTM gives direct control via utmctl and cloud-init, while Vagrant adds an abstraction layer that simplifies lifecycle management at the cost of Vagrantfile complexity.

Production realism

UTM and Vagrant create full VMs with their own kernels (6.8.0-106-generic for UTM, 6.8.0-63-generic for Vagrant). This is closest to production: genuine isolation, independent kernel modules, network policies that behave like bare metal.

OrbStack machines share the host kernel (6.17.8-orbstack). For most K8s learning — deployments, services, RBAC, helm, monitoring — this is indistinguishable from a full VM. Edge cases like custom kernel modules or kernel-level security policies are where differences surface. For 95% of use cases, OrbStack is sufficient.

Things that broke (so you don’t have to find out yourself)

UTM: “UTM quit unexpectedly”

When running the destroy script to tear down all VMs, UTM crashes and macOS shows the “UTM quit unexpectedly” dialog with Reopen, Ignore, and Report options. Not a dealbreaker — clicking Reopen brings it back — but annoying and breaks clean automation flow. A known quirk when programmatically deleting multiple VMs in rapid succession through utmctl.

Vagrant: dual network interfaces

Every Vagrant VM gets two NICs: NAT (internet) and vmnet (inter-VM via socket_vmnet). Kubernetes, etcd, and Ansible all need the correct interface configured. Wrong interface = nodes join the cluster but can’t communicate. The Vagrantfile has to explicitly handle this — complexity that doesn’t exist with UTM or OrbStack.
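On the Kubernetes side, the usual fix is to hand kubelet the vmnet address explicitly via its --node-ip flag. This systemd drop-in is a sketch — the IP is illustrative, KUBELET_EXTRA_ARGS is a kubeadm-style convention (a binaries-only install would put the flag directly on the kubelet ExecStart line), and the script writes under ./etc to stay side-effect free:

```shell
#!/bin/sh
# Sketch: force kubelet to register with the vmnet IP, not the NAT IP.
# Writes under ./etc so the sketch has no side effects on a real system.
set -eu

NODE_IP="192.168.105.21"   # this node's vmnet address (illustrative)

mkdir -p etc/systemd/system/kubelet.service.d
cat > etc/systemd/system/kubelet.service.d/20-node-ip.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=${NODE_IP}"
EOF

echo "drop-in written; on a real node: systemctl daemon-reload && systemctl restart kubelet"
```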

OrbStack: slow file copy

File transfer to OrbStack machines is noticeably slower than to UTM or Vagrant VMs. Distributing K8s binaries, etcd binaries, and TLS certificates across 11 VMs adds up. This is the primary reason OrbStack HA (7m 26s) is over a minute slower than UTM HA (6m 13s) despite machines starting almost instantly.

Calico pods: don’t panic

Across all six setups, deployment output often shows Calico pods in ContainerCreating, Init:2/3, or Init:ErrImagePull when the script finishes. This is normal — Calico needs a minute or two to fully initialize. Running kubectl get pods -A shortly after consistently shows everything Running.
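A small wait helper turns that "don’t panic" into something scriptable: poll until no pod reports a non-Running status. This is a sketch that assumes kubectl is on PATH and pointed at the cluster; without kubectl it simply skips, so it is safe to run anywhere:

```shell
#!/bin/sh
# Poll until every pod (Calico included) reports Running/Completed,
# or a timeout expires. Skips cleanly when kubectl is unavailable.
set -eu

wait_for_pods() {
  timeout_s=${1:-180}
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not found; skipping wait"
    return 0
  fi
  elapsed=0
  while [ "$elapsed" -lt "$timeout_s" ]; do
    # Count pods whose STATUS column is neither Running nor Completed
    not_ready=$(kubectl get pods -A --no-headers 2>/dev/null \
      | grep -cvE ' (Running|Completed) ' || true)
    if [ "$not_ready" -eq 0 ]; then
      echo "all pods Running"
      return 0
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  echo "timed out; inspect with: kubectl get pods -A" >&2
  return 1
}

wait_for_pods 180
```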

Which tool should you actually use?

There’s no single best tool. The right choice depends on the situation:

Choose UTM if…

…maximum production realism is the goal. Full VM isolation with separate kernels, fastest deployment times (especially HA), fastest file copy. The “hardcore sysadmin” option. Choose it when learning cloud-init matters, kernel isolation is needed, or the raw VM experience is the point. Requires 48 GB+ RAM for HA. Expect higher energy consumption and the occasional crash during teardown.

Choose Vagrant if…

…declarative, version-controlled infrastructure is valued above all else. Everything lives in a Vagrantfile, committable to Git. vagrant up creates, vagrant destroy tears down. The QEMU provider with socket_vmnet gives the same isolation as UTM. Tradeoff: Vagrantfile complexity (dual NICs, QEMU quirks) and slower deployment. Choose it when clean reproducibility and infrastructure-as-code matter.

Choose OrbStack if…

…you’re on a 16 GB Mac, or comfort and efficiency matter most. Dramatically less RAM and disk, a laptop that stays cool with 11 machines, instant create/destroy. The shared kernel means no full VM isolation, but for learning K8s — deployments, services, RBAC, monitoring, CI/CD — it’s more than sufficient. Slowest file copy. Choose it as the daily driver, especially when resources are limited.

For CKA/CKAD exam prep…

…any of the three work. The exam tests K8s knowledge, not virtualization. Start with OrbStack Simple for speed, then graduate to UTM or Vagrant HA when you’re ready for the “hard way” setup and troubleshooting node failures.

Side-by-side comparison

                  UTM                   Vagrant               OrbStack
Cost              Free                  Free                  Free (personal)
VM type           Full VM (QEMU)        Full VM (QEMU)        Lightweight
Simple time       5m 57s                6m 33s                5m 59s
HA time           6m 13s                8m 10s                7m 26s
Network           192.168.64.x          192.168.105.x         192.168.139.x
NICs/VM           1                     2                     1
File copy         Fastest               Medium                Slowest
Energy            Highest               High                  Lowest
Automation        utmctl + cloud-init   Vagrantfile + QEMU    orb create
Best for          Max realism           Reproducibility       Daily driver

From here, go deeper

This post covered the “which tool” question. The following posts go deeper:

Building an 11-VM HA Cluster on UTM with Vault PKI

Deep dive into cloud-init provisioning, utmctl automation, and the full 17-step deployment flow.

Vagrant + QEMU + Ansible: One Command to a Production-Grade Cluster

How the Vagrantfile, QEMU provider, socket_vmnet, and Ansible roles work together on Apple Silicon.

OrbStack for Kubernetes: Maximum Learning, Minimum RAM

The best starting point for anyone with a 16GB Mac.

Vault PKI for Kubernetes: 3-Tier CA the Right Way

Why Kubernetes needs three separate CAs and how to automate them with Vault and Ansible.

Why Your Homelab K8s Cluster Isn’t Production-Ready

Single master, self-signed certs, no bastion. Everything wrong with most setups.

Get homelab configs in your inbox.

Vagrantfiles, Ansible playbooks, and K8s deep dives. No spam. Unsubscribe anytime.

