Migration Guide

How to Migrate from VMware to Proxmox in 2026: The Complete Step-by-Step Guide

March 5, 2026 · 14 min read

I've been watching engineers struggle with this migration for the past 18 months, and the pattern is always the same: they underestimate the Windows VM work, they get blindsided by ZFS and hardware RAID, and they don't learn about the "remove snapshots first" rule until they're staring at a failed import at 11pm. This guide exists to prevent that.

But before we get into procedure, let's talk about why you're here.

Why Migrate in 2026? The Pricing Reality

If you're still running VMware ESXi and haven't renewed your support contract yet, you're in for a shock. Broadcom's restructuring of VMware licensing has pushed prices to levels that are simply unworkable for many SMBs and mid-market shops. Here's what the numbers actually look like:

| Product | Price (per core/year) | Notes |
|---|---|---|
| vSphere Foundation (VVF), multi-year | $150/core | Multi-year subscription rate |
| vSphere Foundation (VVF), 1-year | $190/core | What most SMBs see at renewal |
| VMware Cloud Foundation (VCF) | $350/core | Confirmed via Broadcom exec interview |
| Minimum core commitment | 72 cores | Enforced April 10, 2025 (up from 16) |
| Late renewal surcharge | +20% first year | If you miss your anniversary date |
Do the math: On a 3-host cluster with dual-socket servers (say, 2×20-core CPUs per host = 120 cores), you're looking at 120 cores × $190/core = $22,800/year for VVF at the 1-year rate. And that's if Broadcom accepts your core count — the 72-core minimum means you pay for 72 cores regardless.
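The math above is simple enough to sanity-check yourself. A quick shell sketch using the same assumed cluster (3 hosts, dual 20-core sockets) and the 1-year VVF list rate:

```shell
# Back-of-envelope check of the VVF figures above (list prices, 1-year rate).
cores_per_host=40        # 2 sockets x 20 cores each
hosts=3
rate=190                 # $/core/year, VVF 1-year subscription

total_cores=$((cores_per_host * hosts))
echo "cluster cores:  $total_cores"                # 120
echo "annual VVF:     \$$((total_cores * rate))"   # $22800
echo "72-core floor:  \$$((72 * rate))"            # $13680
```

The 72-core floor line is the minimum you pay even if your actual core count is lower.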

A Spiceworks thread from January 2026 captures the SMB experience perfectly: one user reported their VMware Essentials+ renewal went from approximately $1,900/year to over $14,000/year for the equivalent vSphere Foundation tier. That's not a price increase — that's a different product category entirely.

According to community estimates across Spiceworks, r/vmware, and r/sysadmin, effective price increases since the Broadcom acquisition have ranged from 150% to over 1,000% depending on org size, licensing tier, and whether you got grandfathered into older terms.

Meanwhile, vSphere Standard — the entry-level product many SMBs relied on — reached end-of-sale on July 31, 2025. Renewals are pushing customers toward VVF.

Is Proxmox Right for You? (Be Honest)

Proxmox VE is excellent software. I'm not going to oversell it, and I'm not going to pretend it's a drop-in replacement for a full vSphere + vCenter stack. Here's the honest breakdown:

What Proxmox Does Well

What You'll Miss (Real Gaps)

My recommendation: If you're a homelab user, a startup, or an SMB running fewer than ~50 VMs without complex DR requirements, Proxmox is an excellent choice and the migration is manageable. If you're running a regulated environment with formal SLAs and existing Zerto/SRM DR runbooks, budget the time honestly — this is a multi-quarter project.

Pre-Migration Checklist (Do This Before Anything)

Every failed migration I've seen violated at least one item on this list. Burn it into your runbook.

  1. Power off the VM. The Proxmox ESXi Import Wizard does not live-migrate running VMs. The VM must be shut down on the ESXi side before you trigger the import.
  2. Remove ALL snapshots. Snapshots are the single most common cause of import failures. Consolidate and delete every snapshot on every VM you plan to migrate, in the vSphere Client, before touching Proxmox.
  3. Uninstall VMware Tools (Windows VMs). Do this while the VM is still running on ESXi. You'll clean up leftover drivers and services before they become a post-migration headache. Linux VMs typically handle this automatically.
  4. Pre-install VirtIO drivers (Windows VMs). Download the VirtIO ISO from the Fedora project, attach it to your Windows VM while it's still on ESXi, and install the drivers — particularly the storage driver. You won't need to boot into SATA mode if you do this first. [AFFILIATE: VirtIO Windows Drivers ISO]
  5. Note all IP settings and MAC addresses. Proxmox uses Linux bridge names (vmbr0, vmbr1) instead of vSphere virtual switches. NIC renames are common. If you're static IP'd, consider temporarily switching to DHCP before migration to avoid losing access immediately after cutover.
  6. Map your virtual switch topology. Document every vSphere Standard Switch, Distributed Switch, and port group with its VLAN ID. You'll recreate these as Linux bridges in Proxmox before the VMs come online.
  7. Map your storage. VMFS, NFS, and vSAN datastores need to become ZFS pools, Ceph HCI, LVM volumes, or NFS directories on Proxmox. Know your target before you start.
  8. Decrypt BitLocker volumes. If any Windows VMs have BitLocker enabled, decrypt them before migration. Hardware changes during the import can trigger recovery mode, and recovering BitLocker after a migration is a genuinely bad time.
  9. ZFS + hardware RAID: pick one. This catches people who built hardware for VMware and are now repurposing it for Proxmox. ZFS requires direct disk access — it needs to manage its own redundancy. Hardware RAID controllers hide physical disks from the OS and present a virtual disk. Running ZFS on top of a hardware RAID volume eliminates most of ZFS's data integrity benefits. If you want ZFS (and you should — it's excellent), either put the controller in HBA/passthrough mode or replace it with an HBA. Reddit's r/Proxmox community thread on migration lessons learned is full of people who wish they'd known this before buying hardware.
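Items 1 and 2 of the checklist can be verified in bulk from the ESXi host's SSH shell before you touch Proxmox. A minimal sketch using ESXi's built-in `vim-cmd` — the exact output format of `get.snapshotinfo` varies slightly across ESXi versions, so treat the grep pattern as a starting point, not gospel:

```shell
# Run on the ESXi host over SSH (enable SSH in the host's services first).
# Enumerates registered VMs and flags any that are still powered on
# or still carry snapshots.
for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR > 1 {print $1}'); do
  state=$(vim-cmd vmsvc/power.getstate "$vmid" | tail -1)
  snaps=$(vim-cmd vmsvc/get.snapshotinfo "$vmid")
  [ "$state" != "Powered off" ] && \
    echo "VM $vmid: still running -- shut down before import"
  echo "$snaps" | grep -q 'name = ' && \
    echo "VM $vmid: has snapshots -- consolidate and delete first"
done
```

Run it once before the migration window and again right before you start importing; snapshots have a way of reappearing when backup jobs are still scheduled.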

Method 1: ESXi Import Wizard (The Primary Path)

Proxmox VE 8.2 introduced a built-in ESXi Import Wizard, and by Proxmox VE 9.x it's the recommended first-choice approach for most migrations. The process, as documented by Edy Werder's updated February 2026 guide and independently confirmed by the Hornetsecurity migration walkthrough, is genuinely three steps once your pre-migration checklist is complete.

Step 1: Add Your ESXi Host as a Storage Source

In the Proxmox web UI, navigate to: Datacenter → Storage → Add → ESXi. Enter the IP address or hostname of your ESXi host, your ESXi credentials, and tick the "Skip Certificate Verification" checkbox if you're using self-signed certs (you are). Give it an ID like esxi-source.

Note on vCenter: You can point the wizard at a vCenter IP instead of a direct ESXi host. The wizard supports this, but community experience reports that vCenter-sourced migrations run slower. For most SMB environments, connecting directly to the ESXi host is faster and more predictable. For vCenter-managed clusters, test this in your environment before running production workloads through it.
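Behind the scenes, the wizard records the ESXi source as a storage entry in /etc/pve/storage.cfg. A sketch of what that entry can look like on PVE 8.2+ — the field names here are illustrative, the password is stored separately under /etc/pve/priv/, and you should verify against your own file rather than hand-writing this:

```
esxi: esxi-source
        server 192.168.1.50
        username root
        skip-cert-verification 1
```

Knowing this exists is handy for troubleshooting: if the wizard's browse view hangs or errors, this is the entry to check or remove before re-adding the source.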

Step 2: Browse the Datastore and Import

Once the ESXi storage source is added, you'll see it in your storage list. Browse to your VMs, select the target VM, and click Import. The wizard prompts you for a target VM ID, target storage, and network bridge mapping, plus one option that deserves a closer look.

"Start Migration" checkbox: The wizard presents a "Start Migration" option that boots the VM while disk data is still streaming. This is sometimes described as "live migration" in documentation — it is not live-migrating a running VM. It means Proxmox starts the VM before the full disk transfer is complete. Unless you've tested this in a lab with your specific workload, leave it unchecked. Let the full disk transfer complete before booting.

Step 3: Boot and Validate

Once the import completes, boot the VM in Proxmox and validate: network connectivity, service startup, application health. For Windows VMs, expect to do additional work (see the Windows Gotchas section below). For Linux VMs, the experience is typically clean — the VM boots, network comes up, and you're done.

Method 2: OVF/OVA Export and Import (The Fallback)

If the Import Wizard isn't an option — you're on an older ESXi version, you're migrating from VMware Workstation, or you've hit a compatibility issue — the OVF/OVA path still works.

  1. Export from vSphere Client: Right-click the VM → Export OVF Template. This exports a .ovf descriptor and .vmdk disk files (or a single .ova archive).
  2. Transfer to Proxmox: SCP the files to your Proxmox host's local storage path (e.g., /var/lib/vz/template/ or your configured ISO/template directory).
  3. Import on Proxmox: Use qm importovf <vmid> <file.ovf> <storage-name> for OVF files, or qm importdisk <vmid> <file.vmdk> <storage-name> for raw VMDK disks.

Conversion from VMDK happens automatically during import, into the native format of your target storage (QCOW2 on file-based storage, raw on LVM or ZFS). This method is slower than the Import Wizard (you're doing a full export and then a full import), but it's reliable and gives you an offline backup of the VM in OVF format as a byproduct.
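Assuming the exported files are already on the Proxmox host, the import itself is one or two commands. The VM ID (120), paths, and storage name (local-lvm) below are placeholders — substitute your own:

```shell
# Create a new VM from the OVF descriptor; the disks referenced by the
# OVF are imported and converted alongside it.
qm importovf 120 /var/lib/vz/template/myvm.ovf local-lvm

# Or, for a bare VMDK: import it into an existing VM. The disk lands as
# "unused0" in the VM config -- attach it to a bus (e.g. SCSI) afterwards,
# either in the hardware panel or with qm set.
qm importdisk 120 /var/lib/vz/template/myvm-disk1.vmdk local-lvm
```

Check progress and results with `qm config 120` — the imported disk should appear there before you attempt a first boot.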

Windows VM Gotchas (The Section Most Guides Skip)

Linux VMs migrate cleanly the vast majority of the time. Windows VMs have a predictable set of issues, and if you know them in advance, none of them are serious. If you don't know them, you'll spend a frustrating afternoon Googling symptoms.

VirtIO Drivers: Install Before or After (But Install Them)

Proxmox uses the VirtIO paravirtualized driver stack for storage and networking — the same driver model used by KVM/QEMU across the board. Windows doesn't ship with VirtIO drivers. If you didn't pre-install them while the VM was on ESXi (step 4 of the pre-migration checklist), here's how to handle it post-migration:

  1. In the Proxmox VM configuration, temporarily set the disk bus to SATA instead of VirtIO SCSI. This lets Windows boot without the VirtIO storage driver.
  2. Boot the VM. Attach the VirtIO ISO as a virtual CD-ROM (download from fedorapeople.org). [AFFILIATE: VirtIO Windows Drivers]
  3. Run the VirtIO installer or manually install drivers from the ISO via Device Manager.
  4. Shut down, switch disk bus back to VirtIO SCSI in Proxmox config, and reboot. Performance will improve significantly.
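The same bus switch can be done from the Proxmox host's shell instead of the UI. A sketch, assuming a hypothetical VM ID of 100 and a disk volume named local-lvm:vm-100-disk-0 — run `qm config 100` first to see your actual device and volume names:

```shell
qm shutdown 100
# Detaching a disk with --delete keeps the volume in the config as
# "unused0" rather than destroying the data.
qm set 100 --delete sata0
# Reattach the same volume on the VirtIO SCSI bus and make it bootable.
qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-100-disk-0
qm set 100 --boot order=scsi0
qm start 100
```

If Windows blue-screens on boot after the switch, the VirtIO storage driver didn't actually install — flip back to SATA, redo step 3, and try again.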

VMware Tools Cleanup

If you didn't uninstall VMware Tools before migration, you'll have leftover services and drivers running in Windows. These won't necessarily crash the VM, but they'll generate event log noise and the services will fail to start. The Proxmox forums have a community PowerShell script for cleaning up VMware Tools remnants post-migration — search "remove VMware Tools after ESXi Proxmox migration Windows" on the official Proxmox forum for the current thread.

Windows License Reactivation

Changing virtual hardware — which a migration to a different hypervisor absolutely does — can trigger Windows license reactivation. This is common, and the severity depends on your license type: retail and volume (KMS/MAK) licenses typically reactivate without drama, while OEM licenses tied to the original hardware may require a call to Microsoft's activation line.

Install QEMU Guest Agent

After migration, install the QEMU Guest Agent inside each migrated VM. On Windows, it's available on the VirtIO ISO. On Linux, install qemu-guest-agent via your package manager. This replaces VMware Tools' OS-aware functions: clean shutdown from the Proxmox UI, IP address reporting, filesystem freeze for consistent snapshots.
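The agent has two halves: a service inside the guest and a virtio-serial channel that must be enabled per VM on the host. A sketch for a Debian-family Linux guest, with VM ID 100 as a placeholder:

```shell
# Inside the guest:
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the Proxmox host: enable the agent channel for the VM, then do a
# full stop/start (not just a reboot from inside the guest) so the
# virtio-serial device is actually added to the VM.
qm set 100 --agent enabled=1
```

You'll know it's working when the VM's IP addresses show up in the Proxmox summary panel.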

Proxmox Backup Server: The Hidden Win

One piece of this migration that deserves its own callout: Proxmox Backup Server (PBS). It's a separate, free product from Proxmox that provides enterprise-grade VM backup with incremental, deduplicated backups, client-side encryption, scheduled backup verification, and file-level restore.

In the Virtualization Howto year-long comparison, PBS was described as "a totally free enterprise data protection solution that VMware doesn't have at this price point." VMware's equivalent backup capabilities require vSphere Data Protection or a third-party product like [AFFILIATE: Veeam Backup for Proxmox] — which is now a real product, with Veeam adding official Proxmox VE support in 2025.

Consider running PBS on a small dedicated box. An Intel N100 mini PC with 4–8 TB of storage makes an excellent PBS appliance for under $300. This gives you network-isolated backup infrastructure that's independent of your Proxmox cluster. [AFFILIATE: Beelink EQ12 mini PC]

What Does Proxmox Actually Cost?

Proxmox VE is free to download and use with no feature restrictions. The subscription tiers add access to the enterprise package repository (which gets tested, stable updates before the community repo) and varying levels of support. Pricing is per physical CPU socket — not per core, not per VM. All pricing sourced from proxmox.com/pricing:

| Tier | Price (per socket/year) | Support | Best for |
|---|---|---|---|
| Free (Community) | €0 | Community forums only | Homelab, dev/test, cost-conscious production |
| Community Subscription | €115 | Enterprise repo access, no tickets | Production: stable updates without support calls |
| Basic Subscription | €355 | Enterprise repo + 3 tickets/year | SMB production with occasional support needs |
| Standard Subscription | €530 | Enterprise repo + more tickets | Mid-market environments |
| Premium Subscription | €1,060 | 24/7 enterprise support | Enterprise with SLA requirements |

Let's put that into a real comparison. Take a 3-host cluster, each with two physical CPU sockets (6 sockets total):

| Scenario | Annual cost | Notes |
|---|---|---|
| VMware VVF @ $190/core (1-yr), 72-core minimum | $13,680/year | Minimum order; 3 hosts at the 72-core floor |
| VMware VVF @ $190/core, 3×2×20-core hosts (120 cores) | $22,800/year | Realistic dual-socket Xeon environment |
| Proxmox Community (free) | $0/year | Fully functional, community repo |
| Proxmox Community Subscription (6 sockets) | ~$750/year | €115 × 6; enterprise repo, stable updates |
| Proxmox Basic Subscription (6 sockets) | ~$2,300/year | €355 × 6; includes 18 support tickets/year |
| Proxmox Premium (6 sockets), worst case | ~$6,800/year | €1,060 × 6; full 24/7 enterprise support |

Even at Proxmox's most expensive tier — Premium, with 24/7 enterprise support — you're looking at roughly a third of what VMware VVF costs at list price on a 1-year subscription. In practice, most SMBs will land somewhere between Community and Basic, making the cost delta even more dramatic.
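The per-socket math in the table is worth verifying for your own socket count. A quick sketch over the Euro list prices (the dollar figures in the table are approximate conversions):

```shell
# Per-tier annual cost for a 6-socket cluster, Euro list prices
# from proxmox.com/pricing.
sockets=6
for tier in "Community:115" "Basic:355" "Standard:530" "Premium:1060"; do
  name=${tier%%:*}
  per_socket=${tier##*:}
  echo "$name: EUR $((sockets * per_socket))/year"
done
```

Swap `sockets` for your own count; Proxmox pricing scales linearly with sockets, with no core counts or minimum commitments to model.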

Real-World Scale: What Others Have Done

I want to give you some grounding from the community before you decide this is all theory.

A December 2024–February 2025 migration posted to r/Proxmox covered 65 VMs migrated from vSphere 7. The user characterized it as a 2-month project — not a weekend. Much of that time was spent on post-migration validation and application testing, not the actual import process.

A January 2026 Spiceworks discussion covered a 3-host Dell R640 environment with vCenter 8.03, 30 VMs on shared Synology iSCSI storage. The team was in evaluation, specifically comparing Proxmox against Hyper-V as VMware alternatives.

The CTO Advisor's November 2025 analysis deserves a direct quote: what organizations plan as a 6-month migration "just became 24 months" once they properly account for DR orchestration re-engineering, monitoring re-integration, and runbook rewrites. That's not a reason not to migrate — it's a reason to scope honestly and not tell your CIO it'll be done by Q2.

Proxmox VE 9.x: Current State (March 2026)

Proxmox VE 9.0 was released August 5, 2025, moving the platform to a Debian 13 base and continuing to refine the tooling that matters most to ESXi migrants, including the Import Wizard.

PVE 8.4 continues to receive security and critical patches through August 2026 for those who haven't upgraded yet. The community subscription tiers include enterprise repo access, which delivers tested, stable package updates before they hit the community repo.

ESXi Compatibility Note: The Import Wizard has been widely tested against ESXi 7.x and 8.x. ESXi 6.x compatibility is not well-documented in current sources — if you're running 6.x, test the wizard against a non-production VM before relying on it for your migration. The OVF/OVA method is a reliable fallback regardless of ESXi version.

What to Do After Migration

Once VMs are running on Proxmox, the work isn't quite done. Here's the post-migration checklist:

  1. Install QEMU Guest Agent in every VM. Windows: use the VirtIO ISO installer. Linux: apt install qemu-guest-agent or equivalent. Enable it in the Proxmox VM Options panel.
  2. Switch Windows VMs to VirtIO SCSI and VirtIO Network. If you used SATA mode to get the VM booting, switch to VirtIO SCSI after drivers are installed for full performance.
  3. Set up Proxmox Backup Server. Don't leave freshly migrated VMs without backup coverage. PBS is free — deploy it.
  4. Rebuild your monitoring stack. Prometheus with the pve-exporter is the standard community approach. Grafana dashboards for Proxmox are widely available.
  5. Configure your VM placement and HA groups. Without DRS, you'll need to think about which hosts run which VMs for capacity balance. Set up HA groups for VMs that need automatic restart on host failure.
  6. Clean up the ESXi side. Once VMs are validated in Proxmox, reclaim the ESXi storage. Don't leave half-migrated datastores sitting around.

The Honest Answer: What You'll Miss

I'm not going to end this with "Proxmox is better in every way." It isn't. From a mature vSphere environment, you'll genuinely miss DRS-style automated workload balancing, the DR orchestration that SRM and Zerto runbooks provide, vCenter's single pane of glass at scale, and the depth of third-party ecosystem integration.

What you'll gain: a hypervisor with no per-core licensing, a backup product that genuinely competes with commercial offerings, and a community that's been growing rapidly since 2024 precisely because everyone is running this migration analysis right now.



Written by Rob Notaro

Senior infrastructure engineer specializing in VMware, Horizon VDI, and enterprise virtualization. Currently deploying Horizon 2512 and VCF in production environments.