
vSphere Foundation vs VCF: Real Pricing and Which One to Deploy (2026)

March 30, 2026 · 10 min read

VCF is overkill for 90% of shops under 50 hosts. You're paying for NSX and a full automation stack you'll never configure properly. Buy vSphere Foundation (VVF) and put the savings toward storage — if you can still get VVF quoted, which is becoming its own problem.

That's my short answer. The longer version depends on your environment, and there's a real pricing trap in the comparison that the marketing materials won't show you. Let me break it down.

---

What Broadcom Actually Sells Now

After the acquisition, Broadcom collapsed the VMware catalog into two main bundles. Everything else — vSphere Standard, per-instance Aria subscriptions, standalone vSAN licenses — is gone or end-of-sale.

As of early 2026, vSphere Standard 8 hit end-of-sale (July 31, 2025). If you're on Standard today, your next renewal path is either VVF or VCF. There's no third option unless you migrate off VMware entirely.

| Feature | vSphere Foundation (VVF) | VMware Cloud Foundation (VCF) |
| --- | --- | --- |
| vSphere / vCenter | ✅ Enterprise Plus tier | ✅ Enterprise Plus tier |
| vSAN | ✅ 0.25 TiB/core included | ✅ 1 TiB/core included |
| NSX | ❌ | ✅ Full NSX included |
| SDDC Manager / VCF Operations | ❌ | ✅ |
| Aria Operations (monitoring) | ✅ (basic) | ✅ (full suite) |
| Aria Automation | ❌ | ✅ |
| Kubernetes (vSphere Kubernetes Service) | ✅ single supervisor cluster | ✅ full |
| HCX (migration) | ❌ | ✅ |
| MSRP (per core/yr, 1-year) | ~$190 | ~$350 |
| Minimum purchase | 72 cores per SKU | 72 cores per SKU |
| Minimum per CPU socket | 16 cores | 16 cores |
Sources: Broadcom KB313548 (core counting), r/vmware MSRP thread (March 2025), licenseware.io minimum purchase analysis (May 2025)

The price delta looks big in that table but understates the real gap once you account for what NSX actually costs to implement and run.

---

The Real Math on Minimum Spend

Both VVF and VCF carry a 16-core minimum per CPU socket and a 72-core minimum purchase per SKU (as of the Broadcom May 2025 price book update). You can't buy 20 cores of VCF and call it done.
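
To make those floors concrete, here's a quick sketch (my own illustrative helper, not a Broadcom tool) that applies both minimums to a host count:

```python
def billable_cores(hosts, sockets_per_host, cores_per_socket,
                   per_socket_min=16, order_min=72):
    """Apply the two licensing floors: each socket is billed at
    max(physical cores, 16), and the whole order is floored at 72 cores."""
    per_host = sockets_per_host * max(cores_per_socket, per_socket_min)
    return max(hosts * per_host, order_min)

# 2 hosts, dual-socket 10-core CPUs: 2 * 2 * 16 = 64 socket-floor cores,
# but the 72-core order minimum lifts the bill to 72
print(billable_cores(hosts=2, sockets_per_host=2, cores_per_socket=10))  # 72

# 3 hosts, dual-socket 16-core CPUs: 96 cores, no floor triggered
print(billable_cores(hosts=3, sockets_per_host=2, cores_per_socket=16))  # 96
```

Small CPUs are the trap here: below 16 cores per socket you're paying for cores you don't have.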

A typical 3-host cluster, dual-socket, 16-core CPUs per socket (96 cores total):

VVF (3-year commitment, negotiated ~$150/core):
  96 cores × $150/core/yr × 3 years = $43,200

VCF (3-year commitment, negotiated ~$264/core):
  96 cores × $264/core/yr × 3 years = $76,032

That's a $32,832 difference on a modest cluster. On a 10-host cluster, you're looking at $100K+ delta over 3 years.
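
The same arithmetic as a reusable snippet, using the negotiated per-core rates from the example above (your actual quote will differ):

```python
def three_year_cost(cores, rate_per_core_yr, years=3):
    """Total subscription cost for a committed multi-year term."""
    return cores * rate_per_core_yr * years

vvf = three_year_cost(96, 150)   # $150/core/yr: example negotiated VVF rate
vcf = three_year_cost(96, 264)   # $264/core/yr: example negotiated VCF rate
print(vvf, vcf, vcf - vvf)       # 43200 76032 32832

# Same math on a 10-host cluster (320 billable cores): delta crosses $100K
print(three_year_cost(320, 264) - three_year_cost(320, 150))  # 109440
```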

And VVF's vSAN entitlement is tight. At 0.25 TiB/core, that same 96-core cluster gets 24 TiB of included vSAN capacity. If your production workloads need more, you're buying vSAN Capacity Add-On licenses — which aren't free.

# Calculate your vSAN entitlement under VVF
# Total licensed cores × 0.25 = included TiB (rounded up per Broadcom KB313548)
$licensedCores = 96
$vSanTiB = [math]::Ceiling($licensedCores * 0.25)
Write-Host "Included vSAN capacity: $vSanTiB TiB"
# Output: Included vSAN capacity: 24 TiB

VCF gives you 1 TiB/core — same cluster gets 96 TiB included. If you're building vSAN-heavy workloads at scale, that matters.

[Image: Broadcom store showing VVF vs VCF SKU pricing breakdown, MSRP per-core for 1-year and 3-year terms]

---

VCF 9 Minimum Hardware — The 7-Node Myth

The "7-node minimum" for VCF (4 management + 3 workload) was true for VCF 5.x. VCF 9 changed this.

VCF 9 actual minimums (per William Lam's June 2025 lab guide and Broadcom's installer):

- 3 hosts for a management domain on vSAN (OSA or ESA)
- 2 hosts when the management domain runs on external storage

If you're doing capacity planning against VCF 5.x docs, re-read the VCF 9 requirements. The floor dropped.

# VCF 9.0 installer — verify host count pre-check output
# Minimum 3 hosts for vSAN OSA/ESA, 2 for external storage
cat /var/log/vmware/vcf/installer/vcf-installer.log | grep "host-count-validation"

That said: running a VCF management domain on 3 nodes with vSAN leaves zero fault tolerance headroom during patching. If a host needs to enter maintenance mode for a VCF lifecycle upgrade, you're down to 2 active vSAN nodes. Minimum 4 hosts is still the right answer for any production cluster you actually care about.
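
The headroom problem falls out of the standard vSAN sizing rule of thumb for mirrored (RAID-1) policies: 2 × FTT + 1 hosts. A quick sketch:

```python
def min_hosts_raid1(ftt):
    """Hosts needed for a RAID-1 (mirroring) vSAN policy: 2 * FTT + 1."""
    return 2 * ftt + 1

cluster_hosts = 3
required = min_hosts_raid1(ftt=1)        # 3 hosts just to satisfy FTT=1
active = cluster_hosts - 1               # one host in maintenance mode
print(required, active, active >= required)
# With 2 active hosts there's no third copy target: data sits unprotected
# until the host exits maintenance mode
```

A 4th host is what gives vSAN somewhere to rebuild while you patch.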

---

VVF: Where It Makes Sense

VVF is ESXi + vCenter + vSAN + one vSphere Kubernetes Service supervisor cluster. No NSX, no lifecycle automation tooling, no Aria Automation. You manage everything with the tools you already know.

This is the right choice when:

- You run fewer than ~20 hosts and don't need NSX overlay networking or micro-segmentation
- Your team already operates vSphere with vCenter and PowerCLI and wants to keep that model
- Your capacity needs fit inside the 0.25 TiB/core vSAN entitlement, or you run external storage

Manual storage management under VVF — scanning hosts and existing datastores:

# Scan for connected storage adapters and existing datastores on VVF host
$vmHost = Get-VMHost "esxi01.yourdomain.local"
$esxcli = Get-EsxCli -VMHost $vmHost -V2

# List non-local (shared) storage devices
$esxcli.storage.core.device.list.Invoke() |
    Select-Object Device, DisplayName, Size, IsLocal |
    Where-Object { -not $_.IsLocal } |
    Format-Table -AutoSize

# List existing VMFS datastores
Get-Datastore -VMHost $vmHost |
    Select-Object Name, FreeSpaceGB, CapacityGB, Type |
    Format-Table -AutoSize

# ESXi shell — confirm vSAN health and disk group membership
esxcli vsan storage list
esxcli vsan health cluster get

The VVF storage caveat: You get 0.25 TiB/core of vSAN. On a dual-socket 16-core host, that's 8 TiB per host. For dev/test workloads that's usually fine. For production databases or large VDI pools, you'll hit the cap faster than you expect. Run the math before you sign the contract.
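
Before signing, sketch the shortfall explicitly. This illustrative helper (my own, applying the 0.25 TiB/core bundle term) estimates how much add-on capacity you'd have to buy:

```python
import math

def vvf_vsan_shortfall(licensed_cores, required_tib, tib_per_core=0.25):
    """Included vSAN capacity under VVF vs what the workload needs.
    Returns (included_tib, addon_tib_to_buy)."""
    included = math.ceil(licensed_cores * tib_per_core)
    return included, max(0, math.ceil(required_tib - included))

# 96-core cluster needing 40 TiB usable: 24 TiB included, 16 TiB of add-on
print(vvf_vsan_shortfall(96, 40))  # (24, 16)
```

If the add-on line item gets large, re-run the VVF vs VCF comparison with it included: VCF's 1 TiB/core can close the gap.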

[Image: vSphere Client showing VVF host datastore inventory — external NFS and local VMFS datastores visible alongside vSAN]

---

VCF: Where It Pays Off

VCF bundles everything — vSphere, vSAN, NSX, and VCF Operations (the umbrella that replaced SDDC Manager as the management plane in VCF 9). One lifecycle toolchain, one upgrade workflow, one compliance audit trail.

In VCF 9, SDDC Manager still ships as a component but its orchestration role moved to VCF Operations Fleet Management — a centralized multi-instance control plane. If you're upgrading from VCF 5.x, you'll notice SDDC Manager is still there but increasingly becomes a passthrough.

This is the right choice when:

- You're at 50+ hosts, or 20-50 hosts with NSX-T already deployed and paid for
- You need one lifecycle toolchain: a single upgrade workflow and compliance audit trail across domains
- You have (or are building) a dedicated networking team that will actually run NSX

VCF enforces the Hardware Compatibility List hard for vSAN ESA (the new flash-optimized storage architecture). If a component fails HCL validation, the installer stops. In VCF 9.0.1, Broadcom added a bypass for non-production use (per William Lam's September 2025 blog post), but in production you either have compliant hardware or you don't run ESA.

# Check vSAN HCL status before starting a VCF 9 upgrade
# The HCL database must be under 90 days old
stat /nfs/vmware/vcf/nfs-mount/vsan-hcl/all.json | grep Modify

# If outdated, download current HCL before running the installer
# Source: https://vvs.broadcom.com/service/vsan/all.json
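
If you'd rather script the freshness check than eyeball `stat` output, here's a small Python sketch (illustrative, comparing the file's mtime against the 90-day rule):

```python
import os
import time

def hcl_age_days(path):
    """Days since the HCL database file was last modified."""
    return (time.time() - os.path.getmtime(path)) / 86400

# Path from the VCF installer's NFS mount layout
hcl = "/nfs/vmware/vcf/nfs-mount/vsan-hcl/all.json"
if os.path.exists(hcl) and hcl_age_days(hcl) > 90:
    print("HCL database stale: refresh all.json before running the installer")
```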

Lifecycle policy for a VCF cluster — upgrade one host at a time:

# VCF Operations lifecycle configuration (VCF 9)
apiVersion: vcf.vmware.com/v1
kind: LifecyclePolicy
metadata:
  name: production-cluster-upgrade
spec:
  clusterRef: prod-vcf-cluster-01
  strategy:
    type: Rolling
    parallelHosts: 1
    maintenanceWindow:
      enabled: true
      cronExpression: "0 2 * * 0"  # Sunday 2 AM
  rollbackOnFailure: true

---

The VVF Discontinuation Risk

This deserves a direct mention: as of early 2026, multiple r/vmware threads report that Broadcom sales reps are steering customers away from VVF. Some shops are having difficulty getting VVF quoted at all, with reps quoting VCF instead.

A November 2025 r/vmware thread included a rep's statement that "VVF will be phased out within 18 months." Broadcom hasn't made an official announcement, but the pattern matches what happened to vSphere Standard (end-of-sale July 2025 with no replacement below VVF).

If you're building a 3-5 year infrastructure plan, factor this in. VVF might not be available at your next renewal. That's not a reason to overbuy VCF today — if VVF pricing is right for your current environment, use it — but it's a reason to architect for a migration path to VCF without needing to rip out NSX-incompatible network configs when the time comes.

---

Troubleshooting Common Issues

"Insufficient memory for vCenter Server Appliance deployment" VCSA needs 32 GB of headroom on the target host. In a VVF environment where you're running everything on shared hosts, this is usually a host overcommit problem. Check memory balloon and swap counters before deploying:
Get-VMHost "esxi01.yourdomain.local" | 
    Get-Stat -Stat mem.swapused.average -Start (Get-Date).AddHours(-1) |
    Measure-Object -Property Value -Average |
    Select-Object Count, Average

If you're hitting swap, fix the overcommit before adding more management VMs.

"HCL Compliance Check Failed" (VCF vSAN ESA)
vSAN ESA certified disks not found on ESXi host [hostname]

First check whether your HCL database is stale — the VCF installer stops if all.json is over 90 days old (Broadcom KB412606). Download the current file before retrying:

curl -o /nfs/vmware/vcf/nfs-mount/vsan-hcl/all.json \
    https://vvs.broadcom.com/service/vsan/all.json

If the hardware is genuinely non-compliant with vSAN ESA, you have two options: use vSAN OSA (original storage architecture, less strict HCL), or run VCF on external storage instead of vSAN. If neither works, VVF doesn't enforce HCL the same way.

"vMotion failed: source and destination hosts cannot communicate reliably" MTU mismatch. vMotion network needs end-to-end jumbo frames or end-to-end standard frames — not a mix.
# Check NIC MTU on ESXi host
esxcli network nic list | grep -E "Name|MTU"

# Check vmkernel adapter MTU
esxcli network ip interface list | grep -A5 vmk

In VVF, check the vMotion port group's VLAN and MTU settings in vDS. In VCF, check VCF Operations Fleet Management — network config drift from the desired state shows up there before it becomes an incident.

"vSAN datastore shows Degraded health after host count drops below 3" You've hit the minimum fault domain floor. vSAN needs a minimum of 3 hosts for standard cluster resiliency. Get the third host back online before doing anything else. Don't try to change the cluster configuration while degraded.

[Image: vSphere Client vSAN health dashboard showing degraded status and host connectivity state]

---

My Recommendation

Under 20 hosts: VVF. No question. Even with the discontinuation risk, you pay roughly half the per-core cost, you skip the NSX learning curve, and the operational model stays in the tooling your team already knows. If VVF reaches end-of-sale in the next 18 months, you'll have time to evaluate VCF on a planned refresh cycle, not emergency terms.

20-50 hosts, existing NSX-T: Evaluate VCF seriously. If you're already paying for NSX separately, the delta between VVF and VCF shrinks, and the SDDC Manager/VCF Operations lifecycle automation starts paying back in engineer time.

50+ hosts, dedicated networking team: VCF. The lifecycle management alone justifies the cost at that scale. Running patching cycles across 50+ hosts without lifecycle automation is where SR incidents happen.

---

Affiliate Hardware

The hardware choices that matter most in a VCF deployment are vSAN-certified hosts. Both of these are on the VCF 9 HCL and certified for vSAN ESA:

For VVF environments on external storage, any server meeting VMware's vSphere compatibility list works — you're not locked into vSAN ESA certified hardware.

---

*Pricing as of March 2026. Broadcom changes list prices quarterly — verify against the current price book before quoting. Core counting rules: Broadcom KB313548. VCF 9 minimum host requirements: williamlam.com/2025/06.*


Written by Rob Notaro

Senior infrastructure engineer specializing in VMware, Horizon VDI, and enterprise virtualization. Currently deploying Horizon 2512 and VCF in production environments.