VCF & Architecture

VCF 9 Nested Lab Setup: The Complete Walkthrough

March 30, 2026 · 15 min read


Meta Description: Build a fully functional VMware Cloud Foundation 9 nested lab on a single workstation. Step-by-step guide covering ESXi 9.0, VCF Fleet architecture, vSAN ESA, and vSphere Kubernetes Service — with real hardware requirements and critical lab-only workarounds.

Target Keywords: VCF 9 nested lab, VMware Cloud Foundation 9 home lab, nested ESXi 9 setup, VCF 9 SDDC Manager, vmware lab 2025

Introduction: Nested Virtualization Grows Up

In the early days of virtualization, running a "nested" environment — an entire enterprise infrastructure stack simulating production inside a single physical host — was a niche experiment. Today it's the standard path for VMware Certified Professional (VCP) candidates, infrastructure architects, and anyone who wants to explore VMware Cloud Foundation (VCF) without a $50,000 hardware bill.

VMware Cloud Foundation 9.0, released June 2025, represents the biggest architectural leap since Broadcom's acquisition. It ships with ESXi 9.0, replaces the old Cloud Builder with a unified VCF Installer appliance, and introduces the VCF Fleet management model — consolidating what was previously a patchwork of SDDC Manager, NSX Manager, vCenter, and Aria/vROps into a coherent hierarchy. If you last played with VCF 5.x, you're in for a surprise.

This guide walks you through building a fully functional, nested VCF 9 environment on a single host — verified against William Lam's official lab resources, Broadcom's TechDocs, and real community deployments.

Quick note on licensing: As of December 2025, VMUG Advantage members with VCP-VCF certification can access VCF 9.0 tokens directly from the VMUG Advantage portal. This is the fastest legitimate path to a licensed lab. Annual VMUG Advantage membership runs ~$200/year.

Part I: Know Before You Build

The New VCF 9 Architecture

VCF 9 drops the old "Bring Up" + SDDC Manager model. The new structure is a fleet hierarchy:

- VCF Operations Fleet Manager — the top-level management plane (the role formerly played by SDDC Manager)
- VCF Instance(s) — each a self-contained deployment under the fleet
- Per-instance components — vCenter, NSX, and VCF Operations

This matters for your lab: if you're following older VCF 5.x guides, the UI flows, JSON schemas, and component names have changed significantly.

[SCREENSHOT: VCF 9 Fleet architecture diagram showing the hierarchy of Fleet Manager, VCF Instance, vCenter, and NSX]

Hardware Requirements — The Real Numbers

This is where many guides lowball the numbers. The full VCF 9.0 Simple deployment requires:

| Component | vCPU | Memory | Storage (provisioned) | Actual Used |
|-----------|------|--------|-----------------------|-------------|
| vCenter Server | 4 | 21 GB | 642 GB | ~30 GB |
| NSX Manager | 6 | 24 GB | 300 GB | ~14 GB |
| SDDC Manager / VCF Operations Fleet Mgr | 4 | 16 GB | 931 GB | ~77 GB |
| VCF Operations | 4 | 16 GB | 290 GB | ~13 GB |
| VCF Operations Fleet Manager | 4 | 12 GB | 206 GB | ~117 GB |
| VCF Operations Collector | 4 | 16 GB | 280 GB | ~11 GB |
| VCF Automation (VCFA) | 24 | 96 GB | 626 GB | ~72 GB |
| TOTALS | ~48 | ~194 GB | ~3.2 TB | ~330 GB |

(Source: William Lam — Minimal Resources for Deploying VCF 9.0 in a Lab)

The critical constraint is VCF Automation (VCFA): it requires a 24 vCPU VM. Your physical host must have at least 12 cores / 24 threads to deploy it. Many popular home lab mini-PCs (GMKtec K11, NUC 12s with 8-core CPUs) will hit this wall.

If you can't meet the 24 vCPU minimum, Broadcom's Tomas Fojta has confirmed you can manually edit the VCFA VM down to 16 vCPU (or even 12) via vSphere Client — YMMV on performance. Memory cannot be reduced without breaking VCFA entirely.
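Before repurposing or buying hardware, you can confirm a host's core/thread budget from the ESXi shell. A quick lab sketch (`esxcli hardware cpu global get` reports the host's core and thread totals):

```shell
# On the foundation ESXi host — VCFA needs at least 12 cores / 24 threads
esxcli hardware cpu global get
# Compare the "CPU Cores" and "CPU Threads" lines against the 24-vCPU requirement
```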

Practical minimums for a nested lab:

- CPU: 12 cores / 24 threads (the hard floor for deploying VCF Automation)
- Memory: ~194 GB for the full stack; substantially less if you skip VCFA
- Storage: ~330 GB actually consumed when thin provisioned — a 1 TB NVMe device leaves comfortable headroom

Community hardware picks: The Minisforum MS-A2 (16C/32T AMD Ryzen 9) runs full VCF 9 including VCFA. Two units needed for a 2-node vSAN cluster with comfortable headroom. The ASUS NUC 15 Pro is another solid option. See William Lam's community survey for 50+ tested configurations.

CPU Compatibility Warning

ESXi 9.0 drops support for several older Intel CPU generations that ESXi 8.x still handled; check the ESXi 9.0 release notes for the full list of removed models.

If you have one of these, ESXi 9.0 will block installation by default. You can override with the kernel boot option allowLegacyCPU=true (SHIFT+O during boot) — for lab use only, not Broadcom-supported. Note: XSAVE CPU instruction support is also mandatory and cannot be bypassed.
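The SHIFT+O override applies only to that single boot. To persist it on an installed host, the community approach is to append the flag to the `kernelopt` line in `/bootbank/boot.cfg`. A lab-only sketch, demonstrated here against a sample file since the real `kernelopt` contents vary by install:

```shell
# Sample boot.cfg standing in for /bootbank/boot.cfg on a real host
cat > /tmp/boot.cfg <<'EOF'
kernel=b.b00
kernelopt=autoPartition=FALSE
EOF

# Append allowLegacyCPU=true to the existing kernelopt line
sed -i 's/^kernelopt=.*/& allowLegacyCPU=true/' /tmp/boot.cfg

grep '^kernelopt=' /tmp/boot.cfg
# kernelopt=autoPartition=FALSE allowLegacyCPU=true
```

On a real host, back up `boot.cfg` first — a malformed `kernelopt` line can leave the host unbootable.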

[SCREENSHOT: ESXi 9.0 installer showing the allowLegacyCPU warning message]

Part II: Planning the Nested Architecture

Topology: Single-Node vs. Multi-Node

For most lab use cases — cert prep, feature exploration, PoC testing — a Single-Node deployment is the right call. VCF 9 simplifies this significantly compared to VCF 5.x.

VCF 9 now requires a minimum of 3 ESXi hosts for vSAN (OSA or ESA), or 2 ESXi hosts for external storage (FC/NFS). This is an improvement over VCF 5.x's 4-host vSAN minimum. For a single physical host lab, you'll use the single-host override (covered in the deployment steps).

Two Nested Lab Flavors

1. vSphere Foundation (VVF) Lab — ESXi + vCenter + basic networking and storage. Good for VCP-DV prep with lighter hardware demands.
2. Full VCF 9 Lab — Complete stack: ESXi 9, NSX, VCF Operations, VCF Automation, vSphere Kubernetes Service. Requires the full 194 GB+ RAM commitment. Essential for VCF-related certifications.

Critical: What Does NOT Work in Nested VCF 9

Before diving into deployment, know these hard limits:

| Feature | Nested Compatible? | Notes |
|---------|--------------------|-------|
| vSAN OSA | ✅ Yes | Thin provisioned against host datastore |
| vSAN ESA | ✅ Yes (with workaround) | Requires vSAN ESA Hardware Mock VIB |
| NVMe Tiering | ❌ No | Incompatible with nested virtualization entirely |
| IDE Controller on nested ESXi VMs | ❌ No | Removed in ESXi 9.0; use SATA or SCSI |
| NSX Edge on AMD Ryzen (physical host) | ⚠️ Workaround | Requires PowerCLI remediation script |

Network Planning

Network misconfiguration is the #1 reason nested labs fail. Plan three segments before touching an installer:

| VMkernel / Segment | Purpose | Recommended VLAN |
|--------------------|---------|------------------|
| Management | vCenter / VCF Operations APIs, SSH | VLAN 10 |
| vSAN | Storage traffic between nested hosts | VLAN 20 |
| NSX Transport | NSX control plane and data plane | VLAN 30 |

On the foundation host, set port groups for NSX transport to Promiscuous Mode = Accept. Without this, nested nodes cannot communicate through NSX overlays.


# Verify promiscuous mode via ESXCLI
esxcli network vswitch standard portgroup policy security get \
  --portgroup-name "NSX-Transport"

Ensure MTU is consistent at 1500 across your lab unless your physical switch explicitly supports jumbo frames end-to-end.
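A quick way to confirm the data path actually honors your MTU is a don't-fragment ping sized to the largest payload that fits: for a 1500-byte MTU that's 1500 minus 20 (IPv4 header) minus 8 (ICMP header) = 1472 bytes. A sketch — the peer IP is a placeholder:

```shell
# Largest ICMP payload that fits a 1500-byte MTU without fragmenting
mtu=1500
payload=$((mtu - 20 - 8))   # subtract IPv4 header (20 B) + ICMP header (8 B)
echo "$payload"             # 1472

# On the ESXi host: -d sets don't-fragment, -s sets the payload size
# vmkping -I vmk0 -d -s "$payload" 192.168.20.11
```

If the sized ping fails while an unsized ping succeeds, something in the path is fragmenting or dropping at a smaller MTU.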

Part III: Step-by-Step Deployment

Step 1 — Install ESXi 9.0 on the Foundation Host

Download ESXi 9.0 from the Broadcom Support Portal (VMUG Advantage members: use the VMUG portal instead). Boot from USB or ISO.

During install, dedicate one NIC for management and leave others available for nested VM bridging. After booting, set a static IP via ESXCLI:


# Configure management IP (adjust to your network)
esxcli network ip interface ipv4 set \
  -i vmk0 \
  -I 192.168.1.10 \
  -N 255.255.255.0 \
  -t static

esxcli network ip route ipv4 add \
  --network default \
  --gateway 192.168.1.1

# Verify connectivity
vmkping -I vmk0 192.168.1.1

[SCREENSHOT: ESXi 9.0 DCUI showing successful network configuration with static IP]

Step 2 — Critical Foundation Host Tuning

Before deploying any VCF components, tune the foundation host for nested workloads. This step is consistently missing from beginner guides.

Enable Huge Pages — nested ESXi instances rely heavily on huge pages for memory management. Without this, you'll hit "Insufficient Memory" errors before VCF even finishes initializing:


# Enable Huge Pages (requires host reboot to take effect)
esxcli system settings advanced set \
  -o /VirtualMem/HugePages/Enabled \
  -i 1

# Verify setting
esxcli system settings advanced list -o /VirtualMem/HugePages/Enabled

Set Promiscuous Mode on NSX port groups:


# Via PowerCLI (run from management workstation)
Get-VirtualPortGroup -Name "NSX-Transport" | Get-SecurityPolicy | `
  Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true -MacChanges $true

Configure Virtual Switch for Nested Traffic:


# ESXCLI: Create a vSwitch for nested lab traffic
esxcli network vswitch standard add -v vSwitch-Lab
esxcli network vswitch standard set -v vSwitch-Lab -m 1500

# Add port groups for each segment
esxcli network vswitch standard portgroup add -p "MGMT-Nested" -v vSwitch-Lab
esxcli network vswitch standard portgroup add -p "vSAN-Nested" -v vSwitch-Lab
esxcli network vswitch standard portgroup add -p "NSX-Nested" -v vSwitch-Lab
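The network plan earlier assigns VLANs 10/20/30 to these segments. If you tag at the port-group level rather than on the physical switch, a lab sketch (VLAN IDs are the ones suggested in the planning table):

```shell
# Tag each nested port group with its planned VLAN ID
esxcli network vswitch standard portgroup set -p "MGMT-Nested" --vlan-id 10
esxcli network vswitch standard portgroup set -p "vSAN-Nested" --vlan-id 20
esxcli network vswitch standard portgroup set -p "NSX-Nested" --vlan-id 30

# Verify the assignments
esxcli network vswitch standard portgroup list
```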

[SCREENSHOT: vSphere Client showing the vSwitch-Lab configuration with three port groups]

Step 3 — Deploy the VCF 9 Installer Appliance

In VCF 9, the old Cloud Builder is gone. It's replaced by the VCF Installer (OVA filename: VCF-SDDC-Manager-Appliance-9.0.0.0.24703748.ova). This appliance handles the initial bringup and can optionally transform itself into the VCF Operations Fleet Manager post-deployment.

Deploy via OVFTool or the vSphere Client OVA deploy wizard:


# Deploy via OVFTool
ovftool \
  --name="VCF-Installer" \
  --diskMode=thin \
  --datastore="datastore1" \
  --network="MGMT-Nested" \
  --prop:guestinfo.hostname="vcf-installer.lab.local" \
  --prop:guestinfo.ip0="192.168.10.10" \
  --prop:guestinfo.netmask0="255.255.255.0" \
  --prop:guestinfo.gateway="192.168.10.1" \
  --prop:guestinfo.DNS="192.168.10.53" \
  VCF-SDDC-Manager-Appliance-9.0.0.0.24703748.ova \
  "vi://root@192.168.1.10/"

Once powered on, access the VCF Installer UI at https://&lt;vcf-installer-ip&gt; (192.168.10.10 in the OVFTool example above).

[SCREENSHOT: VCF 9 Installer login screen — notably different from the VCF 5.x Cloud Builder UI]

Step 4 — Single-Host Override (Lab Only)

By default, VCF 9 requires a minimum of 3 ESXi hosts for vSAN or 2 for external storage. For a single physical machine lab, apply this unsupported-but-community-validated override:


# SSH into the VCF Installer appliance
# For VCF 9.0.1:
echo "feature.vcf.vgl-29121.single.host.domain=true" >> /home/vcf/feature.properties

# For VCF 9.0.0 (original GA):
echo "feature.vcf.internal.single.host.domain=true" >> /home/vcf/feature.properties

# Restart VCF Installer services
echo 'y' | /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh

⚠️ Disclaimer: This override is not officially supported by Broadcom. Use exclusively for lab/learning purposes. Do not apply to production environments.

After restarting services, use the JSON deployment method instead of the interactive wizard — the UI still enforces the 3-host minimum at the front-end level. Upload your JSON with a single host reference and proceed directly to validation.

[SCREENSHOT: VCF 9 Installer JSON upload interface, showing single-host configuration passing validation]

Step 5 — Deploy the VCF Fleet

With the installer configured, proceed through the bringup:

1. Specify ESXi hosts — enter your nested ESXi host(s) with their credentials.
2. Choose storage — vSAN (OSA or ESA) or external. For nested labs, vSAN ESA is preferred: it uses NVMe-class storage policies and gives better lab parity with production. Apply the vSAN ESA Hardware Mock VIB if your disks aren't on the HCL.
3. Network topology — assign your pre-created port groups to the management, vSAN, and NSX transport roles.
4. Licenses — enter your VMUG Advantage VCF 9 token.
5. Deploy — the installer provisions vCenter, NSX Manager, SDDC Manager, and VCF Operations in sequence. Budget 90–120 minutes.

# Monitor deployment progress (SSH to installer appliance)
tail -f /var/log/vmware/vcf/vcf-bringup/vcf-bringup.log | grep -E "(INFO|WARN|ERROR)"

[SCREENSHOT: VCF Installer deployment progress screen showing component-by-component status]

Step 6 — Verify the VCF Fleet

Once bringup completes, log in to the VCF Operations UI (formerly the SDDC Manager UI):


https://<vcf-operations-fleet-manager-ip>/ui

You should see:

- A single VCF instance registered under the fleet
- vCenter, NSX Manager, and VCF Operations all reporting healthy
- Your nested ESXi host(s) and their vSAN datastore in inventory

[SCREENSHOT: VCF Operations Fleet Management dashboard showing a healthy single-host VCF 9 deployment]
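If you prefer the CLI, the same inventory can be pulled from the VCF public API. A hedged sketch that assumes the `/v1/tokens` and `/v1/domains` endpoints carry over from earlier VCF releases — verify against the 9.0 API reference; hostname and credentials are placeholders:

```shell
# Obtain an API token, then list workload domains
FLEET=vcf-ops.lab.local
TOKEN=$(curl -sk -X POST "https://${FLEET}/v1/tokens" \
  -H 'Content-Type: application/json' \
  -d '{"username":"administrator@vsphere.local","password":"CHANGE-ME"}' \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["accessToken"])')

curl -sk -H "Authorization: Bearer ${TOKEN}" "https://${FLEET}/v1/domains"
```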

Step 7 — Add vSphere Kubernetes Service (Optional)

VCF 9 ships with vSphere Kubernetes Service (VKS) — the renamed successor to Tanzu Kubernetes Grid (TKG). Enabling it requires the Supervisor to be activated on your vSphere cluster.

From vCenter, navigate to Workload Management > Enable:

1. Select your VCF cluster
2. Choose the NSX networking option (required for full VKS functionality)
3. Configure storage policies for Persistent Volumes
4. Deploy a Supervisor — this creates the management Kubernetes control plane embedded in ESXi

Once the Supervisor is healthy, you can deploy Kubernetes workload clusters via kubectl or the vSphere Client:


# Login to Supervisor with kubectl (vSphere Plugin required)
kubectl vsphere login \
  --server=https://<supervisor-ip> \
  --vsphere-username=administrator@vsphere.local \
  --insecure-skip-tls-verify

# Create a vSphere Namespace
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: lab-workloads
EOF

# Deploy a Kubernetes cluster in that namespace (VKS)
kubectl apply -f vks-cluster.yaml

Resource caveat: Enabling VKS adds significant memory overhead. Only attempt this if your foundation host has 256 GB+ RAM available.

[SCREENSHOT: vCenter Workload Management showing a healthy Supervisor with one VKS workload cluster deployed]
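Once the cluster YAML is applied, you can watch provisioning from the Supervisor context. VKS builds on Cluster API, so the sketch below assumes Cluster API resource kinds are exposed — worth confirming against your release:

```shell
# Watch the workload cluster come up (Cluster API-style resources)
kubectl get clusters -n lab-workloads
kubectl get machines -n lab-workloads
```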

Part IV: Common Issues & Verified Fixes

Issue 1: "Insufficient Memory" During Nested Deployment

Symptom: VCF bringup fails when spinning up nested ESXi VMs; VCFA fails to power on.

Root causes:

- Huge Pages not enabled on the foundation host (see Step 2)
- Foundation host memory genuinely overcommitted — the full stack consumes roughly 194 GB
- VCFA's 96 GB memory requirement cannot be satisfied

Fix:


# Confirm Huge Pages enabled and active
esxcli system settings advanced list -o /VirtualMem/HugePages/Enabled
# Should return "Int Value = 1"

# Check memory pressure
esxcli system memory list
# Look for "Consumed" vs "Total" — >90% = trouble

If memory is genuinely tight, reduce VCFA vCPU to 16 via vSphere Client (Settings → Edit Settings → CPU). Memory reduction is not viable — the 96 GB is a hard requirement.

Issue 2: Nested ESXi Boot Fails — "Failed to Locate Kickstart on CD-ROM"

Symptom: Nested ESXi 9.0 VMs fail to boot with a kickstart error referencing the CD-ROM.

Cause: ESXi 9.0 dropped support for the IDE Controller. If your nested ESXi VM templates still have an IDE CD-ROM device, they will fail to locate the installation media.

Fix: Remove the IDE controller from your nested ESXi VM config. Use SATA or SCSI for any virtual disks, and attach the ESXi ISO via a SATA CD-ROM device:


# Check the VM's CD-ROM and disk devices via PowerCLI
Get-VM "Nested-ESXi-01" | Get-CDDrive
Get-VM "Nested-ESXi-01" | Get-HardDisk | Select Name, StorageFormat, DiskType
# Remove any IDE-attached devices and re-attach them as SATA/SCSI

Issue 3: Network Connectivity Lost Between Nested Nodes

Symptom: Nested hosts can't ping each other; vMotion fails; NSX shows transport nodes as Disconnected.

Fix checklist:


# 1. Confirm Promiscuous Mode = Accept on NSX port groups
esxcli network vswitch standard portgroup policy security get \
  --portgroup-name "NSX-Nested"
# macChanges, forgedTransmits, allowPromiscuous should all = true

# 2. Check MTU consistency
esxcli network vswitch standard list
# All vSwitches should show same MTU (typically 1500)

# 3. Check VLAN tagging
esxcli network vswitch standard portgroup list
# Ensure correct VLAN IDs match your physical switch config

Issue 4: vSAN ESA Shows "No Certified Disks Found"

Symptom: VCF Installer fails vSAN ESA validation with disk HCL errors.

Fix: Deploy William Lam's vSAN ESA Hardware Mock VIB — compatible with both VCF 5.x and 9.0, works on physical and nested ESXi hosts:


# Install the VIB on each ESXi host (SSH in first)
esxcli software vib install -v \
  /tmp/VMware_bootbank_vsan-esa-mock_x.x.x.x-xxxxxxxx.vib

# Verify installation
esxcli software vib list | grep vsan-esa-mock

(Download from William Lam's blog: vSAN ESA Disk HCL Workaround for VCF 9.0)
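If the VIB install fails with a signature or acceptance-level error, community-built VIBs generally require lowering the host's software acceptance level first. Lab-only, since it relaxes the host's signing policy:

```shell
# Allow CommunitySupported VIBs (lab only — weakens signature enforcement)
esxcli software acceptance set --level CommunitySupported

# Confirm the new acceptance level
esxcli software acceptance get
```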

Issue 5: NSX Transport Node Failures on AMD Ryzen Hosts

Symptom: NSX Edge services fail to deploy on a physical AMD Ryzen host.

Fix: Apply the PowerCLI remediation script documented by Broadcom's William Lam for AMD Ryzen NSX Edge compatibility:


# PowerCLI remediation for AMD Ryzen NSX Edge
# Full script: https://williamlam.com/2025/06/powercli-remediation-script-for-running-nsx-edge-on-amd-ryzen-for-vcf-9-0.html

Connect-VIServer -Server "vcenter.lab.local" -User "admin" -Password "VMware1!"
# [Apply per the blog post's script — sets required CPU feature masks on Edge VMs]

Conclusion: What You've Built

If you've followed this guide to completion, you're running:

- ESXi 9.0 on the foundation host with nested ESXi 9.0 node(s)
- A full VCF 9 fleet: vCenter, NSX, VCF Operations, and (hardware permitting) VCF Automation
- vSAN OSA or ESA (with the mock VIB) as principal storage
- Optionally, vSphere Kubernetes Service with a running Supervisor

This is a genuine, production-representative environment for:

- VCP-VCF and VCP-DV certification prep
- Feature exploration and proof-of-concept testing
- Practicing lifecycle management and upgrades

Next Steps

1. Automate with PowerCLI:


# Script your VCF bringup JSON for reproducible lab rebuilds
$vcfBringupSpec = @{
  sddcManagerSpec = @{
    hostname = "vcf-mgr.lab.local"
    ipAddress = "192.168.10.10"
  }
  # ... full spec
} | ConvertTo-Json -Depth 10

2. Practice Lifecycle Management: Use the VCF Installer to upgrade from 9.0.0 to 9.0.1 — understanding the update cadence is itself a certification exam topic.

3. Explore NVMe Tiering on Physical: If you have a second physical host, NVMe Tiering (incompatible with nested) becomes available and dramatically changes vSAN performance characteristics. Worth a standalone lab post.

Resources

- William Lam's VCF 9.0 lab resources, minimal-resource breakdown, and community hardware survey (williamlam.com)
- Broadcom TechDocs — VCF 9.0 documentation and release notes
- VMUG Advantage portal — VCF 9 license tokens for VCP-VCF holders

Last verified: March 2026. VCF 9.0.1 current as of publication. Always check Broadcom's release notes for the latest version compatibility information.


Written by Rob Notaro

Senior infrastructure engineer specializing in VMware, Horizon VDI, and enterprise virtualization. Currently deploying Horizon 2512 and VCF in production environments.