VMware to Proxmox Migration

[+] Status: Completed [+] Origin: Infrastructure Project [+] Date: 2025.08
>> TECH_STACK:
[Proxmox VE][NETLAB+][10G Networking][Linux][KVM/QEMU]

Led the complete infrastructure migration of our educational lab environment from VMware ESXi to Proxmox VE. This wasn't a team effort with delegated tasks; I was the workhorse. I physically installed hardware, configured every network port, built the cluster, broke it, rebuilt it, documented everything, and delivered a production-ready platform. The project eliminated roughly $68,600 in annual licensing costs (about $343,000 over five years) while modernizing the entire network architecture.

In November 2023, Broadcom completed its $61 billion acquisition of VMware and immediately began dismantling the licensing models that educational institutions depended on:

  • Nov 2023: Broadcom acquires VMware. The $61B acquisition closes; licensing changes are announced within weeks.
  • Early 2024: Perpetual licenses discontinued. Subscription-only model; per-core pricing replaces per-socket.
  • Apr 2024: 72-core minimum requirement. Small deployments forced to license a minimum of 72 cores per product.
  • Aug 2024: VMware IT Academy terminated. Free educational licensing completely eliminated.
  • Aug 2025: Migration completed. Full Proxmox deployment operational; zero VMware dependency.

The financial impact of staying on VMware was incompatible with educational IT budgets:

πŸ’Έ VMware (Broadcom)
  • Annual cost: ~$68,600
  • 5-year TCO: ~$343,000
  • Model: Per-core subscription
  • Minimum: 72 cores per product

βœ“ Proxmox VE
  • Annual cost: $0
  • 5-year TCO: $0
  • Model: Open source (AGPLv3)
  • Minimum: None

Annual savings: ~$68,600. Five-year cost avoidance: ~$343,000 (196 cores Γ— $350/core Γ— 5 years). See sources below.

The migration involved a 6-node enterprise server cluster running educational lab environments for NETLAB+:

PVE-NETLAB Cluster: Operational
  • 6 nodes, 196 CPU cores, 2.75 TB total RAM
  • πŸ‘‘ Node 1: Management
  • πŸ–₯️ Node 2: SR630
  • πŸ–₯️ Node 3: SR630
  • πŸ–₯️ Node 4: SR630
  • πŸ–₯️ Node 5: SR630
  • πŸ–₯️ Node 6: SR630
  • 10G management network: dedicated cluster communication
  • 1G production network: LACP bonded (4x1G per node)

The migration wasn't just a hypervisor swap. I redesigned the entire network architecture to eliminate bandwidth bottlenecks and segregate management traffic:

πŸ“‘ 10G PCIe NIC Installation

Physically installed dual-port 10 Gigabit PCIe network cards in all six servers. Opened chassis, verified PCIe slots, seated cards, ran cable infrastructure.
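
A quick sanity check after seating each card, sketched with placeholder interface names (actual names depend on slot and driver):

  # Confirm the dual-port 10G card enumerates on the PCIe bus
  lspci -nn | grep -i ethernet

  # Confirm the kernel created interfaces for the new ports
  ip -br link

  # Spot-check negotiated speed on one of the new ports (eno5 is a placeholder)
  ethtool eno5 | grep -E 'Speed|Link detected'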

πŸ”— LACP Bonding (802.3ad)

Configured link aggregation across all onboard gigabit NICs. Four 1G ports bonded per server for redundancy and load balancing on production traffic.
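
For reference, a minimal /etc/network/interfaces sketch of a four-port 802.3ad bond on a Proxmox node; eno1 through eno4 and the hash policy are assumptions, and the switch side needs a matching LACP port-channel:

  auto bond0
  iface bond0 inet manual
      bond-slaves eno1 eno2 eno3 eno4
      bond-mode 802.3ad
      bond-miimon 100
      bond-xmit-hash-policy layer3+4
      # Carries production traffic; the VM bridge stacks on top of this bond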

🏷️ VLAN Segmentation

Implemented VLAN trunking across the infrastructure. Management, production, and lab traffic isolated on separate broadcast domains.
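
One way the host itself gets a leg in a given broadcast domain is a tagged sub-interface on the bond; the VLAN plan, ID, and address below are purely illustrative:

  # Hypothetical VLAN plan: 10 = management, 20 = production, 30+ = lab pods
  auto bond0.20
  iface bond0.20 inet static
      address 192.0.2.11/24
      # Host-facing interface in the production VLAN; lab VLANs stay tagged on the bridge only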

πŸŒ‰ Linux Bridge Configuration

Configured virtual bridges for VM networking. Multiple bridge interfaces with VLAN tagging support for flexible lab topology deployment.
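
A sketch of a VLAN-aware bridge stacked on the bond; vmbr0 is Proxmox's conventional bridge name, and the permitted VLAN range is an assumption:

  auto vmbr0
  iface vmbr0 inet manual
      bridge-ports bond0
      bridge-stp off
      bridge-fd 0
      bridge-vlan-aware yes
      bridge-vids 2-4094
      # A guest NIC picks its VLAN with a tag, e.g.: qm set 101 --net0 virtio,bridge=vmbr0,tag=30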

πŸ“‹ Port Documentation

Created comprehensive port mapping documentation. Every cable run, switch port, and interface labeled and cataloged for future troubleshooting.

πŸ’Ύ Shared Storage Pools

Configured multiple shared storage pools accessible across all cluster nodes. Enables live migration and distributed VM placement.
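
From the CLI, registering a shared pool looks roughly like this; the storage ID, server address, and export path are placeholders, and the backend in practice could be NFS, CIFS, iSCSI, or Ceph:

  # Register an NFS export as shared storage visible to every cluster node
  pvesm add nfs netlab-nfs --path /mnt/pve/netlab-nfs --server 192.0.2.50 --export /exports/netlab --content images,iso,backup

  # Verify the pool is active (and marked shared) from any node
  pvesm status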

The project spanned several months with distinct phases. Each phase built on the previous; failures meant rebuilding:

1. Assessment & Planning βœ“ Complete
  • Hardware inventory
  • Cost analysis
  • Network design
  • Timeline planning
2. Hardware Installation βœ“ Complete
  • 10G PCIe NIC installation (all 6 servers)
  • Cable infrastructure deployment
  • Storage expansion
3. Proxmox Deployment βœ“ Complete
  • OS installation on all hosts
  • Driver verification
  • Initial configuration
4. Network Configuration βœ“ Complete
  • LACP bonding setup
  • VLAN configuration
  • 10G management network
  • Bridge configuration
5. Cluster Formation βœ“ Complete
  • 6-node cluster creation (pvecm sketch after this list)
  • Quorum establishment
  • Storage pool configuration
6. NETLAB+ Integration βœ“ Complete
  • NDG post-install optimization
  • VM migration
  • Lab pod deployment
  • Student access provisioning
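
The cluster-formation commands referenced in phase 5 are short; the addresses below are placeholders, with link0 pinned to the dedicated 10G network:

  # On the first (management) node: create the cluster over the 10G interface
  pvecm create pve-netlab --link0 10.10.10.1

  # On each remaining node: join, binding cluster traffic to its own 10G address
  pvecm add 10.10.10.1 --link0 10.10.10.2

  # Verify quorum and membership once all six nodes have joined
  pvecm status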

The end goal was full integration with NDG's NETLAB+ platform for educational lab delivery. This involved restoring VM images from NDG's distribution system, configuring the NETLAB-VE management appliance, deploying lab pods (Cisco, Palo Alto, security courses), and provisioning student access. The Proxmox cluster now serves as the backbone for hands-on cybersecurity and networking education.

Building production infrastructure means breaking things. These are the moments that taught me the most:

πŸ’₯ The Proxmox 9 Incident

Upgraded the servers to Proxmox 9 and everything broke. That's when I learned the difference between Debian Trixie (13) and Bookworm (12). NETLAB+'s Proxmox packages were built for Bookworm; Trixie's libraries were incompatible. Lesson: always check upstream dependencies before major version upgrades. Rolled back and pinned to Proxmox 8.
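
Pinning back looked roughly like this; this sketch assumes the standard pve-no-subscription repository, and the preference file name is arbitrary:

  # /etc/apt/sources.list.d/pve-no-subscription.list
  # Keep the repo on Bookworm so apt never offers Trixie-based PVE 9 packages
  deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

  # /etc/apt/preferences.d/pin-pve8
  Package: proxmox-ve
  Pin: version 8*
  Pin-Priority: 1001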

πŸ”„ Corosync Cluster Communication

Corosync is the heartbeat of a Proxmox cluster. Misconfigure it and nodes can't see each other; the cluster splits or loses quorum. Learning ring addresses, multicast vs unicast, and why the dedicated 10G network matters for cluster stability was critical. When Corosync breaks, nothing works.
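
For context, the pieces that matter live in the totem and nodelist sections of corosync.conf; the values here are illustrative, with ring0 on the dedicated 10G subnet:

  # /etc/pve/corosync.conf (excerpt, illustrative values)
  totem {
    cluster_name: pve-netlab
    config_version: 1
    version: 2                 # totem protocol version, not the PVE release
    # corosync 3 uses kronosnet (unicast) by default, so no multicast dependency
  }

  nodelist {
    node {
      name: node1
      nodeid: 1
      quorum_votes: 1
      ring0_addr: 10.10.10.1   # heartbeat rides the dedicated 10G cluster network
    }
    # ...one node {} block per member, each with a ring0_addr on that subnet
  }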

πŸ”— Bonding vs Etherchannel

Linux bonding (802.3ad/LACP) and Cisco Etherchannel are the same concept but configured differently on each end. Getting the hash policy right, understanding how traffic distributes across bonded links, and debugging why one NIC was saturated while others sat idle taught me more about networking than any textbook.
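
Both ends of the same link, side by side as a sketch; the switch ports and hash choices are placeholders for whatever the environment needs:

  # Linux side (/etc/network/interfaces): hash on L3/L4 so flows spread across all four links
  iface bond0 inet manual
      bond-mode 802.3ad
      bond-xmit-hash-policy layer3+4

  ! Cisco side (IOS): matching LACP EtherChannel with a comparable load-balance hash
  interface range GigabitEthernet1/0/1 - 4
   channel-group 1 mode active
  ! global setting: hash on source/destination IP pairs
  port-channel load-balance src-dst-ip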

πŸ•ΈοΈ NIC Routing Across 6 Servers

Each server has multiple NICs: onboard gigabit ports, 10G PCIe cards, management interfaces. Keeping track of which physical port maps to which Linux interface, which bridge, which VLAN, across six servers requires meticulous documentation. One wrong cable and traffic routes to nowhere.
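
Two commands did most of the heavy lifting while building that map; the interface name is an example:

  # One-line view of every interface, its state, and MAC: the skeleton of the port map
  ip -br link

  # Blink the port LED on a given interface to find its physical jack (on NICs that support it)
  ethtool --identify eno3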

This project was a crash course in enterprise infrastructure. I went from VMware familiarity to deep Proxmox and Linux networking expertise:

Virtualization
  • Proxmox VE administration
  • KVM/QEMU hypervisor
  • LXC containers
  • Live migration

Networking
  • 10G infrastructure
  • LACP bonding (802.3ad)
  • VLAN trunking
  • Linux bridges

Infrastructure
  • Enterprise server hardware
  • PCIe NIC installation
  • Storage architecture
  • Cluster management

Documentation
  • Network topology mapping
  • Port documentation
  • Runbook creation
  • Change management

Migration complete and operational
  • βœ“ VMware dependency eliminated
  • βœ“ 6-node Proxmox cluster operational
  • βœ“ 10G management network deployed
  • βœ“ LACP bonding configured on all nodes
  • βœ“ NETLAB+ integration complete
  • βœ“ Lab pods deployed and accessible
  • βœ“ Student provisioning operational
  • βœ“ Documentation finalized