VMware to Proxmox Migration
Led the complete infrastructure migration of our educational lab environment from VMware ESXi to Proxmox VE. This wasn't a team effort with delegated tasks; I was the workhorse. I physically installed hardware, configured every network port, built the cluster, broke it, rebuilt it, documented everything, and delivered a production-ready platform. The project eliminated an estimated $68,600 in annual licensing costs while modernizing the entire network architecture.
In November 2023, Broadcom completed its $61 billion acquisition of VMware and immediately began dismantling the licensing models that educational institutions depended on.
The cost of staying on VMware was simply incompatible with educational IT budgets.
The migration involved a 6-node enterprise server cluster running educational lab environments for NETLAB+.
The migration wasn't just a hypervisor swap. I redesigned the entire network architecture to eliminate bandwidth bottlenecks and segregate management traffic:
- Physically installed dual-port 10 Gigabit PCIe network cards in all six servers. Opened chassis, verified PCIe slots, seated cards, ran cable infrastructure.
- Configured link aggregation across all onboard gigabit NICs. Four 1G ports bonded per server for redundancy and load balancing on production traffic (see the network sketch after this list).
- Implemented VLAN trunking across the infrastructure. Management, production, and lab traffic isolated on separate broadcast domains.
- Configured virtual bridges for VM networking. Multiple bridge interfaces with VLAN tagging support for flexible lab topology deployment (also shown in the sketch below).
- Created comprehensive port mapping documentation. Every cable run, switch port, and interface labeled and cataloged for future troubleshooting.
- Configured multiple shared storage pools accessible across all cluster nodes. Enables live migration and distributed VM placement (storage config sketched below).
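For a concrete picture of that stack, here is a minimal sketch of what one node's `/etc/network/interfaces` might look like. The NIC names, VLAN range, and hash policy below are illustrative placeholders, not the exact production values:

```
# /etc/network/interfaces (excerpt) -- interface names are placeholders
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 eno3 eno4      # four onboard 1G ports
    bond-mode 802.3ad                    # LACP, matched by a port channel on the switch
    bond-miimon 100                      # link monitoring interval in ms
    bond-xmit-hash-policy layer3+4       # hash on IP + port so parallel flows spread across members

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0                   # the bond is the bridge's uplink
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes                # lets each VM NIC carry its own VLAN tag
    bridge-vids 2-4094                   # VLANs allowed to trunk through the bridge
```

With a VLAN-aware bridge on top of the bond, a lab VM only needs a VLAN tag on its virtual NIC to land in the right broadcast domain; the host configuration never has to change per course.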
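Shared storage in Proxmox is declared cluster-wide in `/etc/pve/storage.cfg`. A hedged sketch of what one pool entry could look like; the storage name, server address, and export path here are hypothetical:

```
# /etc/pve/storage.cfg (excerpt) -- names and paths are placeholders
nfs: lab-vmstore
    server 10.0.20.5
    export /export/lab-vmstore
    content images,iso
    options vers=4.2
```

Because every node sees the same pool, a VM's disk doesn't move during live migration; only its memory and device state do.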
The project spanned several months with distinct phases. Each phase built on the previous; failures meant rebuilding:
- Planning and assessment
  - Hardware inventory
  - Cost analysis
  - Network design
  - Timeline planning
- Hardware installation
  - 10G PCIe NIC installation (all 6 servers)
  - Cable infrastructure deployment
  - Storage expansion
- Proxmox installation
  - OS installation on all hosts
  - Driver verification
  - Initial configuration
- Network configuration
  - LACP bonding setup
  - VLAN configuration
  - 10G management network
  - Bridge configuration
- Cluster build
  - 6-node cluster creation (see the sketch after this list)
  - Quorum establishment
  - Storage pool configuration
  - NDG post-install optimization
- NETLAB+ deployment
  - VM migration
  - Lab pod deployment
  - Student access provisioning
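The cluster build itself comes down to a handful of commands once the cluster network exists. A rough sketch; the cluster name and addresses are placeholders, not the production values:

```
# On the first node: create the cluster on the dedicated 10G network
pvecm create labcluster --link0 10.0.10.11

# On each remaining node: join by pointing at an existing member
pvecm add 10.0.10.11 --link0 10.0.10.12

# Verify membership and quorum from any node
pvecm status
```

With six nodes, quorum requires four votes, so losing three nodes (or their cluster links) leaves the survivors unable to make configuration changes.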
The end goal was full integration with NDG's NETLAB+ platform for educational lab delivery. This involved restoring VM images from NDG's distribution system, configuring the NETLAB-VE management appliance, deploying lab pods (Cisco, Palo Alto, security courses), and provisioning student access. The Proxmox cluster now serves as the backbone for hands-on cybersecurity and networking education.
Building production infrastructure means breaking things. These are the moments that taught me the most:
Upgraded the servers to Proxmox 9 and everything broke. That's when I learned the difference between Debian Trixie (13) and Bookworm (12). NETLAB+'s Proxmox packages were built for Bookworm; Trixie's libraries were incompatible. Lesson: always check upstream dependencies before major version upgrades. Rolled back and pinned to Proxmox 8.
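One way to sanity-check this before any future major upgrade, assuming the standard no-subscription repository layout (the repo file name can differ per install):

```
# Which Debian release is this Proxmox build based on?
grep VERSION_CODENAME /etc/os-release    # bookworm = Debian 12, trixie = Debian 13
pveversion                               # e.g. pve-manager/8.x => Proxmox VE 8

# Confirm the enabled repo still targets the Bookworm-based release before any dist-upgrade
cat /etc/apt/sources.list.d/pve-no-subscription.list
# deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
```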
Corosync is the heartbeat of a Proxmox cluster. Misconfigure it and nodes can't see each other; the cluster splits or loses quorum. Learning ring addresses, multicast vs unicast, and why the dedicated 10G network matters for cluster stability was critical. When Corosync breaks, nothing works.
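For reference, the relevant pieces live in `/etc/pve/corosync.conf`. A trimmed sketch with placeholder names and addresses:

```
# /etc/pve/corosync.conf (excerpt) -- names and addresses are placeholders
totem {
  cluster_name: labcluster
  config_version: 6
  ip_version: ipv4
  interface {
    linknumber: 0              # link 0 rides the dedicated 10G cluster network
  }
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.10.11     # Corosync address on the 10G network
  }
  # ...one node {} block per cluster member
}
```

The file is synchronized through pmxcfs, so a bad edit propagates to every node; bumping config_version and double-checking ring addresses before saving is the difference between a clean change and a split cluster.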
Linux bonding (802.3ad/LACP) and Cisco Etherchannel are the same concept but configured differently on each end. Getting the hash policy right, understanding how traffic distributes across bonded links, and debugging why one NIC was saturated while others sat idle taught me more about networking than any textbook.
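As a side-by-side illustration of the two ends of the same aggregate (port numbers and load-balance settings are examples, not the production config):

```
# Linux side: inspect the bond's LACP state and per-member traffic counters
cat /proc/net/bonding/bond0
# The "Transmit Hash Policy" line matters: an IP+port hash (layer3+4) spreads flows
# across members, while a MAC-only (layer2) hash can pin most traffic to one NIC.

! Cisco side: the same four ports presented as a single EtherChannel
interface range GigabitEthernet1/0/1 - 4
 channel-group 10 mode active             ! "active" = LACP, pairs with bond-mode 802.3ad
!
port-channel load-balance src-dst-ip      ! hash policy for the switch-to-server direction
```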
Each server has multiple NICs: onboard gigabit ports, 10G PCIe cards, management interfaces. Keeping track of which physical port maps to which Linux interface, which bridge, which VLAN, across six servers requires meticulous documentation. One wrong cable and traffic routes to nowhere.
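A few commands that make that mapping survivable; the interface name below is a placeholder, since real names depend on the slot and driver:

```
ip -br link show              # one-line view of every interface, its state, and MAC
ethtool -i enp65s0f0          # driver and firmware info -- distinguishes onboard NICs from the 10G card
ethtool -p enp65s0f0 15       # blink the port's LED for 15 seconds to find it in the rack
```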
This project was a crash course in enterprise infrastructure. I went from VMware familiarity to deep Proxmox and Linux networking expertise:
- Virtualization: Proxmox VE cluster administration, VM deployment, live migration
- Networking: LACP bonding, VLAN trunking, Linux bridges, 10G network design
- Infrastructure: server hardware installation, shared storage configuration, cable infrastructure
- Documentation: port maps, cable runs, cluster and storage configuration records
- ✅ VMware dependency eliminated
- ✅ 6-node Proxmox cluster operational
- ✅ 10G management network deployed
- ✅ LACP bonding configured on all nodes
- ✅ NETLAB+ integration complete
- ✅ Lab pods deployed and accessible
- ✅ Student provisioning operational
- ✅ Documentation finalized
// Sources & References
The cost estimates on this page are based on publicly documented Broadcom VMware pricing (196 cores × $350/core/year = $68,600 annually). The following sources document VMware's licensing changes under Broadcom: