NDG Proxmox VE 8 - NETLAB+ Deployment Workshop
Completed an intensive 18-hour workshop conducted by Network Development Group (NDG) covering the complete deployment of Proxmox VE 8 for NETLAB+ educational lab environments. The workshop provided hands-on training for migrating academic institutions from VMware to Proxmox after Broadcom terminated VMware's educational licensing program.
The workshop covered the complete lifecycle of deploying Proxmox VE for educational environments, from hardware planning through production deployment:
- Proxmox VE architecture overview
- Hardware requirements and server specifications
- Storage configuration: NVMe, RAID vs HBA, ZFS considerations
- Ceph clustering (why NDG doesn't use it)
- Network topology and VLAN design
- DNS and FQDN configuration for certificates
- Management server vs User server architecture
- Proxmox clustering and quorum configuration
- High Availability (HA) setup
- Proxmox Backup Server (PBS) integration
- VM deployment strategies
- LXC containers vs Docker containers
- SPICE remote access configuration
- Copy/paste and clipboard integration
- VirtIO drivers and Windows 11 requirements
- NDG post-install script execution
- Deployment scenarios: Fresh Install vs Migration
- VM distribution system and catalog
- Pod deployment and linked clones
- Golden image management
- Custom VM preparation and migration
- Backup strategies and PBS configuration
- Python SDK automation scripts (see the sketch below)
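The automation material used NDG's own Python SDK, which I can't reproduce here. As a stand-in, the sketch below uses proxmoxer, a community Python client for the Proxmox REST API, to inventory VMs across cluster nodes; the hostname and credentials are placeholders.

```python
# Minimal cluster inventory sketch using proxmoxer (community client for
# the Proxmox REST API) -- host and credentials below are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI(
    "pve-mgmt.example.edu",   # hypothetical management node FQDN
    user="root@pam",
    password="CHANGE_ME",
    verify_ssl=False,         # use proper certificates in production
)

# Walk every node in the cluster and list its QEMU VMs.
for node in proxmox.nodes.get():
    for vm in proxmox.nodes(node["node"]).qemu.get():
        print(node["node"], vm["vmid"], vm.get("name"), vm["status"])
```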
Critical knowledge gained from instructor presentations and participant discussions:
**Storage:** NDG recommends 15 TB NVMe drives on user servers. RAID controllers become bottlenecks, so prefer HBA mode. ZFS is not officially supported by NDG due to its complexity.
**Networking:** Separate management, cluster, and storage traffic onto dedicated networks. The cluster network carries VM migration and replication traffic and needs high bandwidth (10G recommended).
**Windows 11:** CPU type MUST be set to 'host' for Windows 11 VMs. VirtIO drivers must be pre-installed or injected via WinRE before first boot.
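A minimal sketch of enforcing that CPU setting through the API via proxmoxer; the node name and VMID are hypothetical. The same change from a node shell would be `qm set 9101 --cpu host`.

```python
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve-mgmt.example.edu", user="root@pam",
                     password="CHANGE_ME", verify_ssl=False)

# Set the CPU type to 'host' on a hypothetical Windows 11 VM (ID 9101).
# Any VirtIO disk/NIC devices assume the drivers are already in the guest.
proxmox.nodes("pve-user1").qemu(9101).config.put(cpu="host")
```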
**Linked clones:** Proxmox cannot span linked clones across different storage pools. The master and its clones must reside on the same storage, hence the large NVMe requirement.
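To make the constraint concrete, here is roughly what a linked-clone request looks like at the API level (node, VMIDs, and names are hypothetical). Per the cloning warning in the best practices below, NETLAB pods should be cloned through NETLAB's own pod system, not raw API calls; this is only to illustrate the storage behavior.

```python
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve-mgmt.example.edu", user="root@pam",
                     password="CHANGE_ME", verify_ssl=False)

# Linked clone: full=0. The clone API's target-storage parameter only
# applies to full clones, so a linked clone necessarily lands on the
# same storage pool as its master image.
proxmox.nodes("pve-user1").qemu(9000).clone.post(
    newid=9101,
    name="pod01-win11",
    full=0,
)
```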
**Remote access:** NDG wants all VMs using SPICE, which enables copy/paste, multi-monitor support, and better performance than VNC.
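Switching a VM's display to SPICE is a one-line config change; a sketch with the same placeholder credentials and a hypothetical VMID. Clipboard integration also needs the SPICE guest agent (spice-vdagent) running inside the VM.

```python
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve-mgmt.example.edu", user="root@pam",
                     password="CHANGE_ME", verify_ssl=False)

# vga=qxl selects the SPICE display; agent=1 enables the QEMU guest agent.
# Copy/paste additionally requires spice-vdagent inside the guest OS.
proxmox.nodes("pve-user1").qemu(9101).config.put(vga="qxl", agent=1)
```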
**Backups:** Run Proxmox Backup Server on older hardware (Dell R720 recommended). Stop VMs before backup for consistency, and use vzdump for snapshot backups.
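A sketch of triggering such a backup through the API (storage name and VMID are hypothetical). `mode="stop"` shuts the VM down for the duration of the backup, trading availability for consistency, per the guidance above.

```python
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve-mgmt.example.edu", user="root@pam",
                     password="CHANGE_ME", verify_ssl=False)

# vzdump backup of VM 9101 to a PBS-backed storage, stopping the VM
# first so the backup captures a consistent image.
proxmox.nodes("pve-user1").vzdump.post(
    vmid="9101",
    storage="pbs-datastore",   # hypothetical PBS storage name
    mode="stop",
)
```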
NDG outlined three deployment paths for institutions migrating to Proxmox:
**Fresh install:** Complete new deployment. Blow away the existing VMware infrastructure, install Proxmox from scratch, and download all VMs fresh from the NDG distribution system.
**Migration:** Preserve physical pod configurations (PDUs, routers, switches). Migrate the NETLAB+ database and settings while swapping out the hypervisor layer.
**Parallel operation:** Run both VMware and Proxmox simultaneously during the transition. A very limited use case with significant complexity.
NDG's recommended hardware specifications for NETLAB+ on Proxmox:

**Management server:**
- Model: Lenovo SR630 V3 (recommended)
- RAM: 256 GB (minimum 128 GB)
- Storage: NVMe for the OS, staging pods, NETLAB-VE, and PBS
- Network: 10G NICs for cluster communication
- Role: Staging VMs, SDK, backup server

**User servers:**
- Model: Lenovo SR630 series
- RAM: 384-768 GB per server
- Storage: 15 TB NVMe (masters + linked clones)
- Network: LACP-bonded 1G + dedicated 10G cluster network
- Role: Running student lab pods
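The RAM ranges above invite a quick back-of-the-envelope check on pod capacity. The per-pod and hypervisor-reserve figures in this sketch are my own illustrative assumptions, not NDG numbers.

```python
# Rough capacity check: how many pods fit in a user server's RAM?
# Per-pod and reserve figures are illustrative assumptions, not NDG specs.
ram_per_server_gb = 768       # top of NDG's recommended range
hypervisor_reserve_gb = 32    # assumed headroom for Proxmox itself
ram_per_pod_gb = 24           # assumed: e.g. 3 VMs x 8 GB per pod

pods = (ram_per_server_gb - hypervisor_reserve_gb) // ram_per_pod_gb
print(f"~{pods} concurrent pods per user server")  # ~30 with these numbers
```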
A few operational best practices rounded out the workshop:
**Out-of-band management:** Set this up on a separate, segmented network, and configure the BIOS for auto power-on after a power failure. Critical for remote server management.
**tmux:** Install and use tmux to keep shell sessions running if your connection drops. Critical when running long installation scripts or cluster operations.
**Cloning:** NEVER use Proxmox's default cloning for NETLAB pods. Use NETLAB's own pod management and linked-clone system to maintain proper VM ID pools and configurations.
**Storage naming:** Name storage directories NETLAB1, NETLAB2, and so on. Disable preallocation on all NETLAB storage, and use consistent naming across all cluster nodes.
**DNS:** DNS must be externally resolvable if you use Let's Encrypt certificates. Plan your FQDN carefully; it is hard to change once the cluster is configured.
**Power protection:** Configure apcupsd (APC UPS Daemon) or NUT (Network UPS Tools) for graceful shutdown during power failures. Essential for protecting VM data integrity.
This workshop wasn't just theory - I immediately applied it to production infrastructure:
- ✓ 18 hours of intensive training completed
- ✓ Certificate of attendance received
- ✓ Hands-on labs in NDG training environment
- ✓ Knowledge applied to production deployment