Why We Left Hyper-V
Broadcom acquired VMware and started charging $350/core/year for VCF licensing. They killed the VMware IT Academy program entirely. The institution moved from vSphere to Hyper-V as a cost-saving measure, but I’d already done a VMware to Proxmox migration on my own infrastructure at that point. That migration opened my eyes to how good Proxmox actually is.
It’s more lightweight. The web UI gives you more granular control than Hyper-V Manager ever did. Snapshots, live migration, ZFS, LXC containers, and full KVM virtualization all in one platform. Completely free. No per-socket licensing, no Windows Server dependency, no CALs. One less thing Microsoft gets to hold over your budget.
Hyper-V felt heavy by comparison. Limited Linux VM support, clunky management (RDP into the host just to touch anything), and tight coupling to Windows Server licensing. Once I’d seen what Proxmox could do, going back to Hyper-V felt like a downgrade.
The question was never “should we migrate?” It was “how do we migrate production Active Directory, network monitoring, file servers, and imaging infrastructure without breaking anything?”
The Power of Root on a Proxmox Host
One thing that surprised me coming from Hyper-V: you have full root access to the Proxmox host. It’s just Debian under the hood. You can SSH in, run any Linux command, script anything, automate everything. Hyper-V locks you into PowerShell remoting or RDP. Proxmox gives you a real shell on a real Linux system.
Need to resize a disk? One command. Snapshot a VM? One command. Migrate a VM between hosts? One command. Everything in the web UI is also available from the CLI through qm (VM management), pct (container management), pvesm (storage), and pvecm (cluster). You can script your entire infrastructure.
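Those one-command operations look roughly like this. A dry-run sketch: VM ID 100, disk scsi0, and target node pve2 are hypothetical examples, and the echo wrapper just prints what would run on a real Proxmox host.

```shell
#!/bin/sh
# Dry-run sketch of the one-command operations above. VM ID 100,
# disk scsi0, and target node "pve2" are hypothetical examples.
run() { echo "+ $*"; }   # on a live Proxmox host: run() { "$@"; }

run qm resize 100 scsi0 +20G       # grow a virtual disk by 20 GB
run qm snapshot 100 pre-upgrade    # snapshot before a risky change
run qm migrate 100 pve2 --online   # live-migrate VM 100 to node pve2
```

Swap the echo wrapper for real execution and these are the exact commands the web UI runs under the hood.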
But the real game changer is the Proxmox VE Helper Scripts community project. These are one-liner bash scripts that spin up fully configured LXC containers or VMs for common services. Need a Pi-hole? One command. Docker host? One command. Home Assistant, Nginx Proxy Manager, Plex, Grafana, Wireguard? One command each.
# Example: spin up a Docker LXC in seconds
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/docker.sh)"
The script handles everything: downloads the template, creates the container, configures networking, installs the service, and starts it. What would take 30 minutes of manual setup takes 60 seconds. I used these for several of our auxiliary services and they just work.
Compare that to Hyper-V where deploying a new service means: create a VM, install Windows or manually download an ISO, walk through the installer, configure networking, install the actual application. The gap in operational speed is enormous.
The Domain Controller Leapfrog
This was the part that scared me most. Domain controllers are the heartbeat of a Windows network. Every authentication, every group policy, every DNS lookup flows through them. Get this wrong and the whole campus goes dark.
The conventional wisdom is clear: never V2V a domain controller. Converting a DC’s virtual disk risks USN rollback, which permanently corrupts the AD replication database. There’s no recovery path short of rebuilding the entire domain.
Instead, I used what I call the “leapfrog” method. We had two DCs: DC1 and DC2, both on Hyper-V.
Step 1: Transfer all five FSMO roles to DC2. Verify DHCP scopes, DNS zones, and AD replication are healthy. DC2 is now running the show.
Step 2: Delete DC1. Build a fresh Windows Server VM on Proxmox. Promote it to domain controller. AD replication syncs everything from DC2 automatically.
Step 3: Transfer all FSMO roles to the new DC1 on Proxmox. Verify everything.
Step 4: Delete DC2 on Hyper-V. Build fresh on Proxmox. Promote. AD replicates from DC1.
Both domain controllers are now on Proxmox. Zero downtime. Zero data loss. The whole process was honestly easier than I expected because AD replication just works when you let it do its job.
The PowerShell for the FSMO transfer is one command:
Move-ADDirectoryServerOperationMasterRole -Identity "NEW-DC1" `
-OperationMasterRole SchemaMaster, DomainNamingMaster, `
PDCEmulator, RIDMaster, InfrastructureMaster
Always verify with repadmin /showrepl after each promotion and transfer. If replication shows errors, stop and fix them before proceeding.
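The verification commands I leaned on after each promotion and transfer are all standard Windows admin tools, run from an elevated prompt on a DC:

```powershell
repadmin /showrepl          # per-partner replication status on this DC
repadmin /replsummary       # forest-wide replication summary
dcdiag /test:replications   # DC health check focused on replication
netdom query fsmo           # confirm which DC holds each FSMO role
```

If any of these report errors, resolve them before touching the next DC.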
Linux VM Migration: The V2V Process
For Linux VMs (LibreNMS, Netdisco, Switchmap), I used direct disk conversion. The process:
- Create a “shell” VM in Proxmox. Set the OS type, match the BIOS to the source Hyper-V generation (Gen 1 = SeaBIOS, Gen 2 = OVMF UEFI), but do not create a hard drive. The disk list should be empty.
- SCP the VHDX from the Hyper-V host to Proxmox:
scp "C:\Path\To\Disk.vhdx" root@PROXMOX_IP:/var/lib/vz/dump/
- Import and attach on the Proxmox side:
qm importdisk 102 /var/lib/vz/dump/Netdisco.vhdx local-lvm
Then in the GUI: Hardware > double-click Unused Disk 0 > add as SCSI. Set boot order to prioritize scsi0.
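The GUI attach step can also be done from the CLI. A dry-run sketch: the disk name vm-102-disk-0 is what importdisk typically produces for VM 102 on LVM-thin storage, but confirm it with qm config 102 before relying on it.

```shell
#!/bin/sh
# Dry-run: echo the commands instead of executing them here.
run() { echo "+ $*"; }   # on a live Proxmox host: run() { "$@"; }

run qm importdisk 102 /var/lib/vz/dump/Netdisco.vhdx local-lvm
run qm set 102 --scsi0 local-lvm:vm-102-disk-0   # attach imported disk
run qm set 102 --boot order=scsi0                # boot from it
```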
Post-migration gotchas:
- Network interface names change (eth0 becomes ens18). Update your netplan config.
- Install qemu-guest-agent so Proxmox can see the VM’s IP and gracefully shut it down.
- LibreNMS needed a full permissions reset. Run validate.php as the librenms user and follow every instruction it gives you.
- Netdisco needed its database host changed to localhost in deployment.yml and a session cookie key added to prevent crashes.
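For the interface rename, the fix is a one-line change in the netplan config. A minimal hypothetical example (the addresses, gateway, and DNS server are placeholders; use the interface name reported by ip link show):

```yaml
# /etc/netplan/01-netcfg.yaml — hypothetical example
network:
  version: 2
  ethernets:
    ens18:                        # was eth0 under Hyper-V
      addresses: [192.168.1.50/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.10]
```

Run netplan apply (or reboot) after editing, and confirm connectivity before moving to the next VM.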
Killing DFS, Simplifying Drive Maps
The old environment used a DFS namespace to abstract file server paths. For a single-server environment, DFS adds complexity that provides no benefit: 30-minute referral TTL, client cache issues, and another layer to troubleshoot when users can’t access files.
I ripped it out and replaced it with Group Policy Preferences drive mappings using item-level targeting:
- X: mapped for faculty and staff, pointing to the full file server
- Y: mapped for students, pointing to the student folders only
Security group membership determines which mapping a user gets. No login scripts, no DFS, no namespace caching. If a user is in the Faculty-Staff group, they get X:. If they’re in the Students group, they get Y:. Simple.
UniFi Controller: Windows VM to LXC Container
This one was almost comical. The UniFi controller was running on a Windows 11 VM inside Hyper-V. To manage the WiFi, you had to RDP into the Hyper-V host, then log into the Windows VM from there. No SSH. No remote management. Just nested RDP sessions.
The migration:
- Export the UniFi backup (.unf file) from the Windows controller
- Create an LXC container on Proxmox using the official UniFi template
- Upload the .unf backup and restore
All WAP configurations, SSIDs, and client data came over intact. WiFi was back up in minutes. And now it runs in a lightweight container instead of a full Windows 11 VM. The resource savings alone made it worthwhile.
Replacing SCCM with FOG Project
Microsoft SCCM is powerful but absurdly heavy for an educational lab environment. It needs Windows Server, SQL Server, per-device licensing, and significant infrastructure just to image workstations.
FOG Project does everything we actually need: PXE boot imaging, hardware inventory, and centralized workstation management. It runs on Linux, costs nothing, and the web UI is straightforward.
The Golden Image Pipeline
I build golden images as Proxmox VMs (not on physical hardware) so I can snapshot before Sysprep. This is critical because if Sysprep fails, you cannot simply run it again. The only recovery is reverting to a snapshot.
Step 1: Install and debloat. Set up a clean Windows 11 installation on a reference machine. Run Chris Titus Tech’s Windows Utility to strip all the bloatware (Candy Crush, Spotify, Xbox, etc.) and disable telemetry. This handles both installed and provisioned packages, which is important because leftover staged Appx packages are the number one cause of silent Sysprep failures.
Step 2: Sysprep and shutdown. Once the machine is configured how you want it, run sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\Panther\unattend.xml. The unattend file handles BypassNRO (Windows 11’s forced internet requirement) and automates the OOBE setup after deployment. The machine shuts down after Sysprep completes. Do not power it back on.
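For reference, the BypassNRO piece of that answer file can be handled with a synchronous command in the specialize pass that sets the corresponding registry value. A hypothetical fragment, not my exact file: the component attributes follow Microsoft’s unattend schema, and the wcm namespace must be declared on the root unattend element.

```xml
<!-- Hypothetical unattend.xml fragment: set BypassNRO during the
     specialize pass so Windows 11 OOBE skips the forced internet
     connection requirement. Adapt to your own answer file. -->
<settings pass="specialize">
  <component name="Microsoft-Windows-Deployment"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <RunSynchronous>
      <RunSynchronousCommand wcm:action="add">
        <Order>1</Order>
        <Path>reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f</Path>
      </RunSynchronousCommand>
    </RunSynchronous>
  </component>
</settings>
```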
Step 3: FOG capture. Schedule a capture task in the FOG web UI for that machine, then PXE boot it. FOG captures the sysprepped image as-is, sitting at OOBE. When the image gets deployed to a workstation later, the unattend.xml automates the OOBE setup, the FOG service agent kicks in for background management, and AD auto-join handles domain membership. No manual touch required.
Per-Classroom Deployment
Each classroom has different hardware, so I maintain separate images per room. Every workstation is registered in FOG via CSV import (hostname + MAC address), grouped by classroom. When a room needs reimaging, I select the group, schedule a deploy task, and FOG uses Partclone to push the image. Partclone only writes used blocks, so imaging is fast even on large drives.
The FOG agent runs on every workstation with a dedicated fog-service Active Directory service account. DHCP points PXE boot to the FOG server using snponly.efi for UEFI network boot. A machine needing reimaging just needs to PXE boot and everything happens automatically.
What I’d Do Differently
Document interface names before migration. Every Linux VM had a different post-migration network issue because the interface name changed. A quick ip link show before the migration would have saved debugging time.
Test Sysprep on a throwaway VM first. My first Sysprep attempt failed because of a leftover Xbox app. Always run through the full golden image pipeline once as a dry run before committing to your production image.
The SOC Stack
I also migrated the full security operations stack: Wazuh for endpoint detection and SIEM, Cortex for automated analysis, TheHive for case management, and MISP for threat intelligence sharing. Same V2V process as the other Linux VMs. These were already running on Linux, so it was disk conversion, interface rename, guest agent install, and verify services. Nothing special, but worth mentioning because people forget about their security tooling when planning hypervisor migrations.
The Final Tally
When everything was done, the infrastructure footprint looked like this:
- 4 standalone Proxmox servers running production workloads: domain controllers, network monitoring (LibreNMS, Netdisco, Switchmap), Samba AD file server, FOG imaging, UniFi controller, and the SOC stack (Wazuh, Cortex, TheHive, MISP)
- 6-node Proxmox cluster for the NetLab environment, where students run hands-on lab exercises
- 10 total Proxmox hosts, all on open-source infrastructure
Total hypervisor licensing cost: $0.
The migration took planning and careful execution, but none of it was technically complex. The hardest part was convincing myself that AD replication would actually work as advertised. It did.
For the full project breakdown with architecture details and status, see the Hyper-V to Proxmox Migration project page.