Hyper-V to Proxmox Migration
Overview
Led a critical infrastructure migration transitioning production systems from Microsoft Hyper-V to Proxmox VE. The project encompassed Active Directory Domain Controllers and Linux-based network monitoring tools (LibreNMS, Netdisco), requiring different migration strategies based on workload type.
Migration Scope & Strategy
Critical Warning
Pre-Migration: Hyper-V Generation Check
Before creating the Proxmox VM shell, identify the source VM's generation so the new VM's firmware matches: Hyper-V Generation 1 corresponds to SeaBIOS, Generation 2 to OVMF (UEFI).
Check via: Hyper-V Manager → VM → Summary Tab → "Generation"
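The same check can be scripted on the Hyper-V host; a quick PowerShell one-liner (run in an elevated session with the Hyper-V module available):

```powershell
# List the generation of every VM on this Hyper-V host
Get-VM | Select-Object Name, Generation
```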
Active Directory: The "Leapfrog" Method
With two existing DCs (DC1 and DC2) on Hyper-V, the strategy was straightforward: transfer all FSMO roles to DC2, rebuild DC1 fresh on Proxmox, transfer roles back, then rebuild DC2 on Proxmox. AD replication handles the data sync. No V2V conversion, no downtime, no risk of USN rollback. The whole process was easier than expected because AD replication just works when you let it.
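Role placement can be confirmed from any DC before and after each hop; these are standard commands, not specific to this environment:

```powershell
# All five FSMO role holders at a glance
netdom query fsmo

# Or via the ActiveDirectory module:
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
```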
Move-ADDirectoryServerOperationMasterRole -Identity "NEW-DC1" `
    -OperationMasterRole SchemaMaster, DomainNamingMaster, `
    PDCEmulator, RIDMaster, InfrastructureMaster

Standard Linux VMs: V2V Process
For Linux VMs like Netdisco, Switchmap, and LibreNMS, use direct disk conversion.
Phase 1: Create the "Shell" (Proxmox GUI)
Create the VM chassis but do not create a hard drive.
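The same shell can also be created from the Proxmox CLI instead of the GUI; the VM ID, name, and resource sizes below are assumptions for illustration:

```shell
# Sketch: create the VM chassis with no disk attached (values are assumptions)
qm create 102 --name netdisco --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --ostype l26 --scsihw virtio-scsi-pci

# For a Hyper-V Generation 2 source, match the UEFI firmware:
#   qm set 102 --bios ovmf --machine q35 --efidisk0 local-lvm:0
```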
Phase 2: Disk Transfer (SCP)
Push the VHDX file from the Hyper-V host to Proxmox.
scp "C:\Path\To\Disk.vhdx" root@PROXMOX_IP:/var/lib/vz/dump/

Phase 3: Import & Attach (Proxmox Shell)
Import the VHDX and map it to the VM ID. Proxmox converts it to the target storage's native format (raw on LVM-thin, QCOW2 on file-based storage). Note that qm importdisk attaches the result as an "unused" disk; attach it to a SCSI or SATA bus and set the boot order in the VM's Hardware and Options tabs before first boot.
# Syntax: qm importdisk <vmid> <source_path> <storage_id>
qm importdisk 102 /var/lib/vz/dump/Netdisco.vhdx local-lvm

Post-Migration Hiccups & Fixes
Application-Specific Troubleshooting
- DB Host: Edit ~/environments/deployment.yml, change IP to localhost
- Cookie Crash: Add session_cookie_key: 'random_string_1234' to deployment.yml
- Manual Start: sudo su - netdisco -c '~/bin/netdisco-backend start && ~/bin/netdisco-web start' (chaining with && after a bare su runs the rest as the original user, so pass the commands via -c)
File Server: Samba AD Migration
The legacy environment had multiple Windows 10 machines acting as file servers in a "split-brain" configuration: files scattered across machines with no centralized management, inconsistent permissions, and no proper backup strategy. I built an open-source automation toolkit to solve this.
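The toolkit's actual configuration isn't reproduced here, but a Samba share definition for the two mapped drives might look like the following sketch; paths and group names are assumptions:

```
# smb.conf sketch (share names, paths, and AD groups are assumptions)
[faculty]
   path = /srv/shares/faculty
   read only = no
   valid users = @"DOMAIN\Faculty"

[students]
   path = /srv/shares/students
   read only = no
   valid users = @"DOMAIN\Students"
```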
UniFi Controller: Windows VM to Proxmox LXC
The UniFi controller was running on a Windows 11 VM hosted on Hyper-V. There was no SSH access to the Hyper-V host, so managing the controller meant RDP'ing into the Hyper-V server, then logging into the Windows VM from there. Not exactly efficient.
The migration was clean: export the UniFi backup (.unf file) from the Windows controller, spin up a new LXC container on Proxmox using the official UniFi container template, upload the backup file, and restore. All WAP configurations, SSIDs, client data, and network settings came over intact. WiFi was back up and running on Linux within minutes.
SCCM Replacement: FOG Project
Microsoft SCCM (System Center Configuration Manager) was the existing imaging and deployment solution, but it's heavy, expensive to license, and tightly coupled to Windows infrastructure. As part of the migration away from Microsoft dependencies, I replaced it with FOG Project, an open-source network cloning and management solution.
Golden Image Pipeline
Each classroom has different hardware (different Dell models from different building renovations), so I build a separate golden image for each room on a physical machine with matching hardware. You cannot build golden images on VMs and expect drivers and chipset support to carry over to different physical hardware.
sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\Panther\unattend.xml

The unattend file handles BypassNRO (skipping Windows 11's forced internet requirement) and automates OOBE after deployment. The machine shuts down. Do not power it back on: booting past this point re-runs specialize/OOBE and undoes the generalize pass, so capture the image first.
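For illustration, the BypassNRO step inside an unattend file is typically a specialize-pass synchronous command writing the documented OOBE registry value. This fragment is a sketch, not the production unattend.xml:

```xml
<!-- Sketch only: belongs inside the Microsoft-Windows-Deployment
     component of the specialize pass; order/value are assumptions -->
<RunSynchronousCommand wcm:action="add">
  <Order>1</Order>
  <Path>reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f</Path>
</RunSynchronousCommand>
```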
Deployment Architecture
The FOG agent runs silently on every workstation, communicating back to the FOG server through
a dedicated fog-service Active Directory service account. When a machine needs
reimaging, it PXE boots using snponly.efi, which DHCP points to the FOG server.
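The DHCP side of that PXE handoff comes down to two settings: the TFTP server address and the boot filename. An ISC dhcpd sketch, with all addresses invented for illustration (on a Windows DHCP server, set options 66/67 on the scope instead):

```
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.200;
  next-server 10.0.0.10;     # FOG server IP (assumption)
  filename "snponly.efi";    # UEFI boot file served by FOG's TFTP
}
```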
- fog-service: AD service account that authenticates the FOG agent on each workstation for background communication
- snponly.efi: boot file served to UEFI clients for network boot

Host Management
Every workstation across all classrooms is registered in FOG via CSV import (hostname + MAC address). Each host is assigned to its classroom group with the correct image. When it's time to reimage a room, select the group, schedule a deploy task, and FOG uses Partclone to push the image over the network. Partclone only writes used blocks, so a 60GB Windows install on a 256GB drive doesn't transfer 256GB of data.
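The CSV itself is easy to generate from a plain inventory list. A small sketch, with hostnames and MACs invented; the exact column layout FOG expects should be checked against your FOG version:

```shell
# Sketch: turn a "hostname mac" inventory into a quoted CSV for FOG host import
cat > room101-hosts.txt <<'EOF'
LAB101-PC01 aa:bb:cc:dd:ee:01
LAB101-PC02 aa:bb:cc:dd:ee:02
EOF

awk '{ printf "\"%s\",\"%s\"\n", $1, $2 }' room101-hosts.txt > room101-import.csv
cat room101-import.csv
```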
Automated Patch Management: WSUS
With FOG handling imaging and deployment, the next automation target was patching. I set up Windows Server Update Services (WSUS) on DC1 to centralize update distribution across the campus network.
Between FOG for imaging and WSUS for patching, the full workstation lifecycle is automated. A new machine gets imaged via PXE, joins the domain, picks up its WSUS policy via Group Policy, and stays patched without manual intervention.
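Under the hood, those Group Policy settings write documented Windows Update policy registry values on each client. The WSUS URL below is an assumption (8530 is the default WSUS HTTP port):

```
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d "http://dc1.example.lan:8530" /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d "http://dc1.example.lan:8530" /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f
```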
Migration Status
- ✓ Pre-migration analysis complete
- ✓ 4 standalone Proxmox servers provisioned
- ✓ 6-node Proxmox cluster provisioned (NetLab environment)
- ✓ Domain Controllers migrated (DC1 + DC2 leapfrog)
- ✓ DHCP scopes, DNS records, AD objects verified
- ✓ LibreNMS migrated and operational
- ✓ Netdisco migrated and operational
- ✓ Switchmap migrated and operational
- ✓ SOC stack migrated (Wazuh, Cortex, TheHive, MISP)
- ✓ File server: Samba AD toolkit created and deployed
- ✓ File server: Data migrated via Robocopy
- ✓ DFS namespace removed, replaced with GPO drive maps (X:\ faculty, Y:\ students)
- ✓ UniFi controller: Windows VM → Proxmox LXC
- ✓ SCCM replaced with FOG Project
- ✓ Hyper-V hosts decommissioned
Infrastructure Footprint
Read the full technical writeup: Hyper-V to Proxmox Migration Guide