
Hyper-V to Proxmox Migration

Status: Completed · Origin: Polk State College · Since: 2026.01
Tags: Windows Server · Hyper-V · Proxmox VE · Active Directory · PowerShell · QEMU · Linux

Overview

Leading a critical infrastructure migration to transition production systems from Microsoft Hyper-V to Proxmox VE. This project encompasses Active Directory Domain Controllers and Linux-based network monitoring tools (LibreNMS, Netdisco), requiring different migration strategies based on workload type.

Migration Scope & Strategy

Workload | Strategy | Outcome
Domain Controllers | Leapfrog (DC1 → DC2 → rebuild → swap) | Zero V2V, zero downtime
Linux VMs | VHDX transfer → QCOW2 conversion | Reconfiguration required
File Servers | New Linux VM + Samba AD + Robocopy | Full rebuild (intentional)
UniFi Controller | Backup .unf → Proxmox LXC → restore | Minimal (config backup)
SCCM → FOG Project | Full platform replacement | New tooling (intentional)

Critical Warning

⚠️ NEVER V2V Domain Controllers
Risk: USN Rollback
Result: Permanent AD database corruption and replication failure
Solution: Use the Leapfrog method (new VM + replication)

Pre-Migration: Hyper-V Generation Check

Before creating the Proxmox VM shell, identify the source generation to match BIOS settings. Check via: Hyper-V Manager → VM → Summary Tab → "Generation"

Source | Proxmox BIOS | Machine Type
Gen 1 | SeaBIOS | i440fx
Gen 2 | OVMF (UEFI) | q35 (recommended)

Active Directory: The "Leapfrog" Method

With two existing DCs (DC1 and DC2) on Hyper-V, the strategy was straightforward: transfer all FSMO roles to DC2, rebuild DC1 fresh on Proxmox, transfer roles back, then rebuild DC2 on Proxmox. AD replication handles the data sync. No V2V conversion, no downtime, no risk of USN rollback. The whole process was easier than expected because AD replication just works when you let it.

1. Prep DC2: Transfer all FSMO roles to DC2 (existing Hyper-V). Verify DHCP scopes, DNS records, and AD replication are healthy on DC2.
2. Validate: Confirm DC2 holds all roles and services. Check DHCP failover, DNS zones, AD computer objects, and repadmin /showrepl.
3. Rebuild DC1: Delete the old DC1 and build a fresh Windows Server VM on Proxmox. Promote the new DC1 and let AD replication sync all objects from DC2.
4. Swap Roles: Transfer all FSMO roles back to the new DC1 on Proxmox. Verify DHCP scopes, DNS, and replication are clean.
5. Rebuild DC2: Delete the old DC2 on Hyper-V and build fresh on Proxmox. Promote the new DC2; AD replicates from DC1. Both DCs now run on Proxmox.
PowerShell: FSMO role transfer
Move-ADDirectoryServerOperationMasterRole -Identity "NEW-DC1" `
  -OperationMasterRole SchemaMaster, DomainNamingMaster, `
  PDCEmulator, RIDMaster, InfrastructureMaster
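After each transfer, it's worth verifying where the roles actually landed before touching the next DC. A quick check using standard AD PowerShell cmdlets and repadmin:

```powershell
# Confirm which servers now hold the FSMO roles
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster

# Confirm replication is healthy before and after each swap
repadmin /showrepl
repadmin /replsummary
```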

Standard Linux VMs: V2V Process

For Linux VMs like Netdisco, Switchmap, and LibreNMS, use direct disk conversion.

Phase 1: Create the "Shell" (Proxmox GUI)

Create the VM chassis but do not create a hard drive.

OS: Do not use media (Linux 5.x/6.x kernel)
System: Check QEMU Agent; match BIOS to the source generation
Disks: DELETE the default 32GB scsi0 drive; the list must be empty
CPU/RAM: Right-size resources (e.g., 16 cores → 4 cores)
Network: Bridge vmbr0, Model VirtIO
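The same diskless shell can be created from the Proxmox CLI instead of the GUI. A sketch (VM ID, name, and sizing are examples; --bios and --machine follow the generation table above):

```bash
# Gen 2 source: OVMF (UEFI) + q35. For a Gen 1 source use --bios seabios
# and drop --machine q35 (i440fx is the default).
qm create 102 --name netdisco --memory 8192 --cores 4 \
  --bios ovmf --machine q35 \
  --net0 virtio,bridge=vmbr0 \
  --agent enabled=1 \
  --scsihw virtio-scsi-single
# OVMF also wants a small EFI vars disk, e.g.:
#   qm set 102 --efidisk0 local-lvm:0
# No data disk is attached yet; the imported VHDX becomes the disk later.
```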

Phase 2: Disk Transfer (SCP)

Push the VHDX file from the Hyper-V host to Proxmox.

PowerShell (run on the Windows host):
scp "C:\Path\To\Disk.vhdx" root@PROXMOX_IP:/var/lib/vz/dump/

Phase 3: Import & Attach (Proxmox Shell)

Import the VHDX and attach it to the VM ID. Proxmox converts the disk to the target storage's native format during import (QCOW2 on file-based storage, raw on LVM-thin).

Bash (Proxmox host):
# Syntax: qm importdisk <vmid> <source_path> <storage_id>
qm importdisk 102 /var/lib/vz/dump/Netdisco.vhdx local-lvm
1 Hardware → Double-click Unused Disk 0 → Add as SCSI
2 Options → Boot Order → Enable and prioritize scsi0
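Steps 1 and 2 can also be done from the CLI. A sketch assuming the import landed as Unused Disk 0 on VM 102 (the volume name below is what importdisk typically produces on local-lvm; check the Hardware tab for the exact name):

```bash
# Attach the imported disk as scsi0 (importdisk parks it as an unused disk)
qm set 102 --scsi0 local-lvm:vm-102-disk-0

# Make the new disk first in the boot order
qm set 102 --boot order=scsi0
```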

Post-Migration Hiccups & Fixes

Loss of Network Connectivity
Symptom: VM boots but cannot ping gateway
Cause: Interface name change (eth0 → ens18)
Fix: Update /etc/netplan/*.yaml with new interface name, then run netplan apply
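For reference, a minimal netplan file after the rename. The interface name matches what Proxmox's VirtIO NIC usually enumerates as; addressing here is an example, adjust to the VM:

```yaml
# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    ens18:                      # was eth0 on Hyper-V
      dhcp4: false
      addresses: [10.0.10.25/24]
      routes:
        - to: default
          via: 10.0.10.1
      nameservers:
        addresses: [10.0.10.5, 10.0.10.6]
```

Run netplan try first if the VM is remote; it rolls back automatically if the new config cuts you off.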
Missing QEMU Guest Agent
Symptom: Proxmox shows no IP; Shutdown button doesn't work
Cause: Agent not installed on migrated VM
Fix: apt install qemu-guest-agent && systemctl enable --now qemu-guest-agent

Application-Specific Troubleshooting

LibreNMS UI 502 Bad Gateway or Poller Failures
Cause: Permissions broken during transfer
Fix: Run the validator as the librenms user: sudo su - librenms, then ./validate.php (chaining with && runs the script in the wrong shell, after su exits)
💡 Follow script output explicitly (pip3 install, lnms migrate, chown commands)
Netdisco Services fail to start; DB connection refused
Cause: Multiple configuration issues
Fixes:
  • DB Host: Edit ~/environments/deployment.yml, change IP to localhost
  • Cookie Crash: Add session_cookie_key: 'random_string_1234' to deployment.yml
  • Manual Start: sudo su - netdisco, then ~/bin/netdisco-backend start and ~/bin/netdisco-web start
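The relevant deployment.yml fragments look roughly like this (key names follow Netdisco's config layout; the values are examples):

```yaml
# ~/environments/deployment.yml (excerpt)
database:
  name: 'netdisco'
  host: 'localhost'            # was the old Hyper-V VM's IP
  user: 'netdisco'
  pass: 'changeme'

# Without this key, the web frontend can crash on session handling
session_cookie_key: 'random_string_1234'
```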

File Server: Samba AD Migration

The legacy environment had multiple Windows 10 machines acting as file servers in a "split-brain" configuration: files scattered across machines with no centralized management, inconsistent permissions, and no proper backup strategy. I built an open-source automation toolkit to solve this.
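The data itself moves with Robocopy from the old Windows shares to the new Samba AD file server. A hedged sketch (server names, share paths, and log location are examples):

```powershell
# Mirror data and NTFS ACLs to the new Samba share, retry transient
# failures briefly, use multithreaded copy, and log everything
robocopy \\OLD-FS\Faculty \\NEW-FS\Faculty /MIR /COPYALL `
  /R:2 /W:5 /MT:16 /LOG:C:\Logs\faculty-migration.log /TEE
```

/MIR deletes extras on the destination, so run it against an empty share first and re-run it for incremental passes during cutover.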

UniFi Controller: Windows VM to Proxmox LXC

The UniFi controller was running on a Windows 11 VM hosted on Hyper-V. There was no SSH access to the Hyper-V host, so managing the controller meant RDP'ing into the Hyper-V server, then logging into the Windows VM from there. Not exactly efficient.

The migration was clean: export the UniFi backup (.unf file) from the Windows controller, spin up a new LXC container on Proxmox using the official UniFi container template, upload the backup file, and restore. All WAP configurations, SSIDs, client data, and network settings came over intact. WiFi was back up and running on Linux within minutes.

1. Backup from Windows: UniFi Controller → Settings → Backup → Download the .unf file
2. Create Proxmox LXC: Official UniFi container template, bridged networking, minimal resources
3. Upload & Restore: Upload the .unf backup to the new controller and restore. All WAPs re-adopt automatically.
💡 Moving from a full Windows 11 VM to an LXC container cut resource usage dramatically. The UniFi controller doesn't need a whole OS instance. An LXC container is the right tool for the job.

SCCM Replacement: FOG Project

Microsoft SCCM (System Center Configuration Manager) was the existing imaging and deployment solution, but it's heavy, expensive to license, and tightly coupled to Windows infrastructure. As part of the migration away from Microsoft dependencies, I replaced it with FOG Project, an open-source network cloning and management solution.

Before: Microsoft SCCM
  • Windows Server dependency
  • SQL Server required
  • Per-device licensing
  • Complex infrastructure
After: FOG Project
  • Linux-based (Proxmox VM)
  • Built-in database
  • Free and open source
  • PXE boot + web UI

Golden Image Pipeline

Each classroom has different hardware (different Dell models from different building renovations), so I build a separate golden image for each room on a physical machine with matching hardware. You cannot build golden images on VMs and expect drivers and chipset support to carry over to different physical hardware.

Step 1: Install + Debloat Clean Windows 11 install on a physical reference machine matching the target classroom hardware, then run Chris Titus Tech's Windows Utility to strip bloatware (Candy Crush, Spotify, Xbox, telemetry). Handles both installed and staged provisioned Appx packages, which is critical because leftover staged packages cause silent Sysprep failures.
Step 2: Sysprep + Shutdown Run sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\Panther\unattend.xml. The unattend handles BypassNRO (Windows 11's forced internet requirement) and automates OOBE after deployment. Machine shuts down. Do not power it back on.
Step 3: FOG Capture Schedule a capture task in FOG, PXE boot the machine. FOG captures the sysprepped image frozen at OOBE. On deployment, unattend.xml automates setup, FOG service agent kicks in, and AD auto-join handles domain membership. No manual touch at the workstation.
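The "staged provisioned packages" from Step 1 can also be audited and removed by hand before Sysprep, using the built-in DISM cmdlets (the name filter below is an example):

```powershell
# List staged (provisioned) packages; half-removed ones trip Sysprep
Get-AppxProvisionedPackage -Online | Select-Object DisplayName

# Remove specific offenders, e.g. anything Xbox or Zune related
Get-AppxProvisionedPackage -Online |
  Where-Object DisplayName -Match 'Xbox|Zune' |
  Remove-AppxProvisionedPackage -Online
```

Running the list command again afterward confirms the reference image is clean before the sysprep /generalize pass.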

Deployment Architecture

The FOG agent runs silently on every workstation, communicating back to the FOG server through a dedicated fog-service Active Directory service account. When a machine needs reimaging, it PXE boots using snponly.efi, which DHCP points to the FOG server.

Per-Classroom Images Separate images for each room (different drivers, hardware configs, software packages)
AD Integration fog-service AD account authenticates the FOG agent on each workstation for background communication
PXE Boot Chain DHCP Option 66/67 points to FOG server, workstations boot snponly.efi for UEFI network boot
Deploy Machine pulls its assigned classroom image, reimages unattended, joins AD automatically
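On a Windows DHCP server, the boot chain above comes down to two scope options, settable with the DhcpServer module (scope and server IP are examples):

```powershell
# Point PXE clients at the FOG server (option 66) and the UEFI
# network boot file (option 67)
Set-DhcpServerv4OptionValue -ScopeId 10.0.20.0 -OptionId 66 -Value "10.0.20.10"
Set-DhcpServerv4OptionValue -ScopeId 10.0.20.0 -OptionId 67 -Value "snponly.efi"
```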

Host Management

Every workstation across all classrooms is registered in FOG via CSV import (hostname + MAC address). Each host is assigned to its classroom group with the correct image. When it's time to reimage a room, select the group, schedule a deploy task, and FOG uses Partclone to push the image over the network. Partclone only writes used blocks, so a 60GB Windows install on a 256GB drive doesn't transfer 256GB of data.

CSV Import Bulk register all workstations: hostname, MAC address, classroom group, assigned image
Group Assignment Each classroom is a FOG group with its own image. Deploy to 30 machines with one click.
Partclone Deploy Block-level imaging that only transfers used sectors. Faster than full-disk cloning and supports NTFS, ext4, and more.
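The import file itself is just hostname/MAC pairs; an illustrative fragment (the exact column layout depends on the FOG version, so check the web UI's import template):

```csv
LAB101-PC01,AA:BB:CC:DD:EE:01
LAB101-PC02,AA:BB:CC:DD:EE:02
LAB102-PC01,AA:BB:CC:DD:EE:11
```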
⚠️ If Sysprep fails (usually leftover staged Appx packages), you cannot simply run it again on the same installation. Always debloat thoroughly before running Sysprep, or you'll be rebuilding the reference machine from scratch.

Automated Patch Management: WSUS

With FOG handling imaging and deployment, the next automation target was patching. I set up Windows Server Update Services (WSUS) on DC1 to centralize update distribution across the campus network.

Catalog Pruning Updates limited to Windows 11 and Windows Server 2025 only. No legacy OS bloat, faster syncs, smaller storage footprint.
3:00 AM Nightly Sync + 4-Day Deferral WSUS pulls from Microsoft CDN overnight. A 4-day deferral policy lets the broader ecosystem catch bad patches before they reach our network.
Group Policy Targeting Workstations check in with WSUS automatically, download approved updates, and install during maintenance windows.
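Under the hood, the WSUS Group Policy boils down to a handful of registry values in the WindowsUpdate policy key. Shown here as a sketch of what each client effectively receives (server URL is an example):

```powershell
# What the WSUS GPO effectively sets on each workstation
$wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
New-Item -Path "$wu\AU" -Force | Out-Null
Set-ItemProperty -Path $wu -Name WUServer -Value 'http://dc1.campus.local:8530'
Set-ItemProperty -Path $wu -Name WUStatusServer -Value 'http://dc1.campus.local:8530'
Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1
```

In practice these come from the GPO rather than a script; the registry view is just useful when verifying why a workstation isn't checking in.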

Between FOG for imaging and WSUS for patching, the full workstation lifecycle is automated. A new machine gets imaged via PXE, joins the domain, picks up its WSUS policy via Group Policy, and stays patched without manual intervention.

Migration Status

Migration complete
  • Pre-migration analysis complete
  • 4 standalone Proxmox servers provisioned
  • 6-node Proxmox cluster provisioned (NetLab environment)
  • Domain Controllers migrated (DC1 + DC2 leapfrog)
  • DHCP scopes, DNS records, AD objects verified
  • LibreNMS migrated and operational
  • Netdisco migrated and operational
  • Switchmap migrated and operational
  • SOC stack migrated (Wazuh, Cortex, TheHive, MISP)
  • File server: Samba AD toolkit created and deployed
  • File server: Data migrated via Robocopy
  • DFS namespace removed, replaced with GPO drive maps (X:\ faculty, Y:\ students)
  • UniFi controller: Windows VM → Proxmox LXC
  • SCCM replaced with FOG Project
  • Hyper-V hosts decommissioned

Infrastructure Footprint

4 Standalone Proxmox Servers Production workloads: DCs, monitoring, file server, imaging, WiFi, SOC stack
6 Node Proxmox Cluster NetLab environment for hands-on student lab exercises
10 Total Proxmox Hosts All running on open-source infrastructure, $0 in hypervisor licensing

Read the full technical writeup: Hyper-V to Proxmox Migration Guide