Migrating VMs from VMware to Proxmox

This chapter walks through the full process: exporting VMs from VMware/ESXi, converting them, importing them into Proxmox, and making them boot correctly.

Chapter 6 covers the conceptual differences. This chapter is the hands-on procedure.


Overview: three migration paths

The three paths at a glance:

Path A — Manual: export an OVA/OVF from ESXi, convert the VMDK with qemu-img (to qcow2 or raw), then qm import into Proxmox. Full control, works with any ESXi version. Use it when you have OVA exports or ESXi access is limited.

Path B — virt-v2v: pull the VM's disks directly from ESXi; conversion and driver fixes are applied automatically. Use it when you want automation and driver fixes handled for you.

Path C — SAN reuse: re-present the SAN LUN to Proxmox (VMFS on the LUN becomes LVM on the same LUN). No bulk data copy, fastest cutover. Use it when VMs live on SAN and you can re-zone the LUNs to Proxmox.

Pre-migration checklist

Before you touch anything:

  • Document every VM: name, OS, CPU, RAM, disk size, NIC count, IP addresses (a scripted inventory sketch follows this list)
  • Check VMware Tools version — if missing, install before migration
  • Note any RDM disks — these need special handling (see Path C)
  • Verify Proxmox storage has enough free space (plan for 1.2× the used disk space)
  • Check Proxmox node can reach the VMware ESXi host or vCenter (for virt-v2v)
  • Confirm OS is supported: most Linux distros and Windows Server 2012+ work fine
  • For Windows: download the VirtIO driver ISO in advance
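For the "document every VM" step, a script beats a spreadsheet once you're past a handful of VMs. A minimal sketch, assuming the govc CLI (VMware's govmomi command-line client) is installed; the vCenter address, credentials, and datacenter path below are placeholders, not part of the migration tooling itself:

# Hypothetical inventory pass with govc
export GOVC_URL=https://vcenter.example.com
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='REPLACE-ME'
export GOVC_INSECURE=1                 # skip TLS verification (lab use only)

# List every VM in the datacenter, then dump config and attached devices
for vm in $(govc ls /Datacenter/vm); do
  govc vm.info "$vm"                   # name, OS, CPU, RAM, IPs
  govc device.ls -vm "$vm"             # NICs and disks
done

VM names containing spaces will break the simple for-loop; use a while-read loop in that case.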

Path A — Manual: OVA export + qemu-img convert

Step 1 — Export the VM from VMware

From the vCenter GUI: right-click the VM → Template → Export OVF Template. Older clients offer an "Include image files in OVF package (OVA)" checkbox; newer ones download separate .ovf and .vmdk files, which qm importovf (Step 8) handles directly.

From the CLI with ovftool (faster for large VMs; run it from any machine that has ovftool installed and can reach the host — ovftool is not present on ESXi itself):

ovftool vi://user:password@esxi-host/vm-name /tmp/myvm.ova

You’ll get a .ova file — this is a tar archive containing .ovf (config XML) and .vmdk (disk image).

Step 2 — Extract the VMDK

# Unpack the OVA
tar xf myvm.ova

# You'll see files like:
# myvm.ovf
# myvm-disk1.vmdk
# myvm-disk1-flat.vmdk   (raw data extent; present only if you copied files straight from a datastore — OVA exports use stream-optimized VMDKs instead)
ls -lh *.vmdk

Step 3 — Convert VMDK to qcow2 or raw

# Convert to qcow2 (recommended — portable, supports snapshots)
qemu-img convert -f vmdk -O qcow2 myvm-disk1.vmdk myvm-disk1.qcow2

# Or convert to raw (faster for LVM-thin storage)
qemu-img convert -f vmdk -O raw myvm-disk1.vmdk myvm-disk1.raw

# Check the result
qemu-img info myvm-disk1.qcow2

For large disks, add -p to see progress: qemu-img convert -p -f vmdk -O qcow2 ...
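For multi-disk VMs, a loop keeps the conversions uniform. A small sketch; the filename pattern is an assumption, so adjust it to what tar actually unpacked:

# Convert every VMDK in the directory, skipping raw -flat extents
for disk in *.vmdk; do
  case "$disk" in *-flat.vmdk) continue ;; esac   # the descriptor references the flat file
  qemu-img convert -p -f vmdk -O qcow2 "$disk" "${disk%.vmdk}.qcow2"
done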

Step 4 — Create a blank VM in Proxmox

# Create VM with no disk (we'll attach it manually)
qm create 200 --name myvm --memory 4096 --cores 4 --net0 virtio,bridge=vmbr0

Or use the GUI: Datacenter → node → Create VM — skip the disk step.

Step 5 — Import the disk

# Import qcow2 into local-lvm storage
qm importdisk 200 myvm-disk1.qcow2 local-lvm

# Import raw disk
qm importdisk 200 myvm-disk1.raw local-lvm

# The command prints the new disk ID, e.g.: vm-200-disk-0

Step 6 — Attach the imported disk to the VM

# Attach as SCSI disk (recommended for Linux)
qm set 200 --scsi0 local-lvm:vm-200-disk-0

# Set boot order
qm set 200 --boot order=scsi0

# Add a CD drive (needed for VirtIO drivers on Windows)
qm set 200 --ide2 none,media=cdrom

Step 7 — Attach VirtIO drivers for Windows VMs

VMware guests run VMXNET3 (network) and PVSCSI (storage) drivers; Proxmox presents VirtIO devices instead. A Windows guest will not boot from a VirtIO disk until the VirtIO storage driver is installed.

# Download VirtIO ISO on the Proxmox node
wget -O /var/lib/vz/template/iso/virtio-win.iso \
  https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso

# Attach it to the CD drive created in Step 6
qm set 200 --ide2 local:iso/virtio-win.iso,media=cdrom

Boot the VM, install drivers from the ISO, then shut it down and switch to VirtIO NICs:

qm set 200 --net0 virtio,bridge=vmbr0

Step 8 — Import OVF directly (alternative)

If you have the .ovf + .vmdk separately, qm importovf handles everything:

qm importovf 201 myvm.ovf local-lvm

This creates the VM, sets CPU/RAM from the OVF, and imports all disks in one step.
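The OVF is plain XML, so you can sanity-check what qm importovf will read before running it. The element names come from the OVF standard's hardware allocation section:

# Show hardware allocations (CPU count, memory, disks) in the OVF
grep -E 'ElementName|VirtualQuantity' myvm.ovf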


Path B — Automated: virt-v2v

virt-v2v is a tool that connects to ESXi, copies the VM, converts the drivers, and writes output that a short qm import then brings into Proxmox (see below). It handles Windows driver injection automatically.

Install virt-v2v on the Proxmox node

apt install virt-v2v nbdkit

Migrate a Linux VM directly

virt-v2v \
  -ic vpx://administrator@vcenter.example.com/Datacenter/esxi-host \
  -it vddk \
  -io vddk-libdir=/opt/vmware-vix-disklib \
  -io vddk-thumbprint=XX:XX:...:XX \
  -o local -os /var/lib/vz/images/100 \
  "MyLinuxVM"

The thumbprint is the ESXi host's SSL SHA-1 fingerprint; current virt-v2v releases require it (via -io) whenever -it vddk is used.

For simpler setups without VDDK (slower but no VMware SDK needed):

virt-v2v \
  -ic esx://root@esxi-host/?no_verify=1 \
  -o local -os /var/lib/vz/images/100 \
  "MyLinuxVM"

What virt-v2v does automatically

  • Copies all VM disks over the network
  • Converts VMDK to raw/qcow2
  • Removes VMware Tools
  • Installs VirtIO drivers (for Windows: injects drivers into the disk offline)
  • Reconfigures GRUB for KVM (if needed)
  • Outputs a ready-to-boot libvirt XML or local disk

Import the output into Proxmox

After virt-v2v finishes, import the disk and create the VM:

# Find the output disk
ls /var/lib/vz/images/100/

# Create VM and import
qm create 100 --name mylinuxvm --memory 4096 --cores 4
qm importdisk 100 /var/lib/vz/images/100/MyLinuxVM-sda local
qm set 100 --scsi0 local:100/vm-100-disk-0.raw --boot order=scsi0

Path C — SAN LUN reuse

If your VMs live on SAN LUNs, you may be able to re-present the same LUN to Proxmox — completely skipping the data copy.

Proxmox cannot read VMFS, so re-presenting a VMFS-formatted LUN does not give you access to the VMs stored on it. Migrate the VM data off first (Path A or B), then reformat the LUN for Proxmox — and never wipe a LUN until that data is confirmed copied.

Strategy for SAN reuse

1. Use Path A or B to copy VM data off the LUN to a temporary location
2. Shut down all VMware VMs using the LUN
3. Remove the LUN from VMware (unmask / unzone it from ESXi)
4. Present the LUN to Proxmox (mask / zone it to Proxmox nodes)
5. Wipe VMFS: wipefs -a /dev/mapper/mpathX
6. Create LVM: pvcreate → vgcreate → add as Proxmox storage (see the sketch after this list)
7. Import VMs onto the now-clean LUN

This approach gives you maximum storage reuse. The downtime window is only steps 2–7 — typically under an hour for a pre-planned cutover.
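For steps 5 and 6, a minimal sketch; mpathX, the volume group name san-vg, and the storage ID san-lvm are placeholders for your environment:

# Double-check what's on the LUN first (lists signatures without erasing)
wipefs /dev/mapper/mpathX

# Step 5: wipe the VMFS signature
wipefs -a /dev/mapper/mpathX

# Step 6: create LVM and register it as Proxmox storage
pvcreate /dev/mapper/mpathX
vgcreate san-vg /dev/mapper/mpathX
pvesm add lvm san-lvm --vgname san-vg --content images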


Post-migration: making VMs boot correctly

Linux VMs

Most Linux VMs just work. If the VM boots but can’t find its root filesystem:

# Boot into rescue mode, then:
# 1. Update /etc/fstab if it referenced VMware device names
blkid                          # get new UUIDs
vi /etc/fstab                  # update any /dev/sdX entries to UUID=...

# 2. Rebuild initramfs with VirtIO modules
update-initramfs -u -k all     # Debian/Ubuntu
dracut --force                 # RHEL/Rocky

# 3. Check GRUB references the right device
grep -r 'root=' /boot/grub*
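As a concrete example of the fstab change (the UUID below is made up; take yours from blkid):

# Before (device name assigned under VMware):
#   /dev/sda1                                  /  ext4  defaults  0 1
# After (stable UUID that survives the controller change):
#   UUID=3f1b2c4d-9e0a-4b7d-8c21-0123456789ab  /  ext4  defaults  0 1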

Windows VMs

The most common issue: Windows BSODs on first boot because the SCSI driver changed.

Fix — inject VirtIO drivers before booting:

If using virt-v2v, this is automatic. For manual migrations:

  1. Attach the VirtIO ISO to the VM in Proxmox
  2. Boot the VM — it will BSOD
  3. Boot from Windows ISO in repair mode
  4. Open command prompt: drvload D:\vioscsi\2k19\amd64\vioscsi.inf
  5. Reboot

Or, easier: before migrating, change the SCSI controller to IDE in VMware, migrate, boot in Proxmox, install the VirtIO drivers, then move the disk to VirtIO SCSI (a command sketch follows).
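On the Proxmox side, that temporary-IDE dance looks roughly like this (a sketch; the disk and storage names are the ones from Step 6):

# Boot once with the disk on IDE so Windows starts on its stock driver
qm set 200 --ide0 local-lvm:vm-200-disk-0 --boot order=ide0

# ...boot, install the VirtIO drivers from the attached ISO, shut down...

# Detach from IDE (the disk becomes unused0), reattach as VirtIO SCSI
qm set 200 --delete ide0
qm set 200 --scsi0 local-lvm:vm-200-disk-0 --boot order=scsi0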

Network driver changes

VMware device    Proxmox equivalent   Notes
VMXNET3 (NIC)    virtio               Better performance on Proxmox
E1000 (NIC)      e1000                Works but slower; change to virtio after migration
PVSCSI (SCSI)    virtio-scsi          Best performance on Proxmox

After migration, update NIC in Proxmox:

# Change to virtio NIC
qm set 100 --net0 virtio,bridge=vmbr0,macaddr=XX:XX:XX:XX:XX:XX

Keep the same MAC address if the guest OS has IP/license tied to it.
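If you still have the VM's .vmx file (or its exported .ovf), the original MAC is recorded there; key names vary with the NIC index, so this grep is deliberately loose:

# Pull the original MAC from VMware's config file
grep -i 'ethernet0.*ddress' myvm.vmx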


Removing VMware Tools from Linux guests

# Check if installed
vmtoolsd --version 2>/dev/null || echo "not running"

# Debian/Ubuntu
apt remove open-vm-tools open-vm-tools-desktop

# RHEL/Rocky
dnf remove open-vm-tools

# If installed from tar (legacy):
/usr/bin/vmware-uninstall-tools.pl

Then install QEMU guest agent:

apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

Enable in Proxmox:

qm set 100 --agent 1
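With the agent enabled on both sides, verify the channel end to end:

# Ping the agent through QEMU (fails if it isn't running in the guest)
qm agent 100 ping

# Ask the guest for its interfaces; confirms the IPs survived migration
qm agent 100 network-get-interfaces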

Migrating SAN-attached VMs (RDM → disk passthrough)

If a VMware VM uses RDM to access a LUN directly:

  1. Identify the LUN WWN from VMware storage adapter view
  2. Re-zone/mask the LUN from ESXi to Proxmox node (coordinate with SAN admin)
  3. Verify Proxmox sees it:

     multipath -ll                    # find the mpath device
     ls /dev/disk/by-id/ | grep wwn

  4. Attach to VM:

     qm set 200 --scsi1 /dev/disk/by-id/wwn-0x5000c50015ea71aa,cache=none

No data conversion needed — the guest OS reads the same raw blocks.
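Inside the guest, confirm the passed-through LUN is visible with the expected size (the device name is whatever the guest assigns):

# From the guest OS: the LUN shows up as an ordinary SCSI disk
lsblk
ls -l /dev/disk/by-id/ | grep -i scsi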


Migration tracking table

Useful format for planning a large migration:

VM Name   OS                Size   Path   Exported   Converted   Imported   Tested   DNS updated
web01     Ubuntu 22.04      80G    A
db01      RHEL 9            500G   B      auto       auto        auto       -        -
sql01     Win Server 2019   200G   A      -          -           -          -        -

Quick command reference

# Export OVA from ESXi
ovftool vi://user:pass@host/vmname /tmp/export.ova

# Unpack OVA
tar xf export.ova

# Convert VMDK to qcow2
qemu-img convert -p -f vmdk -O qcow2 disk.vmdk disk.qcow2

# Convert VMDK to raw
qemu-img convert -p -f vmdk -O raw disk.vmdk disk.raw

# Create blank VM
qm create 200 --name myvm --memory 4096 --cores 4 --net0 virtio,bridge=vmbr0

# Import disk to storage
qm importdisk 200 disk.qcow2 local-lvm

# Attach imported disk
qm set 200 --scsi0 local-lvm:vm-200-disk-0 --boot order=scsi0

# Import entire OVF
qm importovf 201 myvm.ovf local-lvm

# List all VMs
qm list

# Start VM
qm start 200

# Open the serial console (requires a serial device configured on the VM)
qm terminal 200

Summary

Task                   Command / tool
Export from VMware     ovftool or vCenter GUI
Convert disk format    qemu-img convert -f vmdk -O qcow2
Create Proxmox VM      qm create or GUI
Import disk            qm importdisk
Import full OVF        qm importovf
Automated migration    virt-v2v
Fix Linux boot         Update /etc/fstab UUIDs, rebuild initramfs
Fix Windows boot       VirtIO drivers from ISO
Remove VMware Tools    apt remove open-vm-tools
Install guest agent    apt install qemu-guest-agent
SAN LUN passthrough    qm set --scsi1 /dev/disk/by-id/wwn-...