VMware vs Proxmox: Storage Concepts

If you’ve managed VMware/ESXi and you’re moving to Proxmox, the good news is that everything you rely on still exists — it just has a different name and, often, a different shape. This chapter maps each VMware storage concept to its Proxmox equivalent.

This chapter is about understanding the conceptual differences. Chapter 7 covers the actual migration steps.


The core difference in philosophy

VMware bundles storage management into vSphere: VMFS, datastores, and RDM are proprietary constructs tightly coupled to ESXi.

Proxmox uses the Linux storage stack directly: LVM, ZFS, Ceph, NFS, iSCSI — all standard tools you can use from the command line without Proxmox at all.
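
Because these are standard Linux tools, you can inspect Proxmox storage with the same commands you would use on any Linux box. A quick sketch (only the tools for the backends you actually use will be present):

# LVM: volume groups and logical volumes
vgs
lvs

# ZFS: pools and datasets
zpool status
zfs list

# Ceph: cluster health
ceph status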


Side-by-side comparison

Datastore → Storage
  VMware: named storage bucket where VM disks live, e.g. "datastore1" on VMFS or NFS
  Proxmox: configured in /etc/pve/storage.cfg, e.g. "local-lvm", "ceph-pool", "nfs-backup"

VMFS → LVM-thin / ZFS / ext4 / xfs
  VMware: cluster-aware block filesystem; proprietary, ESXi-only, on raw block devices
  Proxmox: standard Linux storage; open, portable, works on any Linux

VMDK → qcow2 / raw
  VMware: virtual machine disk format; can be thick (pre-allocated) or thin (lazy)
  Proxmox: qcow2 = thin, snapshots, portable; raw = thick/fast, no overhead

RDM (Raw Device Mapping) → Disk passthrough
  VMware: pass a SAN LUN directly to a VM; the VM sees a raw block device, not a VMDK
  Proxmox: qm set <vmid> --scsi1 /dev/disk/by-id/...; the VM gets raw block device access

vSAN → Ceph
  VMware: distributed storage from local disks; built into vSphere, proprietary
  Proxmox: distributed storage, fully integrated; open source, built into the Proxmox GUI

Storage vMotion → qm move_disk
  VMware: move a VM's disk to a different datastore live; requires a vMotion license
  Proxmox: qm move_disk <vmid> scsi0 <storage>; built in, works online

VMware Tools → QEMU Guest Agent
  VMware: guest agent for heartbeat, freeze, snapshots; closed source, VMware-specific package
  Proxmox: apt install qemu-guest-agent; open source, packaged in every major distro

Datastores → Proxmox Storage

In VMware, a datastore is where VM disks live. Proxmox uses the same concept but calls it storage, and it’s configured in /etc/pve/storage.cfg.
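
For reference, a freshly installed node ships with entries like these (a sketch of the defaults; your storage names and paths will vary):

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images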

VMware datastore types and their Proxmox equivalents

VMware Proxmox Notes
VMFS on local disk dir or lvmthin Local block storage
VMFS on SAN/iSCSI lvm over iSCSI Shared block storage from a SAN (LVM-thin is single-node only)
NFS datastore nfs Identical concept, same protocol
vSAN datastore cephfs or rbd (Ceph) Distributed, cluster-wide
vVol No direct equivalent Use LVM-thin or Ceph instead

Checking Proxmox storage config

cat /etc/pve/storage.cfg
pvesm status          # Live status of all storages
pvesm list local-lvm  # List contents of a storage

VMFS → LVM-thin (the most common switch)

VMFS is a cluster filesystem that lets multiple ESXi hosts share the same raw block device. LVM-thin on Proxmox is the closest equivalent for a single node. For cluster-shared block storage, use Ceph (or plain LVM on a shared SAN LUN, which is cluster-safe but lacks snapshots).

What VMFS gave you

  • Multiple VMs stored on one block device
  • VM files named <name>.vmdk, <name>-flat.vmdk
  • Thin-provisioned disks that grow as data is written
  • Snapshot capability

What LVM-thin gives you

  • Multiple VMs stored in one LVM thin pool
  • VM disks are LVM thin LVs (e.g., vm-100-disk-0)
  • Thin-provisioned — only used blocks consume space
  • Snapshot capability via LVM

# Check thin pool usage on Proxmox (vg_name is your volume group, usually "pve")
lvs -o lv_name,pool_lv,data_percent,metadata_percent vg_name
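
Creating a new thin pool and registering it as storage takes two commands. A sketch, assuming the default "pve" volume group (the pool name vmdata is illustrative):

# Create a 100G thin pool inside the existing "pve" volume group
lvcreate -L 100G -T pve/vmdata

# Register it with Proxmox as storage "vmdata"
pvesm add lvmthin vmdata --vgname pve --thinpool vmdata --content images,rootdir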

VMDK → qcow2 and raw

VMware stores VM disks as .vmdk files. Proxmox uses two formats:

Format VMware equivalent When to use
raw Thick-provisioned VMDK Maximum performance, no overhead
qcow2 Thin-provisioned VMDK Flexible, supports snapshots, portable

When a VM disk lives in an LVM-thin storage, even a raw format disk gets thin-provisioning benefits from LVM — you get raw performance AND thin allocation.
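
Both formats can be inspected and converted with qemu-img, the same tool Proxmox uses under the hood. A sketch (file names are illustrative):

# Show format, virtual size, and actual allocation
qemu-img info vm-100-disk-0.qcow2

# Convert a VMDK to qcow2 (-p prints progress)
qemu-img convert -p -f vmdk -O qcow2 server.vmdk vm-100-disk-0.qcow2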


RDM → Disk Passthrough

In VMware, Raw Device Mapping (RDM) gives a VM direct access to a SAN LUN, bypassing VMFS. This is used for clustered applications (Oracle RAC, Windows Server Failover Clustering) that need to see raw block devices.

In Proxmox, this is disk passthrough — you pass the block device directly to the VM config:

# Find the stable device ID (always use by-id, not /dev/sdX)
ls -l /dev/disk/by-id/ | grep wwn

# Add the raw disk to VM 200
qm set 200 --scsi1 /dev/disk/by-id/wwn-0x5000c50015ea71aa,cache=none

# Verify
qm config 200

Always use /dev/disk/by-id/ paths, not /dev/sdb. Device names like sdb change between reboots; WWN-based IDs are stable.
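
To correlate a kernel name like sdb with its stable WWN before wiring it into a VM, lsblk can print both side by side (a quick sketch):

# Match kernel device names to WWNs and serial numbers
lsblk -o NAME,SIZE,WWN,SERIAL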


vSAN → Ceph

Both are software-defined distributed storage that pools local disks across cluster nodes. Ceph is the open-source equivalent of vSAN.

Feature vSAN Ceph on Proxmox
Minimum nodes 3 3
Data protection FTT (Failures to Tolerate) policy Replication factor (usually 3)
Access vSAN datastore rbd (block) or cephfs (filesystem)
Config location vCenter GUI Proxmox GUI → Ceph section
CLI esxcli vsan ceph status, ceph osd tree

# Check Ceph cluster health from any Proxmox node
ceph status
ceph osd tree
ceph df
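
Creating an RBD pool and exposing it as Proxmox storage is a single step with pveceph. A sketch (the pool name vm-pool is illustrative):

# Create a Ceph pool and register it as RBD storage in one go
pveceph pool create vm-pool --add_storages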

Storage vMotion → qm move_disk

Storage vMotion moves a running VM’s disk from one datastore to another without downtime. Proxmox has qm move_disk:

# Move VM 100's scsi0 disk to ceph-pool storage
qm move_disk 100 scsi0 ceph-pool

# Move and delete the old copy once confirmed
qm move_disk 100 scsi0 ceph-pool --delete 1

The VM stays running throughout: QEMU mirrors writes to both the old and new disk until the copy converges, then switches the VM over to the new disk.
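
If the target storage is file-based, move_disk can also change the disk format in flight. A sketch:

# Move to the "local" dir storage and convert to qcow2 at the same time
qm move_disk 100 scsi0 local --format qcow2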


Thick vs Thin provisioning

VMware distinguishes between Thick (Eager Zeroed), Thick (Lazy Zeroed), and Thin. Proxmox maps to this naturally:

VMware type Proxmox equivalent How
Thin provisioned LVM-thin LV or qcow2 Default on LVM-thin storage
Thick lazy zeroed raw format on dir storage fallocate pre-allocates without zeroing
Thick eager zeroed raw + dd if=/dev/zero Manual zeroing at creation

For most workloads, LVM-thin (raw format on thin pool) gives you the best of both: thin allocation + raw performance.
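
If a workload genuinely needs eager-zeroed behaviour on a dir storage, you can replicate it by hand with the tools from the table above. A sketch (the path follows the standard dir-storage layout; size and VM ID are illustrative):

# Lazy-zeroed analogue: reserve 32G without writing zeros
fallocate -l 32G /var/lib/vz/images/100/vm-100-disk-1.raw

# Eager-zeroed analogue: actually write zeros once, up front (slow)
dd if=/dev/zero of=/var/lib/vz/images/100/vm-100-disk-1.raw bs=1M count=32768 status=progress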


Snapshots

Feature VMware Proxmox
Snapshot mechanism VMFS delta disk (.vmdk) LVM snapshot or qcow2 overlay
VM memory snapshot Yes (suspend state) Yes (with --vmstate)
Snapshot tree Yes, GUI-managed Yes, GUI-managed
Max recommended depth 3 (performance degrades) 3 (same guidance)
Snapshot deletion Consolidate in vCenter qm delsnapshot

# Create a snapshot of VM 100
qm snapshot 100 pre-upgrade --description "Before kernel update"

# List snapshots
qm listsnapshot 100

# Rollback
qm rollback 100 pre-upgrade

# Delete snapshot
qm delsnapshot 100 pre-upgrade

VMware Tools → QEMU Guest Agent

VMware Tools provides heartbeat, guest IP reporting, quiesced snapshots, and graceful shutdown. Proxmox uses the QEMU guest agent for the same functions.

Linux guests:

apt install qemu-guest-agent    # Debian/Ubuntu
dnf install qemu-guest-agent    # RHEL/Rocky
systemctl enable --now qemu-guest-agent

Windows guests: Install the VirtIO drivers ISO from Proxmox (includes the guest agent MSI).

Enable in Proxmox:

qm set 100 --agent 1

Or in the GUI: VM → Options → QEMU Guest Agent → enabled.
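
Once the agent is installed and the VM has been power-cycled, you can verify it responds (a sketch for VM 100):

# Exits silently (status 0) when the agent answers
qm agent 100 ping

# Ask the guest for its IP addresses, like VMware Tools IP reporting
qm agent 100 network-get-interfaces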


Network storage: same protocols, different names

What VMware Proxmox
NFS shared storage NFS datastore nfs storage type
iSCSI SAN iSCSI adapter + VMFS iscsi + lvm storage type
FC SAN FC adapter + VMFS lvm over FC device
SMB/CIFS Not natively supported for VM disks cifs storage type
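
Adding any of these is a one-liner with pvesm. A sketch for an NFS share (server address and export path are illustrative):

# Register an NFS export as storage "nfs-backup"
pvesm add nfs nfs-backup --server 192.168.1.50 --export /export/backup --content backup,images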

Quick mental map

VMware concept          →    Proxmox equivalent
─────────────────────────────────────────────────
Datastore               →    Storage (storage.cfg)
VMFS                    →    LVM-thin or ZFS
VMDK (thin)             →    qcow2 or raw on LVM-thin
VMDK (thick eager)      →    raw + pre-zeroed
RDM                     →    disk passthrough (by-id)
vSAN                    →    Ceph
Storage vMotion         →    qm move_disk
VMware Tools            →    QEMU guest agent
vCenter inventory       →    Proxmox web GUI / pvesh
ESXi host               →    Proxmox node

Next: Chapter 7 — Migrating VMs from VMware to Proxmox