SAN, LUNs and Proxmox

This chapter is about connecting enterprise shared storage to Proxmox and using LVM on top of it. If you already know LVM from previous chapters, this is where it gets applied in the real world.


What is a SAN?

A SAN (Storage Area Network) is a dedicated high-speed network whose only job is carrying storage traffic between servers and storage arrays. It is separate from your regular LAN.

Think of it this way:

  • Your LAN carries emails, web traffic, user data
  • Your SAN carries only one thing: reads and writes to disks

The storage array itself is a box full of fast disks, managed centrally. Instead of each server having its own local disks, they all reach out to the array over the SAN fabric and receive slices of storage — these slices are called LUNs.


What is a LUN?

A LUN (Logical Unit Number) is a slice of storage from the array, presented to a server as if it were a local disk.

From Linux’s perspective, a LUN looks identical to /dev/sda — it’s just a block device. You can run fdisk, pvcreate, mkfs.ext4, or anything else on it. The fact that it lives on a SAN a rack away is invisible to the operating system.

Storage Array                      Linux Server
─────────────────────────────      ─────────────────────────
  Internal disks: 48 × 4TB         "I have a disk /dev/sdb"
  
  LUN 0: 2 TB slice ──────────────►  /dev/sdb  (to the OS it's just a disk)
  LUN 1: 500 GB slice ────────────►  /dev/sdc
  LUN 2: 4 TB slice ──────────────►  /dev/sdd
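
To see this from the Linux side, a SCSI listing tool such as lsscsi prints each LUN with its host:channel:target:lun address — the last number in the brackets is the LUN the array assigned. A minimal sketch; the vendor/product strings and device names below are placeholders:

apt install lsscsi

lsscsi
# [2:0:0:0]  disk  VENDOR  PRODUCT  0001  /dev/sdb
# [2:0:0:1]  disk  VENDOR  PRODUCT  0001  /dev/sdc
# [2:0:0:2]  disk  VENDOR  PRODUCT  0001  /dev/sdd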

The assignment of LUNs to servers is done through:

  • LUN masking — the array decides which server can see which LUN
  • Zoning (FC only) — the fabric switch decides which HBA ports can talk to which storage ports

SAN protocols: iSCSI vs Fibre Channel

Two protocols dominate enterprise SANs:

                    iSCSI                                 Fibre Channel (FC)
Physical network    Standard Ethernet (10GbE, 25GbE)      Dedicated FC switches (4/8/16/32 Gbps)
Cost                Lower — uses existing network gear    Higher — dedicated HBAs and switches
Setup               Software initiator built into Linux   Requires HBA cards
Performance         Very good on 10GbE+                   Excellent, low latency
Common in           SMB, smaller enterprise, homelab      Large enterprise, finance, healthcare
Linux device        /dev/sdX via open-iscsi               /dev/sdX via HBA driver

Both result in the same thing on Linux — a block device you can use with LVM.
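
One way to convince yourself of this is to look at /dev/disk/by-path, where udev encodes how each block device is reached. The exact names vary by distribution and hardware, so the output below is only indicative:

ls -l /dev/disk/by-path/
# ip-192.168.10.100:3260-iscsi-iqn.2020-01.com.storage:target1-lun-0 -> ../../sdb    (iSCSI)
# pci-0000:04:00.0-fc-0x5001438001234567-lun-0 -> ../../sdc                          (FC)

Both symlinks end up pointing at ordinary /dev/sdX devices.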


The full architecture

[Diagram: SAN + Proxmox cluster architecture]

  Storage array: LUN 0 — 2 TB, LUN 1 — 1 TB, LUN 2 — 500 GB, LUN 3 — 4 TB;
  LUN masking controls access; 2× redundant storage controllers.
  Two FC switch fabrics (A and B) provide Path A and Path B.
  Each Proxmox node (node 1 and node 2) has HBA 0 on Fabric A and HBA 1 on Fabric B;
  dm-multipath combines both paths into /dev/mapper/mpath0, and LVM (PV → VG → LV)
  on top of it holds the Proxmox VM disks.

  ⚠ Shared SAN LUNs: each node sees the same LUN — co-ordinate access carefully.
  For shared VM storage across nodes, use Proxmox + iSCSI/LVM with one active node,
  or Ceph for true concurrent shared storage.

Multipath I/O — why it matters and how to set it up

In production, a server connects to the SAN via two separate physical paths — different HBAs, different cables, different switches. This is called multipath and gives you:

  • Redundancy — if one path fails, the other takes over automatically
  • Load balancing — traffic can be spread across both paths

Without multipath configured, Linux sees two separate block devices for the same LUN (/dev/sdb and /dev/sdc might be the same LUN via different paths). Treating them as two independent disks would corrupt data. dm-multipath combines them into one logical device.
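
If you want to confirm that two sdX devices really are the same LUN before enabling multipath, compare their SCSI WWIDs. On Debian the scsi_id helper ships with udev; the WWID below is a placeholder:

/lib/udev/scsi_id --whitelisted --device=/dev/sdb
# 360000000000000001
/lib/udev/scsi_id --whitelisted --device=/dev/sdc
# 360000000000000001    <- same WWID, same LUN, two paths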

Setting up multipath on Proxmox / Debian

# Install
apt install multipath-tools

# Enable on boot
systemctl enable multipathd
systemctl start multipathd

# Check what LUNs are visible
multipath -ll

# Typical output:
# mpatha (360000000000000001) dm-0 VENDOR,PRODUCT
# size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
# |-+- policy='service-time 0' prio=50 status=active
# | `- 2:0:0:1  sdb 8:16  active ready running
# `-+- policy='service-time 0' prio=10 status=enabled
#   `- 3:0:0:1  sdc 8:32  active ready running

The LUN now appears as /dev/mapper/mpatha (or /dev/mapper/mpath0 — the name depends on your config). Always use the multipath device path, not the raw /dev/sdb or /dev/sdc paths.

/etc/multipath.conf — basic setup

defaults {
    user_friendly_names yes    # names like mpatha instead of WWID
    find_multipaths yes
    polling_interval 5
}

blacklist {
    devnode "^sda"             # Don't multipath your boot disk
}

After editing:

systemctl restart multipathd
multipath -ll                  # Verify
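
If you prefer a stable, descriptive name for a particular LUN instead of mpatha, multipath.conf also accepts per-WWID aliases. The WWID and alias below are examples — take the real WWID from multipath -ll:

multipaths {
    multipath {
        wwid   360000000000000001
        alias  san_vmstore         # LUN then appears as /dev/mapper/san_vmstore
    }
}

Restart multipathd again after adding the alias.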

iSCSI with Proxmox — step by step

iSCSI uses your existing Ethernet network. The storage array is the target (server side), your Proxmox host is the initiator (client side).

Step 1 — Install and configure the initiator

apt install open-iscsi

# Set the initiator name (unique per node — change node1 for each host)
echo "InitiatorName=iqn.2024-01.com.proxmox:node1" > /etc/iscsi/initiatorname.iscsi

systemctl enable iscsid
systemctl start iscsid

Step 2 — Discover targets on the SAN

# Replace 192.168.10.100 with your SAN's IP
iscsiadm -m discovery -t sendtargets -p 192.168.10.100

Output:

192.168.10.100:3260,1 iqn.2020-01.com.storage:target1
192.168.10.100:3260,1 iqn.2020-01.com.storage:target2

Step 3 — Log in to a target

iscsiadm -m node -T iqn.2020-01.com.storage:target1 -p 192.168.10.100 --login

# Make it persist across reboots
iscsiadm -m node -T iqn.2020-01.com.storage:target1 -o update -n node.startup -v automatic
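
To confirm the session is actually established, list the active iSCSI sessions:

iscsiadm -m session
# tcp: [1] 192.168.10.100:3260,1 iqn.2020-01.com.storage:target1 (non-flash)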

Step 4 — Verify the LUN appeared

lsblk
# You'll see a new /dev/sdX device — this is your LUN
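
If you plan to define the LVM storage in /etc/pve/storage.cfg by hand (Option A below), the volume group has to exist first; the GUI can usually create it for you when you pick an iSCSI base volume. A minimal sketch, assuming the LUN appeared as /dev/sdb and using vg_san as the name — use the /dev/mapper multipath device instead if multipathing is configured:

pvcreate /dev/sdb            # mark the LUN as an LVM physical volume
vgcreate vg_san /dev/sdb     # create the volume group Proxmox will allocate from
vgs                          # verify the VG and its size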

Proxmox storage types for SAN

Proxmox has several storage backends that work with SAN LUNs. These are configured in /etc/pve/storage.cfg or the Proxmox web GUI under Datacenter → Storage → Add.

Option A: iSCSI + LVM

Use iSCSI to present the LUN, then put LVM on top to manage the VM disks:

# /etc/pve/storage.cfg

iscsi: san-iscsi
    portal 192.168.10.100
    target iqn.2020-01.com.storage:target1
    content none               # iSCSI itself doesn't store VMs directly

lvm: san-lvm
    base san-iscsi:0.0.0.1000000000000   # references the iSCSI LUN
    vgname vg_san
    content rootdir,images
    shared 0                   # set to 1 if multiple nodes share this VG (Proxmox handles the cluster locking)

Or add via GUI: Datacenter → Storage → Add → LVM
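
The same two entries can also be created from the command line with pvesm, the Proxmox storage manager. The option names mirror the storage.cfg keys, so double-check them against pvesm help on your version; this sketch assumes vg_san already exists (see Step 4) and reuses the example values above:

pvesm add iscsi san-iscsi --portal 192.168.10.100 --target iqn.2020-01.com.storage:target1
pvesm add lvm san-lvm --vgname vg_san --content rootdir,images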

Option B: iSCSI direct (LUNs as raw VM disks)

Each LUN becomes a potential VM disk. No LVM, less flexible:

iscsi: san-raw
    portal 192.168.10.100
    target iqn.2020-01.com.storage:target1
    content images

Option C: LVM-thin over SAN LUN

Best for VM environments — enables thin provisioning and fast snapshots:

lvmthin: san-thin
    thinpool data
    vgname vg_san
    content rootdir,images

LVM-thin is the closest Proxmox equivalent to VMware’s VMFS thin-provisioned datastores. Use it when you want snapshots and clone efficiency.
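
Proxmox expects the thin pool to exist already, and it treats LVM-thin as node-local storage (it cannot be marked shared), so use it on a LUN dedicated to one node. A minimal sketch for creating the pool referenced above — the size is an example:

# Carve ~90% of vg_san into a thin pool named "data"; leave headroom for
# pool metadata and for growing the pool later.
lvcreate -l 90%FREE --thinpool data vg_san
lvs vg_san                   # verify: "data" shows up with thin-pool attributes (twi-...)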


LUN to VM disk — the full stack

SAN LUN (2 TB block device)
  → dm-multipath device (/dev/mapper/mpatha)
  → LVM Physical Volume (pvcreate)
  → Volume Group vg_san (2 TB pool, vgcreate)
  → Logical Volume vm-100-disk-0
  → the Proxmox VM sees it as a virtio disk

Shared SAN storage in a Proxmox cluster

Do not use standard LVM on a SAN LUN that multiple Proxmox nodes access simultaneously. Standard LVM does not have cluster awareness — two nodes writing to the same VG will corrupt it.

Your options for shared SAN in a Proxmox cluster:

Approach                 How it works                                       Use case
One node owns the VG     Only one node has the iSCSI/FC connection active   HA with node pinning
iSCSI + LVM shared=1     LVM with file-based locking via Proxmox cluster    Active/passive HA
Ceph                     Software-defined storage, built into Proxmox       True concurrent shared storage
NFS on top of SAN        SAN presents NFS share, all nodes mount it         Simple, works everywhere

For most enterprise migrations from VMware (where all nodes shared VMFS datastores), Ceph is the closest equivalent to vSAN, and NFS over SAN is the simplest drop-in replacement for a shared VMFS datastore.


Adding SAN storage in the Proxmox GUI

Datacenter → Storage → Add → iSCSI

Field      Value
ID         san-iscsi (your label)
Portal     192.168.10.100 (SAN IP)
Target     Select from dropdown (auto-discovered)
Content    Disk image, Container

Then add LVM on top:

Datacenter → Storage → Add → LVM

Field          Value
ID             san-lvm
Base Storage   san-iscsi
Base Volume    Select the LUN
Volume group   vg_san
Content        Disk image, Container