Proxmox - local storage help needed

I’m running two nearly identical Proxmox 6.4.13 boxes. The config/setup of the first machine looks like this:

pvs

PV         VG  Fmt  Attr PSize    PFree
/dev/sda3  pve lvm2 a--  <837.83g 15.99g

vgs

VG  #PV #LV #SN Attr   VSize    VFree
pve   1   5   0 wz--n- <837.83g 15.99g

lvs

LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
data          pve twi-aotz-- 679.95g             27.25  1.58
root          pve -wi-ao----  96.00g
swap          pve -wi-ao----  32.00g
vm-100-disk-0 pve Vwi-aotz-- 360.00g data        36.70
vm-101-disk-0 pve Vwi-aotz--  60.00g data        88.55

df -h

Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G  9.3M  3.2G   1% /run
/dev/mapper/pve-root   96G   92G  4.3G  96% /
tmpfs                  16G   43M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  16G     0   16G   0% /sys/fs/cgroup
/dev/fuse              30M   16K   30M   1% /etc/pve
/dev/sdd1             3.7T   90G  3.6T   3% /media/backup
/dev/sdb1             280G  318M  279G   1% /media/cache
tmpfs                 3.2G     0  3.2G   0% /run/user/0

And the second one:

pvs

PV         VG  Fmt  Attr PSize    PFree
/dev/sda3  pve lvm2 a--  <837.83g 15.99g

vgs

VG  #PV #LV #SN Attr   VSize    VFree
pve   1   8   0 wz--n- <837.83g 15.99g

lvs

LV                          VG  Attr       LSize   Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
data                        pve twi-aotz-- 679.95g                    32.50  1.84
root                        pve -wi-ao----  96.00g
snap_vm-100-disk-0_vanilla  pve Vri---tz-k 360.00g data vm-100-disk-0
snap_vm-200-disk-0_migriert pve Vri---tz-k   5.00g data vm-200-disk-0
snap_vm-200-disk-0_vanilla  pve Vri---tz-k   5.00g data vm-200-disk-0
swap                        pve -wi-ao----  32.00g
vm-100-disk-0               pve Vwi-aotz-- 360.00g data               59.94
vm-200-disk-0               pve Vwi-a-tz--   5.00g data               55.36

df -h

Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G   50M  3.1G   2% /run
/dev/mapper/pve-root   96G  8.1G   88G   9% /
tmpfs                  16G   43M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  16G     0   16G   0% /sys/fs/cgroup
/dev/fuse              30M   20K   30M   1% /etc/pve
/dev/sdb1             280G  318M  279G   1% /media/cache
/dev/sdc1             2.8T  870G  1.9T  32% /media/backup
tmpfs                 3.2G     0  3.2G   0% /run/user/0

What I don’t understand is the difference in /dev/mapper/pve-root usage: the first machine shows 96% used, the second one only 9%.

Here’s some more info:

On both machines:

cat /etc/fstab

/dev/pve/root / xfs defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=8b87e2b6-43af-4bec-b7e1-a8751abbd6f4 /media/backup xfs defaults,noatime,auto,nofail 0 2
UUID=cb9edf38-9e9e-489e-bd4f-18126347b3d6 /media/cache xfs defaults,noatime,auto,nofail 0 2

and

cat /etc/vzdump.conf

# vzdump default settings

tmpdir: /media/cache
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
#stdexcludes: BOOLEAN
#mailto: ADDRESSLIST
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST
#pigz: N

and:

machine 1:
ncdu / shows: 85.3 GiB [##########] /media

lsblk

NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 838.3G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part
└─sda3                         8:3    0 837.9G  0 part
  ├─pve-swap                 253:0    0    32G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0     7G  0 lvm
  │ └─pve-data-tpool         253:4    0   680G  0 lvm
  │   ├─pve-data             253:5    0   680G  0 lvm
  │   ├─pve-vm--100--disk--0 253:6    0   360G  0 lvm
  │   └─pve-vm--101--disk--0 253:7    0    60G  0 lvm
  └─pve-data_tdata           253:3    0   680G  0 lvm
    └─pve-data-tpool         253:4    0   680G  0 lvm
      ├─pve-data             253:5    0   680G  0 lvm
      ├─pve-vm--100--disk--0 253:6    0   360G  0 lvm
      └─pve-vm--101--disk--0 253:7    0    60G  0 lvm
sdb                            8:16   0 279.4G  0 disk
└─sdb1                         8:17   0 279.4G  0 part /media/cache
sdd                            8:48   0   3.7T  0 disk
└─sdd1                         8:49   0   3.7T  0 part /media/backup
sr0                           11:0    1  1024M  0 rom

machine 2:
ncdu / shows: 867.0 GiB [##########] /media

lsblk

NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 838.3G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part
└─sda3                         8:3    0 837.9G  0 part
  ├─pve-swap                 253:0    0    32G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0     7G  0 lvm
  │ └─pve-data-tpool         253:4    0   680G  0 lvm
  │   ├─pve-data             253:5    0   680G  0 lvm
  │   ├─pve-vm--100--disk--0 253:6    0   360G  0 lvm
  │   └─pve-vm--200--disk--0 253:7    0     5G  0 lvm
  └─pve-data_tdata           253:3    0   680G  0 lvm
    └─pve-data-tpool         253:4    0   680G  0 lvm
      ├─pve-data             253:5    0   680G  0 lvm
      ├─pve-vm--100--disk--0 253:6    0   360G  0 lvm
      └─pve-vm--200--disk--0 253:7    0     5G  0 lvm
sdb                            8:16   0 279.4G  0 disk
└─sdb1                         8:17   0 279.4G  0 part /media/cache
sdc                            8:32   0   2.7T  0 disk
└─sdc1                         8:33   0   2.7T  0 part /media/backup
sr0                           11:0    1  1024M  0 rom

I can’t remember how I increased/added the space on the second machine. Any help is appreciated.

regards,
stefan

@schulzstefan

Hi Stefan

If you’re using XFS with LVM in Proxmox, this might help:

https://wiki.nethserver.org/doku.php?id=userguide:nethserver_and_proxmox#nethserver_in_proxmoxenlarging_your_nethserver_disk

Adapt the drives / paths as needed…
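
The gist of that guide is to extend the root logical volume and then grow the XFS filesystem into it. A minimal sketch with the default Proxmox VG/LV names and an example size, so double-check yours before running anything:

lvextend -L +20G /dev/pve/root   # grow the root LV by 20G (example amount)
xfs_growfs /                     # grow XFS online; takes the mountpoint, not the device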

Good luck!

My 2 cents
Andy

Hi Andy,

thank you for jumping in.

pve-root is 96G on both machines, but the first one is filled to 96% and the second one only to 9%. How can I investigate where and why the first one is nearly full?
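
Would something like this be the right way to see what’s actually on the root filesystem (with -x so du stays on one device and ignores the mounts under /media)?

du -xh --max-depth=2 / | sort -h | tail -20   # biggest directories on the root FS only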

Got it. Stupid error on my part. Here’s the solution:

proxmox mount error

@schulzstefan

Filled the wrong disk… A classic! 🙂
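
For anyone who finds this later: because the backup entry in fstab uses nofail, the box happily boots without the USB disk, and a backup job then writes into the empty /media/backup directory on the root filesystem. Once the disk is mounted again, those files are hidden underneath the mount. A simple guard before a backup job might be (path taken from the fstab above):

mountpoint -q /media/backup || { echo "backup disk not mounted, aborting"; exit 1; }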

My 2 cents
Andy

Yep. A little tricky. Unmounting the (USB) storage under /media/backup did it. After that, changing into the /media/backup directory revealed a large backup file that had been written there while the disk was unmounted. Deleting the file and remounting the USB drive fixed the issue. Everything’s fine again.
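
In commands, roughly what I did (the file name is a placeholder for the actual stray dump):

umount /media/backup               # unmount the USB disk to expose the directory underneath
ls -lh /media/backup               # the stray backup file sitting on the root filesystem
rm /media/backup/<big-dump-file>   # placeholder name for the file I deleted
mount /media/backup                # remount via the fstab entry

A bind mount of / would also have shown the hidden files without unmounting:

mount --bind / /mnt && du -sh /mnt/media/backup && umount /mnt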

regards,
stefan
