It works for NethServer too… A few months ago I had to resize my main VM from 500GB to 900GB. It went flawlessly.
I increased the size with the VM/NS off.
Proxmox shows the new size, but after a reboot I don’t see the free space on the NS VM.
And after another reboot… same size, 50GB, and only 4MiB free:
# pvs
  PV         VG       Fmt  Attr PSize   PFree
  /dev/sda2  VolGroup lvm2 a--  <49.00g 4.00m
# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  VolGroup   1   2   0 wz--n- <49.00g 4.00m
# lvs
  LV      VG       Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao---- 46.99g
  lv_swap VolGroup -wi-ao---- 2.00g
Something weird: if I try to shut down the NS from “Administration | Shutdown” or from a terminal (via SSH or the console in Proxmox), nothing happens.
Then in Proxmox the VM summary shows it as running; I wait a few minutes, refresh the page, and it still shows the VM running.
I need to force it from Proxmox to “really” shut down the NS VM.
Maybe something is broken on my NS?
/edit1: I need to wait almost 5 minutes to see the VM off. I’ll try rebooting the Proxmox host and see if anything changes about the new vdisk size.
/edit2: the size has not changed in NS. I’ll set up and take a NS backup, hopefully enough to do a disaster recovery.
/edit3: searching for this error in Proxmox: “proxmox error vm quit/powerdown failed got timeout”:
there is a suggestion to install acpid.
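For reference, on a CentOS-based NS 7 that should boil down to something like this (a sketch, assuming the yum repos are reachable):

```shell
# Install acpid so the guest reacts to the ACPI power-button
# event Proxmox sends on "Shutdown".
yum -y install acpid
# Start it now and enable it for every boot.
systemctl enable --now acpid
# Should report "active" if the daemon is up.
systemctl is-active acpid
```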
/edit4: Creating another VM for NS… so I can learn about disaster recovery. Now using a 500GB vdisk.
I would say you have to resize the disk under NS now…
Did you activate the Qemu Agent under Proxmox? (Click on the VM > Options > Qemu Agent.) After that, install qemu-guest-agent under NS… I think you need to shut down the VM for it to take effect…
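On NS 7 that is roughly (a sketch; the Options > Qemu Agent switch in Proxmox only takes effect after the VM is powered off completely and started again):

```shell
# Inside the NS 7 guest: install the agent Proxmox talks to.
yum -y install qemu-guest-agent
# Enable it for future boots and start it right away.
systemctl enable --now qemu-guest-agent
```

Then shut the VM down fully and start it again so the virtio serial device gets attached.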
Oops! Not installed… I’ll do it right now.
How can I do that? (Do you have a doc?)
I am testing atm…
I can’t test it with my server, but take a look, I think this could work:
Expanding a LVM partition to fill remaining drive space
Take care and make a working backup before you test it…
Thanks @fausp, I’ll read that.
I wonder if I just crippled my NS: I installed the acpid service a few hours ago, and just now the qemu-guest-agent.
Now I try to shut down the NS from Proxmox and no, the VM is still running.
I forgot to test it
qm agent 100 ping
That test doesn’t give me any output.
/edit: The wiki says: “if the qemu-guest-agent is correctly running in the VM, it will return without an error message.” It’s official, I’m tired.
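When `qm agent 100 ping` hangs like that, it is worth checking inside the guest whether the agent is actually running and whether the virtio channel even exists (a sketch; the device path is the standard QEMU guest agent channel, which only appears after the Qemu Agent option was enabled and the VM was cold-restarted):

```shell
# Is the agent process running inside the guest?
systemctl status qemu-guest-agent
# Does the guest see the virtio serial channel from Proxmox?
# If this file is missing, the VM was not fully powered off and
# started again after enabling the Qemu Agent option.
ls -l /dev/virtio-ports/org.qemu.guest_agent.0
```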
Please test:
Let’s say we have a NS7 VM under Proxmox 5.x and we want to increase the disk size from 500 up to 2000.
1. Click on the NS7-VM > Hardware > Hard Disk (xxx) > Resize disk > 1500 (Proxmox adds the increment, so 500GB + 1500GB = 2000GB)
2. fdisk /dev/sda > p > d > 2 > n > p > 2 > First sector <Enter> > Last sector <Enter> > w
3. pvresize /dev/sda2
4. lvresize -l +100%FREE /dev/VolGroup/lv_root
5. xfs_growfs /dev/VolGroup/lv_root
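The console steps above, collapsed into a non-interactive sketch (assumes the layout from this thread: /dev/sda2 is the only LVM PV and lv_root is XFS; make a working backup first, and walk through fdisk interactively at least once before piping answers into it):

```shell
# Recreate /dev/sda2 over the enlarged disk: print table, delete
# partition 2, new primary partition 2, accept the default first
# and last sector, write. The data survives because the new
# partition starts at the same first sector as the old one.
printf 'p\nd\n2\nn\np\n2\n\n\nw\n' | fdisk /dev/sda
# Tell LVM the physical volume got bigger...
pvresize /dev/sda2
# ...hand all the new free extents to the root LV...
lvresize -l +100%FREE /dev/VolGroup/lv_root
# ...and grow the mounted XFS filesystem to fill it.
xfs_growfs /dev/VolGroup/lv_root
```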
“d” will delete the partition, so the data will be gone?
Testing right now (backup is made); how can I learn if I don’t break things, right?
Yes, but with “n”:
You create a new one that uses the whole space of the disk… Only the partition table entry changes; the data itself stays on disk.
Wow!
# pvresize /dev/sda2
Physical volume "/dev/sda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
# lvresize -l +100%FREE /dev/VolGroup/lv_root
Size of logical volume VolGroup/lv_root changed from 46.99 GiB (12030 extents) to <497.00 GiB (127231 extents).
Logical volume VolGroup/lv_root successfully resized.
# xfs_growfs /dev/VolGroup/lv_root
meta-data=/dev/mapper/VolGroup-lv_root isize=512 agcount=4, agsize=3079680 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=12318720, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=6015, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 12318720 to 130284544
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 497G 14G 484G 3% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.6M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 184M 831M 19% /boot
tmpfs 379M 0 379M 0% /run/user/0
Let’s say we have a NS7 VM under Proxmox 5.x and we want to increase the disk size from 500 up to 2000.
Under the Proxmox GUI:
1. Click on the NS7-VM > Hardware > Hard Disk (xxx) > Resize disk > 1500 (Proxmox adds the increment, so 500GB + 1500GB = 2000GB)
On the NS7 Console:
2. fdisk /dev/sda > p > d > 2 > n > p > 2 > First sector <Enter> > Last sector <Enter> > w
3. Reboot the server if you get this:
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
4. pvresize /dev/sda2
5. lvresize -l +100%FREE /dev/VolGroup/lv_root
6. xfs_growfs /dev/VolGroup/lv_root
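As the warning itself suggests, the reboot in step 3 can usually be avoided by re-reading the partition table in place (a sketch; on CentOS 7 partprobe comes with the parted package):

```shell
# Ask the kernel to re-read /dev/sda's partition table instead
# of rebooting, then continue with the LVM steps.
partprobe /dev/sda
pvresize /dev/sda2
```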
Trying to reboot from inside the NS… nothing
I need to “reset” from proxmox.
It’s up, and the files, folders, users & groups are intact.
Seriously, I barely understand what I’m doing. I was “almost sure” the data would be gone with the “d > 2” (partition delete).
How did this magic happen? (It’s Linux magic and its wizards.)
Hope that helped, I am going to sleep now… Have fun
It helps, a lot.
Thank you and good night.
Marked as solved.
You are welcome…
How do you know the first sector and the last sector?
I am running into this issue.
You don’t have to know them; just accept the defaults fdisk offers and watch the output…
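If you want to verify instead of trusting the defaults, note the Start sector of /dev/sda2 before deleting it; the recreated partition must begin at that same sector, otherwise the data really would be lost (a sketch):

```shell
# Print the partition table; note the "Start" column for sda2.
fdisk -l /dev/sda
# When you recreate the partition, fdisk's default first sector
# is the first free one, which is exactly the old start as long
# as sda2 was the last partition on the disk.
```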