[Solved] How can I add a new Hard Disk and move the /var partition?

[update] @robb suggests simply increasing the vdisk size in Proxmox, and @fausp gives detailed steps to increase the LVM partition inside the NS VM.


**NethServer Version:** 7.5.1804 (beta)
**Base OS:** Proxmox
**Hard disks:** 4 HDDs of 1 TB each with ZFS “RAID 10”

  • How can I add a new virtual disk and mount it as the /var partition, so it can be used to store the soon-to-be-created folders for my users and to add more modules later?
  • Or (I hope not) do I need to re-install the full NethServer and re-create the AD?
  • Or maybe I can back up and restore all my users and groups to accomplish this task?

Of course I can delete and recreate the shared folders.

In case I need to reinstall everything: what is not so clear to me is how to tell NS that I want to use one virtual disk for the root partition and another disk for the /var partition. I really don’t understand how to tame Anaconda during disk setup.


My NS virtual machine status:

Regards

Some docs:

https://pve.proxmox.com/wiki/Storage
https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM)

And then you need to format and mount it in the VM; I found some threads about that:

Be careful, you should know what you’re doing as you may harm your system with these operations.

https://wiki.nethserver.org/doku.php?id=userguide:install_nethserver_on_proxmox

Not necessarily.

That’s an official migration path and it will work, but since you are using virtualization you don’t really have to reinstall.


If you use proxmox, as said by @mrmarkuz, you have to create that disk in proxmox. Then you can mount the new vdisk in your NethServer install as /var/lib/nethserver so your server data (shares and homedirs) will be on a separate vdisk.
But to be honest, since you are using Proxmox, what benefit do you expect to gain by creating an extra vdisk for data? Taking a snapshot and using your usual backup options in NethServer will be at least as safe as having the extra vdisk. Keep in mind that the extra vdisk sits in the same ZFS pool as your VM, so if the ZFS pool breaks it doesn’t matter whether your data is on an extra vdisk or not.
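If you do go the separate-vdisk route, a minimal sketch of the format-and-mount steps might look like this (assumptions: the new vdisk shows up as /dev/sdb inside the VM, and the copy is done while the affected services are stopped; check lsblk first):

lsblk                                            # identify the new, empty disk (assumed /dev/sdb here)
mkfs.xfs /dev/sdb                                # format it with XFS, like the NS root filesystem
mkdir -p /mnt/newdata
mount /dev/sdb /mnt/newdata
rsync -aAX /var/lib/nethserver/ /mnt/newdata/    # copy the existing server data over
umount /mnt/newdata
echo '/dev/sdb  /var/lib/nethserver  xfs  defaults  0 0' >> /etc/fstab
mount /var/lib/nethserver                        # the data is now served from the new vdisk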

Thank you @mrmarkuz @robb :
I have more to learn with your posts.

:thinking: Maybe I wasn’t thinking correctly; I’m under some pressure here to get this server running on the first shot. With a little practice beforehand, of course, but I really want this one to be the production server.

I forgot about LVM, which I hardly ever touch; that’s why I asked how to add the disk in NS.

What if I go for this option instead?
Increase the vdisk size of the VM from 50GB to 500GB (so I can take VM snapshots, etc. in Proxmox).
Can NS detect and use the new size?
Do I need to do something in NS to use the new size?

Something like: lvm-resize-how-to-increase-an-lvm-partition (reading…)

So I will try this:

  • Increase the only vdisk on proxmox for NS from 50GB to 500GB (backup first)
  • read how to resize the lvm partition

Now the question is… the partition to increase is “/dev/mapper/VolGroup-lv_root”, right?

# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   47G   14G   34G  29% /
devtmpfs                      1.9G     0  1.9G   0% /dev
tmpfs                         1.9G     0  1.9G   0% /dev/shm
tmpfs                         1.9G  8.6M  1.9G   1% /run
tmpfs                         1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                    1014M  184M  831M  19% /boot
tmpfs                         379M     0  379M   0% /run/user/0

output of vgdisplay, lvdisplay:

# vgdisplay
--- Volume group ---
VG Name               VolGroup
System ID             
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               <49.00 GiB
PE Size               4.00 MiB
Total PE              12543
Alloc PE / Size       12542 / 48.99 GiB
Free  PE / Size       1 / 4.00 MiB
VG UUID               Ci0PmN-UPop-0403-OgQp-RkUr-dNUT-njpDCZ


# lvdisplay
--- Logical volume ---
LV Path                /dev/VolGroup/lv_root
LV Name                lv_root
VG Name                VolGroup
LV UUID                vTEZSO-TXcH-jqCd-nrwb-UeRW-6qlz-MwPuGe
LV Write Access        read/write
LV Creation host, time avion.lan, 2018-04-13 13:50:44 -0600
LV Status              available
# open                 1
LV Size                46.99 GiB
Current LE             12030
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     8192
Block device           253:0

--- Logical volume ---
LV Path                /dev/VolGroup/lv_swap
LV Name                lv_swap
VG Name                VolGroup
LV UUID                99OZhe-RSgF-6Ox4-qeGE-M0hf-h0N7-k31l2b
LV Write Access        read/write
LV Creation host, time avion.lan, 2018-04-13 13:50:44 -0600
LV Status              available
# open                 2
LV Size                2.00 GiB
Current LE             512
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     8192
Block device           253:1

Again, Regards and Thanks for the Help :gift_heart:

You will LOVE proxmox: in proxmox you can increase vdisk size on the fly… :slight_smile:


Fill in how many GiB you want to add to the vdisk and it’s done on the fly. In your case you want to increase from 50GiB to 500GiB, so you fill in 450GiB…
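The same resize can be done from the Proxmox host’s shell with qm resize; the VM id (100) and the disk key (virtio0) below are assumptions, so check qm config for the real values first:

qm config 100 | grep -Ei 'virtio|scsi|sata|ide'   # find the actual disk key of the VM
qm resize 100 virtio0 +450G                       # grow that vdisk by 450 GiB on the fly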

Yeah, I already love it.
I tested the “Resize disk” option before and it works, but not with an NS VM yet.
Too many projects, options and work; it’s a little overwhelming to keep up the pace.

The backup is done, I keep reading your comments. Thank you!

It works for NethServer too… A few months ago I had to resize my main VM from 500GB to 900GB. It went flawlessly.

I increased the size with the VM/NS off.
Proxmox shows the new size, but after a reboot I don’t see the free space in the NS VM :thinking:

And after another reboot… same size, 50GB, and just 4MiB free

# pvs
PV         VG       Fmt  Attr PSize   PFree
/dev/sda2  VolGroup lvm2 a--  <49.00g 4.00m
# vgs
VG       #PV #LV #SN Attr   VSize   VFree
VolGroup   1   2   0 wz--n- <49.00g 4.00m
# lvs
LV      VG       Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 46.99g                                                    
lv_swap VolGroup -wi-ao----  2.00g
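A note on what those outputs show: pvs and vgs only look at the /dev/sda2 partition, which is still ~49G even if the vdisk itself has already grown. To see whether the guest noticed the bigger disk at all, something like this helps (assuming the disk is /dev/sda):

lsblk /dev/sda       # compare the size of the whole disk with the size of sda2
fdisk -l /dev/sda    # the disk should report ~500 GiB even while sda2 is still ~49 GiB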

Something weird: if I try to shut down NS from “Administration | Shutdown” or from a terminal (via SSH or the console in Proxmox), the Proxmox VM summary still shows it as running. I wait some minutes, refresh the page, and it still shows the VM running.

I need to click the button in Proxmox to “really” shut down the NS VM.

Maybe something is broken on my NS?

/edit1: I need to wait almost 5 minutes to see the VM off. I’ll try to reboot Proxmox and see if something changes about the new vdisk size.

/edit2: the size has not changed in NS. I’ll set up and take an NS backup, hopefully to do a disaster recovery.

/edit3: searching for this error in proxmox: “proxmox error vm quit/powerdown failed got timeout”:


there is a suggestion to install acpid.

/edit4: Creating another VM for NS… so I can learn about disaster recovery. Now using a 500GB vdisk.

I would say you have to resize the disk under NS now…

Did you activate the Qemu Agent under Proxmox? (click on the VM > Options > Qemu Agent). After that, install the qemu-guest-agent under NS… I think you need to shut down the VM for it to take effect…
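For reference, on the NS7 / CentOS 7 base the agent install would presumably be along these lines (a sketch, not verified on this particular VM):

yum install -y qemu-guest-agent       # package from the standard CentOS 7 repos
systemctl enable qemu-guest-agent     # start it at boot
systemctl start qemu-guest-agent
systemctl status qemu-guest-agent     # should report "active (running)"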


Oops! Not installed… I’ll do it right now.

How can I do that? (do you have a doc?)

I am testing atm…

I can’t test it with my server, but take a look, I think this could work:

Expanding a LVM partition to fill remaining drive space

Take care and make a working backup before you test it…
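A quick way to get a restorable backup from the Proxmox side is vzdump; the VM id (100), the storage name (local) and the lzo compression below are assumptions to adapt to your setup:

vzdump 100 --mode snapshot --storage local --compress lzo   # full VM backup, run on the Proxmox host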

Thanks @fausp, I’ll read that.

I wonder if I just crippled my NS: I installed the acpid service a few hours ago, and just now the qemu-guest-agent.
Now I try to shut down NS from Proxmox and no, the VM is still running.

I forgot to test it :no_mouth:

qm agent 100 ping

That test doesn’t give me any output :disappointed_relieved:

/edit: The wiki says: “if the qemu-guest-agent is correctly running in the VM, it will return without an error message.” :crazy_face: It’s official, I’m tired.
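So no output actually means success here. To make the result explicit, something along these lines should work (same assumed VM id 100 as above):

qm agent 100 ping && echo OK || echo "agent not responding"   # on the Proxmox host
systemctl status qemu-guest-agent                             # inside the NS VM: should be active (running)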

Please test:

Let’s say we have a NS7 VM under Proxmox 5.x and we want to increase the disk size from 500 up to 2000.

1. Click on the NS7-VM > Hardware > Hard Disk (xxx) > Resize disk > 1500
2. fdisk /dev/sda > p > d > 2 > n > p > 2 > First sector <Enter> > Last sector <Enter> > w (a gentler growpart alternative is sketched after these steps)
3. pvresize /dev/sda2
4. lvresize -l +100%FREE /dev/VolGroup/lv_root
5. xfs_growfs /dev/VolGroup/lv_root
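As an alternative to step 2, growpart from cloud-utils-growpart can extend the partition in place instead of deleting and recreating it; a sketch, assuming the package is available from the standard CentOS 7 repos:

yum install -y cloud-utils-growpart   # assumed to be in the standard CentOS 7 repos
growpart /dev/sda 2                   # extend partition 2 to the end of the disk, in place
pvresize /dev/sda2                    # then continue with the LVM steps exactly as above
lvresize -l +100%FREE /dev/VolGroup/lv_root
xfs_growfs /dev/VolGroup/lv_root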

“d” will delete the partition, so the data will be gone?
Testing right now (the backup is made); how can I learn if I don’t break things, right?

Yes, but with:

You create a new one that uses the whole space of the disk… Deleting and recreating the partition only rewrites the partition table entry; as long as the new partition starts at the same first sector, the data on it stays untouched.

Wow!

# pvresize /dev/sda2
Physical volume "/dev/sda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
# lvresize -l +100%FREE /dev/VolGroup/lv_root
  Size of logical volume VolGroup/lv_root changed from 46.99 GiB (12030 extents) to <497.00 GiB (127231 extents).
  Logical volume VolGroup/lv_root successfully resized.

# xfs_growfs /dev/VolGroup/lv_root
meta-data=/dev/mapper/VolGroup-lv_root isize=512    agcount=4, agsize=3079680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=12318720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=6015, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 12318720 to 130284544

# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  497G   14G  484G   3% /
devtmpfs                      1.9G     0  1.9G   0% /dev
tmpfs                         1.9G     0  1.9G   0% /dev/shm
tmpfs                         1.9G  8.6M  1.9G   1% /run
tmpfs                         1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                    1014M  184M  831M  19% /boot
tmpfs                         379M     0  379M   0% /run/user/0

Let’s say we have a NS7 VM under Proxmox 5.x and we want to increase the disk size from 500 up to 2000.

Under the Proxmox GUI:

1. Click on the NS7-VM > Hardware > Hard Disk (xxx) > Resize disk > 1500

On the NS7 Console:

2. fdisk /dev/sda > p > d > 2 > n > p > 2 > First sector <Enter> > Last sector <Enter> > w

3. Reboot the server if you get this:
    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
    The kernel still uses the old table. The new table will be used at
    the next reboot or after you run partprobe(8) or kpartx(8)

4. pvresize /dev/sda2
    
5. lvresize -l +100%FREE /dev/VolGroup/lv_root
    
6. xfs_growfs /dev/VolGroup/lv_root
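One addition, an assumption based on standard SCSI/CentOS 7 tooling rather than something verified in this thread: if the vdisk is attached via (virtio-)SCSI, the reboot in step 3 can often be avoided by rescanning the device and re-reading the partition table:

echo 1 > /sys/class/block/sda/device/rescan   # make the kernel notice that the (SCSI-attached) disk grew
partprobe /dev/sda                            # re-read the partition table after the fdisk changes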

Trying to reboot from inside the NS… nothing :upside_down_face:

I need to “reset” it from Proxmox.
It’s up, and the files, folders, users & groups are intact.

Seriously, I barely understand what I’m doing. I was “almost sure” the data would be gone with the “d > 2” (partition delete).

How did this magic happen? :star_struck: (it’s Linux magic and its wizards)
