Move samba share and homes on a raid

NethServer Version: NS8
Module: samba

OK, my NS8 is composed of a 1 TB SSD on which the system is installed. Then I have a RAID 1 composed of three 2 TB HDDs (2 in sync and 1 spare), /dev/md0, permanently mounted on /srv/raid.
What I want is to get the Samba shares and homes there (then I'll back this up to USB and offsite). It was hard to deal with NS8 since I'm coming from NS7, before that Koozali, and before that e-smith, and I absolutely don't deal with Linux routinely; I just use it for my office. I must say that AI is really helping in this domain.

I particularly struggled with understanding the way podman is structured and accessed. E.g., runagent commands did not work on my NS8 (and the forum is full of them), until I understood that you need to log in as the samba module user and then use podman commands directly, like this one:

podman volume ls

Here is where I am today; I tried to sum up what I collected here and with the help of AI. Any comments on:

  • whether it will work
  • whether this is the right way to do it
  • whether it's persistent
  • any omissions?
### ACCESS to samba pod ###
su - samba1 # depends on your samba module name

### LIST SAMBA VOLUMES ###
podman volume ls
DRIVER      VOLUME NAME
local       homes
local       shares
local       data
local       config
local       timescaledb
local       restic-cache

# Inspect shares volume
[samba1@ns8-leader ~]$ podman volume inspect shares
[
     {
          "Name": "shares",
          "Driver": "local",
          "Mountpoint": "/home/samba1/.local/share/containers/storage/volumes/shares/_data",
          "CreatedAt": "2025-10-15T10:37:42.915729361+02:00",
          "Labels": {},
          "Scope": "local",
          "Options": {},
          "MountCount": 0,
          "NeedsCopyUp": true,
          "LockNumber": 2
     }
]

# identify the samba pod
[samba1@ns8-leader ~]$ podman ps
CONTAINER ID  IMAGE                                        COMMAND     CREATED       STATUS       PORTS       NAMES
d7efd74b16af  docker.io/timescale/timescaledb:2.21.1-pg17  postgres    30 hours ago  Up 30 hours  5432/tcp    timescaledb
3c48d6120d4b  ghcr.io/nethserver/samba-dc:3.1.1                        30 hours ago  Up 30 hours              samba-dc

# Exit samba pod
exit


### ---OPERATIONS TO MOVE SAMBA SHARE VOLUME TO ANOTHER LOCATION--- ###

# check the raid is ready and create directories for samba shares and homes
mkdir -p /srv/raid/samba_shares
chown samba1:samba1 /srv/raid/samba_shares

mkdir -p /srv/raid/samba_homes
chown samba1:samba1 /srv/raid/samba_homes
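The comment above says to check that the RAID is ready, but no command is given; a minimal sketch, assuming the /dev/md0 array and /srv/raid mount point from this thread:

```shell
# A clean two-disk mirror shows [UU] in /proc/mdstat
if [ -r /proc/mdstat ]; then
    grep -A 1 '^md0' /proc/mdstat || echo "md0 not found in /proc/mdstat" >&2
fi

# Confirm the array is mounted where the new directories live
if command -v mountpoint >/dev/null 2>&1 && mountpoint -q /srv/raid; then
    echo "/srv/raid is mounted"
    df -h /srv/raid
else
    echo "WARNING: /srv/raid does not look mounted, check /etc/fstab" >&2
fi
```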

# Stop samba pod
su - samba1
podman stop samba-dc
exit

# Copy existing shares and homes to new location - use rsync to preserve permissions
rsync -av /home/samba1/.local/share/containers/storage/volumes/shares/_data/ /srv/raid/samba_shares/
rsync -av /home/samba1/.local/share/containers/storage/volumes/homes/_data/ /srv/raid/samba_homes/
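Before removing anything, it may be worth verifying that the copies really match the originals; a checksum dry-run with rsync (same paths as above) should list no files:

```shell
# -n = dry run, -c = compare checksums; any file listed means the copy differs
src=/home/samba1/.local/share/containers/storage/volumes
if [ -d "$src/shares/_data" ]; then
    rsync -avnc --delete "$src/shares/_data/" /srv/raid/samba_shares/
    rsync -avnc --delete "$src/homes/_data/"  /srv/raid/samba_homes/
fi
```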

### POD OPERATIONS ###
# Log in as the samba1 user
su - samba1

# Remove existing samba shares volume
podman volume rm shares
podman volume rm homes

# Recreate samba shares volume pointing to new location
podman volume create --opt device=/srv/raid/samba_shares --opt type=bind --opt o=bind shares
podman volume create --opt device=/srv/raid/samba_homes --opt type=bind --opt o=bind homes

# restart samba pod
podman start samba-dc

# Exit samba pod
exit
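Before starting the container again, one can confirm that the recreated volumes actually point at the RAID; a small check (run from the samba1 session, i.e. after `su - samba1`):

```shell
# The bind options of the recreated volumes should show the new device paths
if command -v podman >/dev/null 2>&1; then
    podman volume inspect shares --format '{{ .Options.device }}'   # expect /srv/raid/samba_shares
    podman volume inspect homes  --format '{{ .Options.device }}'   # expect /srv/raid/samba_homes
fi
```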

Hi @cyberjuls and welcome!

To start, maybe this how-to helps you better understand the basics? I am sure somebody will pitch in on the Samba specifics.

Cheers!


Following thread explains the options about having samba shares on another disk:


A side note: my NS8 is based on Rocky Linux.

Thanks, I've read all those threads but none has given me a view of the full process, nor the impression that it was a validated solution, besides this runagent singularity on my server.
I have Samba installed on my NS8 and workstations connected to it, so I don't want to uninstall Samba, as it would imply disconnecting the Windows workstations, purging the users and joining them again, which on Windows is long and tedious.
Also, the day I can get an SSD NAS, I'll move the Samba shares there and keep the HDDs for heavy geodata which doesn't need to be accessed that quickly.

As I understand it, homes should stay on the system disk due to initialisation?

There’s no validated solution yet but it’s a planned feature, see Assign specific app volumes to dedicated storage · Issue #7665 · NethServer/dev · GitHub

It looks good to me but it would need to be tested.

Sorry, I don’t understand, could you please explain?

As regards the samba homes volume, it should be the same as for the shares volume. It can be changed, but only before Samba is initialised.

The host's /home directory, where the apps are stored, can be changed, see Disk usage — NS8 documentation


When I type commands containing runagent, it tells me the command does not exist:

[root@ns8-leader _data]# runagent -m samba1 podman volume ls
command not found

but if i log as samba1 user

su - samba1

and type

podman volume ls 

it works…

I'll also look into how to virtualize everything, as it seems more adequate.


Maybe you're logged in as a user and didn't get the right environment and PATH when becoming root? Did you just use su without the -?

You could use sudo su - or su - to get root, see also Installation — NS8 documentation

The runagent command is located in /usr/local/bin

[root@node ~]# whereis runagent
runagent: /usr/local/bin/runagent

…so the following should work too:

/usr/local/bin/runagent -m samba1
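Alternatively, the missing directory can be appended to PATH for the current session (a workaround, not a permanent fix):

```shell
# Make /usr/local/bin visible to this root shell only
export PATH="$PATH:/usr/local/bin"
command -v runagent || echo "runagent still not on PATH" >&2
```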

That’s a good idea, Proxmox and ESXi are supported.

I'll check tomorrow, but I'm logged in as admin and then elevated to root, as you can see in my command.
This is a fresh NS8 install.


Hey, yes, I just checked now.

Yes, runagent is here! But it's not reachable in the out-of-the-box root shell (/usr/local/bin is not in PATH):

[root@ns8-leader admin]# cd /usr/local/bin/
[root@ns8-leader bin]# ls
acl-load  api-cli      api-server       api-server-motd  logcli.bin        runagent
agent     api-moduled  api-server-logs  logcli           redis-wait-ready
[root@ns8-leader bin]# echo $PATH
/root/.local/bin:/root/bin:/sbin:/bin:/usr/sbin:/usr/bin
[root@ns8-leader bin]#

But… it seems that as the admin user the path is there…

[admin@ns8-leader ~]$ cd /usr/local/bin/
[admin@ns8-leader bin]$ ls
acl-load  api-cli      api-server       api-server-motd  logcli.bin        runagent
agent     api-moduled  api-server-logs  logcli           redis-wait-ready
[admin@ns8-leader bin]$ echo $PATH
/home/admin/.local/bin:/home/admin/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
[admin@ns8-leader bin]$

If you're logged in as admin, which is just a regular user (no root permissions) on the system, you can get root by executing su -, or, if sudo is configured, you can use sudo su -.
If you add the minus sign (-), the PATH is set correctly and commands like runagent should just work.

See for example https://linuxize.com/post/su-command-in-linux/ for more info.

[admin@ns8-leader ~]$ sudo su -
[sudo] Mot de passe de admin :
[root@ns8-leader ~]# runagent -m samba1 podman volume ls
DRIVER      VOLUME NAME
local       homes
local       shares
local       data
local       config
local       timescaledb
local       restic-cache

OK, it's working now. It was the dash I was missing when typing my command sudo su.

All this part went well :

### LIST SAMBA VOLUMES ###

runagent -m samba1 podman volume ls
DRIVER      VOLUME NAME
local       homes
local       shares
local       data
local       config
local       timescaledb
local       restic-cache

# Inspect shares volume

[samba1@ns8-leader ~]$ runagent -m samba1 podman inspect shares
[
     {
          "Name": "shares",
          "Driver": "local",
          "Mountpoint": "/home/samba1/.local/share/containers/storage/volumes/shares/_data",
          "CreatedAt": "2025-10-15T10:37:42.915729361+02:00",
          "Labels": {},
          "Scope": "local",
          "Options": {},
          "MountCount": 0,
          "NeedsCopyUp": true,
          "LockNumber": 2
     }
]

# identify the samba pod
[samba1@ns8-leader ~]$ runagent -m samba1 podman ps
CONTAINER ID  IMAGE                                        COMMAND     CREATED       STATUS       PORTS       NAMES
d7efd74b16af  docker.io/timescale/timescaledb:2.21.1-pg17  postgres    30 hours ago  Up 30 hours  5432/tcp    timescaledb
3c48d6120d4b  ghcr.io/nethserver/samba-dc:3.1.1                        30 hours ago  Up 30 hours              samba-dc

### OPERATIONS TO MOVE SAMBA SHARES VOLUME TO ANOTHER LOCATION ###
# check raid is ready and create a directory for samba shares and homes
mkdir -p /srv/raid/samba_shares
chown samba1:samba1 /srv/raid/samba_shares

I'm struggling here now:

# Stop samba pod
runagent -m samba1 podman stop samba-dc

# Copy existing shares and homes to new location - use rsync to preserve permissions
rsync -av /home/samba1/.local/share/containers/storage/volumes/shares/_data/ /srv/raid/samba_shares/

### POD OPERATIONS ###
# Remove existing samba shares volume
runagent -m samba1 podman volume rm shares

The first command stops samba-dc, but NethServer starts it again immediately afterwards, so when you run the last command above you get this:

[root@ns8-leader archiwansamba]# runagent -m samba1 podman volume rm shares
Error: volume shares is being used by the following container(s): b12872cf11925a028055189208acbc313e32ed361f146b8f76d5fabba6aa1d30: volume is being used

How can I force a full temporary stop of samba-dc?


The podman containers are controlled by systemd.

To stop the samba-dc:

runagent -m samba1 systemctl --user stop samba-dc

See also Howto manage or customize NS8 podman containers

EDIT:

It’s also possible to force the volume removal:

podman volume rm -f <volume>
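Putting it together, a cautious sequence based on the commands above might be to stop via systemd and verify nothing still holds the volume before removing it:

```shell
# Sketch: guarded so it is a no-op on machines without runagent
if command -v runagent >/dev/null 2>&1; then
    # Stop through systemd so the container is not restarted automatically
    runagent -m samba1 systemctl --user stop samba-dc

    # Verify the container is really gone before touching the volume
    runagent -m samba1 podman ps --filter name=samba-dc

    # Only then remove the volume (!!! destroys its data; podman volume rm -f as a last resort)
    runagent -m samba1 podman volume rm shares
fi
```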

Will try this afternoon ! Thanks.

I really want to do this the best way possible. I don't have any important data on the Samba for the moment, but I'm sure some people will end up in this kind of situation.
What I plan to do: stop samba-dc (rsync is probably better employed once the container is stopped), then rsync the new shares location against the old one, mv or cp the old one to a shares.backup (before removing it), then rm the old volume and attach the new one; once that's done, the container can be started again and everything checked to be in place.


Finally done! I'll leave here the script that sums up the operations needed to change your Samba shares location.

### LIST SAMBA VOLUMES ###
runagent -m samba1 podman volume ls
DRIVER      VOLUME NAME
local       homes
local       shares
local       data
local       config
local       timescaledb
local       restic-cache
# Inspect shares volume
[samba1@ns8-leader ~]$ runagent -m samba1 podman inspect shares
[
     {
          "Name": "shares",
          "Driver": "local",
          "Mountpoint": "/home/samba1/.local/share/containers/storage/volumes/shares/_data",
          "CreatedAt": "2025-10-15T10:37:42.915729361+02:00",
          "Labels": {},
          "Scope": "local",
          "Options": {},
          "MountCount": 0,
          "NeedsCopyUp": true,
          "LockNumber": 2
     }
]

# identify the samba pod
[samba1@ns8-leader ~]$ runagent -m samba1 podman ps
CONTAINER ID  IMAGE                                        COMMAND     CREATED       STATUS       PORTS       NAMES
d7efd74b16af  docker.io/timescale/timescaledb:2.21.1-pg17  postgres    30 hours ago  Up 30 hours  5432/tcp    timescaledb
3c48d6120d4b  ghcr.io/nethserver/samba-dc:3.1.1                        30 hours ago  Up 30 hours              samba-dc

### OPERATIONS TO MOVE SAMBA SHARES VOLUME TO ANOTHER LOCATION ###
# check raid is ready and create a directory for samba shares and homes
mkdir -p /srv/raid/samba_shares
chown samba1:samba1 /srv/raid/samba_shares

# Stop samba pod
runagent -m samba1 systemctl --user stop samba-dc

# Copy existing shares and homes to new location - use rsync to preserve permissions
rsync -av /home/samba1/.local/share/containers/storage/volumes/shares/_data/ /srv/raid/samba_shares/

### POD OPERATIONS ###

# example: move or copy the old volume aside as a backup (optional if you just want to remove the volume)
mv /home/samba1/.local/share/containers/storage/volumes/shares /my/new/Location/shares_backup
# Remove existing samba shares volume (!!! this will delete the volume and your existing data !!!)
runagent -m samba1 podman volume rm shares

# Recreate samba shares volume pointing to new location
runagent -m samba1 podman volume create --opt device=/srv/raid/samba_shares/ --opt type=bind --opt o=bind shares

# restart samba pod
runagent -m samba1 systemctl --user start samba-dc
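A quick check after the restart, reusing the commands from earlier in the thread:

```shell
# Sketch: guarded so it is a no-op on machines without runagent
if command -v runagent >/dev/null 2>&1; then
    # The service should be active again...
    runagent -m samba1 systemctl --user is-active samba-dc
    # ...and the shares volume should be bound to the RAID path
    runagent -m samba1 podman volume inspect shares --format '{{ .Options.device }}'
fi
```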


Now I'm digging into Samba ACLs.

I found your article discussing getfacl and setfacl, as I want to give more than one group the right access to the folders.
I wish there were a GUI for Samba ACLs, but if it works, that's already cool…
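As an illustration of granting a second group access with setfacl (the group and folder names here are made up; note that on a Samba AD DC, ACLs set from Windows are stored in extended attributes, so POSIX-only changes may not fully match what Windows clients see):

```shell
# Give the (hypothetical) accounting group read/write on an existing tree,
# plus a default ACL so newly created files inherit the same access.
if [ -d /srv/raid/samba_shares/common ]; then
    setfacl -R -m g:accounting:rwX   /srv/raid/samba_shares/common
    setfacl -R -m d:g:accounting:rwX /srv/raid/samba_shares/common
    # Review the result
    getfacl /srv/raid/samba_shares/common
fi
```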


Thanks for sharing your findings!

Samba ACLs are on the todo list, see Shared folder fine-grained ACL reset · Issue #7437 · NethServer/dev · GitHub
