OK, my NS8 is composed of a 1 TB SSD on which the system is installed. Then I have a RAID 1 made of three 2 TB HDDs (2 in sync and 1 spare), /dev/md0, permanently mounted at /srv/raid.
What I want is to put the Samba shares and homes there (then I'll back this up to USB and offsite). It was hard to deal with NS8 since I'm coming from NS7, before that Koozali, and before that e-smith, and I absolutely don't deal with Linux routinely; I just use it for my office. I must say that AI is really helping in this domain.
I particularly struggled with understanding the way Podman is structured and accessed. E.g., runagent commands did not work on NS8 (at least on mine), and the forum is full of them, until I understood that you need to log in as the module user and then use podman commands directly, like this one:
podman volume ls
Here is where I am today; I tried to sum up what I collected here and with the help of AI. Any comment on:
whether it will work,
whether this is the right way to do it,
whether it's persistent,
any omissions?
### ACCESS to samba pod ###
su - samba1 # depends on your samba module name
### LIST SAMBA VOLUMES ###
podman volume ls
DRIVER VOLUME NAME
local homes
local shares
local data
local config
local timescaledb
local restic-cache
# Inspect shares volume
[samba1@ns8-leader ~]$ podman volume inspect shares
[
{
"Name": "shares",
"Driver": "local",
"Mountpoint": "/home/samba1/.local/share/containers/storage/volumes/shares/_data",
"CreatedAt": "2025-10-15T10:37:42.915729361+02:00",
"Labels": {},
"Scope": "local",
"Options": {},
"MountCount": 0,
"NeedsCopyUp": true,
"LockNumber": 2
}
]
# Identify the samba pod
[samba1@ns8-leader ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7efd74b16af docker.io/timescale/timescaledb:2.21.1-pg17 postgres 30 hours ago Up 30 hours 5432/tcp timescaledb
3c48d6120d4b 30 hours ago Up 30 hours samba-dc
# Exit samba pod
exit
### ---OPERATIONS TO MOVE SAMBA SHARE VOLUME TO ANOTHER LOCATION--- ###
# Check the RAID is ready and create directories for the samba shares and homes
mkdir -p /srv/raid/samba_shares
chown samba1:samba1 /srv/raid/samba_shares
mkdir -p /srv/raid/samba_homes
chown samba1:samba1 /srv/raid/samba_homes
# Stop samba pod
su - samba1
podman stop samba-dc
exit
# Copy existing shares and homes to the new location - use rsync to preserve permissions
rsync -av /home/samba1/.local/share/containers/storage/volumes/shares/_data/ /srv/raid/samba_shares/
rsync -av /home/samba1/.local/share/containers/storage/volumes/homes/_data/ /srv/raid/samba_homes/
### POD OPERATIONS ###
# Log in as the samba1 user
su - samba1
# Remove existing samba shares volume
podman volume rm shares
podman volume rm homes
# Recreate the samba shares and homes volumes pointing to the new locations
podman volume create --opt device=/srv/raid/samba_shares --opt type=bind --opt o=bind shares
podman volume create --opt device=/srv/raid/samba_homes --opt type=bind --opt o=bind homes
# restart samba pod
podman start samba-dc
# Exit samba pod
exit
Thanks, I've read all those threads, but none gave me a view of the full process, nor the impression that it was a validated solution, besides this runagent singularity on my server.
I have Samba installed on my NS8 with workstations connected to it. So I don't want to uninstall Samba, as that would imply disconnecting the Windows workstations, purging the users and joining them again, which on Windows is long and tedious.
Also, the day I can afford an SSD NAS, I'll move the Samba shares there and keep the HDDs for heavy geodata which doesn't need to be accessed that quickly.
As I understand it, homes should stay on the system disk due to initialisation?
If you're logged in as admin, which is just a regular user (no root perms) on the system, you can get root by executing su -, or if sudo is configured you can use sudo su -.
If you add the minus sign (-), the PATH gets set correctly and commands like runagent just work.
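The difference is easy to see by printing PATH both ways (a sketch; run from the admin session — the /usr/local/bin location for runagent is my assumption about a default NS8 install):

```shell
# 'su' keeps admin's environment, so root's login profile is never read:
su root -c 'echo $PATH'
# 'su -' starts a login shell and rebuilds PATH from root's profile,
# which is what brings directories like /usr/local/bin (runagent) in:
su - root -c 'echo $PATH'
```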
[admin@ns8-leader ~]$ sudo su -
[sudo] Mot de passe de admin :
[root@ns8-leader ~]# runagent -m samba1 podman volume ls
DRIVER VOLUME NAME
local homes
local shares
local data
local config
local timescaledb
local restic-cache
OK, it's working now; the dash was what I was missing when typing my command as sudo su.
### LIST SAMBA VOLUMES ###
runagent -m samba1 podman volume ls
DRIVER VOLUME NAME
local homes
local shares
local data
local config
local timescaledb
local restic-cache
# Inspect shares volume
[root@ns8-leader ~]# runagent -m samba1 podman inspect shares
[
{
"Name": "shares",
"Driver": "local",
"Mountpoint": "/home/samba1/.local/share/containers/storage/volumes/shares/_data",
"CreatedAt": "2025-10-15T10:37:42.915729361+02:00",
"Labels": {},
"Scope": "local",
"Options": {},
"MountCount": 0,
"NeedsCopyUp": true,
"LockNumber": 2
}
]
# Identify the samba pod
[root@ns8-leader ~]# runagent -m samba1 podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7efd74b16af docker.io/timescale/timescaledb:2.21.1-pg17 postgres 30 hours ago Up 30 hours 5432/tcp timescaledb
3c48d6120d4b 30 hours ago Up 30 hours samba-dc
### OPERATIONS TO MOVE SAMBA SHARES VOLUME TO ANOTHER LOCATION ###
# Check the RAID is ready and create a directory for the samba shares
mkdir -p /srv/raid/samba_shares
chown samba1:samba1 /srv/raid/samba_shares
Struggling there now:
# Stop samba pod
runagent -m samba1 podman stop samba-dc
# Copy existing shares and homes to new location - use rsync to conserve permissions
rsync -av /home/samba1/.local/share/containers/storage/volumes/shares/_data/ /srv/raid/samba_shares/
### POD OPERATIONS ###
# Remove existing samba shares volume
runagent -m samba1 podman volume rm shares
The first command stops samba-dc, but NethServer starts it again immediately afterwards, so when you run the last command above you get this:
[root@ns8-leader archiwansamba]# runagent -m samba1 podman volume rm shares
Error: volume shares is being used by the following container(s): b12872cf11925a028055189208acbc313e32ed361f146b8f76d5fabba6aa1d30: volume is being used
How can I force a full temporary stop of samba-dc?
I really want to do this the best way possible. I don't have any important data on the Samba for the moment, but I'm sure some people will end up in this kind of situation.
What I plan to do: stop samba-dc first (rsync is probably best run once the container is stopped), then rsync the new shares location against the old one. Maybe mv or cp the old data to a shares.backup (before removing it). Then rm the old volume and attach the new one; once that is done, the container can be started and everything checked.
OK, finally done! I'll leave here the script that sums up the operations needed to change your samba shares location.
### LIST SAMBA VOLUMES ###
runagent -m samba1 podman volume ls
DRIVER VOLUME NAME
local homes
local shares
local data
local config
local timescaledb
local restic-cache
# Inspect shares volume
[root@ns8-leader ~]# runagent -m samba1 podman inspect shares
[
{
"Name": "shares",
"Driver": "local",
"Mountpoint": "/home/samba1/.local/share/containers/storage/volumes/shares/_data",
"CreatedAt": "2025-10-15T10:37:42.915729361+02:00",
"Labels": {},
"Scope": "local",
"Options": {},
"MountCount": 0,
"NeedsCopyUp": true,
"LockNumber": 2
}
]
# Identify the samba pod
[root@ns8-leader ~]# runagent -m samba1 podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7efd74b16af docker.io/timescale/timescaledb:2.21.1-pg17 postgres 30 hours ago Up 30 hours 5432/tcp timescaledb
3c48d6120d4b ghcr.io/nethserver/samba-dc:3.1.1 30 hours ago Up 30 hours samba-dc
### OPERATIONS TO MOVE SAMBA SHARES VOLUME TO ANOTHER LOCATION ###
# Check the RAID is ready and create a directory for the samba shares
mkdir -p /srv/raid/samba_shares
chown samba1:samba1 /srv/raid/samba_shares
# Stop samba pod
runagent -m samba1 systemctl --user stop samba-dc
# Copy existing shares and homes to new location - use rsync to conserve permissions
rsync -av /home/samba1/.local/share/containers/storage/volumes/shares/_data/ /srv/raid/samba_shares/
### POD OPERATIONS ###
# Move or copy the old data as a backup - example (not mandatory if you just want to remove the volume)
mv /home/samba1/.local/share/containers/storage/volumes/shares /my/new/Location/shares_backup
# Remove existing samba shares volume (!!! this will delete the volume and your existing data !!!)
runagent -m samba1 podman volume rm shares
# Recreate samba shares volume pointing to new location
runagent -m samba1 podman volume create --opt device=/srv/raid/samba_shares/ --opt type=bind --opt o=bind shares
# restart samba pod
runagent -m samba1 systemctl --user start samba-dc
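After the restart, two quick sanity checks (a sketch): the container should be up again, and files written through the share should now land under /srv/raid:

```shell
# Did the container come back up?
runagent -m samba1 podman ps --filter name=samba-dc
# Is data flowing to the RAID? Create a file from a client, then:
ls -la /srv/raid/samba_shares/
```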
Now I'm digging into Samba ACLs.
I found your article discussing getfacl and setfacl, as I want more than one group to have access rights on the folders.
I wish there were a GUI for Samba ACLs, but if it works, that's already cool…
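For reference, a minimal setfacl sketch for giving a second group access; the group names accounting and gis and the path are hypothetical:

```shell
# Grant two groups read/write/traverse on an existing tree...
setfacl -R -m g:accounting:rwx -m g:gis:rwx /srv/raid/samba_shares/common
# ...and a matching default ACL so files created later inherit it.
setfacl -R -m d:g:accounting:rwx -m d:g:gis:rwx /srv/raid/samba_shares/common
# Check the result:
getfacl /srv/raid/samba_shares/common
```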