NS Samba Shares on External USB Disk

NethServer Version: NS8
Module: Samba with AD

So I have installed Samba with AD and I have defined my users and groups.

I have an external USB disk with files on it and I have successfully mounted it to

/home/samba1/.local/share/containers/storage/volumes/shares/_data

and I can see the files in the terminal.

In the GUI, I create a share, making sure it matches the name of one of the existing folders, and set the desired permissions.

When I list the files in the terminal session, I can see the Group is called samba1 and the User is a set of numbers (which I assume is expected as the terminal doesn’t do the LDAP translation from UID to the “real” username).
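
(For what it’s worth, stat can show the numeric and the resolved IDs side by side - a generic coreutils check, nothing NS8-specific:

stat -c '%u:%g (%U:%G) %n' /home/samba1/.local/share/containers/storage/volumes/shares/_data/*

%u/%g are the numeric IDs and %U/%G the resolved names; “UNKNOWN” as the name just means the host cannot translate the ID.)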

The issue is that when I browse to the server using Fedora Files, I can see the share, but when I open it, no files are listed.

Fairly sure I am missing something obvious, but I’m not sure what yet.

Any guidance or advice will be welcome.

Did you already try to (re)set the permissions for the share? See also File server — NS8 documentation


Good suggestion, I did try that and it did not work.

I get the sneaking suspicion that something else is happening.

So I decided to do a test and ensured that the mount I had made before was still in place.

I then created a new share called “testing”, expecting the folder to appear on the USB Disk. I did not see the folder underneath “/home/samba1/.local/share/containers/storage/volumes/shares/_data”.

I then unmounted the USB Disk, performed an “ls” of “/home/samba1/.local/share/containers/storage/volumes/shares/_data” and found that the folder “/home/samba1/.local/share/containers/storage/volumes/shares/_data/testing” existed.

So I am confused as to how this is occurring.

Could the Samba container be mounting the path on disk in a way that somehow bypasses the mount I put in place?
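
From the host, findmnt can at least show which filesystem actually backs that path (a generic check, nothing NS8-specific):

findmnt -T /home/samba1/.local/share/containers/storage/volumes/shares/_data

If SOURCE shows the USB partition, the mount is in effect on the host; if it shows the root filesystem, it is not. And since a container keeps the mount namespace it was started with, a container started before the mount was made could still be looking at the old directory underneath - which would match what I am seeing.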

Apologies for all of the newbie questions, this is the first time I am really using NS8 in anger and I’m still figuring out how some of the config and things work under the covers…

Just discovered two things:

  1. I suspect that in my earlier tests the Samba container was still attached to the old location on disk, not the new location specified by the fstab mount.
  2. When I have the fstab entry, samba-dc chokes on a server restart.

Mounting any block device onto “/home/samba1/.local/share/containers/storage/volumes/shares/_data” via /etc/fstab while the server is running works fine.

The problem comes in when the server reboots.

The disk mount occurs, but the samba-dc service chokes for some reason and partially falls over. So although the server knows there are x users and y groups, it is unable to list them - and you can imagine what else breaks because of that…
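
One thing I still want to test (an assumption on my part, not a verified fix): if the failure is an ordering problem - the rootless user services starting before the mount lands on top of the volume path - the systemd options in fstab can express that dependency:

# illustrative /etc/fstab entry - the UUID is a placeholder, and user@1000.service
# assumes samba1's UID is 1000 (check with: id -u samba1)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /home/samba1/.local/share/containers/storage/volumes/shares/_data ext4 defaults,nofail,x-systemd.before=user@1000.service 0 2

x-systemd.before= and nofail are documented in systemd.mount(5).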

Otherwise, the only thing I can think of doing is mounting the USB Disk somewhere like “/mnt/data-disk” and telling the Samba configuration to look at “/mnt/data-disk” for the shares.
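
In other words, something like this (a sketch only - sdX1 is a placeholder, and since I don’t know of a supported NS8 setting for moving the share path, the second line bind-mounts the disk onto the path Samba already expects):

mount /dev/sdX1 /mnt/data-disk
mount --bind /mnt/data-disk /home/samba1/.local/share/containers/storage/volumes/shares/_data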

I’m still struggling to find the right log files to look at and where the relevant configs live, so I’m currently left scratching my head.

Anyone got any good ideas, thoughts or suggestions?

@mrmarkuz Just wondering if you might have any more thoughts or ideas on this subject?

Sorry for the late answer…

Customizing Samba is explained here (via include.conf or net conf setparm):

The NS8 System logs page should give some insights. You could choose your samba instance “samba*” to just show the samba logs.

Maybe you need to enable debug logging on the samba-dc to get more information, see Setting the Samba Log Level - SambaWiki
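
As a sketch of the net conf route (“log level” is a standard smb.conf parameter and net conf setparm/getparm are standard Samba commands, but how you reach a shell inside the samba-dc container may differ on your install):

net conf setparm global 'log level' 3
net conf getparm global 'log level'

Remember to turn it back down afterwards, as higher levels are very verbose.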

Hi @bwdjames James,

have you found your solution for telling the Samba configuration to look at “/mnt/data-disk”? If yes, I hope you can share it!

Thx
maddin

No, unfortunately not yet @maddin . I had to switch to other higher priority items.

Hello,

I used to make use of mounting other drives in certain Nethserver folders under Nethserver 7.
On the one hand because of the size (who wants terabytes of shared data on an expensive SSD), and on the other hand because of the desired access speed (databases feel more comfortable on some drives than on others). I also hoped that in the event of a pure system error (update problem?) I would not have to deal with a mega drive, but perhaps only with the system drive (the others would not be directly affected).

With NETH8 I naturally have the same need and thought I could solve it the same way. At the moment this is actually a showstopper for me: I cannot carry out the upgrade (the move) because I don’t know how to do it reliably(!).

In principle I know how to do it and have already prepared and tested a few things - specifically with the mail folders and the Samba shares. I will be happy to share these with you below.

What I unfortunately don’t know - and haven’t been given any help with here - are the following things:

  1. How do I reliably stop a container - in order to move its data and mount the new storage location at the path expected by NETH8?

  2. (for my specific case) How do I pause the NethServer7 >>> NETH8 migration process directly after the container paths are created (they do not exist before the migration), so that I can integrate my additional storage devices into those paths during the pause? I somehow don’t see the point in first gluing together a huge(!) single drive using LVM if the service data (container data) will (or should) end up somewhere else anyway.

  3. How are container updates or upgrades handled in concrete terms under NETH8? Are any files temporarily moved or paths renamed - such that the folders to be moved no longer fit on the (far too small) drive holding the base path? If my path in the case of Samba is “/home/samba1/.local/share/containers/storage/volumes” and “volumes” is actually my additional drive, what will happen if during an update or upgrade “/home/samba1/” temporarily becomes “/home/samba1-tmp/” or something similar? I don’t want to imagine it; it would obviously go completely wrong.

It would of course also cause problems if the container were suddenly “renamed” or renumbered, because this would change the folder name. Then the mounted path would also break.

The only thing that would be “safe” is to mount a drive over the entire “/home/” directory, because nothing happens outside of the containers. But that doesn’t make much sense, because under /home/ there are containers of very different service types (both files and databases), each of which would prefer different drive types and storage sizes.

If the path were not “/home/samba1/” but rather “/home/samba/1/”, you could mount suitable drives - not per instance, but at least per service type. Then you could, for example, mount a mega drive at “/home/samba/” and a special SSD at “/home/sql-something/”.

But the developers have not thought of such cases - and they are apparently not the only ones. I have spoken to various people from professional environments and they all complain about various limitations caused by their container orchestrators. Linux thrives on its many possibilities for implementing things, but in the container world it degenerates into a pure launch platform - both for the orchestrator and for the containers themselves. Will this have a positive effect on the future of Linux systems? I am not so optimistic.

At the moment my “single point of failure” is CentOS7 - will it be called “Podman” in the future?


But now to my approach, even if it is quite basic.

Note:
The information comes without guarantee. It works for me, but something may have been garbled in translation.

For reasons of laziness, I like to do this using variables and command chains.

And I start with "

sudo su

", so be careful!

My variables are e.g. (more will come later)

The NETH8 partition

ORIG_PART=sdb1

the name for the previous NETH8 paths (used for the comment in /etc/fstab and the mount point)

NETH_MOUNT=neth8_root

Create a mount point for the previous NETH8 paths

mkdir /mnt/${NETH_MOUNT}

First, I mount the boot drive (with the NETH8 base path) in another path - this way I still have access to the paths that will later be “overmounted”.

Add alternative mount points for the originals to /etc/fstab:

echo "# Alternative mount point for NETH8 PATHS ${ORIG_PART}" | sudo tee -a /etc/fstab

echo "$(cat /etc/fstab | grep -A 1 ${ORIG_PART} | grep "UUID" | sed "s|/|/mnt/${NETH_MOUNT}|")" | sudo tee -a /etc/fstab

View fstab

cat /etc/fstab

mount the new mount points without rebooting
this is harmless because no existing mounts have been changed

systemctl daemon-reload
mount -a

check mounting with:

findmnt -t none -o TARGET,SOURCE,FSTYPE,PROPAGATION

Is there anything to see?

ls -al /mnt/${NETH_MOUNT}

Prepare the new disk. I won’t explain this here - too much can go wrong, and I’m sure you know how to do it anyway (like most people here; maybe it will still help someone).

my new disk

NEW_DISK=sdc

the partition on my new disk

PART=${NEW_DISK}1

permanently mount the new disk in /etc/fstab:

echo "# Own mount point for PARTITION /mnt/${PART}" | sudo tee -a /etc/fstab

echo "UUID=$(blkid /dev/${PART} | awk -F'"' '{print $2}') /mnt/${PART} ext4 defaults 0 2" | sudo tee -a /etc/fstab

view fstab

cat /etc/fstab

mount the new mount points without rebooting. this is harmless because no existing mounts have been changed

systemctl daemon-reload
mount -a

check the mount with:

findmnt -t none -o TARGET,SOURCE,FSTYPE,PROPAGATION | grep ${PART} | sed 's/ */ /g'

Is there anything to see?

ls -al /mnt/${PART}

My drives and mounts therefore live primarily under /mnt/, and they stay there. This way I don’t lose track of things, and I can easily exclude this path from rsync backups (I don’t want everything twice). The bind mounts onto the NETH8 paths come later.
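
For the rsync exclusion, a minimal sketch (the backup target path is just a placeholder):

rsync -a --exclude='/mnt/' / /path/to/backup-target/

The leading slash anchors the exclude at the transfer root, so everything under /mnt/ stays out of the copy.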

Now we can transfer the data. I would prefer to stop the container here; if you don’t stop it, you have to prevent users from using it in some other way.

I would like to transfer the following data / paths
Path to the desired folder in the samba1 container

SOURCE=/home/samba1/.local/share/containers/storage/volumes

Define destination for the container data (new disk)

TARGET=/mnt/${PART}

Create complete path for container data - Preview

echo ${TARGET}${SOURCE}

Note: The entire path to the target folder on the additional drive is created here. It basically corresponds to the original path. This keeps the option open to include other folders on the same drive in the same path. For example, the user folders or certain shares. Or other container volumes. In the example, however, I have “everything”.

mkdir -p ${TARGET}${SOURCE}

The freshly created folders obviously do not have the correct permissions yet. The following “rsync” of the files should correct this (I think).

Sync the source files into the path, preserving permissions (trailing slash on the source so hidden files are included too):

rsync -a ${SOURCE}/ ${TARGET}${SOURCE}

take a look

ls -al ${TARGET}${SOURCE}

Now it’s getting exciting. If everything is there, you could set up the bind mounts temporarily (not reboot-proof):

Mount bind mounts temporarily

mount --bind ${TARGET}${SOURCE} $SOURCE

or mount them in /etc/fstab so they are reboot-proof
Mount bind mount permanently in /etc/fstab:

echo "# Own bind mount for CONTAINER DATA ${TARGET}" | sudo tee -a /etc/fstab
echo "${TARGET}${SOURCE} $SOURCE none bind 0 0" | sudo tee -a /etc/fstab

view fstab

cat /etc/fstab

mount mountpoints without restart

systemctl daemon-reload
mount -a

check integration with:

findmnt -t none -o TARGET,SOURCE,FSTYPE,PROPAGATION | grep $SOURCE | sed 's/ */ /g'

In my tests the bind mounts showed up inside the containers straight away, but since I did this without stopping/starting the affected containers, I would recommend a restart.

This is my approach, shared for comparison for anyone else trying the same thing.

regards
yummiweb


The containers are controlled by systemd services, see also Systemd units | NS8 dev manual
There are rootfull modules that use system services and rootless apps that use user services, see also Rootless vs Rootfull | NS8 dev manual

In the following example the Nextcloud containers for instance nextcloud1 are stopped by stopping the service.
For rootfull modules, omit the “--user” option.

Enter nextcloud1 environment:

runagent -m nextcloud1

Check running containers:

podman ps

Stop the service:

systemctl --user stop nextcloud

You could do it in one line:

runagent -m nextcloud1 systemctl --user stop nextcloud

If there are still running containers you could check the active user services:

runagent -m nextcloud1 systemctl --user list-units --type=service --state=active

Did you already try to integrate your storage devices after starting a specific app migration and before doing the “Sync”?

AFAIK the container updates are inside /home/<instance_name>/ so that shouldn’t be an issue.
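
For example, to see where a module’s volumes actually live, the runagent pattern from above can be combined with podman’s volume commands (samba1 and the volume name “shares” are taken from the paths discussed earlier):

runagent -m samba1 podman volume ls
runagent -m samba1 podman volume inspect shares --format '{{.Mountpoint}}'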

Hi @yummiweb

One possible solution to large data volumes (you do not want on a single disk/volume…) I came upon when migrating my clients servers:

  • Remove the bulk of the migration data, by moving the contents temporarily to another place (NAS, USB Disk, whatever).
  • Maybe create an alibi folder with 1-2 files for checking.
  • Migrate the data - with far less data this is much faster and much less error-prone.
  • Restore the data contents AFTER successful migration, prepping the storage location as intended / needed.
  • Then set permissions as needed.

This worked well for me, especially with very large samba / mail installations, or separated data volumes…

Hope this helps.

My 2 cents
Andy


Thank you very much! That was very helpful information.

The fact that updates take place within the container instance is very good information. That should mean that a mount at e.g. /home/service1 (i.e. not in a subfolder) is safe - or not?

That would at least alleviate the current space issue a little.

However, this would mean that “system data” (the system service itself, its settings and the service settings) is stored together with the service data. I would think a separation would be very useful here. Even the Podman developers apparently thought so, because Podman separates system data and service data from each other (~/Volumes). Of course, I don’t know whether the Podman developers thought of mounting other drives into this path (or deeper). Technically speaking, this separation would be exactly the right entry point.

In order to use this option safely, you obviously need to know how updates to the container or container app affect the path structure (temporarily) during the update - because a changed path means the previous mount path is invalid. Maybe I’m worrying too much here and the path structure (including the internal one) never changes during such updates anyway? Who could give me reliable information on this?

When moving between nodes within a cluster, or during a restore, it is very likely that a new instance number will be assigned. I noticed this when reinstalling a container (via the GUI) that I had previously deleted (via the GUI). That’s what made me aware of this problem in the first place. A new instance number = a changed path = the mount path becomes invalid. This means you would always have to adjust the mount paths after(!) creating a new container instance (or after a restore?). That would be annoying, to say the least. Here I had the idea of creating another (fictitious) mount at e.g. “/home/service2” parallel to “/home/service1”, but NETH8 will probably see the folder and spontaneously decide on “/home/service3” - right?

That is also my fear, and the obstacle to creating a mount at “/home/service1” when installing the base system. For the mount to be active during the subsequent container installation (and not fail due to a missing target folder), the directory “/home/service1” must already exist. Would NETH8 then use it for the first instance of “service”, or create the new instance at “/home/service2”?

If that is the case, all built-in restore procedures are probably not available - or would they use “/home/service1” if it already exists?

Is there any way to avoid this (built-in) numbering theater? Can the check for previous instances (or their folders) be switched off if necessary?

It would be reasonably practical if each service type had a parent path unaffected by the instance numbering, so that a suitable drive could be mounted there once and for all. Unfortunately, the only base path common to all (rootless) services is “/home” itself. All(!) other containers end up there too, no matter which instance. Although they carry a service-specific instance number, they all sit in the same directory. Instances of the same service type are therefore distinguished by name (with a number), not by path: “/home/service1”, “/home/myservice1”.

“/home” is far too unspecific if you want to assign a suitable storage medium (HDD, SSD or even tape) to each service type (as you really should).

Therefore, let me repeat that I would consider it very useful if every service type had a common (specific) base path, such as “/home/service/1”, “/home/service/2”, “/home/myservice/1”.

This would at least allow a “rough” assignment of suitable drive types per service type - and it would probably also remain harmless when moving between nodes, restoring, or reinstalling previously deleted containers.

Basically, one would also wish that - as is usual with Linux - you could mount directly into a deep folder structure even under a container orchestrator, without violating its basic concepts (move/restore). The choice of drive type can depend not only on the data type but also on the specific use. For example, I would like most mail folders on an average SSD, but certain mail folders such as an archive (with rare access) on a slow but large HDD - or even a tape drive (not tested, but conceivable). The same goes for certain SMB shares.

Small swipe at the developers - otherwise I very much appreciate their work:
With NethServer 7, all of this worked “out of the box”. Already during installation, the base system could be mounted into the respective paths: /var/lib/nethserver, /var/lib/nethserver/ibay, /var/lib/nethserver/vmail etc.

This is not predictable under NETH8 (for me) - unless the instance number could be made predefinable or fixable, which might also be a way to address this in future versions.

Of course, I don’t know how realistic it is to get something like this in the future. Adapting the administrative code to it would be the biggest part of the work.

And I couldn’t even say whether I could look forward to such an upgrade later - if I had “bent” my system myself by then. As long as everything is on one drive, such an upgrade would probably be done with a few hard links and a code update. If hard links can’t work because different drives are involved, then an upgrade would trigger exactly what I fear now.

Please excuse my long posts - they are not the first on this subject - but I am still searching intensively for a solution, because without one you cannot move a single NethServer 7 to NETH8.

Greetings Yummiweb

Dear Andy,

thank you very much for your suggestion. I’m afraid I will have to do it that way - but it’s anything but practical.

It would be so much manual work that I might as well set up a bare NETH8 (or any other solution, or several), re-create all the services there, and import the required old data.

Given the problems described, however, I wonder whether NETH8 is (already) worth this effort - because it has to keep working stably after the move (and with my adjustments). NethServer 7 did; with NETH8 there are still a lot of open questions.

At the moment NETH8 is actually much more puzzling to me than the individual services of NethServer 7, most of which I know well apart from the old e-smith configuration, and I find more help online for those services than for the internals of NETH8. Yes, there are the dev manuals, but I am not a developer, rather a user/sysadmin (who also likes to script). So I am not sure whether - for me - the learning effort would not be better invested in the individual services than in the internals of NETH8, outside the GUI and away from the intended paths.

Unfortunately, I need the existing Samba AD(s) without any functional restriction and without a long adaptation period - also for the Macs and Windows machines, because their user folders are integrated into it. So a move to NETH8 is probably the only realistic option if I don’t want to be blocked and frustrated until Christmas.

But the Samba shares are also tied to the Samba AD - and with that we are back to the question of storage space.

I do run many services separately, be it in Proxmox system containers or virtual machines. But the interplay worries me - not to mention the maintenance. Separate web servers, Nextclouds, DB servers, FreePBX - no problem: either they don’t need user integration, or the AD is only used for authentication, which has worked quite well so far - and FreePBX and Nextcloud have been more stable for me that way (which was the reason for separating them).

SOGo, for example, also has to work perfectly - even for every new user - without manual adjustments each time. The same generally applies to all new users in the AD, and in particular to mail users and the mail integration. Setting up a mail server is not rocket science, but to manage it in the long run a good GUI is very practical. And the user administration (including AD, mail and SOGo) was already fantastic in NethServer 7.

That is the reason for my relative frustration: the actually very good NETH8 resists my “own” scenarios. I would also like a few more options for manual intervention in the respective service configs (inside the containers). These should live in the container’s /etc as usual, but it is still not clear to me what would be overwritten by changes made in the GUI. With NethServer 7 this was clearly regulated via e-smith. In NETH8 I can’t even find Postfix - is it now inside the (respective) Dovecot container, or another one?

My mistake is probably that I hardly read the documentation, or only read it at an early stage. And I don’t go looking every day for what I couldn’t find back then.

Greetings Yummiweb

Hi @yummiweb

Thanks for the longer feedback.

If it helps, I can confirm that AD on NS8 works extremely well, as do Webmail, Mail and also Nextcloud (using AD/Samba shares and shares from an OpenMediaVault NAS / Synology NAS).

For some of my clients, especially doctors’ practices, the Windows software they use is licensed by AD domain name, MAC address and server name, forcing me to retain the AD. :frowning: Changing it would have triggered a new installation by the doctors’ software providers - a 10K “problem” for me and my clients…

What caused me (my clients) the most issues was the FileServer. Migration worked, but:

Wrong default permissions for a group share (650): the user can read/write - but not execute (problems with PDFs) - and others can only read. This even though on NS8 the permissions are set to:
Owner R/W/E
Group R/W/E
Others None
These permissions are not what’s expected for a group share. The issue is the difference between UNIX permissions and ACL permissions.
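
For anyone wanting to repair such a share from the shell, a generic sketch (share path and group name are placeholders, and on NS8 the supported route is the share’s permission reset in the UI):

chgrp -R sharegroup /path/to/share
chmod -R g+rwX /path/to/share    # X: execute on directories (and already-executable files) only
chmod g+s /path/to/share         # setgid, so new files inherit the group
setfacl -R -m g:sharegroup:rwX -m d:g:sharegroup:rwX /path/to/share    # default ACL so new files stay group-writable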

If you copy the files afterwards with a Windows client (as Domain Admin), you will have far fewer issues.

That’s why I had to redo the fileserver / contents…

I used the trick I mentioned for 4 larger clients, also to split some shares off to a NAS. The VM of NS7 was over 1.5 TB…
Backup to NAS took over 4 hours. Backup with PBS took maybe 5 minutes if incremental; after a reboot, a full backup would take 1.5 hours.
Now a backup to NAS takes 1 hour, and a PBS backup takes less than 5 minutes!

This worked quite well.

As to the doing…
Move the folder contents of the iBays, e.g. to /AAA/ibay-name.
As this is a move, it is extremely fast.
The data is still on the NS7 disk and can be moved back later to the share. (You could use a Windows PC with TotalCommander - or SSH/SCP/rsync from Linux - to move your stuff wherever you want it!)
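
A minimal rsync sketch of the move-out / move-back (paths are examples; /var/lib/nethserver/ibay/<name> is the NS7 iBay location, and the NS8 target depends on your share):

rsync -a --remove-source-files /var/lib/nethserver/ibay/myshare/ /AAA/myshare/
# ... run the migration ...
rsync -a /AAA/myshare/ /path/to/ns8-share/

--remove-source-files leaves the (now empty) directory tree behind, which is what keeps the migration data small.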

My 2 cents
Andy
