I’m planning the migration of a rather large Nextcloud instance installed on NS7 to NS8. I’ll report my questions, successes and failures in this thread.
First question:
In the process I’d like to move the Nextcloud data directory to a separate HDD while keeping the app and database on an NVMe drive.
Would it be feasible and supported to use a bind mount from the host to the data volume inside the Podman container?
That’s what GPT suggested, and while it doesn’t seem unreasonable, I’m wondering if there are known issues or caveats with this approach, especially regarding container lifecycle, upgrades, or permissions.
Thanks in advance for any insights or experiences you can share!
The source machine currently uses a Domain Controller for accounts, and Nextcloud is configured to synchronize its internal accounts via LDAP.
In the past, I have encountered issues when migrating accounts that were assigned different UUIDs, resulting in a failure to sync with the existing accounts in the migrated Nextcloud instance.
Has this problem been resolved? How can I mitigate it?
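For context, this is roughly how I would check and pin the attribute Nextcloud uses as the LDAP UUID (a sketch; the occ invocation, the s01 configuration ID and the entryUUID attribute are assumptions that may differ per install):
# Show the LDAP configuration and check ldapExpertUUIDUserAttr /
# ldapExpertUUIDGroupAttr (an empty value means auto-detection)
occ ldap:show-config s01
# Pinning the attribute explicitly should keep the same UUIDs across machines
# (assumption: adjust entryUUID/objectGUID to whatever the directory exposes)
occ ldap:set-config s01 ldapExpertUUIDUserAttr entryUUID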
@davidep what’s your opinion about this? I’m afraid the migration tool will start the container by itself and that I won’t have the chance to set up the bind-mount.
Otherwise I could migrate Nextcloud manually, but that leads to the second question about the users’ UUIDs, which will probably change. Is there a way to keep those UUIDs between machines?
When you press the “Start migration” button for NS7 Nextcloud, an NS8 Nextcloud app instance is created. Before the first Sync run you can remove the volume and re-create it manually with the bind-mount option. Just pay attention to preserve the original volume name, that’s all.
However, consider that the restore procedure is not so trivial with this setup. In case of a restore you’d need a plan B, for example restoring everything on the same device.
I suggest relying on a single, simple (logical) device, at least until we implement automated bind-mount volume management in the core. It’s not a priority but still an important goal, as announced in other threads.
You mean not bind-mounting? In that case I would have to host Nextcloud on a mechanical drive, while I’d like the DB to be on a fast drive… Maybe I could do it the other way around and customise the installation to point Nextcloud to an external database stored on a fast disk.
At first sight it looks like a smart catch, but the container image storage would then point to the slow disk and this may result in poor performance again. In the end I’d prefer the previous solution, with the bind-mount on the nextcloud-app-data volume only.
For the restore scenario, it would be possible to add a custom executable script that creates the volume and bind-mounts it here:
/usr/local/agent/actions/restore-module/
The custom script should run early, only if the app is Nextcloud, and bind-mount the nextcloud-app-data volume before it is created by the existing action step, 10restore.
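Something along these lines might work, as a rough, untested sketch (the 05bindmount step name is hypothetical, it only has to sort before 10restore, and the MODULE_ID check is an assumption about what the agent environment actually provides):
#!/bin/bash
# Hypothetical /usr/local/agent/actions/restore-module/05bindmount
# Pre-create nextcloud-app-data as a bind-mount so 10restore finds it already there
set -e
# Assumption: the agent exports the module id; adapt the check to the real environment
case "$MODULE_ID" in
nextcloud*)
    podman volume create \
        --opt type=none \
        --opt device=/mnt/nextcloudstorage \
        --opt o=bind \
        nextcloud-app-data
    ;;
esac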
Thanks. For the record, @Andy_Wismer suggested using an SMB External Storage, but migrating to an external storage is not supported and one would lose all the shared links, versioning and so on.
I tried the migration and also tried to set up a new nextcloud-app-data volume.
During the migration the volume is in use by the rsync container, so I stopped it in order to remove the Nextcloud volume and recreate it using a bind-mount.
To get the command you could run podman volume inspect rsync-nextcloud3
But when finishing the Nextcloud migration, it seems to be stuck at 40%…
EDIT:
I tried to start nextcloud-app manually but got an error:
[nextcloud3@node state]$ systemctl --user enable nextcloud-app --now
Job for nextcloud-app.service failed because the control process exited with error code.
See "systemctl --user status nextcloud-app.service" and "journalctl --user -xeu nextcloud-app.service" for details.
I think the manual restart is not needed. Try to kill rsyncd; that will abort the import-module. Then you can adjust the module volume with the bind-mount. Finally, launch Sync again from the migration tool.
The import-module should be recoverable, because an NS8 node reboot is allowed during migration.
First of all: there is a typo here, it should be podman inspect rsync-nextcloud3
I managed to get it working (thanks dear GPT too), here are my notes:
# On NS8: format the data drive as XFS and mount it with the correct SELinux context (context=system_u:object_r:container_file_t:s0)
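# (a sketch, assuming the dedicated HDD shows up as /dev/sdb; adapt to the real device)
mkfs.xfs /dev/sdb
blkid /dev/sdb          # note the UUID for the fstab entry below
mkdir -p /mnt/nextcloudstorage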
# in fstab :
UUID=xxx-xxx-xxx /mnt/nextcloudstorage auto nofail,defaults,context=system_u:object_r:container_file_t:s0 0 0
# Fix permissions if needed
chown root:root /mnt/nextcloudstorage
chmod 755 /mnt/nextcloudstorage # 777 needed ?
# Start the migration on the ns7 web ui and watch it do its thing on ns8.
# Enter the module environment to recreate the volume:
runagent -m nextcloudX
# rsync-nextcloudX must be stopped: take note of its command line so you can restart rsync-nextcloudX later (the password changes between runs)
podman inspect rsync-nextcloudX
# Stop the rsync container first, then remove the volume
podman stop rsync-nextcloudX
podman volume rm nextcloud-app-data
# recreate the volume bind-mounted
podman volume create \
--opt type=none \
--opt device=/mnt/nextcloudstorage \
--opt o=bind \
nextcloud-app-data
# Exit the module environment and apply SELinux policies
exit
chcon -Rt container_file_t /mnt/nextcloudstorage
# Make it permanent
semanage fcontext -a -t container_file_t '/mnt/nextcloudstorage(/.*)?'
restorecon -Rv /mnt/nextcloudstorage # to be sure
# Get back into the module environment and restart rsync-nextcloudX using the parameters noted with podman inspect
runagent -m nextcloudX
podman run -d \
--rm \
--privileged \
--network=host \
--workdir=/srv \
--env RSYNCD_NETWORK=10.5.4.0/24 \
--env RSYNCD_ADDRESS=cluster-localnode \
--env RSYNCD_PORT=20002 \
--env RSYNCD_USER=nextcloudX \
--env RSYNCD_PASSWORD=changeme \
--env RSYNCD_SYSLOG_TAG=nextcloudX \
--volume /dev/log:/dev/log \
--replace \
--name rsync-nextcloudX \
--volume /home/nextcloudX/.config/state:/srv/state \
--volume nextcloud-app-data:/srv/volumes/nextcloud-app-data \
ghcr.io/nethserver/rsync:3.9.2
Then start the data sync.
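As a quick sanity check (same nextcloudX placeholder as above), I would confirm the volume really points at the HDD and watch the data land on the mount during the sync:
# From the module environment: the Options section should show the bind-mount
runagent -m nextcloudX
podman volume inspect nextcloud-app-data
exit
# On the host: watch disk usage grow on the HDD while the sync runs
df -h /mnt/nextcloudstorage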
I had to do that too.
Now the next step is to finish the migration. I get this error message after I click the button: