Migrating a Nextcloud instance from NS7 to NS8 on a separate data volume: a journey

Hi all,

I’m planning the migration of a fairly big Nextcloud instance installed on NS7 to NS8. I’ll report my questions, successes and failures in this thread :blush:

First question:

In the process I’d like to move the Nextcloud data directory to a separate HDD while keeping the app and database on a NVMe drive.

Would it be feasible and supported to use a bind mount from the host to the data volume inside the Podman container?

That’s what GPT suggested to me, and while it doesn’t look unreasonable, I’m wondering if there are known issues or caveats with this approach, especially regarding container lifecycle, upgrades, or permissions.
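For reference, the idea can be tried in isolation before touching anything NS8-specific. Here is a quick standalone experiment, with hypothetical paths and a throwaway volume name, just to show that a Podman named volume can be backed by a host directory (on SELinux hosts the mount context also matters, more on that below):

# Standalone test: back a named volume with a host directory (hypothetical paths)
mkdir -p /mnt/hdd/nextcloud-data          # assumed writable by the Podman user
podman volume create \
  --opt type=none \
  --opt device=/mnt/hdd/nextcloud-data \
  --opt o=bind \
  nc-bind-test
podman run --rm --volume nc-bind-test:/data docker.io/library/alpine \
  sh -c 'echo probe > /data/probe.txt'
ls /mnt/hdd/nextcloud-data                # probe.txt should appear on the host
podman volume rm nc-bind-test             # clean up the test volume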

Thanks in advance for any insights or experiences you can share!

Yours faithfully, Pagaille

Second question:

The source machine currently uses a Domain Controller for accounts, and Nextcloud is configured to synchronize its internal accounts via LDAP.

In the past, I have encountered issues when migrating accounts that were assigned different UUIDs, resulting in a failure to sync with the existing accounts in the migrated Nextcloud instance.

Has this problem been resolved? How can I mitigate it?

Yes, that should work using the nextcloud-app-data volume but I don’t know if it’s possible during the migration process.

1 Like

I’ll try !

@davidep what’s your opinion about this? I’m afraid the migration tool will start the container by itself, and that I won’t have the chance to set up the bind-mount.

Otherwise I could migrate Nextcloud manually, but that leads to the second question about the users’ UUIDs, which will probably change. Is there a way to keep those UUIDs between machines?

Is it some special custom configuration or just the way NS7 preconfigures Nextcloud when using external AD?

How did you migrate those accounts?

Is it about the Nextcloud UUID or the container uid mapping?

Yes, standard config

Standard NS8 migration tool

You’re right: I was talking about the uids!

1 Like

I think I’ll experiment, but the current VM is > 2 TB and duplicating it for tests makes things difficult.

I’m going to test it.

If I understood correctly, the NS7 is connected to an external AD account provider. Is it a Windows Server?

Nope, a local NS7 DC account provider.

1 Like

When you press the “Start migration” button for NS7 Nextcloud, an NS8 Nextcloud app instance is created. Before the first Sync run you can remove the volume and re-create it manually with the bind-mount option. Just pay attention to preserving the original volume name, that’s all.

However, consider that the restore procedure is not so trivial with this setup. In case of restore you’d need a plan B, for example restoring everything on the same device.

2 Likes

Thanks! Never had to restore anything in 10 years :blush: I hope that will stay like this :wink:

You’re a lucky guy :smiley:

I suggest relying on a simple, single (logical) device, at least until we implement automated bind-mount volume management in the core. It’s not a priority, but still an important goal, as announced in other threads.

You mean not bind-mounting? In that case I should host Nextcloud on a mechanical drive, but I’d like the DB to be hosted on a fast drive… Maybe I could do it the other way around and customise the installation to point Nextcloud to an external database stored on a fast disk :thinking:

Nope, just not tinkering too much and using RAID :blush:

At first sight it looks like a smart catch, but the container image storage would then point to the slow disk and this may result in poor performance again. In the end I’d prefer the previous solution, with the bind-mount on the nextcloud-app-data volume only.

For the restore scenario, it would be possible to add a custom executable script that creates the volume and bind-mounts it here:

/usr/local/agent/actions/restore-module/

The custom script should run early, only if the app is nextcloud, and bind-mount the nextcloud-app-data volume before it is created by the existing action step, 10restore.
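A minimal sketch of such a step, where the step file name, the MODULE_ID environment variable and the mount point are all assumptions (only the volume name comes from this thread):

#!/bin/bash
# Hypothetical early step, e.g. /usr/local/agent/actions/restore-module/05bindmount
# Assumes the agent environment exports MODULE_ID and the HDD is already mounted.
set -e
case "$MODULE_ID" in
  nextcloud*) ;;   # only act for Nextcloud instances
  *) exit 0 ;;     # skip silently for every other app
esac
# Create the volume bind-mounted, before 10restore creates it as a plain volume
podman volume create \
  --opt type=none \
  --opt device=/mnt/nextcloudstorage \
  --opt o=bind \
  nextcloud-app-data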

2 Likes

Thanks. For the record, @Andy_Wismer suggested using an SMB External Storage, but migrating to an external storage is not supported and one would lose all the shared links, versioning and everything.

I tried the migration and also tried to set up a new nextcloud-app-data volume.
During migration, the volume is in use by the rsync container, so I stopped it to be able to remove the nextcloud volume and recreate it using a bind-mount.

To start the rsync container again:

podman run --rm --detach --privileged \
  --network=host \
  --workdir=/srv \
  --env=RSYNCD_NETWORK=10.5.4.0/24 \
  --env=RSYNCD_ADDRESS=cluster-localnode \
  --env=RSYNCD_PORT=20002 \
  --env=RSYNCD_USER=nextcloud3 \
  --env=RSYNCD_PASSWORD=18841bf5bedbd-64c8-4518-942e-e314823b4f92 \
  --env=RSYNCD_SYSLOG_TAG=nextcloud3 \
  --volume=/dev/log:/dev/log \
  --replace \
  --name=rsync-nextcloud3 \
  --volume=/home/nextcloud3/.config/state:/srv/state \
  --volume=nextcloud-app-data:/srv/volumes/nextcloud-app-data \
  ghcr.io/nethserver/rsync:3.9.1

To get the command you could run podman volume inspect rsync-nextcloud3

But when finishing the Nextcloud migration, it seems to be stuck at 40%…

EDIT:

I tried to start nextcloud-app manually but got an error:

[nextcloud3@node state]$ systemctl --user enable nextcloud-app --now
Job for nextcloud-app.service failed because the control process exited with error code.
See "systemctl --user status nextcloud-app.service" and "journalctl --user -xeu nextcloud-app.service" for details.

I think the manual restart is not needed. Try to kill rsyncd; that will abort the import-module. Then you can adjust the module volume with the bind-mount. Finally, launch Sync again from the migration tool.

The import-module should recover, because an NS8 node reboot is allowed during migration.
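In practice that could be a single command from the node, using the module and container names seen earlier in this thread:

runagent -m nextcloud3 podman stop rsync-nextcloud3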

1 Like

Syncing the data worked, but I got a “permission denied” error when finishing the NC migration, and nextcloud-app doesn’t start.

2025-07-10T20:32:11+02:00 [1:nextcloud1:nextcloud-app] /entrypoint.sh: line 137: can't create /var/www/html/nextcloud-init-sync.lock: Permission denied
2025-07-10T20:32:11+02:00 [1:nextcloud1:podman] e2cb961d10eeaea1b59c0eab7fa2129467c504191d195f1a7bfc9de9b1aedcc6
2025-07-10T20:32:11+02:00 [1:nextcloud1:systemd] nextcloud-app.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

After disabling SELinux for the nextcloud-app container, it works.

This is just for testing; disabling security features isn’t good practice. Maybe there are other ways, I need to check…

To edit the service file (just for testing; to make it persistent see SOGo new features: configuration template and access/configure button - #33 by mrmarkuz):

runagent -m nextcloud1 systemctl --user edit --full nextcloud-app

--security-opt label=disable needs to be added to the ExecStart=/usr/bin/podman run line:

ExecStart=/usr/bin/podman run --security-opt label=disable --conmon-pidfile ...

EDIT:

The Nextcloud UUIDs are correctly migrated to NS8.

1 Like

Working on this. I’m making progress!

First of all: there is a typo above :blush: that should be podman inspect rsync-nextcloud3, not podman volume inspect.

I managed to get it working (thanks to dear GPT too :grimacing:), here are my notes:

# On NS8: format the data drive as XFS and mount it with the correct SELinux parameters (context=system_u:object_r:container_file_t:s0)

# in fstab :
UUID=xxx-xxx-xxx /mnt/nextcloudstorage auto nofail,defaults,context=system_u:object_r:container_file_t:s0 0 0

# Fix permissions if needed
chown root:root /mnt/nextcloudstorage
chmod 755 /mnt/nextcloudstorage # 777 needed ?
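
# Optional sanity check before going further: the drive must be mounted
# with the container_file_t context or the containers won't be able to write
findmnt /mnt/nextcloudstorage   # the fstab entry should be active
ls -dZ /mnt/nextcloudstorage    # context should show container_file_t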

# Start the migration from the NS7 web UI and watch it do its thing on NS8.

# Get into the module environment to recreate the volume:
runagent -m nextcloudx

# rsync-nextcloudx must be stopped: take note of its command line first, to be able to restart rsync-nextcloudx later (the password changes on each run)

podman inspect rsync-nextcloudx
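
# Shortcut (assuming a recent Podman, which records the original command
# line in the CreateCommand field of the inspect output):
podman container inspect rsync-nextcloudx \
  --format '{{range .Config.CreateCommand}}{{.}} {{end}}'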

# stop the rsync container first, then remove the volume
podman stop rsync-nextcloudx
podman volume rm nextcloud-app-data

# recreate the volume bind-mounted 
podman volume create \
  --opt type=none \
  --opt device=/mnt/nextcloudstorage \
  --opt o=bind \
  nextcloud-app-data
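
# Optional: confirm the volume really points at the bind-mounted device
podman volume inspect nextcloud-app-data --format '{{.Options.device}}'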

# leave the module environment and apply SELinux policies
exit
chcon -Rt container_file_t /mnt/nextcloudstorage

# Make it permanent
semanage fcontext -a -t container_file_t '/mnt/nextcloudstorage(/.*)?'
restorecon -Rv /mnt/nextcloudstorage # to be sure

# Get back into the module environment and restart rsync-nextcloudx using the parameters noted with podman inspect

runagent -m nextcloudx

podman run -d  \
  --rm \
  --privileged \
  --network=host \
  --workdir=/srv \
  --env RSYNCD_NETWORK=10.5.4.0/24 \
  --env RSYNCD_ADDRESS=cluster-localnode \
  --env RSYNCD_PORT=20002 \
  --env RSYNCD_USER=nextcloudX \
  --env RSYNCD_PASSWORD=changeme \
  --env RSYNCD_SYSLOG_TAG=nextcloudX \
  --volume /dev/log:/dev/log \
  --replace \
  --name rsync-nextcloudX \
  --volume /home/nextcloudX/.config/state:/srv/state \
  --volume nextcloud-app-data:/srv/volumes/nextcloud-app-data \
  ghcr.io/nethserver/rsync:3.9.2

Then start the data sync.

I had to do that too.

Now the next step is to finish the migration. I get this error message after I click the button:

----------- finish nethserver-nextcloud Tue, 22 Jul 2025 12:22:35 +0200
dr-xr-xr-x              6 2025/02/14 00:04:11 .
<7>podman-pull-missing ghcr.io/nethserver/rsync:3.9.2
Trying to pull ghcr.io/nethserver/rsync:3.9.2...
Getting image source signatures
Copying blob sha256:b9c02ff59dad3a98ba4d3677088768e396e65819268591252696431d8f9410da
Copying blob sha256:b0f6f1c319a1570f67352d490370f0aeb5c0e67a087baf2d5f301ad51ec18858
Copying config sha256:932518f25ea8b2558da97bfd3caf5e7ce798d566e93788b5f097a9ff34205bfe
Writing manifest to image destination
932518f25ea8b2558da97bfd3caf5e7ce798d566e93788b5f097a9ff34205bfe
<7>podman volume create nextcloud-app-data
nextcloud-app-data
<7>podman run --rm --privileged --network=host --workdir=/srv --env=RSYNCD_NETWORK=10.5.4.0/24 --env=RSYNCD_ADDRESS=cluster-localnode --env=RSYNCD_PORT=20002 --env=RSYNCD_USER=nextcloud3 --env=RSYNCD_PASSWORD=520115f4cfc8c-f93c-4e49-8c71-8c02c1bb32bb --env=RSYNCD_SYSLOG_TAG=nextcloud3 --volume=/dev/log:/dev/log --replace --name=rsync-nextcloud3 --volume=/home/nextcloud3/.config/state:/srv/state --volume=nextcloud-app-data:/srv/volumes/nextcloud-app-data ghcr.io/nethserver/rsync:3.9.2
/usr/share/nethesis/nethserver-ns8-migration/apps/nethserver-nextcloud/migrate: ligne 125 : USER_DOMAIN : paramètre vide ou non défini

(For non-French readers, the last log line reads: “migrate: line 125: USER_DOMAIN: parameter null or not set”.)

I don’t get why the volume gets recreated when clicking on Finish:

podman volume create nextcloud-app-data

1 Like