Backup on USB-HD fail

NethServer 8,

we have been using USB-HDs to back up NethServer for years (one at the machine, two at home). But since an update around 05.08.2025 there is a backup alert! We can mount the USB-HD and see older backups, and there is enough space left on it. A manual start gives this error list:

Task module/crowdsec1/run-backup run failed: {'output': '', 'error': 'restic snapshots\nrclone: 2025/08/11 11:13:40 CRITICAL: Failed to create file system for ":webdav:/crowdsec/f561a2ea-52dd-4daf-a862-6d755121783f": read metadata failed: Propfind "http://10.5.4.1:4694/crowdsec/f561a2ea-52dd-4daf-a862-6d755121783f": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: unable to open repository at rclone::webdav:/crowdsec/f561a2ea-52dd-4daf-a862-6d755121783f: error talking HTTP to rclone: exit status 1\nInitializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path crowdsec/f561a2ea-52dd-4daf-a862-6d755121783f\nrestic init\nrclone: 2025/08/11 11:14:00 CRITICAL: Failed to create file system for ":webdav:/crowdsec/f561a2ea-52dd-4daf-a862-6d755121783f": read metadata failed: Propfind "http://10.5.4.1:4694/crowdsec/f561a2ea-52dd-4daf-a862-6d755121783f": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: create repository at rclone::webdav:/crowdsec/f561a2ea-52dd-4daf-a862-6d755121783f failed: Fatal: unable to open repository at rclone::webdav:/crowdsec/f561a2ea-52dd-4daf-a862-6d755121783f: error talking HTTP to rclone: exit status 1\n\n[ERROR] restic init failed. Command \'[\'podman\', \'run\', \'-i\', \'--rm\', \'--name=restic-crowdsec1-1452421\', \'--privileged\', \'--network=host\', \'--volume=restic-cache:/var/cache/restic\', \'--log-driver=none\', \'-e\', \'RESTIC_PASSWORD\', \'-e\', \'RESTIC_CACHE_DIR\', \'-e\', \'RESTIC_REPOSITORY\', \'-e\', \'RCLONE_WEBDAV_URL\', \'ghcr.io/nethserver/restic:3.9.2\', \'init\']\' returned non-zero exit status 1.\n', 'exit_code': 1}
Task module/mail1/run-backup run failed: {'output': 'Dumping Mail state to disk:\nSaving Maildir IMAP folder index:\nSaving service status:\n', 'error': 'Error: crun: setns(pid=3148, CLONE_NEWUSER): Operation not permitted: OCI permission denied\nrestic snapshots\nrclone: 2025/08/11 11:13:43 CRITICAL: Failed to create file system for ":webdav:/mail/159e4824-fc0d-427c-bfb5-84d8172a6d4f": read metadata failed: Propfind "http://10.5.4.1:4694/mail/159e4824-fc0d-427c-bfb5-84d8172a6d4f": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: unable to open repository at rclone::webdav:/mail/159e4824-fc0d-427c-bfb5-84d8172a6d4f: error talking HTTP to rclone: exit status 1\nInitializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path mail/159e4824-fc0d-427c-bfb5-84d8172a6d4f\nrestic init\nrclone: 2025/08/11 11:14:00 CRITICAL: Failed to create file system for ":webdav:/mail/159e4824-fc0d-427c-bfb5-84d8172a6d4f": read metadata failed: Propfind "http://10.5.4.1:4694/mail/159e4824-fc0d-427c-bfb5-84d8172a6d4f": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: create repository at rclone::webdav:/mail/159e4824-fc0d-427c-bfb5-84d8172a6d4f failed: Fatal: unable to open repository at rclone::webdav:/mail/159e4824-fc0d-427c-bfb5-84d8172a6d4f: error talking HTTP to rclone: exit status 1\n\n[ERROR] restic init failed. Command \'[\'podman\', \'run\', \'-i\', \'--rm\', \'--name=restic-mail1-1452433\', \'--privileged\', \'--network=host\', \'--volume=restic-cache:/var/cache/restic\', \'--log-driver=none\', \'-e\', \'RESTIC_PASSWORD\', \'-e\', \'RESTIC_CACHE_DIR\', \'-e\', \'RESTIC_REPOSITORY\', \'-e\', \'RCLONE_WEBDAV_URL\', \'ghcr.io/nethserver/restic:3.9.2\', \'init\']\' returned non-zero exit status 1.\n', 'exit_code': 1}
Task module/imapsync1/run-backup run failed: {'output': '', 'error': 'restic snapshots\nrclone: 2025/08/11 11:13:38 CRITICAL: Failed to create file system for ":webdav:/imapsync/209e1bde-cb38-4456-8b71-68102013a203": read metadata failed: Propfind "http://10.5.4.1:4694/imapsync/209e1bde-cb38-4456-8b71-68102013a203": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: unable to open repository at rclone::webdav:/imapsync/209e1bde-cb38-4456-8b71-68102013a203: error talking HTTP to rclone: exit status 1\nInitializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path imapsync/209e1bde-cb38-4456-8b71-68102013a203\nrestic init\nrclone: 2025/08/11 11:13:58 CRITICAL: Failed to create file system for ":webdav:/imapsync/209e1bde-cb38-4456-8b71-68102013a203": read metadata failed: Propfind "http://10.5.4.1:4694/imapsync/209e1bde-cb38-4456-8b71-68102013a203": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: create repository at rclone::webdav:/imapsync/209e1bde-cb38-4456-8b71-68102013a203 failed: Fatal: unable to open repository at rclone::webdav:/imapsync/209e1bde-cb38-4456-8b71-68102013a203: error talking HTTP to rclone: exit status 1\n\n[ERROR] restic init failed. Command \'[\'podman\', \'run\', \'-i\', \'--rm\', \'--name=restic-imapsync1-1452423\', \'--privileged\', \'--network=host\', \'--volume=restic-cache:/var/cache/restic\', \'--log-driver=none\', \'-e\', \'RESTIC_PASSWORD\', \'-e\', \'RESTIC_CACHE_DIR\', \'-e\', \'RESTIC_REPOSITORY\', \'-e\', \'RCLONE_WEBDAV_URL\', \'ghcr.io/nethserver/restic:3.9.2\', \'init\']\' returned non-zero exit status 1.\n', 'exit_code': 1}
Task module/openldap1/run-backup run failed: {'output': 'Dumping state to LDIF files:\n', 'error': 'restic snapshots\nrclone: 2025/08/11 11:13:42 CRITICAL: Failed to create file system for ":webdav:/openldap/b2878c23-bf64-4f7c-894e-8dad7a984243": read metadata failed: Propfind "http://10.5.4.1:4694/openldap/b2878c23-bf64-4f7c-894e-8dad7a984243": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: unable to open repository at rclone::webdav:/openldap/b2878c23-bf64-4f7c-894e-8dad7a984243: error talking HTTP to rclone: exit status 1\nInitializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path openldap/b2878c23-bf64-4f7c-894e-8dad7a984243\nrestic init\nrclone: 2025/08/11 11:14:00 CRITICAL: Failed to create file system for ":webdav:/openldap/b2878c23-bf64-4f7c-894e-8dad7a984243": read metadata failed: Propfind "http://10.5.4.1:4694/openldap/b2878c23-bf64-4f7c-894e-8dad7a984243": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: create repository at rclone::webdav:/openldap/b2878c23-bf64-4f7c-894e-8dad7a984243 failed: Fatal: unable to open repository at rclone::webdav:/openldap/b2878c23-bf64-4f7c-894e-8dad7a984243: error talking HTTP to rclone: exit status 1\n\n[ERROR] restic init failed. Command \'[\'podman\', \'run\', \'-i\', \'--rm\', \'--name=restic-openldap1-1452422\', \'--privileged\', \'--network=host\', \'--volume=restic-cache:/var/cache/restic\', \'--log-driver=none\', \'-e\', \'RESTIC_PASSWORD\', \'-e\', \'RESTIC_CACHE_DIR\', \'-e\', \'RESTIC_REPOSITORY\', \'-e\', \'RCLONE_WEBDAV_URL\', \'ghcr.io/nethserver/restic:3.9.2\', \'init\']\' returned non-zero exit status 1.\n', 'exit_code': 1}
Task module/sogo1/run-backup run failed: {'output': '', 'error': 'restic snapshots\nrclone: 2025/08/11 11:13:42 CRITICAL: Failed to create file system for ":webdav:/sogo/b1530faa-9b71-46eb-b94d-a3d1292d0dcf": read metadata failed: Propfind "http://10.5.4.1:4694/sogo/b1530faa-9b71-46eb-b94d-a3d1292d0dcf": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: unable to open repository at rclone::webdav:/sogo/b1530faa-9b71-46eb-b94d-a3d1292d0dcf: error talking HTTP to rclone: exit status 1\nInitializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path sogo/b1530faa-9b71-46eb-b94d-a3d1292d0dcf\nrestic init\nrclone: 2025/08/11 11:14:00 CRITICAL: Failed to create file system for ":webdav:/sogo/b1530faa-9b71-46eb-b94d-a3d1292d0dcf": read metadata failed: Propfind "http://10.5.4.1:4694/sogo/b1530faa-9b71-46eb-b94d-a3d1292d0dcf": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: create repository at rclone::webdav:/sogo/b1530faa-9b71-46eb-b94d-a3d1292d0dcf failed: Fatal: unable to open repository at rclone::webdav:/sogo/b1530faa-9b71-46eb-b94d-a3d1292d0dcf: error talking HTTP to rclone: exit status 1\n\n[ERROR] restic init failed. Command \'[\'podman\', \'run\', \'-i\', \'--rm\', \'--name=restic-sogo1-1452427\', \'--privileged\', \'--network=host\', \'--volume=restic-cache:/var/cache/restic\', \'--log-driver=none\', \'-e\', \'RESTIC_PASSWORD\', \'-e\', \'RESTIC_CACHE_DIR\', \'-e\', \'RESTIC_REPOSITORY\', \'-e\', \'RCLONE_WEBDAV_URL\', \'ghcr.io/nethserver/restic:3.9.2\', \'init\']\' returned non-zero exit status 1.\n', 'exit_code': 1}
Task module/loki1/run-backup run failed: {'output': '', 'error': 'restic snapshots\nrclone: 2025/08/11 11:13:39 CRITICAL: Failed to create file system for ":webdav:/loki/cc18d93b-fc25-478f-88a2-240f1a1bd292": read metadata failed: Propfind "http://10.5.4.1:4694/loki/cc18d93b-fc25-478f-88a2-240f1a1bd292": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: unable to open repository at rclone::webdav:/loki/cc18d93b-fc25-478f-88a2-240f1a1bd292: error talking HTTP to rclone: exit status 1\nInitializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path loki/cc18d93b-fc25-478f-88a2-240f1a1bd292\nrestic init\nrclone: 2025/08/11 11:14:00 CRITICAL: Failed to create file system for ":webdav:/loki/cc18d93b-fc25-478f-88a2-240f1a1bd292": read metadata failed: Propfind "http://10.5.4.1:4694/loki/cc18d93b-fc25-478f-88a2-240f1a1bd292": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: create repository at rclone::webdav:/loki/cc18d93b-fc25-478f-88a2-240f1a1bd292 failed: Fatal: unable to open repository at rclone::webdav:/loki/cc18d93b-fc25-478f-88a2-240f1a1bd292: error talking HTTP to rclone: exit status 1\n\n[ERROR] restic init failed. Command \'[\'podman\', \'run\', \'-i\', \'--rm\', \'--name=restic-loki1-1452424\', \'--privileged\', \'--network=host\', \'--volume=restic-cache:/var/cache/restic\', \'--log-driver=none\', \'-e\', \'RESTIC_PASSWORD\', \'-e\', \'RESTIC_CACHE_DIR\', \'-e\', \'RESTIC_REPOSITORY\', \'-e\', \'RCLONE_WEBDAV_URL\', \'ghcr.io/nethserver/restic:3.9.2\', \'init\']\' returned non-zero exit status 1.\n', 'exit_code': 1}
Task module/traefik1/run-backup run failed: {'output': "mkdir: Verzeichnis 'state-backup' angelegt\n'traefik.yaml' -> 'state-backup/traefik.yaml'\n'configs' -> 'state-backup/configs'\n'configs/pasw.yml' -> 'state-backup/configs/pasw.yml'\n'configs/cluster-admin.yml' -> 'state-backup/configs/cluster-admin.yml'\n'configs/3CX.yml' -> 'state-backup/configs/3CX.yml'\n'configs/mail1-rspamd.yml' -> 'state-backup/configs/mail1-rspamd.yml'\n'configs/sogo1.yml' -> 'state-backup/configs/sogo1.yml'\n'configs/_http2https.yml' -> 'state-backup/configs/_http2https.yml'\n'configs/esweb.yml' -> 'state-backup/configs/esweb.yml'\n'configs/_default_cert.yml' -> 'state-backup/configs/_default_cert.yml'\n'configs/bma-cloud.yml' -> 'state-backup/configs/bma-cloud.yml'\n'configs/nextcloud2.yml' -> 'state-backup/configs/nextcloud2.yml'\n'configs/openldap1-amld.yml' -> 'state-backup/configs/openldap1-amld.yml'\n'configs/_api.yml' -> 'state-backup/configs/_api.yml'\n'manual_flags' -> 'state-backup/manual_flags'\n'manual_flags/pasw' -> 'state-backup/manual_flags/pasw'\n'manual_flags/esweb' -> 'state-backup/manual_flags/esweb'\n'manual_flags/bma-cloud' -> 'state-backup/manual_flags/bma-cloud'\n'manual_flags/3CX' -> 'state-backup/manual_flags/3CX'\n'custom_certificates' -> 'state-backup/custom_certificates'\n'acme' -> 'state-backup/acme'\n'acme/acme.json.acmejson-notify' -> 'state-backup/acme/acme.json.acmejson-notify'\n'acme/acme.json' -> 'state-backup/acme/acme.json'\n", 'error': 'restic snapshots\nrclone: 2025/08/11 11:13:38 CRITICAL: Failed to create file system for ":webdav:/traefik/198d25f3-bc54-428d-84d9-2f7ed5796bb7": read metadata failed: Propfind "http://10.5.4.1:4694/traefik/198d25f3-bc54-428d-84d9-2f7ed5796bb7": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: unable to open repository at rclone::webdav:/traefik/198d25f3-bc54-428d-84d9-2f7ed5796bb7: error talking HTTP to rclone: exit status 1\nInitializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path traefik/198d25f3-bc54-428d-84d9-2f7ed5796bb7\nrestic init\nrclone: 2025/08/11 11:13:58 CRITICAL: Failed to create file system for ":webdav:/traefik/198d25f3-bc54-428d-84d9-2f7ed5796bb7": read metadata failed: Propfind "http://10.5.4.1:4694/traefik/198d25f3-bc54-428d-84d9-2f7ed5796bb7": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: create repository at rclone::webdav:/traefik/198d25f3-bc54-428d-84d9-2f7ed5796bb7 failed: Fatal: unable to open repository at rclone::webdav:/traefik/198d25f3-bc54-428d-84d9-2f7ed5796bb7: error talking HTTP to rclone: exit status 1\n\n[ERROR] restic init failed. Command \'[\'podman\', \'run\', \'-i\', \'--rm\', \'--name=restic-traefik1-1452426\', \'--privileged\', \'--network=host\', \'--volume=restic-cache:/var/cache/restic\', \'--log-driver=none\', \'-e\', \'RESTIC_PASSWORD\', \'-e\', \'RESTIC_CACHE_DIR\', \'-e\', \'RESTIC_REPOSITORY\', \'-e\', \'RCLONE_WEBDAV_URL\', \'ghcr.io/nethserver/restic:3.9.2\', \'init\']\' returned non-zero exit status 1.\n', 'exit_code': 1}
Task module/nextcloud2/run-backup run failed: {'output': '', 'error': 'restic snapshots\nTrying to pull ghcr.io/nethserver/restic:3.9.2...\nGetting image source signatures\nCopying blob sha256:b0f6f1c319a1570f67352d490370f0aeb5c0e67a087baf2d5f301ad51ec18858\nCopying blob sha256:2246b04badcba6c8a7d16e25fade69c25c34ce7d8ff8726511b2d85121150216\nCopying config sha256:363b1008aad8b214eac80680f5f3a721137c9465f0bd9c47548a894bf8a99ec0\nWriting manifest to image destination\nStoring signatures\nrclone: 2025/08/11 11:14:39 CRITICAL: Failed to create file system for ":webdav:/nextcloud/c76251d2-b877-46a2-b1b9-d82b149407d4": read metadata failed: Propfind "http://10.5.4.1:4694/nextcloud/c76251d2-b877-46a2-b1b9-d82b149407d4": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: unable to open repository at rclone::webdav:/nextcloud/c76251d2-b877-46a2-b1b9-d82b149407d4: error talking HTTP to rclone: exit status 1\nInitializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path nextcloud/c76251d2-b877-46a2-b1b9-d82b149407d4\nrestic init\nrclone: 2025/08/11 11:14:45 CRITICAL: Failed to create file system for ":webdav:/nextcloud/c76251d2-b877-46a2-b1b9-d82b149407d4": read metadata failed: Propfind "http://10.5.4.1:4694/nextcloud/c76251d2-b877-46a2-b1b9-d82b149407d4": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: create repository at rclone::webdav:/nextcloud/c76251d2-b877-46a2-b1b9-d82b149407d4 failed: Fatal: unable to open repository at rclone::webdav:/nextcloud/c76251d2-b877-46a2-b1b9-d82b149407d4: error talking HTTP to rclone: exit status 1\n\n[ERROR] restic init failed. Command \'[\'podman\', \'run\', \'-i\', \'--rm\', \'--name=restic-nextcloud2-1452437\', \'--privileged\', \'--network=host\', \'--volume=restic-cache:/var/cache/restic\', \'--log-driver=none\', \'-e\', \'RESTIC_PASSWORD\', \'-e\', \'RESTIC_CACHE_DIR\', \'-e\', \'RESTIC_REPOSITORY\', \'-e\', \'RCLONE_WEBDAV_URL\', \'ghcr.io/nethserver/restic:3.9.2\', \'init\']\' returned non-zero exit status 1.\n', 'exit_code': 1}
Task module/dnsmasq1/run-backup run failed: {'output': '', 'error': 'restic snapshots\nrclone: 2025/08/11 11:13:36 CRITICAL: Failed to create file system for ":webdav:/dnsmasq/dbf1ae4b-edb6-44aa-b9ac-c3ed6d1d46a3": read metadata failed: Propfind "http://10.5.4.1:4694/dnsmasq/dbf1ae4b-edb6-44aa-b9ac-c3ed6d1d46a3": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: unable to open repository at rclone::webdav:/dnsmasq/dbf1ae4b-edb6-44aa-b9ac-c3ed6d1d46a3: error talking HTTP to rclone: exit status 1\nInitializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path dnsmasq/dbf1ae4b-edb6-44aa-b9ac-c3ed6d1d46a3\nrestic init\nrclone: 2025/08/11 11:13:43 CRITICAL: Failed to create file system for ":webdav:/dnsmasq/dbf1ae4b-edb6-44aa-b9ac-c3ed6d1d46a3": read metadata failed: Propfind "http://10.5.4.1:4694/dnsmasq/dbf1ae4b-edb6-44aa-b9ac-c3ed6d1d46a3": dial tcp 10.5.4.1:4694: connect: connection refused\nFatal: create repository at rclone::webdav:/dnsmasq/dbf1ae4b-edb6-44aa-b9ac-c3ed6d1d46a3 failed: Fatal: unable to open repository at rclone::webdav:/dnsmasq/dbf1ae4b-edb6-44aa-b9ac-c3ed6d1d46a3: error talking HTTP to rclone: exit status 1\n\n[ERROR] restic init failed. Command \'[\'podman\', \'run\', \'-i\', \'--rm\', \'--name=restic-dnsmasq1-1452425\', \'--privileged\', \'--network=host\', \'--volume=restic-cache:/var/cache/restic\', \'--log-driver=none\', \'-e\', \'RESTIC_PASSWORD\', \'-e\', \'RESTIC_CACHE_DIR\', \'-e\', \'RESTIC_REPOSITORY\', \'-e\', \'RCLONE_WEBDAV_URL\', \'ghcr.io/nethserver/restic:3.9.2\', \'init\']\' returned non-zero exit status 1.\n', 'exit_code': 1}

What is wrong? How do I find the reason?

It seems the backup target is not accessible. Did you try restarting the service that mounts the USB disk?

systemctl restart rclone-webdav.service

Let’s check the status:

systemctl status rclone-webdav.service -l
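Since all the errors show "connection refused" for 10.5.4.1:4694, you could also verify that the WebDAV server is actually listening there. A quick check (the address and port are taken from the error messages above):

# Is anything listening on the rclone WebDAV port?
ss -tlnp | grep 4694

# Probe it with curl: any HTTP status code means the server is reachable,
# "connection refused" means it is not running or not bound to that address
curl -s -o /dev/null -w '%{http_code}\n' http://10.5.4.1:4694/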

Are you rotating the disks between home and the company? I think if the disk has changed you need to set up the local volume again.

Did you already check for hardware errors (SMART)?
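For example, with smartmontools installed (a sketch; adapt sda to your USB disk, and note that some USB enclosures need -d sat for SMART commands to pass through):

# Quick overall health verdict
smartctl -H /dev/sda

# Full report including error logs and reallocated sector counts
smartctl -a /dev/sda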

Maybe the disk has some bad blocks? (Adapt sda to the USB disk.)

badblocks -v /dev/sda -s

Please check if the USB hard disk is set up correctly as explained in Backup and restore — NS8 documentation:

To show disk information, including UUIDs:

lsblk -f

Let’s list the volumes to check if a custom volume like volume00 was created:

podman volume ls

Check if the custom volume is there:

grep BACKUP_VOLUME /var/lib/nethserver/node/state/rclone-webdav.env

Inspect the custom volume to check the mountpoint:

podman volume inspect volume00
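Putting these checks together, a small sanity script (just a sketch; it reads the volume name from the env file checked above):

#!/bin/bash
# Verify that the volume configured for rclone-webdav exists in podman
source /var/lib/nethserver/node/state/rclone-webdav.env
echo "Configured backup volume: ${BACKUP_VOLUME}"
podman volume inspect "${BACKUP_VOLUME}" --format '{{.Mountpoint}}' \
    || echo "Volume ${BACKUP_VOLUME} does not exist"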

EDIT:

Maybe related:


Good morning,
we configured the backup to USB as described in the NethServer documentation. Checking the USB-HD with fsck.ext4 /dev/sda1 → no errors. Scanning for bad blocks takes hours … result in about 3 days.
This is the status of rclone-webdav.service:

● rclone-webdav.service - Rclone WebDAV server
     Loaded: loaded (/etc/systemd/system/rclone-webdav.service; enabled; preset: enabled)
     Active: active (running) since Wed 2025-08-13 09:05:22 CEST; 24s ago
    Process: 2256010 ExecStartPre=/bin/rm -f /run/rclone-webdav.pid /run/rclone-webdav.cid (code=exited, status=0/SUCCESS)
    Process: 2256011 ExecStart=/usr/bin/podman run --conmon-pidfile=/run/rclone-webdav.pid --cidfile=/run/rclone-webdav.cid -->
   Main PID: 2256029 (conmon)
      Tasks: 1 (limit: 57725)
     Memory: 736.0K
        CPU: 327ms
     CGroup: /system.slice/rclone-webdav.service
             └─2256029 /usr/bin/conmon --api-version 1 -c fa9fecf0bb32a72a6b36931e8c636fc45e97ecf6473618f83aff6c1316f17454 -u >

Aug 13 09:05:22 ns8 systemd[1]: Starting rclone-webdav.service - Rclone WebDAV server...
Aug 13 09:05:22 ns8 podman[2256011]: 
Aug 13 09:05:22 ns8 podman[2256011]: 2025-08-13 09:05:22.285467639 +0200 CEST m=+0.136357441 container create fa9fecf0bb32a72a>
Aug 13 09:05:22 ns8 podman[2256011]: 2025-08-13 09:05:22.212613461 +0200 CEST m=+0.063503293 image pull  ghcr.io/nethserver/re>
Aug 13 09:05:22 ns8 podman[2256011]: 2025-08-13 09:05:22.514732034 +0200 CEST m=+0.365621843 container init fa9fecf0bb32a72a6b>
Aug 13 09:05:22 ns8 podman[2256011]: 2025-08-13 09:05:22.537447676 +0200 CEST m=+0.388337489 container start fa9fecf0bb32a72a6>
Aug 13 09:05:22 ns8 podman[2256011]: fa9fecf0bb32a72a6b36931e8c636fc45e97ecf6473618f83aff6c1316f17454
Aug 13 09:05:22 ns8 systemd[1]: Started rclone-webdav.service - Rclone WebDAV server.
Aug 13 09:05:22 ns8 rclone-webdav[2256029]: 2025/08/13 07:05:22 NOTICE: Local file system at /srv/repo: WebDav Server started >

The result of lsblk -f:

# lsblk -f
NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda                                                                         
└─sda1
     ext4   1.0   USB3-2T7
                        ddc6ec19-cc99-449d-b273-5bccb866f755    2,4T     5% /var/lib/containers/storage/volumes/backup00/_data
                                                                            /media/tux/
                                                                            USB3-2T7
sr0                                                                         
vda                                                                         
├─vda1
│    vfat   FAT32       1DD4-5691                             505,1M     1% /boot/efi
├─vda2
│    ext4   1.0         f3fed184-2f71-4cb8-a239-1d47be994cdb    129G    75% /var/lib/containers/storage/overlay
│                                                                           /
└─vda3
     swap   1           80f4e634-3d0d-4342-9d2d-3f35a53a6669                [SWAP]

# podman volume ls
DRIVER      VOLUME NAME
local       alloy-data
local       backup00
local       crowdsec1-data
local       promtail-position
local       rclone-webdav
local       redis-data
local       restic-cache


# grep BACKUP_VOLUME /var/lib/nethserver/node/state/rclone-webdav.env
BACKUP_VOLUME=backup00

podman volume inspect volume00

Error: inspecting object: no such volume volume00
# podman volume inspect backup00
[
     {
          "Name": "backup00",
          "Driver": "local",
          "Mountpoint": "/var/lib/containers/storage/volumes/backup00/_data",
          "CreatedAt": "2024-09-26T16:26:42.61646545+02:00",
          "Labels": {
               "org.nethserver.role": "backup"
          },
          "Scope": "local",
          "Options": {
               "device": "/dev/disk/by-id/usb-TOSHIBA_External_USB_3.0_20170123005753F-0:0-part1",
               "o": "noatime"
          },
          "UID": 100,
          "GID": 101,
          "MountCount": 1,
          "NeedsCopyUp": true
     }
]

journalctl -xeu rclone-webdav.service

Aug 13 09:05:22 ns8 systemd[1]: Starting rclone-webdav.service - Rclone WebDAV server...
░░ Subject: A start job for unit rclone-webdav.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ A start job for unit rclone-webdav.service has begun execution.
░░ 
░░ The job identifier is 13922.
Aug 13 09:05:22 ns8 podman[2256011]: 
Aug 13 09:05:22 ns8 podman[2256011]: 2025-08-13 09:05:22.285467639 +0200 CEST m=+0.136357441 container create fa9fecf0bb32a72a>
Aug 13 09:05:22 ns8 podman[2256011]: 2025-08-13 09:05:22.212613461 +0200 CEST m=+0.063503293 image pull  ghcr.io/nethserver/re>
Aug 13 09:05:22 ns8 podman[2256011]: 2025-08-13 09:05:22.514732034 +0200 CEST m=+0.365621843 container init fa9fecf0bb32a72a6b>
Aug 13 09:05:22 ns8 podman[2256011]: 2025-08-13 09:05:22.537447676 +0200 CEST m=+0.388337489 container start fa9fecf0bb32a72a6>
Aug 13 09:05:22 ns8 podman[2256011]: fa9fecf0bb32a72a6b36931e8c636fc45e97ecf6473618f83aff6c1316f17454
Aug 13 09:05:22 ns8 systemd[1]: Started rclone-webdav.service - Rclone WebDAV server.
░░ Subject: A start job for unit rclone-webdav.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ A start job for unit rclone-webdav.service has finished successfully.
░░ 
░░ The job identifier is 13922.
Aug 13 09:05:22 ns8 rclone-webdav[2256029]: 2025/08/13 07:05:22 NOTICE: Local file system at /srv/repo: WebDav Server started >


The results look ok.
You could check the backups using restic:

Enter an app environment, nextcloud1 for example:

runagent -m nextcloud1

List the destinations:

[nextcloud1@node state]$ restic-wrapper --show
Destinations:
- 66064abc-4b18-5a14-a54b-29b4d932ba1a SMB destination (smb:data/backup/rockytest2)

Check the destination:

restic-wrapper --destination 66064abc-4b18-5a14-a54b-29b4d932ba1a check
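If the check succeeds, you can list the snapshots in the repository the same way, since the wrapper forwards restic subcommands just like check above:

restic-wrapper --destination 66064abc-4b18-5a14-a54b-29b4d932ba1a snapshots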

… excuse me!
We tried to start the backup by hand, and after a long time there were errors again. The machine had been working hard for 7 days; after a reboot yesterday, the backup today is OK.
It is a little bit like the old Windows days: just reboot it :wink:
Thanks for your patience …


26.08.2025

The same old blues again …

Is there no good old-fashioned kind of backup? BorgBackup or similar? But I have no idea how to handle these containers during a backup …

I think this NethServer backup is too complicated …

NethServer uses rclone, which is usually a good tool.

We need to find the error. Is there any helpful info in the logs?

You could try backing up to a NAS, for example, to check whether the issue is with the backup or with the disk.

Thanks for the advice,

NethServer is installed in the DMZ (orange LAN) and there is no NAS; we don't want one there because of the security risk. That's why we used USB-HDs for backup. … OK, we will buy a new one, configure it, and test it.


Tested with a new USB-HD, but now it says:

Task module/openldap1/run-backup run failed: {'output': 'Dumping state to LDIF files:\n', 'error': "restic snapshots\nFatal: wrong password or no key found\nInitializing 

All other modules also say "wrong password or no key" …

What does this mean? Where should I change the password, or what should I change?

I think you need to create a new backup destination, which creates a new backup repository on the new disk; the old destination still expects the repository (and its encryption key) that lived on the old disk. See Backup and restore — NS8 documentation

I did so, I think 5 times, but I will try again …

If you already tried it, I think it’s better to start over with a new local backup volume configuration.

Stop the rclone-webdav service:

systemctl stop rclone-webdav

Remove the backup00 volume:

podman volume rm backup00

Follow the steps to create a new local backup from Backup and restore — NS8 documentation
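Put together, the whole sequence should look roughly like this (a sketch based on the documentation; the by-id device path is an example and must match your own disk, see ls -l /dev/disk/by-id/):

systemctl stop rclone-webdav
podman volume rm backup00

# Re-create the volume pointing at the USB disk partition
podman volume create --label org.nethserver.role=backup \
    --opt=device=/dev/disk/by-id/usb-EXAMPLE-0:0-part1 \
    --opt=o=noatime backup00

# Point the WebDAV server at the volume and start it again
echo BACKUP_VOLUME=backup00 > /var/lib/nethserver/node/state/rclone-webdav.env
systemctl start rclone-webdav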

Run the local backup from the UI. I didn’t need to remove or create a new local backup destination.

Follow the steps to create a new local backup from Backup and restore — NS8 documentation

I did so, but I used the ext4 filesystem the whole time, because it is compatible with most Linux systems. Was this a mistake? There were no problems until ~07.07.2025 …


I like to use XFS as it's the default in the prebuilt image, reliable, and supported by most major distros, but I think it should also work with ext4.

Does the backup to the new disk work now?

I'm sorry, the same old blues again …

Now 3 weeks without a successful backup. Excuse me, but I'm tired. The handling of the NS8 backup is too complicated, perhaps only for me. Oh, how easy NS7 was to use …

[FIRING:1] backup_failed 1 (7 10.5.4.1:9100 providers Sicherung nach USB-HD critical)

alertmanager@ns8xxx

alert for alertname=backup_failed node=1

[1] Firing
Labels

alertname = backup_failed
id = 4
instance = 10.5.4.1:9100
job = providers
name = Sicherung nach USB-HD
node = 1
severity = critical
Annotations
description = The backup {name} ({id}) has failed.
summary = Backup failed

But we can see older backups … from 30.08.2025 … they look good.

root@ns8:~# journalctl -xeu rclone-webdav.service

I thought you were using a new disk; why do you see older backups? Did you copy the data from the old disk to the new one?

Maybe it’s just one of the apps where the backup doesn’t work.

You could check it in the backup schedule details:
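As a rough CLI alternative (just a sketch), you can also scan today's journal for backup-related errors to see which module fails:

# Show today's log lines mentioning backup tasks or restic
journalctl --since today | grep -iE 'run-backup|restic'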

Hello again,

I thought you were using a new disk; why do you see older backups? Did you copy the data from the old disk to the new one?

I have been trying for weeks to find the reason, but I get error messages several times every day. The backup details are red every day, and yet sometimes there were files on the USB-HD.


Today I removed the backup volume:

systemctl stop rclone-webdav and podman volume rm backup00

Then, for safety, I disconnected the USB-HD, connected it to another PC, and used GParted to create a new partition table and a new ext4 partition. Then I connected it back to NS8:

How can I remove backup00 and backup01 from the podman volume list? It is confusing. I thought podman volume rm backup00 did this job. (?)

Tomorrow a new chance … or the blues again.

Disk /dev/sdb: 931,51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: External USB 3.0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x67a61e01

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1        2048 1953523711 1953521664 931,5G 83 Linux
root@ns8:/# blkid
…..
/dev/sdb1: LABEL="BAK-NS8" UUID="d044d7a6-0e9c-4da9-86c0-d9c74ccec357" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="67a61e01-01"
root@ns8:/~# podman volume create --label org.nethserver.role=backup --opt=device=/dev/disk/by-id/usb-Intenso_External_USB_3.0_2018103015316-0\:0-part1 --opt=o=noatime backup00
Error: volume with name backup00 already exists: volume already exists
root@ns8:/~# podman volume create --label org.nethserver.role=backup --opt=device=/dev/disk/by-id/usb-Intenso_External_USB_3.0_2018103015316-0\:0-part1 --opt=o=noatime backup01
Error: volume with name backup01 already exists: volume already exists
root@ns8:/~# podman volume create --label org.nethserver.role=backup --opt=device=/dev/disk/by-id/usb-Intenso_External_USB_3.0_2018103015316-0\:0-part1 --opt=o=noatime backup02
backup02
root@ns8:~# echo BACKUP_VOLUME=backup02 > /var/lib/nethserver/node/state/rclone-webdav.env
root@ns8:~# cat /var/lib/nethserver/node/state/rclone-webdav.env
BACKUP_VOLUME=backup02

root@ns8:~# podman volume reload
root@ns8:~# podman volume ls
DRIVER      VOLUME NAME
local       alloy-data
local       backup00
local       backup01
local       backup02
local       crowdsec1-data
local       promtail-position
local       redis-data
local       restic-cache

root@ns8:~# systemctl restart rclone-webdav.service
root@ns8:~# systemctl status rclone-webdav.service -l
● rclone-webdav.service - Rclone WebDAV server
     Loaded: loaded (/etc/systemd/system/rclone-webdav.service; enabled; preset: enab>
     Active: active (running) since Tue 2025-09-02 14:33:07 CEST; 35s ago
    Process: 2097953 ExecStartPre=/bin/rm -f /run/rclone-webdav.pid /run/rclone-webda>
    Process: 2097955 ExecStart=/usr/bin/podman run --conmon-pidfile=/run/rclone-webda>
   Main PID: 2098113 (conmon)
      Tasks: 1 (limit: 57725)
     Memory: 1.0M
        CPU: 285ms
     CGroup: /system.slice/rclone-webdav.service
             └─2098113 /usr/bin/conmon --api-version 1 -c 8fb63deaeab6a30569d5576ae1d>

Sep 02 14:33:04 ns8 systemd[1]: Starting rclone-webdav.service - Rclone WebDAV server>
Sep 02 14:33:04 ns8 podman[2097955]: 
Sep 02 14:33:04 ns8 podman[2097955]: 2025-09-02 14:33:04.182414655 +0200 CEST m=+0.11>
Sep 02 14:33:04 ns8 podman[2097955]: 2025-09-02 14:33:04.128088976 +0200 CEST m=+0.06>
Sep 02 14:33:07 ns8 podman[2097955]: 2025-09-02 14:33:07.778481712 +0200 CEST m=+3.71>
Sep 02 14:33:07 ns8 podman[2097955]: 2025-09-02 14:33:07.812473968 +0200 CEST m=+3.74>
Sep 02 14:33:07 ns8 podman[2097955]: 8fb63deaeab6a30569d5576ae1d8ee96b5c455ff9865ce2e>
Sep 02 14:33:07 ns8 systemd[1]: Started rclone-webdav.service - Rclone WebDAV server.

Maybe you missed stopping the rclone-webdav service before removing the volume? See Backup on USB-HD fail - #12 by mrmarkuz

You can check the volumes using:

podman volume ls
[root@node ~]# podman volume ls
DRIVER      VOLUME NAME
local       redis-data
local       alloy-data
local       restic-cache
local       backup00
local       crowdsec2-data
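If stale volumes are still listed, they can be removed once nothing keeps them mounted (a sketch using the names from your listing; keep the volume that is referenced in rclone-webdav.env):

systemctl stop rclone-webdav
podman volume rm backup00 backup01
systemctl start rclone-webdav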

Thanks,

I'm sorry, you are right. I tested it with backup01.

But what is the procedure for changing the USB-HD? Just HD1 out and HD2 in?


I didn't test it, but in the worst case you need to go through the configuration steps again for HD2 after connecting it.

Or do you want to use more USB disks and rotate them?
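One possible rotation approach, untested and only derived from the steps in this thread: create one volume per disk (e.g. backup00 and backup01, each pointing at its own by-id device path), then after swapping disks switch the env file to the matching volume:

# Hypothetical rotation sketch: after plugging in the second disk
systemctl stop rclone-webdav
echo BACKUP_VOLUME=backup01 > /var/lib/nethserver/node/state/rclone-webdav.env
systemctl start rclone-webdav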