Backup of traefik fails

Hello,
I back up all my apps and core modules to one S3 storage (Strato). It works smoothly, except for Traefik. Since April 14, the Traefik backup to the S3 bucket has been failing with the following error messages:

Error 1

Task module/traefik2/run-backup run failed: {'output': "mkdir: created directory 'state-backup'\n'traefik.yaml' → 'state-backup/traefik.yaml'\n'configs' → 'state-backup/configs'\n'configs/_http2https.yml' → 'state-backup/configs/_http2https.yml'\n'configs/_api.yml' → 'state-backup/configs/_api.yml'\n'configs/it-tools1.yml' → 'state-backup/configs/it-tools1.yml'\n'configs/_default_cert.yml' → 'state-backup/configs/_default_cert.yml'\n'configs/nextcloud2.yml' → 'state-backup/configs/nextcloud2.yml'\n'configs/collabora2.yml' → 'state-backup/configs/collabora2.yml'\n'configs/mail2-rspamd.yml' → 'state-backup/configs/mail2-rspamd.yml'\n'configs/samba8-amld.yml' → 'state-backup/configs/samba8-amld.yml'\n'configs/roundcubemail2.yml' → 'state-backup/configs/roundcubemail2.yml'\n'configs/cluster-admin.yml' → 'state-backup/configs/cluster-admin.yml'\n'manual_flags' → 'state-backup/manual_flags'\n'custom_certificates' → 'state-backup/custom_certificates'\n'acme' → 'state-backup/acme'\n'acme/acme.json' → 'state-backup/acme/acme.json'\n'acme/acme.json.acmejson-notify' → 'state-backup/acme/acme.json.acmejson-notify'\n", 'error': 'restic snapshots\nRepository 520231ba-555b-514c-b6f2-e54f8cde4b8f is present at path traefik/96e26fc6-4d36-424b-816d-2fce05ac72e8\nrestic backup --json state/environment --files-from=/etc/state-include.conf\ntime="2025-04-14T09:53:34+02:00" level=error msg="Cleaning up volume (212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500): volume 212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500 is being used by the following container(s): 5cd00f17514c5b2618ff6765451faa4d9dc52bd24064c973d2f610197e89ac8a: volume is being used"\nError: crun: error stat\'ing file /home/traefik2/.local/share/containers/storage/volumes/212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500/_data: No such file or directory: OCI runtime attempted to invoke a command that was not found\n<3>Restic restore command failed with exit code 127.\n', 'exit_code': 1}

Error 2

restic snapshots
Repository 520231ba-555b-514c-b6f2-e54f8cde4b8f is present at path traefik/96e26fc6-4d36-424b-816d-2fce05ac72e8
restic backup --json state/environment --files-from=/etc/state-include.conf
time="2025-04-14T09:53:34+02:00" level=error msg="Cleaning up volume (212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500): volume 212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500 is being used by the following container(s): 5cd00f17514c5b2618ff6765451faa4d9dc52bd24064c973d2f610197e89ac8a: volume is being used"
Error: crun: error stat'ing file /home/traefik2/.local/share/containers/storage/volumes/212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500/_data: No such file or directory: OCI runtime attempted to invoke a command that was not found
<3>Restic restore command failed with exit code 127.

What can I do to fix it?

It seems the volume to be cleaned up is still in use.
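
If I read the log right, the container ID it mentions (5cd00f17…) is what still references the volume. A check like this should show which container is holding it; the volume ID below is just copied from the error message, so it is only meant as an illustration:

runagent -m traefik2 podman ps -a --filter volume=212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500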

Would it work to back up only Traefik in a separate backup schedule?

All other modules are backed up to the same storage without problems.
It makes no difference whether I back up Traefik individually or together with the other modules.


Are there more traefik instances?

ls /home

Let’s check running containers:

runagent -m traefik2 podman ps -a

Let’s check the volumes:

runagent -m traefik2 podman volume ls
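
If the volume from the error message shows up in that list, you could also inspect it to see what it belongs to. The ID below is taken from your log, so adjust it if it differs on your system:

runagent -m traefik2 podman volume inspect 212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500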

root@ns-srv01:~# ls /home
collabora2  it-tools1  ldapproxy2  loki2  mail2  metrics1  nextcloud2  roundcubemail2  samba8  traefik2

root@ns-srv01:~# runagent -m traefik2 podman ps -a
CONTAINER ID  IMAGE                             COMMAND     CREATED      STATUS          PORTS       NAMES
5cd00f17514c  ghcr.io/nethserver/restic:3.6.0   init        3 weeks ago  Created                     restic-traefik2-726958
2d343148eb49  docker.io/library/traefik:v3.3.4  traefik     10 days ago  Up 10 days ago              traefik

root@ns-srv01:~# runagent -m traefik2 podman volume ls
DRIVER      VOLUME NAME
local       212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500
local       restic-cache
local       traefik-acme

I don't have this volume in my Traefik instance.
You could try to remove it (since the backups worked before today, it should be safe):

runagent -m traefik2 podman volume rm 212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500

If it complains about a running container, stop it:

runagent -m traefik2 podman stop <container_name> 
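
In your case the container from the error log (5cd00f17514c) appears to be the leftover restic-traefik2-726958 shown in your podman ps output. Since it is only in Created state, removing it and then retrying the volume removal should be enough; this is just a suggestion based on the output you posted, with the last command only there to verify the volume is gone:

runagent -m traefik2 podman rm restic-traefik2-726958
runagent -m traefik2 podman volume rm 212830a18fe22054d39db390e353d26882d820c8697b1e1ac53baebd7bb4c500
runagent -m traefik2 podman volume ls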

Thanks, that was the solution.
