How can one delete specific backups of instances?

Hi,

I have been testing some apps and made various backups (on local storage, i.e. the node itself), and now I want to delete these obsolete backups. There are also some faulty backups (not trusted) that I want to remove so they cannot be restored.

TIA

I fear that this is only possible via the console, so far.

  1. You need access to restic, so if it is not already installed, install it.
  2. You need the encryption password from your cluster.
  3. As you are using local storage, I assume the backups are located at /var/lib/containers/storage/volumes/backup00/_data
  4. As root, cd /var/lib/containers/storage/volumes/backup00/_data
    Here each module has its own directory for backups. Inside these you see one or more directories named with a UUID (I suppose that is the module UUID). Each of these directories is a restic backup repository. Next to each, there is also a file with the same name but a .json extension; inside it you can find the module_id.
    You can delete the entire directory for a module if you want to get rid of all snapshots for that particular module.
  5. To list all snapshots, for example for the webserver, execute restic -r webserver/<uuid> snapshots. At this point you will be asked for your encryption password. A list is presented with an ID and a timestamp for each snapshot.
  6. To delete a snapshot, use restic -r webserver/<uuid> forget <snapshot-id>. For example: restic -r webserver/2c31bdff-d0e0-414e-88fb-cddb4c57cf27 forget b4ac8ead.
  7. To free the disk space used by the forgotten snapshot, run the prune command, e.g.: restic -r webserver/2c31bdff-d0e0-414e-88fb-cddb4c57cf27 prune (see the example session right after this list).
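
For clarity, here is the whole sequence in one go, using the example path from step 3 and the webserver UUID from steps 6 and 7 (adjust both to your node):

# local storage backups live in the backup00 volume in this example
cd /var/lib/containers/storage/volumes/backup00/_data

# list the snapshots of the webserver module; restic prompts for the cluster encryption password
restic -r webserver/2c31bdff-d0e0-414e-88fb-cddb4c57cf27 snapshots

# mark one snapshot as forgotten, using its ID from the list above
restic -r webserver/2c31bdff-d0e0-414e-88fb-cddb4c57cf27 forget b4ac8ead

# remove the data that only belonged to forgotten snapshots and free the disk space
restic -r webserver/2c31bdff-d0e0-414e-88fb-cddb4c57cf27 prune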

Hope that helps. Always have a backup of your backups. :wink:

Just updated step 7 with the prune command I initially forgot. This is required to free the disk space.

Warning:
I did some cleanup of the backups of my webserver while I was at it.

Today my backup of the webserver failed. I have not yet found the root cause of the issue, but I suspect it is related to yesterday's cleanup. Backups of other modules seem to be fine.

I only have a few log messages to work with:

rclone: 2025/07/24 10:55:10 ERROR : index/a67ddcaa12e1a6d68207b40a142662377c33a9be3c0cbf4ca4d4b47efaf043d2: Didn't finish writing GET request (wrote 0/3075 bytes): unexpected EOF
Load(<index/a67ddcaa12>, 0, 0) returned error, retrying after 1m2.282507972s: Copy: unexpected EOF
rclone: 2025/07/24 10:56:12 ERROR : index/a67ddcaa12e1a6d68207b40a142662377c33a9be3c0cbf4ca4d4b47efaf043d2: Didn't finish writing GET request (wrote 0/3075 bytes): unexpected EOF
Load(<index/a67ddcaa12>, 0, 0) failed: Copy: unexpected EOF
circuit breaker open for file <index/a67ddcaa12>
<3>Restic restore command failed with exit code 1.
', 'exit_code': 1}

I am still investigating.

This is the method I use without any restic command and there are no issues when the next backup runs.

There’s a restic-wrapper command; maybe it works better using that. See Backup and restore — NS8 documentation.

Thanks.

Actually no, here is my volumes directory:

[root@srv01 volumes]# ll
total 32
drwx------ 3 root root 4096 Jul 16 21:10 7204d2feef7af48b61e68efa09149b4aaf5ee231d0d1e4d5555196445d85bab9
drwx------ 3 root root 4096 Jul  4 23:58 alloy-data
drwx------ 3 root root 4096 Jul 16 21:10 b414f2389658143c0d53674fc7c248af708a58896d1bd6d312d443e551e52170
drwx------ 3 root root 4096 Jul  5 00:06 crowdsec1-data
drwx------ 3 root root 4096 Jul 16 21:10 fc0fd4b65c817460d4dd209e15dec167e69345b27374e91dacbc67c76c39def7
drwx------ 3 root root 4096 Jul  4 23:29 rclone-webdav
drwx------ 3 root root 4096 Jul  4 23:29 redis-data
drwx------ 3 root root 4096 Jul  5 03:30 restic-cache

Did you set up a volume for local storage backup?

I only have the local disk, no attached storage, just the default local storage on the node.

[root@srv01 ~]# runagent restic-wrapper --show
Destinations:
- 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 Lokale Opslag bestemming (webdav:http://10.5.4.1:4694)
Scheduled backups:
- 1 Backup naar Lokale Opslag bestemming, destination UUID 86d1a8ac-ef89-557a-8e19-8582ab86b7c4
- 13 Immich, destination UUID 86d1a8ac-ef89-557a-8e19-8582ab86b7c4

In that case the backup data should be in the following volume: /var/lib/containers/storage/volumes/rclone-webdav/_data
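
If in doubt, you can confirm where a volume lives on disk. A quick sketch, with the volume name taken from the rclone-webdav destination shown in the --show output above:

# print the host path backing the rclone-webdav volume (run as root)
podman volume inspect rclone-webdav --format '{{ .Mountpoint }}'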

Indeed, thanks.

[root@srv01 _data]# pwd && ll
/var/lib/containers/storage/volumes/rclone-webdav/_data
total 80
drwxr-xr-x 3 100 ssh_keys 4096 Jul  5 03:30 crowdsec
drwxr-xr-x 3 100 ssh_keys 4096 Jul 20 03:15 docmost
-rw-r--r-- 1 100 ssh_keys 1071 Jul 22 18:09 dump.json.gz.gpg
drwxr-xr-x 3 100 ssh_keys 4096 Jul 17 21:38 goauthentik
drwxr-xr-x 5 100 ssh_keys 4096 Jul 20 11:17 immich
drwxr-xr-x 3 100 ssh_keys 4096 Jul 17 22:52 it-tools
drwxr-xr-x 3 100 ssh_keys 4096 Jul 18 01:49 joplin
drwxr-xr-x 3 100 ssh_keys 4096 Jul 14 03:15 kitchenowl
drwxr-xr-x 3 100 ssh_keys 4096 Jul  5 00:00 loki
drwxr-xr-x 3 100 ssh_keys 4096 Jul  6 03:15 mail
drwxr-xr-x 4 100 ssh_keys 4096 Jul 18 01:30 matomo
drwxr-xr-x 3 100 ssh_keys 4096 Jul 13 03:17 nextcloud
drwxr-xr-x 3 100 ssh_keys 4096 Jul  7 03:15 opencloud
drwxr-xr-x 3 100 ssh_keys 4096 Jul  5 03:30 openldap
drwxr-xr-x 3 100 ssh_keys 4096 Jul  7 03:15 rustdesk
drwxr-xr-x 3 100 ssh_keys 4096 Jul 17 22:17 semaphoreui
drwxr-xr-x 3 100 ssh_keys 4096 Jul  6 03:15 sogo
drwxr-xr-x 3 100 ssh_keys 4096 Jul  6 03:15 stirlingpdf
drwxr-xr-x 3 100 ssh_keys 4096 Jul  5 00:00 traefik
drwxr-xr-x 3 100 ssh_keys 4096 Jul 13 03:15 twenty

So to remove all backups (not just a specific snapshot) I can simply delete, e.g., the rustdesk directory shown above?

Yes, that should work.
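
A rough sketch of what that would look like, using the rclone-webdav volume path from your listing above (double-check the path on your node before deleting anything):

# remove every snapshot of the rustdesk module from this local destination
cd /var/lib/containers/storage/volumes/rclone-webdav/_data
rm -rf rustdesk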

@Viking Here are some examples of how to use the restic-wrapper:

I removed the Immich directory and deleted the backup job, then created a new job and ran the backup. It failed…

Please see Task module/immich11/run-backup run failed: {'output': 'Dumping immich postgres - Pastebin.com

The error is: Fatal: wrong password or no key found

Did you also delete the backup destination?

To recreate a destination where you already have backups, you need to get the data encryption key…

…and enter it in the new destination to be able to read the already stored files.

If you deleted the destination and the encryption key isn’t there anymore, you’ll need to remove the backup files and start over using a new destination.

That is exactly what I did. The backup is now running, so let's see if it finishes without error, as it is many GB.

Update: success

Now for the procedure suggested by @Viking … (using restic-wrapper)

@Viking Here are some examples of how to use the restic-wrapper:

@mrmarkuz, I did not know about it until now. :slight_smile: I did wonder, though, since there is a restic image in every module.
Thanks.

Just wondering, are the snapshots incremental or full? Are there no dependencies between snapshots? I am asking because a full initial backup of my Immich app takes 12 minutes, while a nightly run with very few changes takes around 1 minute and 40 seconds…

I think we need some proper dev troubleshooting guides and a wiki for edge-case situations, mostly those encountered during development testing.

Sometimes servers do begin to act out for various reasons, after multiple installs and removals of modules, as well as backups and deletions.

Clear ways and methods for safe removals and resets are required.

It is more like differential, I would say. You can remove any snapshot; only the data unique to that snapshot will be removed, i.e. data that is not included in other snapshots.
You can read more about it in the restic docs: Removing backup snapshots
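
If you want to see the deduplication at work, restic can report both the logical restore size and the raw data actually stored. A sketch, reusing the example repository path from earlier in the thread (adjust the module directory and UUID to your own setup):

# total size if every snapshot were restored separately
restic -r webserver/2c31bdff-d0e0-414e-88fb-cddb4c57cf27 stats --mode restore-size

# actual deduplicated data stored in the repository
restic -r webserver/2c31bdff-d0e0-414e-88fb-cddb4c57cf27 stats --mode raw-data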
