Anatomy of backup sets

Dear everyone, I just set up an incremental backup that runs on the server every day, with a full backup on Sundays. Backups older than 7 days are set to be deleted. I want to figure out what will happen next Sunday.

  1. Will the next full backup on Sunday be scheduled after the old backups, including the incremental ones, have been deleted?
  2. If the NAS drive runs low on space, will the process hang?
  3. If deletion currently happens after the new set is created, can the order be changed to delete first and create afterwards? (To reserve space on the NAS.)


It depends on the backup software you use. Usually old backups are deleted after the backup finishes:

Cleanup process takes place in post-backup-data event.

It seems the rsync backup engine simply deletes old backups until enough space is free.


I monitored the backup process through the night to verify whether there was enough space on the NAS drive.
Unfortunately, the old backups (neither full nor incremental) were not deleted at the end of the backup process, even though it finished successfully. Now there are two backup sets on the NAS: 2 full backups and the 2 incremental sets that belong to those full backups. The log says:

"There are backup set(s) at time(s):
Sat Apr 18 22:00:20 2020
Which can’t be deleted because newer sets depend on them."
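That log message matches how duplicity handles retention: backups are removed in whole chains (a full backup plus every incremental built on it), and a chain can only be deleted once even its newest set is outside the retention window. A minimal sketch of that rule, with illustrative ages only:

```shell
# Illustrative only: duplicity-style chain retention.
# A chain (one full backup + its incrementals) is removable only when
# even its newest set is older than the cutoff. Ages are in days.
chain_ages="9 8 7 6 5 4 3"   # full is 9 days old, newest incremental is 3
cutoff=7
newest=$(echo "$chain_ages" | tr ' ' '\n' | sort -n | head -n 1)
if [ "$newest" -gt "$cutoff" ]; then
    echo "chain deleted"
else
    echo "chain kept"   # the old full survives because a recent
fi                      # incremental still depends on it
```

So a 7-day window only clears a full set once every incremental depending on it is itself older than 7 days.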

I used Cockpit to set this up as a new backup, not the default backup named “backup-data”.
If I set it to 1 day, will it run a full backup daily?

I assume yes, because duplicity only supports incremental backups, but to be sure set the type to full.

I set the type to incremental.
Why were my old backups not deleted yesterday, even though I set retention to 7 days?

In this scenario, is it possible to run a backup cycle with one full backup and 13 incrementals, so that one backup set includes 13 incremental backups?

Hi Nirosh,

Do you want to keep only 1 backup?


Hello michelandre, yes, I want to keep one backup set, because the NAS is running out of space. As I mentioned before, it would be better to have 1 full set and 13 incrementals. Thanks.
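Assuming the backup engine is duplicity, a 14-day cycle like that is commonly expressed in cron. This is only a sketch; the source path and the target URL are placeholders, not the actual backup-data configuration:

```
# Hypothetical crontab sketch; /var/lib/data and the file:// URL are placeholders.
# Sunday 22:00: full backup, then keep only the newest full chain.
0 22 * * 0  duplicity full /var/lib/data file:///mnt/nas/backup && duplicity remove-all-but-n-full 1 --force file:///mnt/nas/backup
# Monday to Saturday 22:00: incremental backups on top of that full.
0 22 * * 1-6  duplicity incremental /var/lib/data file:///mnt/nas/backup
```

`remove-all-but-n-full 1` drops every older chain in one go, right after the new full completes, so at most one old chain and one new chain coexist on the NAS.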

Hi Nirosh,

I do not know if it applies to you, but lately I have been working on Zammad and I wanted to keep only 1 daily backup of it. Over time, a Zammad backup can take up quite a lot of disk space.

If needed, you can also adjust the HOLD_DAYS variable to any value you need. The default value here is 10, i.e. backups are kept for 10 days before the oldest is deleted.

I set the HOLD_DAYS parameter to 1, and it always kept not 1 but the last 2 Zammad backups.

The script uses the find command with the -mtime +HOLD_DAYS option to select backup files by modification date: it matches files last modified more than HOLD_DAYS * 24 hours ago. When find calculates the number of 24-hour periods since a file was last modified, any fractional part is ignored. Therefore, to match -mtime +1, a file must have been modified at least two days ago.
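The truncation behaviour is easy to see with plain find; the file names below are made up, and `touch -d` / `-printf` are GNU options:

```shell
# Demonstrates find's -mtime truncation: file age is divided into whole
# 24-hour periods and the fractional part is discarded.
dir=$(mktemp -d)
touch -d "36 hours ago" "$dir/backup-1.tar.gz"   # 1.5 periods -> counts as 1
touch -d "50 hours ago" "$dir/backup-2.tar.gz"   # 2.08 periods -> counts as 2
find "$dir" -mtime +1 -printf '%f\n'             # prints only backup-2.tar.gz
rm -rf "$dir"
```

So with -mtime +1 the 36-hour-old file is still "1 period old" and survives, which is exactly why HOLD_DAYS=1 keeps the last two daily backups.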

To keep only the last backup, I had to set HOLD_DAYS to 0. It worked fine and kept only 1 backup.
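That follows from the same truncation rule: -mtime +0 matches anything at least one full 24-hour period old, so with HOLD_DAYS=0 only backups from the last 24 hours survive. A quick check with made-up file names:

```shell
# -mtime +0 matches files older than one complete 24-hour period.
dir=$(mktemp -d)
touch -d "30 hours ago" "$dir/old-backup.tar.gz"   # 1 whole period -> matches
touch "$dir/new-backup.tar.gz"                     # 0 whole periods -> kept
find "$dir" -mtime +0 -printf '%f\n'               # prints only old-backup.tar.gz
rm -rf "$dir"
```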

Interesting. Can you explain how to achieve that?


It is achievable by running a 14-day cycle as 1 backup set. As your example is really helpful, I would like to try that.