Validation errors: [backups.1.id: Must be greater than or equal to 1]

NS8: 2.8.4
OS: Debian 12

I reinstalled my system and restored from backup. I did a base install and, before logging in for the first time, made the local backup storage available as per the manual.
I then fetched the cluster backup file and restored all modules. Restoring Traefik and Loki turned out not to be a good idea: after the restore you end up with two instances of each, and one of each is unhappy.

Removing one of each with the `remove-module` command seems to solve this.

I have a Samba and a Mail app.
The mail service would not accept any incoming mail, so I had to uninstall and reinstall it twice before it was OK again.

Then I wanted to reconfigure the backup. The “cluster backup” button works, but when I open the backup page the following error occurs.

jun 20 21:06:33 nethserver agent@cluster[363]: Action list-backups aborted at step /var/lib/nethserver/cluster/actions/list-backups/validate-output.json: [backups.1.id: Must be greater than or equal to 1]
jun 20 21:06:33 nethserver agent@cluster[363]: task/cluster/1490cdb5-a844-41c0-a7fa-cb765bc3d7d6: action "list-backups" status is "aborted" (1) at step validate-output.json

I tried moving the backup files off the disk I use as temporary local backup storage.
This did not change anything, so the error does not seem to be related to the content of the backup storage.

Any ideas are most welcome.

Please run this command as root and check its output:

api-cli run list-backups | jq

There must be something broken in the backup definition at array index 1.

Maybe removing the corresponding backup schedule from Redis will fix that error.
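The broken entry can also be pinpointed programmatically. Here is a minimal sketch in Python that mirrors the validator's rule (every backup `id` must be >= 1); the sample data is abridged, and the real input would be the JSON printed by `api-cli run list-backups`:

```python
import json

# Abridged sample in the shape of the list-backups output shown
# further down in this thread; the real data comes from
# `api-cli run list-backups`.
output = json.loads("""
{
  "backups": [
    {"id": 1, "name": "Backup to Local storage repo"},
    {"id": 0, "name": "Backup to Local storage repo"}
  ]
}
""")

# Mirror the validate-output.json rule: every backup id must be >= 1.
invalid = [(i, b["id"]) for i, b in enumerate(output["backups"]) if b["id"] < 1]
for index, bad_id in invalid:
    print(f"backups.{index}.id: Must be greater than or equal to 1")
```

With the data above this reports array index 1, matching the `backups.1.id` path in the error message.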

This command shows the same error:

root@nethserver:~# api-cli run list-backups | jq > list-backups.json
Warning: using user "cluster" credentials from the environment
Validation errors: [backups.1.id: Must be greater than or equal to 1]

The output shows 2 almost identical backups.

{
  "backups": [
    {
      "id": 1,
      "name": "Backup to Local storage repo",
      "repository": "86d1a8ac-ef89-557a-8e19-8582ab86b7c4",
      "schedule": "*-*-* 01:00:00",
      "schedule_hint": {
        "custom": "",
        "interval": "daily",
        "minute": "0",
        "monthDay": "1",
        "time": "01:00",
        "weekDay": "sunday"
      },
      "retention": 60,
      "instances": [
        {
          "module_id": "traefik1",
          "ui_name": "",
          "repository_path": "6aa5d6fa-482f-4884-aecd-05810592cbec",
          "status": {
            "total_size": 572,
            "total_file_count": 2,
            "snapshots_count": 1,
            "start": 1718924426,
            "end": 1718924438,
            "success": true
          }
        },
        {
          "module_id": "sogo2",
          "ui_name": "",
          "repository_path": "f05a58f1-3194-4b91-bc27-60f452fe64af",
          "status": {
            "total_size": 3177620,
            "total_file_count": 21,
            "snapshots_count": 1,
            "start": 1718924429,
            "end": 1718924446,
            "success": true
          }
        },
        {
          "module_id": "samba1",
          "ui_name": "",
          "repository_path": "958a5d04-b024-4588-a0ae-01766db425d4",
          "status": {
            "total_size": 1652548,
            "total_file_count": 17,
            "snapshots_count": 1,
            "start": 1718924444,
            "end": 1718924460,
            "success": true
          }
        },
        {
          "module_id": "mail3",
          "ui_name": "",
          "repository_path": "1e2d8ef6-40de-48bb-bc66-33d1a6f8b1df",
          "status": {
            "total_size": 10165632177,
            "total_file_count": 71880,
            "snapshots_count": 1,
            "start": 1718924430,
            "end": 1718925123,
            "success": true
          }
        }
      ],
      "enabled": true
    },
    {
      "id": 0,
      "name": "Backup to Local storage repo",
      "repository": "86d1a8ac-ef89-557a-8e19-8582ab86b7c4",
      "schedule": "*-*-* 01:00:00",
      "schedule_hint": {
        "custom": "",
        "interval": "daily",
        "minute": "0",
        "monthDay": "1",
        "time": "01:00",
        "weekDay": "sunday"
      },
      "retention": 60,
      "instances": [
        {
          "module_id": "traefik1",
          "ui_name": "",
          "repository_path": "6aa5d6fa-482f-4884-aecd-05810592cbec",
          "status": null
        },
        {
          "module_id": "sogo2",
          "ui_name": "",
          "repository_path": "f05a58f1-3194-4b91-bc27-60f452fe64af",
          "status": null
        },
        {
          "module_id": "loki1",
          "ui_name": "",
          "repository_path": "1f59ee4a-0506-482d-aef9-33b0fb3d1188",
          "status": null
        },
        {
          "module_id": "samba1",
          "ui_name": "",
          "repository_path": "958a5d04-b024-4588-a0ae-01766db425d4",
          "status": null
        },
        {
          "module_id": "mail3",
          "ui_name": "",
          "repository_path": "1e2d8ef6-40de-48bb-bc66-33d1a6f8b1df",
          "status": null
        }
      ],
      "enabled": true
    }
  ],
  "unconfigured_instances": []
}


The IDs should start from 1 :), so there’s a bug.

To fix it, try running:

api-cli run remove-backup --data '{"id":0}'

I agree. Traefik and Loki are special core modules; they need a different restore procedure. /cc @Tbaile


Thanks @davidep,

I saw your like come and then go again, so I decided to dig deeper on my own.

After investigating the api-cli actions a little, I found there is a `remove-backup-repository` action, which I used. This cleared both entries, and the error no longer shows. :slight_smile:

Currently I am struggling to attach the existing backup repo again.
When creating the repo in the UI, this is supposed to be possible by entering the previously used password under the advanced settings, but so far I have had no joy. :confused:

If everything else fails, I guess I could inject the repo data via the `add-backup` action.
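If it came to that, the old definition saved in `list-backups.json` could in principle be turned into a payload for that action. A rough sketch, assuming (unverified) that `add-backup` accepts fields similar to what `list-backups` returns; the actual input schema of the action should be checked first:

```python
import json

# Abridged copy of backup id 1 from the earlier list-backups dump.
old = json.loads("""
{
  "name": "Backup to Local storage repo",
  "repository": "86d1a8ac-ef89-557a-8e19-8582ab86b7c4",
  "schedule": "*-*-* 01:00:00",
  "retention": 60,
  "enabled": true
}
""")

# Hypothetical payload: the field names mirror the list-backups output
# and are NOT verified against the real add-backup input schema.
payload = {k: old[k] for k in ("name", "repository", "schedule", "retention", "enabled")}
print(json.dumps(payload))
# Then, hypothetically: api-cli run add-backup --data "$(cat payload.json)"
```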

Reusing the old repo works after all. :slight_smile:
The password can be recovered from the cluster backup file. :wink:
Only the scheduling has to be set up once again.


Without the original encryption password, any path under the backup repository (destination) is useless!