So I deleted that backup destination and recreated it, entering host.domain/bucketname as the “Bucket address.” Restarted the backup, and it’s off and running. It hasn’t finished yet, but nearly 40 GB of data have been copied so far, so it’s looking good.
So if anyone else is running into trouble, one place to check is the “Bucket address.” If your provider prefixes the FQDN with your bucket name (iDrive e2 does, as does Digital Ocean Spaces; I don’t know how common that is among S3-compatible providers), strip the bucket name from the front and append it to the end of the address, separated by a slash.
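The address rewrite described above can be sketched as a small helper. This is just an illustration, not anything the product actually ships; the endpoint name in the example is hypothetical.

```python
def normalize_bucket_address(addr: str, bucket: str) -> str:
    """Convert a virtual-hosted-style address ('bucketname.host.domain')
    to the path-style form ('host.domain/bucketname') the backup UI expects.
    Addresses that don't start with the bucket name pass through unchanged."""
    prefix = bucket + "."
    if addr.startswith(prefix):
        return addr[len(prefix):] + "/" + bucket
    return addr

# Hypothetical iDrive-e2-style endpoint, for illustration only:
print(normalize_bucket_address("mybucket.host.example.com", "mybucket"))
# → host.example.com/mybucket
```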
Other points:
- Error handling/reporting has to get better. Even “Backup failed, please check the system logs for errors” would have been an improvement (with bonus points if it linked to the system log page). But what really needs to happen is for the UI to show the actual error, rather than requiring multiple levels of clicking through only to be told to run a shell command that doesn’t tell you anything[1]. This was a major problem with NS7, and it’s disappointing to see that it (apparently) hasn’t improved with a complete rewrite of the system.
- Configuration of an S3-compatible backup destination could likewise stand to improve, with a few possibilities:
- Best case, accept bucket addresses in the form of `bucketname.host.domain` directly. If iDrive and DO Spaces both use that form, I assume it’s pretty common, so accept it. If needed, do whatever parsing is necessary to turn it into `host.domain/bucketname` internally.
- Failing that, be explicit in the GUI that this is what’s needed, perhaps in a tooltip: something like “If your bucket address begins with the name of your bucket, remove it, and add it to the end, separated by a slash.”
- Failing even that, at least be explicit in the docs that this is what’s required.
- Validation of the backup destination should also improve. Some validation is obviously done, but not to the point of writing any data. I don’t know what the exact equivalent of `touch testfile` would be on remote S3 storage, but it seems something like that ought to be done to make sure the destination is, in fact, writable.
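For what it’s worth, the usual `touch testfile` equivalent on S3-compatible storage is writing a zero-byte object and then deleting it. A minimal sketch, assuming a boto3-style client (the `put_object`/`get_object`/`delete_object` method names are boto3’s; the marker key name is my own invention):

```python
def verify_bucket_writable(client, bucket, key=".backup-write-test"):
    """Rough S3 equivalent of 'touch testfile': write a zero-byte marker
    object, read it back to confirm it landed, then clean it up.
    Raises whatever error the client raises if any step fails."""
    client.put_object(Bucket=bucket, Key=key, Body=b"")  # "touch"
    client.get_object(Bucket=bucket, Key=key)            # confirm it exists
    client.delete_object(Bucket=bucket, Key=key)         # clean up
    return True
```

Running this at destination-setup time would catch a misconfigured bucket address before the first real backup attempt.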
I assume the reason for this is that all the applications run in containers, and if I’d run those commands inside the containers, I would have gotten more meaningful results–but nothing in what the GUI showed me indicated this. ↩︎