More information for you.
On my Proxmox image I did this:
runagent -m backuppc1
systemctl --user stop backuppc
podman ps # verify no container is still running
podman image ls
podman image rm -f <ID>
Then I also ran:
systemctl --user disable backuppc
which stops the pod from starting automatically on boot once the image is over on ESXi.
This image was then copied/converted to my ESXi system, and a snapshot taken (isn’t this a wonderful feature) before starting the VM.
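For completeness, the convert part of that move can be done with plain qemu-img; this is only a sketch with placeholder filenames, assuming a qcow2 source disk on the Proxmox side (use -f raw if the disk lives on LVM):
qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized vm-100-disk-0.qcow2 ns8.vmdk
# upload the resulting vmdk to the ESXi datastore and attach it to a new VM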
The first test was just starting the pod, to see if rebuilding the pod helps. But nope, it still drops into the timeout loop.
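To be clear, “starting the pod” just means the systemd user unit, run from inside the module environment like the commands above; the status/ps lines are only how I’d watch it cycle:
runagent -m backuppc1
systemctl --user start backuppc
systemctl --user status backuppc # shows the unit timing out and retrying
podman ps # check whether any containers actually came up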
The next tests were deleting these volumes, one per restart attempt (except home, as I think having that missing would cause other issues), to see whether any of them was the culprit (commands sketched after the list):
[backuppc1@ns8 state]$ podman volume ls
DRIVER VOLUME NAME
local data
local conf
local home
local logs
local restic-cache
[backuppc1@ns8 state]$
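Each attempt was basically stop, remove one volume, start again; a rough sketch of a single attempt (volume name swapped per attempt, and podman may need -f if a stopped container still references it):
systemctl --user stop backuppc
podman volume rm conf # or data / logs / restic-cache on the other attempts
systemctl --user start backuppc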
The only deletion where the pod didn’t drop into the timeout loop was when I deleted data, which took 45 minutes. Following this the pod was able to start normally. Checking in the UI, there were obviously no old backups to be seen, but curiously there were no logs either, despite my not removing the logs volume. All my configurations were still intact, though, which is a “good thing”.
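If anyone wants to double-check the logs part, the volume can be poked at directly from the module environment; sketch only, using the standard podman inspect field:
podman volume inspect logs --format '{{.Mountpoint}}'
ls -la $(podman volume inspect logs --format '{{.Mountpoint}}')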
Based on this I’m guessing (note: my opinion only) that when the pod starts it somehow knows it has been moved, and is re-scanning all the volumes (for whatever reason), which takes longer than the timeout.
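If that guess is right and it’s the unit’s systemd start timeout being outrun, one thing that might be worth trying (pure speculation, I haven’t tested it) is stretching the timeout with an override from inside the module environment:
systemctl --user edit backuppc
# then in the override file, something like:
# [Service]
# TimeoutStartSec=30min
systemctl --user restart backuppc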
Maybe yet another reason to get an answer here.
Cheers.