Apart from the obvious difference in format between the two pre-built images, are there any differences in the configuration of the underlying Rocky OS?
I’m currently running in Proxmox, but need to move this over to ESXi. When I converted my current Proxmox VM to VMDK format and started it in ESXi, the CPU usage went through the roof.
I got to the bottom of this literally about 2 minutes ago.
Restarting the VM on a different hypervisor caused the IP address to change, because my DHCP server saw a different MAC address. For NS8 this isn’t too bad a problem, as most of the apps don’t care about the IP address, and even the File Server appeared to work without having to tell it that the IP had changed. (I’ll probably test this a little more, as doing a disaster restore to a different IP caused this part to not work until I ran the command for set-ipaddress.)
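For anyone checking the same thing after a migration, this is roughly how I’d confirm which MAC and IP the VM is actually presenting to the DHCP server. The interface detection is a best-effort sketch, not anything NS8-specific:

```shell
#!/bin/sh
# Sketch: show the MAC/IP of the interface in use, so it can be compared
# against the DHCP lease the VM held on the old hypervisor.
IFACE=$(ip route show default 2>/dev/null | awk '{print $5; exit}')
# Fall back to the first non-loopback interface if there is no default route yet.
[ -n "$IFACE" ] || IFACE=$(ip -o link show | awk -F': ' '$2 != "lo" {print $2; exit}')
echo "Interface: ${IFACE:-none}"
[ -n "$IFACE" ] && ip link show "$IFACE" | awk '/link\/ether/ {print "MAC:", $2}'
[ -n "$IFACE" ] && ip -4 addr show "$IFACE" | awk '/inet / {print "IP:", $2}'
```

If the MAC differs from what Proxmox used, a DHCP reservation keyed on the old MAC won’t match, which is what bit me here.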
But it looks like BackupPC really doesn’t like this: it was constantly re-spawning a rebuild of the pod, as far as I could tell from top and repeated “ps -ef” commands. Besides the massive CPU increase, this also made connecting via the UI problematic, as everything continually timed out. Eventually, from the command line, I was able to remove BackupPC, and the CPU usage instantly dropped back to what I would expect.
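For reference, the command-line removal I mean was along these lines. The remove-module task and its payload are what I understand from the NS8 admin docs, and “backuppc1” is an assumed instance id, so verify both against your own cluster before running it:

```shell
#!/bin/sh
# Hedged sketch: remove a misbehaving module from the NS8 leader node
# without going through the (timing-out) UI.
# Assumption: the BackupPC instance id is "backuppc1" -- check yours first,
# e.g. in the cluster admin UI or the module list on the leader.
if command -v api-cli >/dev/null 2>&1; then
  api-cli run remove-module --data '{"module_id": "backuppc1", "preserve_data": false}'
else
  echo "api-cli not found: run this on the NS8 leader node"
fi
```

Note that preserve_data: false discards the module’s data volume, which was fine in my case since I intended to restore from backup anyway.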
I haven’t tried it yet, but I’m guessing a restore of BackupPC to an NS8 instance running on a different IP from its original configuration will have a similar result.
Well, it’s not as simple as the IP having changed. I forced the IP on the ESXi side to match what it was before the move, which didn’t help.
The BackupPC pod is still constantly re-spawning. Where can I find information on what the issue might be, bearing in mind that the UI is almost unusable due to the timeouts? There is nothing helpful in /var/log/messages.
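Since NS8 sends module output to journald rather than /var/log/messages, I assume something like this is the place to look for the respawn loop. Again, “backuppc” as a filter string is just my guess at how the instance logs are tagged:

```shell
#!/bin/sh
# Hedged sketch: pull recent journald entries mentioning BackupPC.
# On NS8, module containers run under per-module Unix users, so the
# journal should capture their stdout/stderr and restart messages.
journalctl --no-pager --since "15 minutes ago" 2>/dev/null \
  | grep -i backuppc | tail -n 40 || true
# To watch the loop live instead:
#   journalctl -f | grep -i backuppc
```

If the pod really is crash-looping, I’d expect repeated start/exit lines here a few seconds apart.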
On the original Proxmox side I changed the IP, and this had no effect on BackupPC, which still appears to work normally.
So I’m at a loss as to why relocating the VM is doing what it does.