When I started I didn’t have a big budget, so I decided to go with a Raspberry Pi setup with multiple VPNs and separate PKIs to keep things clean. In the beginning it all ran on Raspbian without containers; some services had multiple instances running in different processes, and there was a lot of Perl/Bash + cron based duct tape to keep it from blowing up, as well as iptables and routing voodoo. But it did blow up. Occasionally, services would go down because of unpredictable interactions between my scripts, or even with the underlying hardware (pilfered hard drives that should have been retired but had to work together with mhddfs).
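The cron duct tape was essentially a pile of watchdog scripts. A minimal sketch of the idea (`ensure_running` is a hypothetical helper, not my actual code):

```shell
# Hypothetical watchdog helper: if no process with the given name is
# running, start the supplied command in the background.
ensure_running() {
  name="$1"
  shift
  if ! pgrep -x "$name" >/dev/null 2>&1; then
    echo "watchdog: $name is down, restarting" >&2
    nohup "$@" >/dev/null 2>&1 &
  fi
}

# A crontab entry would then invoke a script full of these every few
# minutes, e.g.:
# */5 * * * * /home/pi/watchdog.sh
```

Glue like this works until two watchdogs disagree about who owns a process, which is exactly the kind of unpredictable interaction I mean.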
As you might imagine, updates could break the system, and as time went on it became harder and harder to get everything running again whenever the SD card died (probably a side effect of me reading from and writing to it a lot, since I couldn’t trust the hard drives).
So one day I saw the light and decided that I might as well try to do things properly. I made a new setup using RancherOS (an ultra-lightweight Linux distro that is only meant to run Docker containers) and Ansible.
After about a week of work I had created Ansible roles to replace my scripts, automated my image-building pipeline (all images had to be built for both ARM and x86-64 so I could deploy them anywhere; I also wanted an easy way to make any image able to use a VPN without leaking), and created a generic role for handling deployments.
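For context, the sort of multi-arch build the pipeline automated can be sketched today with Docker’s buildx (which is not necessarily what my pipeline used; the image and registry names here are made up):

```shell
# Sketch only: one modern way to build the same image for ARM and x86-64
# and push both variants under a single tag.
docker buildx create --name multiarch --use
docker buildx build \
  --platform linux/arm/v7,linux/amd64 \
  -t registry.example.com/myimage:latest \
  --push .
```

Back then this meant building on each architecture separately, which is part of why automating it was worth a week of work.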
So right now I have a rudimentary framework that allows me to just add images to a YAML config file, specifying the architecture, privileges, and whether I want a specific command to be run once the container starts, and I can deploy them anywhere. There is also another role for VPN deployment, certificate generation/revocation, and such. As I got interested in running more services, I either created new roles where required or just plugged them into my main deployment/building roles.
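An entry in that config file might look something like this (all key names here are illustrative, not my actual schema):

```yaml
# Hypothetical deployment entry consumed by the generic deployment role.
services:
  - name: nginx
    image: registry.example.com/nginx
    arch: arm            # or x86_64
    privileged: false
    command: "nginx -g 'daemon off;'"
    vpn: true            # route this container's traffic through the VPN
```

The point of the framework is that adding a service is just adding an entry like this; the roles handle the rest.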
I have used it a lot to help friends deploy websites, and even to run a CTF event.