Zabbix 6.0 LTS is out!

3 containers:

  • postgres
  • zabbix-server
  • zabbix-nginx-web


My notes for running the containers (docker version):


PostgreSQL database:

docker run --name postgres-server -e POSTGRES_PASSWORD=secret -d postgres:latest

Zabbix server:

docker run --name zabbix-server-pgsql -e DB_SERVER_HOST="postgres-server" -e POSTGRES_USER="postgres" -e POSTGRES_PASSWORD="secret" -d zabbix/zabbix-server-pgsql:alpine-6.0-latest

Zabbix web:

docker run --name zabbix-web-nginx-pgsql -e DB_SERVER_HOST="postgres-server" -e POSTGRES_USER="postgres" -e POSTGRES_PASSWORD="secret" -e ZBX_SERVER_HOST="zabbix-server-pgsql" -e PHP_TZ="Europe/Vienna" -d zabbix/zabbix-web-nginx-pgsql:alpine-6.0-latest

Zabbix agent:

docker run --name zabbix-agent -e ZBX_HOSTNAME="zabbix-server-pgsql" -e ZBX_SERVER_HOST="zabbix-server-pgsql" -d zabbix/zabbix-agent2:alpine-6.0-latest
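One caveat, an assumption on my part rather than something stated in the notes above: the containers reference each other by name (`postgres-server`, `zabbix-server-pgsql`), and Docker only resolves container names on a user-defined network, not on the default bridge. A minimal sketch:

```shell
# Sketch: a user-defined bridge lets the containers resolve each other
# by name; add --network zabbix-net to each docker run command above.
docker network create zabbix-net
```

Either way, the names passed in `DB_SERVER_HOST` and `ZBX_SERVER_HOST` must be resolvable from inside the containers.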


Pre-coffee question:
Is a new, independent DB container (whatever RDBMS is needed) raised, configured, and deployed for every application container?


Yes, that's how I would do it: inside the pod, one container for the database and others for the application.

I can understand the approach for data security and isolation. Any kind of issue or hiccup on a centralized DBMS would cause problems for all the applications that rely on that service/daemon.
On the other hand… 5 applications, 5 DBMS instances.



Same issue we’re already having with MariaDB:

  • A built-in, very “old” version included with CentOS 7
  • A newer one for those using Nextcloud.
  • Other versions I’m not fully aware of?

I don’t quite see the problem? It works for me, my 30 clients, and countless others here on the forum.

If someone needs 5 “Apps” running, each with its own DB, there need to be 5 instances running.

You’re sounding like an ex-Windows user: “Linux is eating up all my RAM…”

→ Linux almost always allocates all RAM; whatever is not used by applications goes to caching and the like, so there is no “free” RAM…

My 2 cents

IMVHO, yes, you don’t. And it’s fine; as you said,

“it works = it’s good and I don’t have to think about it” - and I can’t debate that.
On the other hand, the overhead is still there, IMVHO: for a distro designed in the first instance for a single company, running multiple instances for every distributed application wastes CPU, disk space, memory, and processes.

For DevOps, third-party service providers, and multi-service companies it is perfect. Containers waste a bit less CPU and RAM than VMs on bare metal (less memory used, since there is no hypervisor AND guest OS stacked on top). Regarding the GDPR, containers can deliver authentication separation, and the manager of the container system can operate on a container remotely via the management tools without accessing its data.

But for a company?
In my (twisted?) perception, on bare metal I’d go with a type 1 hypervisor, which lets me have multiple distros, operating systems, and duplicated hosts in a few steps. Running containers inside a virtualized system stacks the resources of hypervisor, OS, and container runtime, with higher disk consumption and resources spent managing the stack instead of managing information.

And going bare metal, for big enough companies, is calling for trouble. Server-class hardware can be covered by maintenance contracts or by good enough scheduled purchases of quality, reliable hardware. When the metal is not enough, you change the box, and after 4-6 hours tops you’re ready to roll. OK, maybe the list of steps is a bit longer, but once the first guest is up, the rest can be powered on if the design is good enough (and I totally go for a virtual firewall if I can, against the IPFire project’s suggestions).

Container? The host has to be raised, updated, and the container management installed. Then restore the whole lot of containers required for the single applications, then restore the containers’ contents, and untangle network paths and rules. The same 4-6 hours? Maybe it’s all due to ignorance, but it still doesn’t seem realistic. At least 2 hours just to get the container lot arranged, then start the single-container restores.

Again… context. The container context doesn’t seem to fit the small-company goal that Nethesis has suggested over the years.

But I can’t blame them.
First of all, containers are requested by the market, and staying outside that approach is quite a “you can’t miss it”.
Also, containers are a viable path for Nethesis to add features to the product without tinkering with the underlying distro from scratch for every module. If a project delivers a container recipe, it can be tested, tweaked, and personalized via the configuration-DB approach, then delivered. They go from source tinkering to recipe tinkering, cutting the load on the package mirrors (packages can be retrieved from each project’s own mirrors rather than from Neth) and delivering only the recipes…

Any customer could then receive a new webapp faster, in a few days, without having to ask themselves: “And with this new toy, how will the WHOLE LOT of other packages react?”

Untangling networks and firewall rules among containers is… IMVHO the biggest riddle.


Hi Michael

If you’re doing something for the first time, you also need to factor in the “learning time”.

If you’re doing something 3 times or more, like me with my 30 clients (SME companies from 2 - 35 users), it’s often simpler to use scripts with variables, or Ansible playbooks, to get the environment “up and running”. What you use is a matter of preference and environment, but almost any commercially used environment can be scripted.

So we’re talking about basically a one-liner (calling the script!).
That may run 20-40 minutes, unless you’re using a way too slow or old machine…
But so what? I just need to kick it off, not wait around…
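A sketch of such a wrapper for the docker commands earlier in this thread, with my own names (`zabbix-net`, the `DRY_RUN` switch are illustrative, not from any official tool):

```shell
#!/bin/sh
# Illustrative deployment wrapper; values are variables so the same
# script can serve several clients. DRY_RUN=1 prints the docker
# commands instead of executing them.
deploy_zabbix() {
    PG_PASSWORD="${PG_PASSWORD:-secret}"
    ZBX_TAG="${ZBX_TAG:-alpine-6.0-latest}"
    if [ "${DRY_RUN:-0}" = "1" ]; then RUN="echo docker"; else RUN="docker"; fi

    # User-defined network so the containers can resolve each other by name
    $RUN network create zabbix-net
    $RUN run --name postgres-server --network zabbix-net \
        -e POSTGRES_PASSWORD="$PG_PASSWORD" -d postgres:latest
    $RUN run --name zabbix-server-pgsql --network zabbix-net \
        -e DB_SERVER_HOST=postgres-server -e POSTGRES_USER=postgres \
        -e POSTGRES_PASSWORD="$PG_PASSWORD" \
        -d "zabbix/zabbix-server-pgsql:$ZBX_TAG"
}

# Show what would run, without touching docker:
DRY_RUN=1 deploy_zabbix
```

The dry-run switch is just a convenience for testing the script before pointing it at a real host.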

Simple enough: run a single VM on Proxmox dedicated to running containers, using whatever tool you prefer. This can be Kubernetes - or NethServer 8!
Even on an entry-level server, the overhead (container in a VM) is negligible…
The firewall would be a separate box, so no rules more challenging than simple port forwarding…

The newer concepts / methods still have to be learned and practiced…


My 2 cents

Then why waste resources with Proxmox? :slight_smile:


Because I / My clients may need other stuff, like a Terminal Server (Windows) which can’t easily run in a Container… :slight_smile:

A doctor needs to run a Windows Server for the doctor’s application - and a Terminal Server, both members of a NethServer AD…

The Mac Mini handles all DICOM / Medical Images (X-Ray, Tomography), all else runs on Proxmox.
And yes, this is a doctor’s practice. But I have to reuse an old HP MicroServer Gen7 with 8 GB RAM as Proxmox Backup Server - at least with new disks and an SSD. Budget constraints due to Covid…

There are plenty of reasons for other environments…

Proxmox then has the advantage of being a single point for handling disaster recovery.
Backups and snapshots, but also hardware redundancy (ZFS RAID, power supplies), are taken care of by Proxmox.

My 2 cents

I think I need to change the docker network to macvlan or add shorewall rules to aqua to make it possible to monitor LAN clients.
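For the macvlan option, a sketch of what that could look like - subnet, gateway, and parent interface here are placeholders, not values from this setup:

```shell
# Sketch: give the container its own LAN-facing address via macvlan.
# Subnet/gateway/parent are placeholders; match them to the real LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 zabbix-macvlan
docker network connect zabbix-macvlan zabbix-server-pgsql
```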

The Zabbix 5 database migration to the container is working in first tests.

Do you need to open TCP ports to aqua?


The zabbix-server container needs to access the LAN clients, which is forbidden by the shorewall policy. So I need to allow port 10050 from aqua to the LAN.
Another way would be macvlan.
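For reference, a Shorewall rule along these lines might do it - a sketch only: the zone names (`aqua`, `loc`) are assumptions based on a stock Shorewall layout, and on NethServer the rule would normally go through a template or `config` property rather than a direct file edit:

```shell
# /etc/shorewall/rules (sketch; zone names assumed)
# Let the Zabbix server in aqua poll agents in the LAN zone on 10050/tcp
ACCEPT   aqua   loc   tcp   10050
```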

Another issue is that after a migration one has to change the config of the agents to allow the new Zabbix server IP.



Don’t forget the SNMP ports 161 - 162…

Printers and commercial NAS devices don’t have Zabbix agents, but they do speak SNMP.
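Along the same lines, and hedged the same way (the zone names are assumptions), the SNMP counterpart could look like:

```shell
# Sketch: SNMP polling from the server zone, traps back from the devices
ACCEPT   aqua   loc   udp   161      # SNMP get/walk toward printers/NAS
ACCEPT   loc    aqua  udp   162      # SNMP traps toward the Zabbix server
```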


My 2 cents


Yes, it makes sense; with only a few ports it could work with aqua. I saw that the agent could also run with Docker, but I can’t figure out how that would work :smiley:


Oh no, I allowed the wrong aqua IP instead of the green interface one. :unamused:
It’s working perfectly using aqua. When migrating a system from Zabbix 5 to docker, there’s no need to change IPs on agents.

I think the docker agent is just for testing purposes.