Hello.
For some reason I have no system logs on NS8 in any of the views: cluster, node, or application. This is a newly configured system, but it has been online for the last 2 weeks with some activity. After some digging I suspect it's a Loki issue, so I installed a new Loki instance and set it as the default, as recommended here.
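For reference, this is roughly how I checked the service from the node; loki1 is just the Unix user of my original instance, so adjust if yours differs:

# enter the module's rootless environment and query the unit
# (runagent is the NS8 helper for this, if I have its usage right)
runagent -m loki1 systemctl --user status loki-server.service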
Here is the issue I was seeing:
● loki-server.service - Loki server
Loaded: loaded (/home/loki1/.config/systemd/user/loki-server.service; disabled; preset: disabled)
Active: active (running) since Mon 2026-02-09 02:58:36 UTC; 4min 25s ago
Process: 1832 ExecStartPre=/bin/rm -f /run/user/1003/loki-server.pid /run/user/1003/loki-server.ctr-id (code=exited, status=0/SUCCESS)
Process: 1834 ExecStart=/usr/bin/podman run --detach --conmon-pidfile=/run/user/1003/loki-server.pid --cidfile=/run/user/1003/loki-server.ctr-id --cgroups=no-conmon --pod-id-file /run/user/>
Main PID: 1877 (conmon)
Tasks: 1 (limit: 202483)
Memory: 836.0K (peak: 22.6M)
CPU: 101ms
CGroup: /user.slice/user-1003.slice/user@1003.service/app.slice/loki-server.service
└─1877 /usr/bin/conmon --api-version 1 -c e527b23cfac917b2dc718ae31b0fed975ff85c230fbc0f96677294e1f3d80775 -u e527b23cfac917b2dc718ae31b0fed975ff85c230fbc0f96677294e1f3d80775 -r /u>
Feb 09 02:58:34 server systemd[1455]: Starting Loki server...
Feb 09 02:58:34 server podman[1834]: 2026-02-09 02:58:34.228724636 +0000 UTC m=+0.045248957 image pull 0d9213f68faa286f8e9bc26458dc96933ec30fa25df30b03c704bb383ac98e36 docker.io/grafana/loki:3.>
Feb 09 02:58:35 server podman[1834]: 2026-02-09 02:58:35.50266235 +0000 UTC m=+1.319186669 container create e527b23cfac917b2dc718ae31b0fed975ff85c230fbc0f96677294e1f3d80775 (image=docker.io/gra>
Feb 09 02:58:36 server podman[1834]: 2026-02-09 02:58:36.443428941 +0000 UTC m=+2.259953257 container init e527b23cfac917b2dc718ae31b0fed975ff85c230fbc0f96677294e1f3d80775 (image=docker.io/graf>
Feb 09 02:58:36 server podman[1834]: 2026-02-09 02:58:36.447264458 +0000 UTC m=+2.263788775 container start e527b23cfac917b2dc718ae31b0fed975ff85c230fbc0f96677294e1f3d80775 (image=docker.io/gra>
Feb 09 02:58:36 server podman[1834]: e527b23cfac917b2dc718ae31b0fed975ff85c230fbc0f96677294e1f3d80775
Feb 09 02:58:36 server systemd[1455]: Started Loki server.
Feb 09 02:59:39 server loki-server[1877]: level=warn ts=2026-02-09T02:59:39.220620968Z caller=modules.go:769 msg="The config setting shutdown marker path is not set. The /ingester/prepare_shutd>
Feb 09 02:59:39 server loki-server[1877]: level=error ts=2026-02-09T02:59:39.776088743Z caller=ratestore.go:109 msg="error getting ingester clients" err="empty ring"
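The "empty ring" error is what stands out to me: it suggests the ingester never registered itself. As a rough check, assuming the bundled Loki listens on its default port 3100 inside the pod, that the container is named loki-server like its unit, and that the image ships busybox wget (all assumptions on my part), its readiness and ring pages can be queried directly:

# readiness probe and ingester ring status, from inside the container
runagent -m loki1 podman exec loki-server wget -qO- http://127.0.0.1:3100/ready
runagent -m loki1 podman exec loki-server wget -qO- http://127.0.0.1:3100/ring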
And for Promtail (Alloy):
Feb 09 02:58:25 server systemd[1]: Started Alloy (previously Promtail) logs collector for Loki.
Feb 09 03:00:13 server promtail[1648]: [controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
Feb 09 03:00:13 server promtail[1648]: Detected at:
Feb 09 03:00:13 server promtail[1648]: > goroutine 1 [running, locked to thread]:
Feb 09 03:00:13 server promtail[1648]: > runtime/debug.Stack()
Feb 09 03:00:13 server promtail[1648]: > /usr/local/go/src/runtime/debug/stack.go:26 +0x5e
Feb 09 03:00:13 server promtail[1648]: > sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
Feb 09 03:00:13 server promtail[1648]: > /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.4/pkg/log/log.go:60 +0xcd
Feb 09 03:00:13 server promtail[1648]: > sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0xc000a5c5c0, {0xce4f521, 0x6})
Feb 09 03:00:13 server promtail[1648]: > /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.4/pkg/log/deleg.go:147 +0x3e
Feb 09 03:00:13 server promtail[1648]: > github.com/go-logr/logr.Logger.WithName(...)
Feb 09 03:00:13 server promtail[1648]: > /go/pkg/mod/github.com/go-logr/logr@v1.4.3/logr.go:345
Feb 09 03:00:13 server promtail[1648]: > sigs.k8s.io/controller-runtime/pkg/client/config.init()
Feb 09 03:00:13 server promtail[1648]: > /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.4/pkg/client/config/config.go:37 +0x4d
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.93357918Z level=info boringcrypto_enabled=false
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.555952902Z level=info source=/go/pkg/mod/github.com/!kim!machine!gun/automemlimit@v0.7.1/memlimit/memlimit.go:175 msg="memory is not limited, skipping" package=github.com/KimMachineGun/automemlimit/memlimit
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933643491Z level=info msg="no peer discovery configured: both join and discover peers are empty" service=cluster
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933647832Z level=info msg="running usage stats reporter"
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933650898Z level=info msg="starting complete graph evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933660556Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=labelstore duration=7.11µs
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933666404Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=loki.write.default duration=23.825563ms
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933671093Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=discovery.relabel.journal duration=103.389731ms
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933675406Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=loki.source.journal.journal duration=157.490728ms
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933680854Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=logging duration=106.556µs
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933739353Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=remotecfg duration=51.781µs
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933754856Z level=info msg="applying non-TLS config to HTTP server" service=http
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933759558Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=http duration=12.022µs
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933768622Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=cluster duration=2.165µs
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933781099Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=tracing duration=6.035µs
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933789589Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=otel duration=2.835µs
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933801203Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=livedebugging duration=6.009µs
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933810084Z level=info msg="finished node evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 node_id=ui duration=2.674µs
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933815573Z level=info msg="finished complete graph evaluation" controller_path=/ controller_id="" trace_id=8d16d0762729279ed4068d15bcf732b4 duration=314.347788ms
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.933901357Z level=info msg="scheduling loaded components and services"
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.978828065Z level=info msg="starting cluster node" service=cluster peers_count=0 peers="" advertise_addr=127.0.0.1:12345 minimum_cluster_size=0 minimum_size_wait_timeout=0s
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.979056451Z level=info msg="peers changed" service=cluster peers_count=1 min_cluster_size=0 peers=server
Feb 09 03:00:13 server promtail[1648]: ts=2026-02-09T03:00:13.979515038Z level=info msg="now listening for http traffic" service=http addr=127.0.0.1:12345
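Since the collector reports it is listening on 127.0.0.1:12345, I also tried to confirm whether it is actually shipping anything by scraping its own Prometheus metrics. Note the assumption that this address is reachable from wherever curl runs; it may only exist inside the collector's network namespace:

# the loki.write component exports delivery counters on the built-in HTTP server
curl -s http://127.0.0.1:12345/metrics | grep -i '^loki_write'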
I was wondering what steps I can take to diagnose this.
Thank you for your response. I checked out the discussion you linked and found some more information when running journalctl _UID=$(id -u loki2).
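For completeness, the exact invocation; the --since window is just something I added to narrow the output:

# all journal entries produced by the loki2 module's Unix user
journalctl _UID=$(id -u loki2) --since today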
Feb 09 03:17:52 server systemd[10378]: Created slice cgroup user-libpod_pod_3d815a928e8afd965f8d1a49042af4f84a91d03ae531fa357749cd0c1284d59d.slice.
Feb 09 03:17:52 server podman[10946]: 2026-02-09 03:17:52.48166199 +0000 UTC m=+0.415395999 container create c704e87e185a7bc7c5eef392a50550fb4848d5d46a07e61f7565b31b28dc29a9 (image=, name=3d815>
Feb 09 03:17:52 server podman[10946]: 2026-02-09 03:17:52.544887411 +0000 UTC m=+0.478621414 pod create 3d815a928e8afd965f8d1a49042af4f84a91d03ae531fa357749cd0c1284d59d (image=, name=loki)
Feb 09 03:17:52 server podman[10946]: 3d815a928e8afd965f8d1a49042af4f84a91d03ae531fa357749cd0c1284d59d
Feb 09 03:17:52 server systemd[10378]: Started libcrun container.
Feb 09 03:17:52 server podman[10961]: 2026-02-09 03:17:52.736194276 +0000 UTC m=+0.173019213 container init c704e87e185a7bc7c5eef392a50550fb4848d5d46a07e61f7565b31b28dc29a9 (image=, name=3d815a>
Feb 09 03:17:52 server podman[10961]: 2026-02-09 03:17:52.740152822 +0000 UTC m=+0.176977759 container start c704e87e185a7bc7c5eef392a50550fb4848d5d46a07e61f7565b31b28dc29a9 (image=, name=3d815>
Feb 09 03:17:52 server podman[10961]: 2026-02-09 03:17:52.785764024 +0000 UTC m=+0.222588961 pod start 3d815a928e8afd965f8d1a49042af4f84a91d03ae531fa357749cd0c1284d59d (image=, name=loki)
Feb 09 03:17:52 server podman[10961]: loki
Feb 09 03:17:52 server systemd[10378]: Started Loki pod service.
Feb 09 03:17:52 server agent@loki2[10398]: task/module/loki2/71ebac60-4b23-4d2d-970c-5c90182c3f43: create-module/80provision_metrics is starting
Feb 09 03:17:52 server systemd[10378]: Starting Loki server...
Feb 09 03:17:52 server systemd[10378]: Starting Loki frontend HTTP proxy...
Feb 09 03:17:52 server podman[11018]: 2026-02-09 03:17:52.831560003 +0000 UTC m=+0.022481742 image pull 0d9213f68faa286f8e9bc26458dc96933ec30fa25df30b03c704bb383ac98e36 docker.io/grafana/loki:3>
Feb 09 03:17:52 server podman[11034]: 2026-02-09 03:17:52.838169884 +0000 UTC m=+0.018562825 image pull 171247bbf2d7096d7f4ef2d64b3d65275886540d8fbf919b27f559d202c1960d docker.io/traefik:v3.6.7
Feb 09 03:17:53 server agent@loki2[10398]: task/module/loki2/71ebac60-4b23-4d2d-970c-5c90182c3f43: action "create-module" status is "completed" (0) at step 80provision_metrics
Feb 09 03:17:53 server podman[11018]: 2026-02-09 03:17:53.190362271 +0000 UTC m=+0.381284004 volume create loki-server-data
Feb 09 03:17:53 server podman[11018]: 2026-02-09 03:17:53.39587452 +0000 UTC m=+0.586796244 container create 2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c (image=docker.io/gr>
Feb 09 03:17:53 server podman[11034]: 2026-02-09 03:17:53.824844386 +0000 UTC m=+1.005237328 container create ae69a2a69d19484b8951d907d90d80997fa31831e42d3d2751c9870ee02f0a26 (image=docker.io/l>
Feb 09 03:17:55 server systemd[10378]: Started libcrun container.
Feb 09 03:17:55 server podman[11018]: 2026-02-09 03:17:55.477711322 +0000 UTC m=+2.668633049 container init 2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c (image=docker.io/gra>
Feb 09 03:17:55 server podman[11018]: 2026-02-09 03:17:55.481592523 +0000 UTC m=+2.672514246 container start 2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c (image=docker.io/gr>
Feb 09 03:17:55 server podman[11018]: 2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c
Feb 09 03:17:55 server systemd[10378]: Started Loki server.
Feb 09 03:17:55 server systemd[10378]: Started libcrun container.
Feb 09 03:17:55 server loki-server[11098]: level=warn ts=2026-02-09T03:17:55.636257825Z caller=modules.go:769 msg="The config setting shutdown marker path is not set. The /ingester/prepare_shut>
Feb 09 03:17:55 server loki-server[11098]: level=error ts=2026-02-09T03:17:55.661253331Z caller=ratestore.go:109 msg="error getting ingester clients" err="empty ring"
Feb 09 03:17:55 server podman[11034]: 2026-02-09 03:17:55.726256963 +0000 UTC m=+2.906649903 container init ae69a2a69d19484b8951d907d90d80997fa31831e42d3d2751c9870ee02f0a26 (image=docker.io/lib>
Feb 09 03:17:55 server podman[11034]: 2026-02-09 03:17:55.730276906 +0000 UTC m=+2.910669851 container start ae69a2a69d19484b8951d907d90d80997fa31831e42d3d2751c9870ee02f0a26 (image=docker.io/li>
Feb 09 03:17:55 server podman[11034]: ae69a2a69d19484b8951d907d90d80997fa31831e42d3d2751c9870ee02f0a26
Feb 09 03:17:55 server systemd[10378]: Started Loki frontend HTTP proxy.
Feb 09 03:18:26 server agent@loki2[10398]: Handler of cluster/event/default-instance-changed is starting step 10set
Feb 09 03:18:26 server agent@loki2[10398]: Handler of cluster/event/default-instance-changed exited with status "completed" (0) at step 10set
Feb 09 03:18:52 server agent@loki2[10398]: task/module/loki2/6c42f24b-7118-4059-a3d7-bac57b8cd4df: get-configuration/10get is starting
Feb 09 03:18:53 server agent@loki2[10398]: task/module/loki2/6c42f24b-7118-4059-a3d7-bac57b8cd4df: action "get-configuration" status is "completed" (0) at step validate-output.json
Feb 09 03:19:11 server agent@loki2[10398]: task/module/loki2/cd40d4e9-3c30-401c-9c93-ae1c7414161c: get-configuration/10get is starting
Feb 09 03:19:11 server agent@loki2[10398]: task/module/loki2/cd40d4e9-3c30-401c-9c93-ae1c7414161c: action "get-configuration" status is "completed" (0) at step validate-output.json
Feb 09 03:19:15 server agent@loki2[10398]: task/module/loki2/538ca79e-5c11-4415-a303-cb86fb1287ba: get-configuration/10get is starting
Feb 09 03:19:15 server agent@loki2[10398]: task/module/loki2/538ca79e-5c11-4415-a303-cb86fb1287ba: action "get-configuration" status is "completed" (0) at step validate-output.json
Feb 09 03:19:18 server agent@loki2[10398]: task/module/loki2/028ed038-55d5-493d-918d-d7014381e27b: get-configuration/10get is starting
Feb 09 03:19:18 server agent@loki2[10398]: task/module/loki2/028ed038-55d5-493d-918d-d7014381e27b: action "get-configuration" status is "completed" (0) at step validate-output.json
Feb 09 03:19:21 server agent@loki2[10398]: task/module/loki2/89553488-d757-4b62-b7a5-c0c63e42c107: get-configuration/10get is starting
Feb 09 03:19:22 server agent@loki2[10398]: task/module/loki2/89553488-d757-4b62-b7a5-c0c63e42c107: action "get-configuration" status is "completed" (0) at step validate-output.json
Feb 09 03:19:25 server agent@loki2[10398]: task/module/loki2/469f4dfb-8ca3-430d-aea8-c1fd034ef49b: get-configuration/10get is starting
Feb 09 03:19:25 server agent@loki2[10398]: task/module/loki2/469f4dfb-8ca3-430d-aea8-c1fd034ef49b: action "get-configuration" status is "completed" (0) at step validate-output.json
Feb 09 03:19:56 server systemd[10378]: Starting Mark boot as successful...
Feb 09 03:19:56 server systemd[10378]: Finished Mark boot as successful.
Feb 09 03:20:25 server systemd[10378]: Activating special unit Exit the Session...
Feb 09 03:20:25 server systemd[10378]: Stopping libcrun container...
Feb 09 03:20:25 server systemd[10378]: Stopping libcrun container...
Feb 09 03:20:25 server systemd[10378]: Stopping libcrun container...
Feb 09 03:20:25 server systemd[10378]: Stopping podman-pause-69106d23.scope...
Feb 09 03:20:25 server systemd[10378]: Stopped target Main User Target.
Feb 09 03:20:25 server systemd[10378]: Stopping Rootless module/loki2 agent...
Feb 09 03:20:25 server systemd[10378]: Stopping Loki server...
Feb 09 03:20:25 server systemd[10378]: Stopping Loki frontend HTTP proxy...
Feb 09 03:20:25 server systemd[10378]: Stopped Rootless module/loki2 agent.
Feb 09 03:20:25 server systemd[10378]: agent.service: Consumed 11.535s CPU time.
Feb 09 03:20:25 server systemd[10378]: Stopped libcrun container.
Feb 09 03:20:25 server systemd[10378]: Stopped podman-pause-69106d23.scope.
Feb 09 03:20:25 server loki-server[11098]: level=error ts=2026-02-09T03:20:25.674670111Z caller=scheduler_processor.go:111 component=querier msg="error processing requests from scheduler" err=">
Feb 09 03:20:25 server loki-server[11098]: level=error ts=2026-02-09T03:20:25.675903802Z caller=compactor.go:563 msg="failed to run compaction" err="context canceled"
Feb 09 03:20:25 server loki-server[11098]: level=error ts=2026-02-09T03:20:25.676059231Z caller=compactor.go:590 msg="failed to apply retention" err="context canceled"
Feb 09 03:20:25 server loki-server[11098]: level=error ts=2026-02-09T03:20:25.676703667Z caller=scheduler_processor.go:111 component=querier msg="error processing requests from scheduler" err=">
Feb 09 03:20:25 server loki-server[11098]: level=error ts=2026-02-09T03:20:25.67674596Z caller=scheduler_processor.go:111 component=querier msg="error processing requests from scheduler" err="r>
Feb 09 03:20:25 server loki-server[11098]: level=error ts=2026-02-09T03:20:25.676765859Z caller=scheduler_processor.go:111 component=querier msg="error processing requests from scheduler" err=">
Feb 09 03:20:25 server podman[13659]: 2026-02-09 03:20:25.971284842 +0000 UTC m=+0.239009915 container died c704e87e185a7bc7c5eef392a50550fb4848d5d46a07e61f7565b31b28dc29a9 (image=, name=3d815a>
Feb 09 03:20:28 server systemd[10378]: Stopped libcrun container.
Feb 09 03:20:28 server podman[13659]: 2026-02-09 03:20:28.767231651 +0000 UTC m=+3.034956724 container died ae69a2a69d19484b8951d907d90d80997fa31831e42d3d2751c9870ee02f0a26 (image=docker.io/lib>
Feb 09 03:20:35 server systemd[10378]: libpod-2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c.scope: Stopping timed out. Killing.
Feb 09 03:20:35 server systemd[10378]: libpod-2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c.scope: Killing process 11106 (loki) with signal SIGKILL.
Feb 09 03:20:39 server systemd[10378]: libpod-2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c.scope: Failed with result 'timeout'.
Feb 09 03:20:39 server systemd[10378]: Stopped libcrun container.
Feb 09 03:20:39 server systemd[10378]: libpod-2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c.scope: Consumed 7.590s CPU time.
Feb 09 03:20:39 server systemd[10378]: Removed slice cgroup user-libpod_pod_3d815a928e8afd965f8d1a49042af4f84a91d03ae531fa357749cd0c1284d59d.slice.
Feb 09 03:20:39 server systemd[10378]: user-libpod_pod_3d815a928e8afd965f8d1a49042af4f84a91d03ae531fa357749cd0c1284d59d.slice: Consumed 8.043s CPU time.
Feb 09 03:20:39 server systemd[10378]: Removed slice Slice /user.
Feb 09 03:20:39 server systemd[10378]: user.slice: Consumed 8.043s CPU time.
Feb 09 03:20:39 server podman[14454]: 2026-02-09 03:20:39.424792575 +0000 UTC m=+0.024147853 container died 2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c (image=docker.io/gra>
Feb 09 03:20:41 server podman[13648]: 2026-02-09 03:20:41.46860428 +0000 UTC m=+15.663956588 container cleanup c704e87e185a7bc7c5eef392a50550fb4848d5d46a07e61f7565b31b28dc29a9 (image=, name=3d8>
Feb 09 03:20:41 server podman[14034]: 2026-02-09 03:20:41.847616474 +0000 UTC m=+13.777949088 container cleanup ae69a2a69d19484b8951d907d90d80997fa31831e42d3d2751c9870ee02f0a26 (image=docker.io>
Feb 09 03:20:42 server podman[14278]: ae69a2a69d19484b8951d907d90d80997fa31831e42d3d2751c9870ee02f0a26
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-96e2e536.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-666cef3e.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-dc612f0c.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-ddbc4b2e.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-5aa4c2c3.scope/start is destructive (exit.target has 'start' job queued, but>
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-28a53817.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-ce8eee5c.scope/start is destructive (exit.target has 'start' job queued, but>
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-2de5d035.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-2f74c0fd.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:44 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-68b70a69.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:44 server podman[13659]: time="2026-02-09T03:20:44Z" level=warning msg="Failed to add pause process to systemd sandbox cgroup: Transaction for podman-pause-68b70a69.scope/start is >
Feb 09 03:20:44 server podman[14454]: 2026-02-09 03:20:44.908382199 +0000 UTC m=+5.507737472 container cleanup 2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c (image=docker.io/>
Feb 09 03:20:45 server podman[14281]: 2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c
Feb 09 03:20:45 server podman[14705]: 2026-02-09 03:20:45.499516525 +0000 UTC m=+1.052666341 pod stop 3d815a928e8afd965f8d1a49042af4f84a91d03ae531fa357749cd0c1284d59d (image=, name=loki)
Feb 09 03:20:46 server podman[14705]: loki
Feb 09 03:20:48 server podman[14768]: 2026-02-09 03:20:48.392777988 +0000 UTC m=+2.182667845 container remove 2b4d6a9e21348c94b97672efc06c97e4c5d663c3e4dc45e0dd258d9daef4bb1c (image=docker.io/g>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-b1ad5a68.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-962d4310.scope/start is destructive (exit.target has 'start' job queued, but>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-5bcf7ed7.scope/start is destructive (exit.target has 'start' job queued, but>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-0c775ff1.scope/start is destructive (systemd-exit.service has 'start' job qu>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-9ac1445b.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-91630b72.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-dad306b1.scope/start is destructive (systemd-exit.service has 'start' job qu>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-18f1db9e.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-2b56d07b.scope/start is destructive (shutdown.target has 'start' job queued,>
Feb 09 03:20:48 server systemd[10378]: Requested transaction contradicts existing jobs: Transaction for podman-pause-0b13d803.scope/start is destructive (exit.target has 'start' job queued, but>
Feb 09 03:20:48 server podman[14728]: time="2026-02-09T03:20:48Z" level=warning msg="Failed to add pause process to systemd sandbox cgroup: Transaction for podman-pause-0b13d803.scope/start is >
Feb 09 03:20:48 server systemd[10378]: Stopped Loki server.
Feb 09 03:20:49 server podman[14768]: 2026-02-09 03:20:49.579562677 +0000 UTC m=+3.369452536 container remove ae69a2a69d19484b8951d907d90d80997fa31831e42d3d2751c9870ee02f0a26 (image=docker.io/l>
Feb 09 03:20:52 server podman[14768]: 2026-02-09 03:20:52.302347946 +0000 UTC m=+6.092237802 container remove c704e87e185a7bc7c5eef392a50550fb4848d5d46a07e61f7565b31b28dc29a9 (image=, name=3d81>
Feb 09 03:20:53 server podman[14768]: 2026-02-09 03:20:53.19959818 +0000 UTC m=+6.989488038 pod remove 3d815a928e8afd965f8d1a49042af4f84a91d03ae531fa357749cd0c1284d59d (image=, name=loki)
Feb 09 03:20:53 server podman[14768]: 3d815a928e8afd965f8d1a49042af4f84a91d03ae531fa357749cd0c1284d59d
Feb 09 03:20:53 server podman[14784]: ae69a2a69d19484b8951d907d90d80997fa31831e42d3d2751c9870ee02f0a26
Feb 09 03:20:53 server systemd[10378]: Stopped Loki pod service.
Feb 09 03:20:53 server systemd[10378]: Stopped Loki frontend HTTP proxy.
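What strikes me in the tail of this log is that the entire user session is torn down ("Activating special unit Exit the Session...") and the Loki pod goes down with it. Unless that was just my own session ending, it might be worth verifying that lingering is enabled for the module user; this is only my next guess, not a confirmed fix:

# rootless units only survive logout when lingering is on for the user
loginctl show-user loki2 --property=Linger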