Missing logs in NS8

It looks as if no logs have been recorded for a long time.

Maybe this solved topic about missing logs is helpful:


Thank you, @mrmarkuz, for pointing out this solution.
Unfortunately, this does not lead to success. There are still no logs available (not even after a restart).


root@daho-ns8:~# journalctl -r _UID=$(id -u loki1)
Feb 20 09:31:50 daho-ns8 systemd[863]: Stopped loki-server.service - Loki server.
Feb 20 09:31:50 daho-ns8 systemd[863]: loki-server.service: Scheduled restart job, restart counter is at 77.
Feb 20 09:31:50 daho-ns8 systemd[863]: loki-server.service: Failed with result 'exit-code'.
Feb 20 09:31:50 daho-ns8 podman[13596]: 24aff3dbd40ff568c760a75b4d1c1956faad5bbe1406f186914254ac6e7f95c2
Feb 20 09:31:50 daho-ns8 podman[13596]: 2025-02-20 09:31:50.609964681 +0100 CET m=+0.195021301 container remo>
Feb 20 09:31:50 daho-ns8 systemd[863]: loki-server.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:31:50 daho-ns8 podman[13583]: 2025-02-20 09:31:50.385318852 +0100 CET m=+0.119853910 container clea>
Feb 20 09:31:50 daho-ns8 podman[13583]: 2025-02-20 09:31:50.309712032 +0100 CET m=+0.044247185 container died>
Feb 20 09:31:50 daho-ns8 loki-server[13571]:   line 68: field auth_enabled already set in type loki.ConfigWra>
Feb 20 09:31:50 daho-ns8 loki-server[13571]: failed parsing config: /etc/loki/local-config.yaml: yaml: unmars>
Feb 20 09:31:50 daho-ns8 podman[13555]: Copying blob sha256:52c2629c0bf58ad3dc341bf9371d4eecafaf4fa95aea59566>
Feb 20 09:31:50 daho-ns8 podman[13555]: Copying blob sha256:8d7fd8d98c42b89dbf61e983737dacf1084553409525063e2>
Feb 20 09:31:50 daho-ns8 podman[13555]: Copying blob sha256:da9db072f522755cbeb85be2b3f84059b70571b229512f157>
Feb 20 09:31:50 daho-ns8 podman[13555]: Copying blob sha256:2e3a061ee62b1018ddac5f37e94d0389de5551f6677b08763>
Feb 20 09:31:50 daho-ns8 podman[13555]: Getting image source signatures
Feb 20 09:31:49 daho-ns8 systemd[863]: Started loki-server.service - Loki server.
Feb 20 09:31:49 daho-ns8 podman[13546]: 24aff3dbd40ff568c760a75b4d1c1956faad5bbe1406f186914254ac6e7f95c2
Feb 20 09:31:49 daho-ns8 podman[13546]: 2025-02-20 09:31:49.515076526 +0100 CET m=+0.371799216 container star>
Feb 20 09:31:49 daho-ns8 podman[13546]: 2025-02-20 09:31:49.440184402 +0100 CET m=+0.296907572 container init>
Feb 20 09:31:49 daho-ns8 systemd[863]: Started libpod-24aff3dbd40ff568c760a75b4d1c1956faad5bbe1406f186914254a>
Feb 20 09:31:49 daho-ns8 podman[13546]: 2025-02-20 09:31:49.349894375 +0100 CET m=+0.206617052 container crea>
Feb 20 09:31:49 daho-ns8 podman[13546]:
Feb 20 09:31:49 daho-ns8 podman[13555]: Trying to pull docker.io/library/traefik:v2.11.13...
Feb 20 09:31:49 daho-ns8 podman[13546]: 2025-02-20 09:31:49.178429168 +0100 CET m=+0.035151840 image pull  do>
Feb 20 09:31:49 daho-ns8 systemd[863]: Starting traefik.service - Loki frontend HTTP proxy...
Feb 20 09:31:49 daho-ns8 systemd[863]: Starting loki-server.service - Loki server...
Feb 20 09:31:49 daho-ns8 systemd[863]: Started loki.service - Loki pod service.
Feb 20 09:31:49 daho-ns8 podman[13509]: 03243f2dcf779a4c54a509ec060a4ff01182e889649cb4871c3322d346230cfc
Feb 20 09:31:49 daho-ns8 podman[13509]: 2025-02-20 09:31:49.078330621 +0100 CET m=+0.179954251 pod start 0324>
Feb 20 09:31:49 daho-ns8 podman[13509]: 2025-02-20 09:31:49.078197714 +0100 CET m=+0.179821342 container star>
Feb 20 09:31:49 daho-ns8 podman[13509]: 2025-02-20 09:31:49.056816168 +0100 CET m=+0.158439798 container init>
Feb 20 09:31:49 daho-ns8 systemd[863]: Started libpod-8d439ac26326bf4cc4320848577f4ca508784d800e3762ba134dec0>
Feb 20 09:31:48 daho-ns8 podman[13498]: 03243f2dcf779a4c54a509ec060a4ff01182e889649cb4871c3322d346230cfc
Feb 20 09:31:48 daho-ns8 podman[13498]: 2025-02-20 09:31:48.868193827 +0100 CET m=+0.253301157 pod create 032>

That doesn’t help either.

systemctl restart promtail

I think this is the relevant error message:

Feb 20 09:31:50 daho-ns8 loki-server[13571]:   line 68: field auth_enabled already set in type loki.ConfigWrapper
Feb 20 09:31:50 daho-ns8 loki-server[13571]: failed parsing config: /etc/loki/local-config.yaml: yaml: unmarshal errors:

Please check the file /etc/loki/local-config.yaml:

runagent -m loki1 cat ../loki-config.yaml

Maybe use a YAML checker: https://www.yamllint.com/
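Duplicate keys are exactly the kind of problem such checkers flag. For a scriptable check, here is a minimal sketch (assuming Python with PyYAML installed) that rejects a document with a repeated key, mimicking the strictness of Loki's Go parser; the sample document is a shortened stand-in for the real config file:

```python
# Sketch: detect duplicate keys in a YAML document with PyYAML.
# PyYAML's default loader silently keeps the last value for a duplicate
# key, so we hook the mapping constructor to raise an error instead.
import yaml


class DuplicateKeyError(Exception):
    pass


class StrictLoader(yaml.SafeLoader):
    pass


def no_duplicates_constructor(loader, node, deep=False):
    mapping = {}
    for key_node, value_node in node.value:
        key = loader.construct_object(key_node, deep=deep)
        if key in mapping:
            raise DuplicateKeyError(
                f"duplicate key {key!r} at line {key_node.start_mark.line + 1}")
        mapping[key] = loader.construct_object(value_node, deep=deep)
    return mapping


StrictLoader.add_constructor(
    yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, no_duplicates_constructor)

# Shortened stand-in for the broken config: auth_enabled appears twice.
broken = """\
auth_enabled: false
server:
  http_listen_port: 3100
auth_enabled: false
"""

try:
    yaml.load(broken, Loader=StrictLoader)
except DuplicateKeyError as e:
    print("config rejected:", e)  # config rejected: duplicate key 'auth_enabled' at line 4
```

Running this against the real file (e.g. the output of the runagent command above, redirected to a local file) would point straight at the doubled `auth_enabled` key.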

My local-config.yaml for comparison:


auth_enabled: false

analytics:
  reporting_enabled: false
  usage_stats_url: ""

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled
  wal:
    dir: "/tmp/wal"

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /loki/chunks

compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: filesystem
  retention_enabled: true
  retention_delete_delay: 30m
  delete_request_store: filesystem

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  retention_period: ${LOKI_RETENTION_PERIOD:-365}d

chunk_store_config:
  max_look_back_period: 0s

ruler:
  storage:
    type: local
    local:
      directory: /loki/rules
  rule_path: /loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
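A note on the retention line above: the `${LOKI_RETENTION_PERIOD:-365}d` placeholder is only substituted when Loki is started with `-config.expand-env=true`; the `:-` form means "use the environment variable, or fall back to the default". A standalone sketch of that substitution rule (the regex here is my own illustration, not Loki's actual code):

```python
# Sketch: how "${VAR:-default}" placeholders behave. Loki applies this kind
# of expansion when run with -config.expand-env=true; this standalone regex
# only mimics the rule for illustration.
import re


def expand(text, env):
    def repl(match):
        name, default = match.group(1), match.group(2)
        # Fall back to the default (or empty string) if the var is unset.
        return env.get(name, default if default is not None else "")
    return re.sub(r"\$\{(\w+)(?::-([^}]*))?\}", repl, text)


line = "retention_period: ${LOKI_RETENTION_PERIOD:-365}d"
print(expand(line, {}))                               # retention_period: 365d
print(expand(line, {"LOKI_RETENTION_PERIOD": "90"}))  # retention_period: 90d
```

So with no environment variable set, the effective retention period is 365 days.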

There’s also the “reinstall loki” option:


One deviation is immediately apparent and is also flagged by the YAML checker:
The entries…

auth_enabled: false

analytics:
  reporting_enabled: false
  usage_stats_url: ""

…appear a second time at the end of the file.

Mine:

root@daho-ns8:~# runagent -m loki1 cat ../loki-config.yaml
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled
  wal:
    dir: "/tmp/wal"

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /loki/chunks

compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: filesystem
  retention_enabled: true
  retention_delete_delay: 30m
  delete_request_store: filesystem

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  retention_period: ${LOKI_RETENTION_PERIOD:-365}d

chunk_store_config:
  max_look_back_period: 0s

ruler:
  storage:
    type: local
    local:
      directory: /loki/rules
  rule_path: /loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true

auth_enabled: false

analytics:
  reporting_enabled: false
  usage_stats_url: ""

I removed these duplicated lines and restarted the services.

:~# runagent -m loki1 systemctl --user restart loki-server
:~# systemctl restart promtail
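After the restart, it is easy to verify that the Loki server is actually serving again. A small sketch (the port 3100 is assumed from the config above; /ready is Loki's standard readiness endpoint, and you may need to adjust the host/port if the loki1 module only exposes the API through its Traefik proxy):

```python
# Sketch: check Loki's readiness endpoint after a restart. Port 3100 is
# taken from the config above; /ready answers with HTTP 200 once Loki is up.
import urllib.error
import urllib.request


def loki_ready(base="http://127.0.0.1:3100", timeout=3):
    try:
        with urllib.request.urlopen(base + "/ready", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: Loki is not (yet) serving.
        return False


if __name__ == "__main__":
    print("Loki ready:", loki_ready())
```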

Voilà, the logs are back. The system has also survived a reboot.

Thank you for guiding me to the solution.


And now the "stats.grafana.org" problem is back.

I added …

analytics:
  reporting_enabled: false
  usage_stats_url: ""

…again, leaving out only auth_enabled: false, because it is already set in the first line.