2025-08-10T08:27:44+02:00 [1:bugsink3:systemd] podman-pause-e9836742.scope: unit configures an IP firewall, but not running as root.
2025-08-10T08:27:44+02:00 [1:bugsink3:systemd] (This warning is only shown for the first unit using IP firewalling.)
2025-08-10T08:28:57+02:00 [1:bugsink3:systemd] Started Podman web-app.service.
2025-08-10T08:28:57+02:00 [1:bugsink3:web-app] monofy starting with pid 1
2025-08-10T08:28:57+02:00 [1:bugsink3:web-app] monofy 'pre-start' process: bugsink-manage check --deploy --fail-level WARNING
2025-08-10T08:28:57+02:00 [1:bugsink3:mysql-app] 2025-08-10 06:28:57+00:00 [Note] [Entrypoint]: Initializing database files
2025-08-10T08:28:57+02:00 [1:bugsink3:mysql-app] 2025-08-10T06:28:57.451227Z 0 [System] [MY-015017] [Server] MySQL Server Initialization - start.
2025-08-10T08:28:57+02:00 [1:bugsink3:mysql-app] 2025-08-10T06:28:57.454190Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 9.4.0) initializing of server in progress as process 75
2025-08-10T08:28:57+02:00 [1:bugsink3:mysql-app] 2025-08-10T06:28:57.464571Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2025-08-10T08:28:57+02:00 [1:bugsink3:mysql-app] 2025-08-10T06:28:57.880141Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2025-08-10T08:28:58+02:00 [1:bugsink3:web-app] System check identified no issues (3 silenced).
2025-08-10T08:28:58+02:00 [1:bugsink3:web-app] monofy 'pre-start' process: bugsink-manage migrate
2025-08-10T08:28:59+02:00 [1:bugsink3:mysql-app] 2025-08-10T06:28:59.189519Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2025-08-10T08:28:59+02:00 [1:bugsink3:web-app] Traceback (most recent call last):
2025-08-10T08:28:59+02:00 [1:bugsink3:web-app] File "/usr/local/lib/python3.12/site-packages/django/db/backends/base/base.py", line 288, in ensure_connection
2025-08-10T08:28:59+02:00 [1:bugsink3:web-app] self.connect()
2025-08-10T08:28:59+02:00 [1:bugsink3:web-app] File "/usr/local/lib/python3.12/site-packages/django/utils/asyncio.py", line 26, in inner
2025-08-10T08:28:59+02:00 [1:bugsink3:web-app] return func(*args, **kwargs)
2025-08-10T08:28:59+02:00 [1:bugsink3:web-app] ^^^^^^^^^^^^^^^^^^^^^
2025-08-10T08:28:59+02:00 [1:bugsink3:web-app] File "/usr/local/lib/python3.12/site-packages/django/db/backends/base/base.py", line 269, in connect
Seems to run as intended: created a team and a project, logged in and out without issues.
Thanks for helping test.
How is backup and restore coming along for the same instance?
And for the developers in the house: Bugsink uses the same SDK as Sentry, so if you were already using Sentry you can also pipe your apps' error logs and crashes to it.
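A minimal sketch of what that looks like in a Python app; the DSN below is a placeholder, the real one is shown on the project's settings page in your Bugsink instance:

```python
import sentry_sdk

# Placeholder DSN: copy the real one from the Bugsink project settings.
sentry_sdk.init(
    dsn="https://examplepublickey@bugsink.example.org/1",
    traces_sample_rate=0,  # Bugsink is about error tracking, tracing can stay off
)

try:
    1 / 0
except ZeroDivisionError as exc:
    sentry_sdk.capture_exception(exc)  # shows up as an issue in Bugsink
```

The same idea should apply with the other Sentry SDKs (JavaScript, PHP, ...): you only point the DSN at Bugsink instead of sentry.io.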
2025-08-10T17:01:11+02:00 [1:bugsink3:agent@bugsink3] mysqldump: [Warning] Using a password on the command line interface can be insecure.
2025-08-10T17:01:12+02:00 [1:bugsink3:agent@bugsink3] restic snapshots
2025-08-10T17:01:12+02:00 [1:bugsink3:agent@bugsink3] Trying to pull ghcr.io/nethserver/restic:3.9.2...
2025-08-10T17:01:12+02:00 [1:bugsink3:agent@bugsink3] Getting image source signatures
2025-08-10T17:01:12+02:00 [1:bugsink3:agent@bugsink3] Copying blob sha256:2246b04badcba6c8a7d16e25fade69c25c34ce7d8ff8726511b2d85121150216
2025-08-10T17:01:12+02:00 [1:bugsink3:agent@bugsink3] Copying blob sha256:b0f6f1c319a1570f67352d490370f0aeb5c0e67a087baf2d5f301ad51ec18858
2025-08-10T17:01:15+02:00 [1:bugsink3:agent@bugsink3] Copying config sha256:363b1008aad8b214eac80680f5f3a721137c9465f0bd9c47548a894bf8a99ec0
2025-08-10T17:01:15+02:00 [1:bugsink3:agent@bugsink3] Writing manifest to image destination
2025-08-10T17:01:15+02:00 [1:bugsink3:systemd] Started libcrun container.
2025-08-10T17:01:15+02:00 [1:bugsink3:agent@bugsink3] Fatal: repository does not exist: unable to open config file: <config/> does not exist
2025-08-10T17:01:15+02:00 [1:bugsink3:agent@bugsink3] rclone::webdav:/bugsink/d6e45aa1-2b0c-4267-b16b-e0a0083b0d05
2025-08-10T17:01:15+02:00 [1:bugsink3:agent@bugsink3] Is there a repository at the following location?
2025-08-10T17:01:15+02:00 [1:bugsink3:agent@bugsink3] Initializing repository 86d1a8ac-ef89-557a-8e19-8582ab86b7c4 at path bugsink/d6e45aa1-2b0c-4267-b16b-e0a0083b0d05
2025-08-10T17:01:15+02:00 [1:bugsink3:agent@bugsink3] restic init
2025-08-10T17:01:15+02:00 [1:bugsink3:systemd] Started libcrun container.
2025-08-10T17:01:18+02:00 [1:bugsink3:systemd] libpod-3b128ecf136b88800c9cd9287bf3cc6628c0ed38c0c76b53a74fcae7d74218c3.scope: Consumed 2.358s CPU time.
2025-08-10T17:01:18+02:00 [1:bugsink3:agent@bugsink3] restic backup --json state/environment --files-from=/etc/state-include.conf
2025-08-10T17:01:18+02:00 [1:bugsink3:systemd] Started libcrun container.
2025-08-10T17:01:20+02:00 [1:bugsink3:systemd] libpod-cd32ed9f5f9ac4a758f05b31371771d544a085eca12ab9e7fb35657a669b8d4f.scope: Consumed 4.319s CPU time.
2025-08-10T17:01:20+02:00 [1:bugsink3:agent@bugsink3] restic forget --prune --keep-last=5
2025-08-10T17:01:20+02:00 [1:bugsink3:systemd] Started libcrun container.
2025-08-10T17:01:21+02:00 [1:bugsink3:agent@bugsink3] restic stats --json latest
2025-08-10T17:01:21+02:00 [1:bugsink3:systemd] Started libcrun container.
2025-08-10T17:01:23+02:00 [1:bugsink3:systemd] Started libcrun container.
2025-08-10T17:01:23+02:00 [1:bugsink3:agent@bugsink3] 2025/08/10 15:01:23 NOTICE: webdav root 'bugsink': --checksum is in use but the source and destination have no hashes in common; falling back to --size-only
2025-08-10T17:01:23+02:00 [1:bugsink3:agent@bugsink3] task/module/bugsink3/0cc83873-bd8b-4b01-94ea-4e93eb9be691: action "run-backup" status is "completed" (0) at step 50run_backup
Restore (replace existing) spins at 67% indefinitely in the software center. Two instances are still listed, and the one with the restore in progress shows as inactive.
Thanks, I already removed the Bugsink instances, so any trace of them should be gone. Except for the task, which keeps coming back (after a page refresh and logging out/in) even though all Bugsink instances are gone.
Is there a way to list (ongoing) tasks, and to see who or what owns and manages them? Something is queuing that task, and the queue still contains the unwanted entry.
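Something like this is what I have in mind, if the task ids from the log above (`task/module/bugsink3/0cc83873-...`) really are keys in the cluster's Redis; the host, port and lack of authentication below are assumptions on my part:

```python
import redis

# Assumption: task state is kept in the local Redis instance, reachable
# without a password on the default port. Adjust to your cluster.
r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

# Keys look like "task/<agent>/<uuid>", so the middle part tells you
# which agent (cluster, node or module) the task belongs to.
for key in r.scan_iter("task/*"):
    _, *owner, task_id = key.split("/")
    print("/".join(owner), task_id)
```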
As you can see in the following image, the DB successfully restores, which means the issue is in the configure-traefik step of the restore module. @mrmarkuz, any hint on how to handle this?