Help with log errors and warnings

NethServer Version: 7.5.1804 (final)

I have a Proxmox 5.2 KVM template with NethServer 7.5 which is giving this journal output:

# journalctl -x | egrep -i 'warning|error|fail|unable'
Sep 27 08:06:45 ns75-template.local.durerocaribe.cu kernel: acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
Sep 27 08:06:45 ns75-template.local.durerocaribe.cu kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 27 08:06:50 ns75-template.local.durerocaribe.cu lvm[594]: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Sep 27 08:06:52 ns75-template.local.durerocaribe.cu augenrules[629]: failure 1
Sep 27 08:06:52 ns75-template.local.durerocaribe.cu augenrules[629]: failure 1
Sep 27 08:06:53 ns75-template.local.durerocaribe.cu systemd[1]: Dependency failed for Network Manager Wait Online.
-- Subject: Unit NetworkManager-wait-online.service has failed
-- Unit NetworkManager-wait-online.service has failed.
Sep 27 08:06:53 ns75-template.local.durerocaribe.cu systemd[1]: Job NetworkManager-wait-online.service/start failed with result 'dependency'.

This template is for quickly deploying an NS 7.5 fresh install, so I'm concerned that it's giving me these warnings and failures; this repeats on every boot.
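For the NetworkManager-wait-online failure, my assumption is that NethServer drives the network through the legacy network service and leaves NetworkManager disabled, which would make this dependency failure expected. A quick way to check:

# systemctl is-enabled NetworkManager.service NetworkManager-wait-online.service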

The only thing I can find about it is to disable ASPM in the BIOS.
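If the BIOS setting isn't reachable (as in a VM), ASPM can also be disabled on the kernel command line; a minimal sketch for a CentOS 7 based system, untested on this template:

# vi /etc/default/grub                      # append pcie_aspm=off to GRUB_CMDLINE_LINUX
# grub2-mkconfig -o /boot/grub2/grub.cfg    # regenerate the config (BIOS boot layout assumed)
# reboot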

Hello @m.traeumner, thanks for helping out. I'm concerned about these three:

Sep 27 08:06:50 ns75-template.local.durerocaribe.cu lvm[594]: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Sep 27 08:06:52 ns75-template.local.durerocaribe.cu augenrules[629]: failure 1
Sep 27 08:06:52 ns75-template.local.durerocaribe.cu augenrules[629]: failure 1

Also, these are appearing on my gateway:

Oct 01 18:44:10 heimdall.dcserver.local squid[1457]: 2018/10/01 18:44:10| Warning: empty ACL: acl no_cache dstdomain "/etc/squid/acls/no_cache.acl"
Oct 01 18:44:11 heimdall.dcserver.local shorewall[1174]: WARNING: One or more unreachable rules in chain loc2fw have been discarded /etc/shorewall/rules (line 127)
Oct 01 18:44:13 heimdall.dcserver.local shorewall[1174]: WARNING: 23@/etc/firehol/fireqos.conf: class:
Oct 01 18:44:13 heimdall.dcserver.local shorewall[1174]: WARNING: 23@/etc/firehol/fireqos.conf: class:
Oct 01 18:44:15 heimdall.dcserver.local suricata[1198]: 1/10/2018 -- 18:44:15 - <Warning> - [ERRCODE: SC_ERR_EVENT_ENGINE(210)] - can't suppress sid 2011124, gid 1: unknown rule

If these are one-timers (on boot), I would consider them harmless.

This error is harmless but cannot be suppressed.
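To find where the stale suppression comes from, the sid from the log line can be grepped in the threshold file; a sketch, assuming the stock path:

# grep -n 2011124 /etc/suricata/threshold.config    # sid from the warning; path is an assumption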

It seems you have a misconfiguration in your firewall and proxy rules. Do you use a custom firewall template?

Yes, these two only happen on boot. Could you tell me what they mean and/or why they happen, please?

Same as before, knowledge is always welcome.

No I haven't, but this NS instance was created using the [Backup Configuration]; maybe I could search for 'no_cache.acl' to see which template has it. As for the Shorewall files, I haven't looked at them yet; when I get back to work I will post some insights about them.
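Something like this should turn up the template (the e-smith template paths are my assumption):

# grep -rn no_cache /etc/e-smith/templates /etc/e-smith/templates-custom /etc/squid 2>/dev/null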

lvmetad:

New LVM disks that appear on the system must be scanned before lvmetad knows about them. If lvmetad does not know about a disk, then LVM commands using lvmetad will also not know about it. When disks are added or removed from the system, lvmetad must be updated.

More info:

https://lists.fedoraproject.org/archives/list/users@lists.fedoraproject.org/thread/BCVOB7NVHU6ZTFA4DGB7BS42NX7ZANVA/

man lvmetad
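For reference, the cache can also be refreshed by hand; pvscan has a flag for exactly that:

# pvscan --cache    # re-scan devices and push the metadata to lvmetad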

augenrules:

It's about audit rules. Does it work if you execute augenrules on the command line?

suricata:

Some information about this error:

# augenrules
/usr/sbin/augenrules: No change

As for its journal, this is the output:

# journalctl -x -u auditd.service | cat
-- Logs begin at Mon 2018-10-01 18:55:33 CDT, end at Tue 2018-10-02 15:41:22 CDT. --
Oct 01 18:55:37 heimdall.dcserver.local systemd[1]: Starting Security Auditing Service...
-- Subject: Unit auditd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit auditd.service has begun starting up.
Oct 01 18:55:37 heimdall.dcserver.local auditd[625]: Started dispatcher: /sbin/audispd pid: 627
Oct 01 18:55:37 heimdall.dcserver.local auditd[625]: Init complete, auditd 2.8.1 listening for events (startup state enable)
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: /sbin/augenrules: No change
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: No rules
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: enabled 1
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: failure 1
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: pid 625
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: rate_limit 0
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: backlog_limit 8192
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: lost 0
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: backlog 1
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: enabled 1
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: failure 1
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: pid 625
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: rate_limit 0
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: backlog_limit 8192
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: lost 0
Oct 01 18:55:37 heimdall.dcserver.local augenrules[674]: backlog 1
Oct 01 18:55:37 heimdall.dcserver.local systemd[1]: Started Security Auditing Service.
-- Subject: Unit auditd.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit auditd.service has finished starting up.
-- 
-- The start-up result is done.

These are its related files:

# ls -lh /etc/audit/*
-rw-r-----. 1 root root 805 Aug 16 10:39 /etc/audit/auditd.conf
-rw-r-----. 1 root root  81 Sep 21 15:54 /etc/audit/audit.rules
-rw-r-----. 1 root root 127 Aug 16 10:39 /etc/audit/audit-stop.rules

/etc/audit/rules.d:
total 4.0K
-rw-------. 1 root root 163 Sep 21 15:48 audit.rules

Content of audit.rules:

# cat /etc/audit/audit.rules 
## This file is automatically generated from /etc/audit/rules.d
-D
-b 8192
-f 1

Content of rules.d/audit.rules:

## First rule - delete all
-D

## Increase the buffers to survive stress events.
## Make this bigger for busy systems
-b 8192

## Set failure mode to syslog
-f 1
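Side note: the "failure 1" lines in the journal look like nothing more than the status report of the -f 1 setting above (failure mode = log to syslog), not an actual failure. The same fields can be queried directly:

# auditctl -s    # prints enabled, failure, pid, rate_limit, backlog_limit, lost, backlog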

I'm still looking into what audit rules are for, but it seems to me this log warning is harmless, as you told me. Just to be sure, @mrmarkuz, could you please create an NS 7.5 instance in your Proxmox? I'm using the ISO with these checksums:

SHA1 cdb9e302d563d5abb500286946e88e33ec81058d
MD5 002228c20d0702b98568aff67319d5eb
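You can verify the downloaded ISO against those sums before installing (the filename is my guess for the 7.5.1804 image, adjust to your download):

# sha1sum nethserver-7.5.1804-x86_64.iso
# md5sum nethserver-7.5.1804-x86_64.iso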

After installation's done, I get those logs by running:

# journalctl -x | egrep -i 'warning|error|fail|unable'

As for the LVM warning:

Sep 27 08:06:50 ns75-template.local.durerocaribe.cu lvm[594]: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.

This gives me some insight:

# systemctl status lvm2*
ā— lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static; vendor preset: enabled)
   Active: active (running) since Tue 2018-10-02 19:24:48 CDT; 8min ago
     Docs: man:lvmetad(8)
 Main PID: 533 (lvmetad)
   CGroup: /system.slice/lvm2-lvmetad.service
           └─533 /usr/sbin/lvmetad -f

Oct 02 19:24:48 heimdall.dcserver.local systemd[1]: Started LVM2 metadata daemon.
Oct 02 19:24:48 heimdall.dcserver.local systemd[1]: Starting LVM2 metadata daemon...

ā— lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
   Active: active (exited) since Tue 2018-10-02 19:24:49 CDT; 8min ago
     Docs: man:dmeventd(8)
           man:lvcreate(8)
           man:lvchange(8)
           man:vgchange(8)
  Process: 514 ExecStart=/usr/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=0/SUCCESS)
 Main PID: 514 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/lvm2-monitor.service

Oct 02 19:24:49 heimdall.dcserver.local lvm[514]: 2 logical volume(s) in volume group "VolGroup" monitored
Oct 02 19:24:49 heimdall.dcserver.local systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.

ā— lvm2-pvscan@8:2.service - LVM2 PV scan on device 8:2
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
   Active: active (exited) since Tue 2018-10-02 19:24:50 CDT; 8min ago
     Docs: man:pvscan(8)
  Process: 597 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay %i (code=exited, status=0/SUCCESS)
 Main PID: 597 (code=exited, status=0/SUCCESS)

Oct 02 19:24:48 heimdall.dcserver.local systemd[1]: Starting LVM2 PV scan on device 8:2...
Oct 02 19:24:48 heimdall.dcserver.local lvm[597]: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Oct 02 19:24:50 heimdall.dcserver.local lvm[597]: 2 logical volume(s) in volume group "VolGroup" now active
Oct 02 19:24:50 heimdall.dcserver.local systemd[1]: Started LVM2 PV scan on device 8:2.

According to this output, it seems that lvmetad is instanced twice and both processes try to access device 8:2, which causes the warning. This is an assumption of mine; do you agree, @mrmarkuz?
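One way to check that is to compare the unit ordering with the boot journal for both units (unit names taken from the status output above):

# systemctl show lvm2-pvscan@8:2.service -p After -p Requires
# journalctl -b -u lvm2-lvmetad.service -u lvm2-pvscan@8:2.service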

I'll try and report…

Yes, it seems the scan has to wait for lvmetad to be ready. A scan only makes sense when lvmetad is fully updated.
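If the warning really bothers you, lvmetad can be disabled entirely so that scans read metadata from disk; a sketch of the CentOS 7 way, offered as a trade-off rather than a recommendation:

# sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf    # or edit by hand
# systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
# systemctl disable lvm2-lvmetad.service lvm2-lvmetad.socket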


As usual, thanks for helping out, @mrmarkuz. I had never heard of auditd until now; great tool, by the way.


Tested the ISO with the checksums you provided in a fresh Proxmox VM and I got the same augenrules "error", but it should do no harm.

Oct 7 00:01:09 localhost augenrules: failure 1


You can ignore these messages in the first post.

If you had them only once and LVM is working well, ignore them.

It looks like you created some bad rules on the firewall page, and Shorewall discards them while optimizing at compile time.
You need to check your config and search for conflicting rules.
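To see exactly which rule gets discarded, you can inspect the line the warning points to and re-run the compiler check; note that on NethServer /etc/shorewall/rules is template-generated, so the fix belongs in the firewall page rather than in the file:

# sed -n '120,130p' /etc/shorewall/rules    # context around line 127 from the warning
# shorewall check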
