I have a Proxmox 5.2 KVM template with NethServer 7.5 which is giving this journal output:
# journalctl -x | egrep -i 'warning|error|fail|unable'
Sep 27 08:06:45 ns75-template.local.durerocaribe.cu kernel: acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
Sep 27 08:06:45 ns75-template.local.durerocaribe.cu kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 27 08:06:50 ns75-template.local.durerocaribe.cu lvm[594]: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Sep 27 08:06:52 ns75-template.local.durerocaribe.cu augenrules[629]: failure 1
Sep 27 08:06:52 ns75-template.local.durerocaribe.cu augenrules[629]: failure 1
Sep 27 08:06:53 ns75-template.local.durerocaribe.cu systemd[1]: Dependency failed for Network Manager Wait Online.
-- Subject: Unit NetworkManager-wait-online.service has failed
-- Unit NetworkManager-wait-online.service has failed.
Sep 27 08:06:53 ns75-template.local.durerocaribe.cu systemd[1]: Job NetworkManager-wait-online.service/start failed with result 'dependency'.
This template is for quickly deploying a fresh NS 7.5 install, so I'm concerned that it's giving me these warnings and failures; this repeats on every boot.
Yes, these two only happen on boot. Could you tell me what they mean and/or why they happen, please?
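For what it's worth, a filter limited to the current boot makes it easier to confirm that; this is just a sketch of how I check it, using standard journalctl options, nothing NethServer-specific:

# journalctl -b -p warning
# journalctl -b -x | egrep -i 'warning|error|fail|unable'

The first command shows only messages of priority warning or higher from the current boot; the second is the same grep as above restricted to the current boot.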
Same as before, knowledge is always welcome.
No I haven't, but this NS instance was created using the [Backup Configuration]; maybe I could search for 'no_cache.acl' to see which template has it. As for Shorewall, I haven't looked at those files; when I get back to work I will post some insights about it.
New LVM disks that appear on the system must be scanned before lvmetad knows about them. If lvmetad does not know about a disk, then LVM commands using lvmetad will also not know about it. When disks are added or removed from the system, lvmetad must be updated.
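If the cache is just lagging behind at boot, a manual rescan should bring lvmetad back in sync; a minimal sketch with standard LVM commands (not NethServer-specific):

# pvscan --cache
# systemctl status lvm2-lvmetad.service

pvscan --cache repopulates the lvmetad cache from all visible physical volumes, and the status command confirms the daemon is running afterwards.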
# cat /etc/audit/audit.rules
## This file is automatically generated from /etc/audit/rules.d
-D
-b 8192
-f 1
Content of /etc/audit/rules.d/audit.rules:
## First rule - delete all
-D
## Increase the buffers to survive stress events.
## Make this bigger for busy systems
-b 8192
## Set failure mode to syslog
-f 1
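To double-check that these rules really compile and load cleanly, something like this should be enough (augenrules and auditctl both ship with the audit package; I'm only assuming the default rules.d layout):

# augenrules --check
# augenrules --load
# auditctl -l

--check compares /etc/audit/audit.rules against the files in rules.d, --load regenerates and loads them, and auditctl -l lists the rules currently loaded in the kernel.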
I'm still looking into what audit rules are for, but it seems to me this log warning is harmless, as you told me. Just to be sure, @mrmarkuz, could you please create a NS 7.5 instance in your Proxmox? I'm using the ISO with these checksums.
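In case it helps, I verify the downloaded ISO before building the template with something like the following (the filename is just a placeholder, adjust it to the actual image):

# sha256sum nethserver-*.iso

and compare the output against the checksum published for the release.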
Sep 27 08:06:50 ns75-template.local.durerocaribe.cu lvm[594]: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
This gives me some insight:
# systemctl status lvm2*
● lvm2-lvmetad.service - LVM2 metadata daemon
Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static; vendor preset: enabled)
Active: active (running) since Tue 2018-10-02 19:24:48 CDT; 8min ago
Docs: man:lvmetad(8)
Main PID: 533 (lvmetad)
CGroup: /system.slice/lvm2-lvmetad.service
└─533 /usr/sbin/lvmetad -f
Oct 02 19:24:48 heimdall.dcserver.local systemd[1]: Started LVM2 metadata daemon.
Oct 02 19:24:48 heimdall.dcserver.local systemd[1]: Starting LVM2 metadata daemon...
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
Active: active (exited) since Tue 2018-10-02 19:24:49 CDT; 8min ago
Docs: man:dmeventd(8)
man:lvcreate(8)
man:lvchange(8)
man:vgchange(8)
Process: 514 ExecStart=/usr/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=0/SUCCESS)
Main PID: 514 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/lvm2-monitor.service
Oct 02 19:24:49 heimdall.dcserver.local lvm[514]: 2 logical volume(s) in volume group "VolGroup" monitored
Oct 02 19:24:49 heimdall.dcserver.local systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
● lvm2-pvscan@8:2.service - LVM2 PV scan on device 8:2
Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
Active: active (exited) since Tue 2018-10-02 19:24:50 CDT; 8min ago
Docs: man:pvscan(8)
Process: 597 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay %i (code=exited, status=0/SUCCESS)
Main PID: 597 (code=exited, status=0/SUCCESS)
Oct 02 19:24:48 heimdall.dcserver.local systemd[1]: Starting LVM2 PV scan on device 8:2...
Oct 02 19:24:48 heimdall.dcserver.local lvm[597]: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Oct 02 19:24:50 heimdall.dcserver.local lvm[597]: 2 logical volume(s) in volume group "VolGroup" now active
Oct 02 19:24:50 heimdall.dcserver.local systemd[1]: Started LVM2 PV scan on device 8:2.
According to this output it seems that lvmetad is started twice and both processes try to access
device 8:2, which causes the warning. This is an assumption of mine; do you agree, @mrmarkuz?
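One quick way to test that assumption would be to check how many lvmetad processes are actually running and which units logged the warning; roughly (plain procps and journalctl commands):

# pgrep -a lvmetad
# journalctl -b -u lvm2-lvmetad.service -u lvm2-pvscan@8:2.service

If pgrep reports a single PID, the daemon itself is only running once, which would point to the pvscan job simply retrying while the cache is being rebuilt.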
If you get them only once, and LVM is working fine, ignore them.
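A quick sanity check that LVM is healthy, for example:

# pvs
# vgs
# lvs

If the physical volumes, volume groups and logical volumes all show up as expected, the boot-time warning can safely be ignored.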
You created some bad rules in the firewall page, and Shorewall tries to optimize them at compile time.
You need to check your config and search for conflicting rules.
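A quick way to spot them is to let Shorewall validate the configuration itself, for example:

# shorewall check

It compiles the configuration without applying it and reports the rules it complains about. As far as I know, the Shorewall files on NethServer are generated from templates, so fix the offending rules in the firewall page (or in a custom template) rather than editing the Shorewall files directly.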