I have configured two NethSecurity 8 devices in HA, and I’ve noticed that in the limitations, it states that you cannot use virtual VLANs, only physical interfaces.
Is this a limitation because HA is in beta, or is it a limitation of OpenWrt itself?
It’s a limitation of the setup script that aims to simplify the configuration for the user.
I do not know if we are going to support it in the future.
OpenWrt itself does not have this limitation, because it does not manage the HA network configuration at all: you must keep it in sync between the nodes manually.
Can I configure it manually? If so, will it interfere with any future upgrades?
You can do almost anything supported by keepalived using the uci command, but you will need to take care of the config on both nodes.
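A minimal sketch of that workflow, assuming the standard OpenWrt `uci` CLI and the keepalived init script; the config file path is just a placeholder:

```shell
# Inspect the current keepalived settings (run on each node)
uci show keepalived

# Example: point keepalived at a hand-maintained config file
# (placeholder path; repeat the same change on the other node by hand)
uci set keepalived.globals.alt_config_file='/ha/mykeepalived.conf'
uci commit keepalived
/etc/init.d/keepalived restart
```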
It’s quite likely.
If you want to try adding support for VLANs on logical interfaces, you should dig inside the following scripts:
Thanks Giacomo, last question…
If I configure everything without using the Nethesis scripts (from the command line), do you still think subsequent updates will interfere?
(PS: In my opinion, 99% of companies that need HA have at least one management network, one green network, two WANs, one guest Wi-Fi network, and one production network… Add time clocks, NVRs… I think it’d be difficult to dedicate a physical interface to each VLAN in companies that might need HA… @alefattorini What do you think about it from a commercial point of view?)
As long as you do not use the same names for the configuration sections, you should not have any problem.
You can create multiple VLANs on the same ethernet interface.
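As an illustration, here is the rough shape of two 802.1q VLANs on the same ethernet interface in /etc/config/network, using the netifd syntax of recent OpenWrt releases; the VLAN IDs and addresses below are placeholders, not taken from this thread:

```
config device
	option type '8021q'
	option ifname 'eth0'
	option vid '10'
	option name 'eth0.10'

config interface 'guest1'
	option device 'eth0.10'
	option proto 'static'
	option ipaddr '192.168.10.1'
	option netmask '255.255.255.0'
```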
We are also thinking about a way to avoid the current limitation. One idea could be to skip the automatic configuration and let the user configure the network manually.
Hi Giacomo, I’ve manually configured everything without using the Nethesis script, and it all works, even with virtual VLANs. I can’t see the status on the dashboard, but that’s fine.
I’ve put the configuration file in a folder in the root directory and modified the /etc/config/keepalived file like this:
```
config globals 'globals'
	option alt_config_file "/ha/mykeepalived.conf"
```
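For context, a standalone file referenced by alt_config_file uses plain keepalived syntax rather than UCI. A minimal sketch of such a file; the interface, router ID, priority and address are placeholders, not values from this thread:

```
vrrp_instance LAN {
    state MASTER              # BACKUP on the second node
    interface eth0
    virtual_router_id 51      # must match on both nodes
    priority 150              # lower value on the backup node
    advert_int 1
    virtual_ipaddress {
        192.168.1.254/24
    }
}
```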
Could you please confirm whether future updates will leave the /etc/config/keepalived file untouched?
Thanks in advance
Riccardo
I’m glad you fixed it and have a working configuration!
> Could you please confirm whether future updates will leave the /etc/config/keepalived file untouched?
No, I can’t.
It’s quite likely that we are going to modify it. If we can keep the config inside UCI, I think we are not going to change the alt_config_file option, but don’t take it for granted.
Thank you, I understand. I won’t use it to avoid any risk.
I hope you will integrate it into future versions!
Thanks for the support!!!
@giacomo I saw that the new NethSecurity update (8.7.1) has been released. I noticed in the release information that it now includes support for VLANs. Can you please confirm if this is correct?
Also, I didn’t see PPPoE mentioned in the manual. Is PPPoE supported as well?
Thanks!
Both VLANs and PPPoE are supported.
OMG, with WireGuard, VLAN and PPPoE in HA, NethSecurity is becoming the top!!!
Nice work devs!
Hello Devs, I am currently testing High Availability.
Everything seems to be working correctly with the HA setup and the VRRP configuration, except for VLAN interfaces that originated from an NS7 migration.
The issue occurs when the VRRP role shifts back to the original Primary/Master node.
I have two types of VLANs on the same physical interface (eth0):
Migrated VLAN: eth0.10 (VLAN ID 10, migrated from NS7)
New VLAN: eth0.55 (VLAN ID 55, created directly on NS8)
VLAN eth0.55 works perfectly. Failover and failback cycles are stable.
VLAN eth0.10 causes instability on failback.
When the current Master fails and the Backup node takes over (becoming the new Master), the failover works fine. However, when the original Master recovers and tries to reclaim the Master role (failback), it enters a loop: it becomes Master for a brief moment and then immediately enters the Fault state again.
This is because the eth0.10 VLAN interface goes down temporarily during the transition, causing the Keepalived check to fail, which triggers the Fault state.
I have already tried the following, without success:
Recreating the eth0.10 interface.
Changing the network zone of the eth0.10 interface.
If I create a new VLAN on the same physical interface (eth0) but use a different VLAN ID (e.g., creating eth0.100 instead of eth0.10), the issue disappears.
This leads me to believe the problem is specifically related to the VLAN ID 10 being carried over from the NS7 migration, possibly due to a persistent or residual configuration conflict related to that specific ID.
Any suggestions?
Thank you for your help!
I do not have many ideas, but I can give you a couple of general suggestions:
BINGO!
On firewall 1 (master), which comes from a migration, the command:

```shell
uci show network | grep eth0.10
```

gave:

```
network.vlan10a22f4.name='eth0.10'
network.ns_ee34293d.name='eth0.10'
network.guest1.device='eth0.10'
```
On firewall 2 (slave):

```shell
uci show network | grep eth0.10
```

```
network.ns_240b79cd.name='eth0.10'
network.guest1.device='eth0.10'
```
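The asymmetry above (two sections named `eth0.10` on the master, only one on the slave) can be spotted mechanically. A minimal sketch; the sample data reproduces the firewall 1 output shown in this thread, while on a live node you would pipe `uci show network` instead of reading from a file:

```shell
#!/bin/sh
# Look for duplicate UCI sections claiming the same VLAN device name.
# Sample data copied from the firewall 1 (master) output above.
cat > /tmp/uci-network.txt <<'EOF'
network.vlan10a22f4.name='eth0.10'
network.ns_ee34293d.name='eth0.10'
network.guest1.device='eth0.10'
EOF

# A healthy config has exactly one section whose .name is eth0.10;
# a count above 1 points at a leftover (e.g. migrated) section.
count=$(grep -c "\.name='eth0\.10'" /tmp/uci-network.txt)
echo "sections naming eth0.10: $count"
```

With the sample data the count is 2, flagging the duplicate that was later deleted.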
So, I manually deleted both the original interface configuration and the VLAN configuration for eth0.10 from the NS8 GUI, and then manually removed the leftover section:

```shell
uci delete network.vlan10a22f4
uci commit network
```
And the problem is gone!!
PS:
I tried removing the leftover name before recreating the interface and VLAN, but it didn’t work; the VRRP traffic was already on a physical interface.