PPPoE speed issue

AFAIK QoS is not meant to speed everything up, but to keep traffic flowing smoothly according to rules.
QoS also eats some CPU power, and so does PPPoE, which additionally adds a bit of network overhead due to encapsulation.
How many vCores can your Proxmox VM use?
Does physical core usage increase when using PPPoE or QoS? Are you reserving any CPU share for this host?

I’ve noticed the pppoe process taking up to 75% of a single core while a speedtest was running. I increased the VM from 6 to 12 vCores, but the process seems to keep using only a single core.

The host has 2 Intel Xeon E5-2620 CPUs @ 2.00GHz, for a total of 24 vCores. It’s a homelab server and basically idle while testing (total host CPU usage around 2% when not speedtesting). The only other VM running at the moment is the other “router” with pfSense, which does not show the pppoe issue.

Because it’s not multithreaded, the load cannot be split among cores. At least, AFAIK.
Maybe the Fritz!Box RISC SoC handles the load better for that task.
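To confirm the single-core bottleneck, you can watch per-thread CPU usage of the pppd process during a speedtest. A sketch, assuming `pidstat` from the sysstat package is installed and pppd is the process carrying the load (the hot process may show up as `pppd` or `pppoe` depending on the plugin in use):

```shell
# Sample per-thread CPU usage of pppd once per second while the speedtest runs.
pidstat -t -u -p "$(pidof pppd)" 1

# Or watch it live; press "1" in top to see per-core load as well.
top -H -p "$(pidof pppd)"
```

If one thread sits near 100% of a single core while the others stay idle, adding more vCores won’t help.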

Having a public IP address on RED is quite nice for some things, but there’s no real downside in letting the Fritz!Box do the PPPoE and setting up proper port forwarding.
Does your connection have a public static IP?

I prefer PPPoE, to have a public IP on RED and avoid double NAT.

I’ve already set up a DMZ on the Fritz!Box to pass all the ports, and it’s working, but I would like to understand why the pppoe problem shows up only on NethServer.

How can pfSense (FreeBSD based) and IPFire (Linux based) not have the same issue?

Also, distros have different goals.
pfSense is purely networking oriented. NethServer is closer to a multifunction server, which can also be a network gateway with some power features.

Indeed, that’s why I wanted to test NethServer and use it as a replacement for gateway, AD and mail, but for now I guess I have to stick with pfSense until this problem is fixed.

Thanks :slight_smile:

Or keep pfSense as firewall and use NethServer as a mail server (maybe in the Orange zone? :wink: )

Yeah, but I was testing NS not just for myself but for my customers: small businesses that currently have 3–4 physical appliances/servers (router, UTM, DC). I was planning to replace those with a single server running NS bare metal, acting as router, gateway, mail server and so on.

I know it’s a single point of failure, but sometimes, for example in a small office with few clients, they don’t want to spend money on multiple appliances/servers :wink:

pfSense had this kind of issue before on virtualisation platforms such as ESX and Proxmox, until pfSense introduced the Disable hardware checksum offload option.

@dev_team does NethServer have the same kind of behavior/option?

Also, in Proxmox, you could compare the interface type used by pfSense vs NethServer and play with this option.
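On Proxmox, the NIC model of a VM can be inspected and changed from the host shell. A sketch, assuming a hypothetical VM ID of 100 and bridge vmbr0 (adjust both to your setup):

```shell
# Show the current NIC configuration of the VM (hypothetical VM ID 100).
qm config 100 | grep ^net

# Switch net0 to VirtIO; the multiqueue option can help spread interrupt load.
qm set 100 -net0 virtio,bridge=vmbr0,queues=4

# Or switch to the emulated Intel NIC for comparison.
qm set 100 -net0 e1000,bridge=vmbr0
```

The VM usually needs a stop/start (not just a reboot from inside the guest) for the NIC model change to take effect.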

And as @pike said, QoS probably makes a difference.

Yeah, I had that issue back in the day, but if I remember right it was a totally different problem: it took ages just to open the login screen. I guess we’re facing another type of issue here, related to pppoe. The offload issue on pfSense was highly noticeable; here I see the issue only if my ISP provides more than 150Mbit of bandwidth.

Of course, if I let my ISP router do the PPPoE the problem disappears, and the RED interface, configured with a static private IP, can reach 190Mbit/s, whether QoS is enabled or not.

Changing the interface from VirtIO to E1000 didn’t make a noticeable difference; the PPPoE process still eats around 80–90% of one vCPU.

Thanks, Regards.

A similar issue in the past was solved with a fresh install plus the virtIO driver, but you are already using virtIO:

Yes, GRO and GSO can be disabled, using the ethtool command or the relevant configuration files.
An example command to make settings permanent on NethServer could be (change enpXXX):

db networks setprop enpXXX ethtool_opts "\"-K \${DEVICE} tso off\""

Experiment with “tso”, “gro” and “gso”. You could disable them all.
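Before making the change permanent, the offloads can be toggled at runtime to see whether they affect the pppoe CPU load at all. A sketch (replace enpXXX with your RED interface name):

```shell
# Show the current offload settings of the interface.
ethtool -k enpXXX

# Disable TCP segmentation, generic segmentation and generic receive offload.
ethtool -K enpXXX tso off gso off gro off

# Re-run the speedtest; if there is no improvement, re-enable the offloads.
ethtool -K enpXXX tso on gso on gro on
```

These runtime settings are lost at reboot, which is why the `db networks setprop` command above is needed to persist them.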


Hello guys, sorry for the late reply.

So, I’ve switched eth0 back to pppoe and used ethtool to disable hardware offloading, but the problem still exists.

It doesn’t matter whether tso, gro or gso is enabled or disabled in my case; the pppoe process eats up to 90% of a single core when I reach around 150Mbit/s. :frowning:

Thanks.

Could you please double check that you are using the high speed pppoe plugin?

The command

grep rp-pppoe /etc/sysconfig/network-scripts/ifcfg-ppp0

should return

PLUGIN='/usr/lib64/pppd/2.4.5/rp-pppoe.so'

Yep! The PLUGIN='/usr/lib64/pppd/2.4.5/rp-pppoe.so' string is in the config file.

Unfortunately I don’t have a PPPoE link to test (I never had one).
I’m sorry, I can’t help anymore.


Don’t worry, in the next few days I’ll try to install NS on a physical machine to check whether the pppoe issue persists.
Thanks, regards.


Hello again, as promised :wink: I’ve installed NS bare metal on an APU2e4 PC Engines machine, and the pppoe speed issue seems even more noticeable here:

RED Interface as DHCP (pppoe on the fritzbox router):

As you can see, I can easily reach around 22MByte/s.

RED interface as PPPoE:

As you can see, I can barely hit 6MB/s, and the core usage skyrockets :slight_smile:

The tests were made on a clean installation with only one client connected. Since the pppoe process seems to eat a lot of CPU resources, I guess the speed will be even worse in a real-world scenario with many services and clients connected :wink:

Cheers

Maybe the PPPoE process stays stuck on one core and isn’t efficient at low clock frequencies?
That could explain the lower performance on a smaller/less powerful device like the APU2e4.
AFAIK the Fritz!Box uses an ARM SoC, so that’s a whole other story.
@filippo_carletti is there any available way to switch to a multithreaded/multicore PPPoE encapsulation process?

Thanks to @francio87 I made some tests, but when we switched ppp to synchronous mode the connection became unstable.
Unfortunately upstream doesn’t seem to care much about pppoe; I don’t have a connection to test, and I prefer to use a dedicated pppoe router. :slight_smile:
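For reference, synchronous PPP encapsulation is enabled through pppd’s `sync` option. This is only a config sketch of what such a test could look like (the exact file NethServer uses may differ, and, as noted above, sync mode made the connection unstable for us):

```
# pppd options for the pppoe link (e.g. in the options or peer file):
# "sync" switches pppd to synchronous HDLC encoding, which avoids the
# async-framing work, but requires the pppoe transport to run in sync mode too.
sync
```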
