
RPi4 - serious (UDP) network issues

Posted: Wed Feb 05, 2020 2:03 pm
by soundcheck
Hi.

I'm running an RPi4 with the latest 4.19.97-v8+ kernel.
The OS is Raspbian Buster.

While running iperf3 UDP tests I encountered the following situation:

Code: Select all

RPi4: 
iperf3 -s -V

NUC:
iperf3 -c 192.xxx.x.xxx -V -i 1 -b 0 -u -t 60


Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-60.00  sec  6.62 GBytes   948 Mbits/sec  0.000 ms  0/4908290 (0%)  sender
[  5]   0.00-60.00  sec  3.43 GBytes   491 Mbits/sec  0.012 ms  2362789/4908290 (48%)  receiver
CPU Utilization: local/sender 47.2% (8.3%u/38.9%s), remote/receiver 83.0% (7.2%u/75.9%s)

As you can see, the "Lost Datagrams" number is awfully high.

I did a lot of testing - port swap, cable swap, PSU swap, kernel swap (32/64-bit), OS swap, NIC swap (external USB3 Gbit NIC) and more.
Nothing changed the situation (very much).
One learning, though: the external USB NIC shows twice the error-free bandwidth of the internal NIC (400 vs. 200 MBit/s).

Now, to make a long and frustrating story short: today I managed to get a huge step closer to pinning down the issue.

I simply started iperf3 with a specific CPU affinity of 2 ("-A 2").

Code: Select all

RPi4: 
iperf3 -s -V -A 2
And here is the result:

Code: Select all

Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-60.00  sec  6.63 GBytes   949 Mbits/sec  0.000 ms  0/4914870 (0%)  sender
[  5]   0.00-60.00  sec  6.62 GBytes   948 Mbits/sec  0.013 ms  2409/4914870 (0.049%)  receiver
CPU Utilization: local/sender 50.8% (8.6%u/42.2%s), remote/receiver 80.5% (16.9%u/63.6%s)
:shock: That looks almost perfect now.
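If you want to double-check where the pinned server actually ended up, `taskset` (from util-linux) can report a process's CPU affinity. A quick sketch - using `pidof iperf3` here is an assumption about your setup:

```shell
# Show the CPU affinity list of a process.
# "$$" (this shell) is just a stand-in; for the running iperf3 server
# you would use its PID, e.g.:  taskset -cp "$(pidof iperf3)"
taskset -cp $$
```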

Still not absolutely happy, I wondered why I lose datagrams right at the beginning.

Here comes the solution (or rather, the workaround):

Subject: CPU governors

I was running the "ondemand" governor, idling @600MHz.
I figured out I could get rid of the lost datagrams altogether by switching to the "performance" governor.
Wow. That's something. It also means the frequency ramp-up itself seems to cause datagram loss.

To make the main workaround persistent I isolated CPU0 in cmdline.txt (isolcpus=0).

OK. I have a workaround for now. And lost 1 CPU. ;)

After all, there seems to be a serious network performance issue if an IRQ shares a CPU
with a user process, or if there are load changes.

Isolating such an issue pretty much goes over my head, so any ideas or hints are welcome.

I also experienced annoying micro-stalls while typing on the RPi4 while logged in via ssh.
That got much better with the workaround in place.
Just to mention it: that micro-stall issue was pretty much gone while using the external NIC with a Realtek chip inside.

Another one:
While doing the investigation I stepped over another potential issue in the early phase.
I forced the internal network interface to 100MBit/s to get around the losses.

Code: Select all

ethtool -s eth0 speed 100 duplex full
That looked good on the "Lost Datagrams" side: 0 losses.

However, with the NIC now at 100MBit, I found - by looking at the dmesg output - that the eth0 interface was restarting every 3-5 seconds.
I doubt that this is normal behavior. I think that also needs a more in-depth analysis.

To wrap it up:
Something doesn't seem to be working properly - either in my setup OR inside the RPi4 - when it comes to Ethernet-based networking.
Since I had been using the RPi4's internal WiFi only from day 1, I didn't run into that issue earlier.

Once more, any further ideas are welcome. (Not that I haven't been looking around for ideas for quite some time. ;) ) I do think
a designer should have a look at it. If it turns out to be a real issue, I'll open a trouble ticket on GitHub. Just let me know.

SC

Re: RPi4 - serious (UDP) network issues

Posted: Wed Feb 05, 2020 3:39 pm
by knute
Don't know if this will help you but we found that increasing the receive buffer size significantly reduced the number of lost UDP packets.

Re: RPi4 - serious (UDP) network issues

Posted: Wed Feb 05, 2020 4:52 pm
by soundcheck
IMO increasing buffers - if that'd help at all - is just another workaround to cope with a high inefficiency or a flaw (>>bufferbloat!).
My NUC runs with the same buffer size ("net.core.rmem_max = 212992") btw and is not losing any data.

I do suspect an inefficiency in the IRQ/softirq management arena. That NIC is still rather new, and the whole IRQ handling changed.
Perhaps somebody should look into the "interrupt moderation"/"coalesce parameters" area. That's usually a tricky area, causing
hiccups here and there. But again, I'm not an expert in that area.



Anyhow. Which buffers did you change to what size?

Re: RPi4 - serious (UDP) network issues

Posted: Thu Feb 06, 2020 1:57 am
by knute
soundcheck wrote:
Wed Feb 05, 2020 4:52 pm
Anyhow. Which buffers did you change to what size?
We've got about 200 NUCs running Xubuntu 18.04. We send them a lot of UDP datagrams. For those we created a file 80-sysctl.conf in /etc/sysctl.d. In that we put:

net.core.rmem_max=4194304

This improved the lost packet situation for us.
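For reference, the whole file can stay tiny. A sketch of /etc/sysctl.d/80-sysctl.conf with the value from this thread (the rmem_default line is my assumption, following the suggestion below to raise the default as well):

```
# /etc/sysctl.d/80-sysctl.conf
# Raise the maximum (and optionally the default) socket receive buffer.
net.core.rmem_max=4194304
net.core.rmem_default=4194304
```

After editing, `sudo sysctl --system` reloads the files under /etc/sysctl.d without a reboot.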

I've seen folks suggest that you should change net.core.rmem_default as well. If I run sysctl -a | grep rmem on my Pi 3 I get:

pi@raspberrypi3B-1:/etc/sysctl.d $ sudo sysctl -a | grep rmem
net.core.rmem_default = 163840
net.core.rmem_max = 163840
net.ipv4.tcp_rmem = 4096 131072 6291456
net.ipv4.udp_rmem_min = 4096

A 163840-byte buffer is pretty small if you are sending a lot of data. There is always some delivery rate that will overrun the receive buffer, and there is also a limit to how fast data can be drained from it.

I don't know how fast the NIC will run on a Pi 4, but I don't think it is anywhere near 1G speed. If you are sending UDP packets at that rate, a lot of them are going into the bit bucket.
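To put numbers on that: with the datagram size and bitrate from the first post, the default buffer covers only about a millisecond of wire-speed traffic. A back-of-envelope sketch (1448-byte payloads and ~949 Mbit/s are taken from the iperf3 output above):

```shell
# How much traffic fits in the default receive buffer at wire speed?
rmem=163840          # default net.core.rmem_max seen on the Pi (bytes)
pkt=1448             # iperf3's UDP payload size in this thread (bytes)
rate=949000000       # observed bitrate (bits/s)
pps=$(( rate / (pkt * 8) ))   # datagrams per second on the wire
awk -v r="$rmem" -v p="$pkt" -v pps="$pps" \
  'BEGIN { printf "%d datagrams, ~%.2f ms of traffic\n", r/p, (r/p)/pps*1000 }'
# -> 113 datagrams, ~1.38 ms of traffic
```

So any stall in draining the socket longer than a millisecond or two overflows the buffer, which fits the burst losses seen while the governor ramps up.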

Re: RPi4 - serious (UDP) network issues

Posted: Thu Feb 06, 2020 6:28 am
by ejolson
knute wrote:
Thu Feb 06, 2020 1:57 am
I don't know how fast the NIC will run on a Pi 4, but I don't think it is anywhere near 1G speed. If you are sending UDP packets at that rate, a lot of them are going into the bit bucket.
My impression is that gigabit Ethernet on the Pi 4B runs at gigabit wire speed and ping latency is about the same as a typical PC. My tests with iperf3 are at

https://www.raspberrypi.org/forums/view ... 9#p1533989

I didn't think to try iperf3 in UDP mode and will update the thread when I do.

Re: RPi4 - serious (UDP) network issues

Posted: Thu Feb 06, 2020 7:24 am
by soundcheck
knute wrote:
Thu Feb 06, 2020 1:57 am
I don't know how fast the NIC will run on a Pi 4, but I don't think it is anywhere near 1G speed. If you are sending UDP packets at that rate, a lot of them are going into the bit bucket.
No need for speculation. There's proof that it'll work under certain conditions:

As I said: isolating CPU0 and running the performance governor on my RPi4 gets me full Gbit UDP speed.
And that with default buffer sizes!

Code: Select all

iperf3 -s -V -A 2
iperf3 -c 192.xxx.x.xxx -V -i 1 -b 0 -u -t 60

Code: Select all

Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-60.00  sec  6.63 GBytes   949 Mbits/sec  0.000 ms  0/4915150 (0%)  sender
[  5]   0.00-60.00  sec  6.63 GBytes   949 Mbits/sec  0.013 ms  0/4915150 (0%)  receiver
CPU Utilization: local/sender 50.5% (8.8%u/41.7%s), remote/receiver 58.7% (13.0%u/45.7%s)


@ejolson

Just running the iperf3 TCP test won't tell you much about the link (HW and SW) quality. The fact that you can achieve
maximum TCP bitrates doesn't mean you achieve them in the most efficient way. (And all of us running Pis know that efficiency is key. ;) )
Due to the fragile nature of UDP, UDP tests will tell you much more about the state of your system/network.
And once UDP finally performs well, there'll IMO be no need to run TCP tests anymore. :)

Re: RPi4 - serious (UDP) network issues

Posted: Fri Feb 07, 2020 7:25 am
by soundcheck
Hi.

Another thought/question.

Gbit networking is a pretty intense task. It causes a huge number of IRQs on the system.

The RPi 4B comes with a new interrupt controller, the GIC-400.
As far as I understand, it is supposed to handle and distribute the interrupts, offloading that task from the CPUs.

What you can see on the system is this:

Code: Select all

cat /proc/interrupts 
           CPU0       CPU1       CPU2       CPU3       
  3:      87326      28733      38835      25778     GICv2  30 Level     arch_timer
  6:          0          0          0          0     GICv2 112 Level     bcm2708_fb DMA
 16:       1630          0          0          0     GICv2  65 Level     fe00b880.mailbox
 20:          0          0          0          0     GICv2 169 Level     brcmstb_thermal
 21:      22863          0          0          0     GICv2 158 Level     mmc0
 22:          0          0          0          0     GICv2  48 Level     arm-pmu
 23:          0          0          0          0     GICv2  49 Level     arm-pmu
 24:          0          0          0          0     GICv2  50 Level     arm-pmu
 25:          0          0          0          0     GICv2  51 Level     arm-pmu
 27:      79256          0          0          0     GICv2 189 Level     eth0
 28:      62710          0          0          0     GICv2 190 Level     eth0
 34:         29          0          0          0     GICv2  66 Level     VCHIQ doorbell
 36:     154324          0          0          0  Brcm_MSI 524288 Edge      xhci_hcd
IPI0:      3667      18733      40736       8310       Rescheduling interrupts
IPI1:       224       6952      19268      73173       Function call interrupts

Even with the GIC-400 in place, the interrupts all show up on CPU0. In December I asked a related question in this forum over here: https://www.raspberrypi.org/forums/view ... p?t=259208.

I was told that just the IRQ handlers would still run on "CPU" (which one?) - which of course doesn't explain why the actual interrupt counts are still all shown on CPU0 in the printout above!?
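Those per-CPU counts are easy to total up. A sketch that parses a captured sample of the listing above (on a live system, pipe `cat /proc/interrupts` into the awk instead):

```shell
# Sum eth0 interrupt counts per CPU from a /proc/interrupts snapshot.
sample=' 27:      79256          0          0          0     GICv2 189 Level     eth0
 28:      62710          0          0          0     GICv2 190 Level     eth0'
echo "$sample" | awk '/eth0/ { c0+=$2; c1+=$3; c2+=$4; c3+=$5 }
  END { printf "eth0 IRQs per CPU: %d %d %d %d\n", c0, c1, c2, c3 }'
# -> eth0 IRQs per CPU: 141966 0 0 0
```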

Let's assume it's as it was said in December, and the listing above is simply false/misleading: just the IRQ handlers run on a/the "CPU"(s).
Wouldn't it be logical to conclude that the IRQ handlers are not up to the Gbit task? That I'm experiencing an IRQ-handler-related performance issue?
The fact that isolating CPU0 - the rather obvious home of the network-related IRQ handler - and thereby keeping additional userspace load off it, mitigates the network issue, would at least somewhat support this vague idea.
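For what it's worth, Linux does expose per-IRQ affinity via /proc/irq/N/smp_affinity (a hex CPU bitmask). Whether the Pi 4's Ethernet IRQs can actually be moved I can't confirm, so treat this as a generic sketch:

```shell
# smp_affinity is a hex bitmask: bit n set = CPU n may service the IRQ.
# Build the mask for CPUs 1-3 (i.e. everything except CPU0):
mask=0
for cpu in 1 2 3; do mask=$(( mask | (1 << cpu) )); done
printf 'mask=%x\n' "$mask"    # -> mask=e
# Applying it needs root; 27 is one of the eth0 IRQs in the listing above:
#   echo e > /proc/irq/27/smp_affinity
```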

Any RPI developers around? ;)

Re: RPi4 - serious (UDP) network issues

Posted: Fri Feb 21, 2020 10:36 am
by soundcheck
Hmmh. It seems nobody from the Foundation is interested. :?: :?

FYI.

Another development.

I ran some more tests. I am now running two RPi4s: one as server, one as client.
CPU0 is isolated on both.

It turns out that there's another rather obvious factor to keep in mind.

The CPU that's running the benchmark needs a clock speed of at least 1000MHz to cope with (UDP) Gbit Ethernet.
Going below that won't achieve Gbit levels and will, once more, cause severe UDP datagram loss.

E.g. running the ondemand governor (which goes down to 600MHz) will cause severe UDP datagram loss until the governor
gets the CPU above 1000MHz.
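If fixing the governor to "performance" is too drastic, an alternative sketch (assuming the standard Linux cpufreq sysfs layout; I haven't verified this myself) is to keep ondemand but raise its floor, e.g. from /etc/rc.local:

```shell
# /etc/rc.local (before "exit 0"): keep ondemand, but never clock below 1 GHz
for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq; do
    echo 1000000 > "$f"    # value is in kHz
done
```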

Enjoy.

Re: RPi4 - serious (UDP) network issues

Posted: Fri Feb 21, 2020 3:36 pm
by ejolson
soundcheck wrote:
Fri Feb 21, 2020 10:36 am
Hmmh. It seems nobody from the Foundation is interested. :?: :?

FYI.

Another development.

I ran some more tests. I am now running two RPi4s: one as server, one as client.
CPU0 is isolated on both.

It turns out that there's another rather obvious factor to keep in mind.

The CPU that's running the benchmark needs a clock speed of at least 1000MHz to cope with (UDP) Gbit Ethernet.
Going below that won't achieve Gbit levels and will, once more, cause severe UDP datagram loss.

E.g. running the ondemand governor (which goes down to 600MHz) will cause severe UDP datagram loss until the governor
gets the CPU above 1000MHz.

Enjoy.
It seems like you have developed a good workaround for the packet loss. Would it be possible to post details of the minimal configuration needed to
  • isolate CPU0 so other tasks don't run on it, and
  • set the governor so 1000 MHz is the minimum clock?
I think this would be useful for people (myself included) setting up systems and file servers that rely on UDP.

Re: RPi4 - serious (UDP) network issues

Posted: Fri Feb 21, 2020 5:30 pm
by soundcheck
My current way of working around the issue:

1. I add

Code: Select all

isolcpus=0
to /boot/cmdline.txt to avoid any interference with the NIC.

2. I simply nail the CPUs to max performance by adding

Code: Select all

##cpu clock
arm_freq=1500
##disable CPU throttling
force_turbo=1
to /boot/config.txt.

This way the governor doesn't have any impact anymore.
Of course you could also set the governor to "performance"; I prefer to do it the "RPi" way - as shown above - though.

And again: before doing anything, run the iperf3 tests yourself. I haven't seen anybody confirm my findings yet.
Therefore I see a chance that e.g. my router (the only part I haven't swapped) could still play a role in this story.

Re: RPi4 - serious (UDP) network issues

Posted: Fri Feb 21, 2020 5:39 pm
by ejolson
soundcheck wrote:
Fri Feb 21, 2020 5:30 pm
My current way of working around the issue:

1. I add

Code: Select all

isolcpus=0
to /boot/cmdline.txt to avoid any interference with the NIC.

2. I simply nail the CPUs to max performance by adding

Code: Select all

##cpu clock
arm_freq=1500
##disable CPU throttling
force_turbo=1
to /boot/config.txt.

This way the governor doesn't have any impact anymore.
Of course you could also set the governor to "performance"; I prefer to do it the "RPi" way - as shown above - though.

And again: before doing anything, run the iperf3 tests yourself. I haven't seen anybody confirm my findings yet.
Therefore I see a chance that e.g. my router (the only part I haven't swapped) could still play a role in this story.
Thanks! The thread starting at

viewtopic.php?f=28&t=253337#p1610772

seems to confirm your findings. I've been planning to check here and I'm glad to have a working configuration to test.

Re: RPi4 - serious (UDP) network issues

Posted: Sat Feb 22, 2020 9:49 am
by soundcheck
I also read that thread. And I also read the related trouble ticket in the RPi kernel git.

These folks were very quick to blame the router. To me it seemed that the assigned
developer over at GitHub was quite happy to have a convenient hook to jump on that train
and quickly close the ticket.

To me it seemed that it never occurred to them that the brand-new RPi4 NIC might have a compatibility (firmware/driver) issue!?
Of course - besides a potential NIC issue - there could be more sources, e.g. the brand-new interrupt controller and interrupt management.
The whole PCI infrastructure is new...

However, I'm not deep enough into the subject to jump to final conclusions here. I think, once the issue is confirmed, it would take some serious debugging to isolate it. In case it'd be an RPi4 HW-related issue - oh dear. ;)

I think it's important that some of you folks run that simple iperf3 test on different networks.
In my opinion everybody should run these tests anyway, to see how their network setup performs.

If some of you confirm the behavior, I'll open a ticket over at GitHub.

THX

Re: RPi4 - serious (UDP) network issues

Posted: Thu Mar 05, 2020 6:24 pm
by ejolson
ejolson wrote:
Fri Feb 21, 2020 5:39 pm
soundcheck wrote:
Fri Feb 21, 2020 5:30 pm
My current way of working around the issue:

1. I add

Code: Select all

isolcpus=0
to /boot/cmdline.txt to avoid any interference with the NIC.

2. I simply nail the CPUs to max performance by adding

Code: Select all

##cpu clock
arm_freq=1500
##disable CPU throttling
force_turbo=1
to /boot/config.txt.

This way the governor doesn't have any impact anymore.
Of course you could also set the governor to "performance"; I prefer to do it the "RPi" way - as shown above - though.

And again: before doing anything, run the iperf3 tests yourself. I haven't seen anybody confirm my findings yet.
Therefore I see a chance that e.g. my router (the only part I haven't swapped) could still play a role in this story.
Thanks! The thread starting at

viewtopic.php?f=28&t=253337#p1610772

seems to confirm your findings. I've been planning to check here and I'm glad to have a working configuration to test.
I have run a test using iperf3 and find that about 100,000 UDP packets are dropped at the beginning of the test, apparently before the CPU governor has spun up. More details are at

viewtopic.php?f=91&t=266723#p1622107

Re: RPi4 - serious (UDP) network issues

Posted: Sun May 24, 2020 10:07 am
by sukhusukhu
Hello!

I just set up OMV on my RPi4 4 GB. I have a 2TB WD Red drive (ext4) in a powered enclosure connected to a USB 3.0 port on the Pi. I am sharing it over NFS to my Mac. However, I am getting no more than 9Mbits/sec read/write over Ethernet (not WiFi).

I tried your method to isolate cpu0 like this:

Code: Select all

isolcpus=0 console=serial0,115200 console=tty1 root=PARTUUID=738a4d67-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

Code: Select all

[all]
#dtoverlay=vc4-fkms-v3d
##cpu clock
arm_freq=1500
##disable CPU throttling

However, I still have the same problem. Running iperf3 (with affinity 2), I still get this:

Code: Select all

Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-20.00  sec  1.34 GBytes   574 Mbits/sec  0.000 ms  0/990358 (0%)  sender
[  5]   0.00-20.26  sec   228 MBytes  94.4 Mbits/sec  0.113 ms  825293/990358 (83%)  receiver
CPU Utilization: local/sender 45.4% (4.0%u/41.4%s), remote/receiver 11.7% (0.7%u/11.0%s)

Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5] (sender statistics not available)
[  5]   0.00-20.26  sec   228 MBytes  94.4 Mbits/sec  0.113 ms  825293/990358 (83%)  receiver
CPU Utilization: local/receiver 11.7% (0.7%u/11.0%s), remote/sender 0.0% (0.0%u/0.0%s)
iperf 3.6
Linux NAS 4.19.97-v7l+ #1294 SMP Thu Jan 30 13:21:14 GMT 2020 armv7l
Any help super appreciated!

Re: RPi4 - serious (UDP) network issues

Posted: Sun May 24, 2020 7:38 pm
by ejolson
sukhusukhu wrote:
Sun May 24, 2020 10:07 am
Hello!

I just set up OMV on my RPi4 4 GB. I have a 2TB WD Red drive (ext4) in a powered enclosure connected to a USB 3.0 port on the Pi. I am sharing it over NFS to my Mac. However, I am getting no more than 9Mbits/sec read/write over Ethernet (not WiFi).

I tried your method to isolate cpu0 like this:

Code: Select all

isolcpus=0 console=serial0,115200 console=tty1 root=PARTUUID=738a4d67-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

Code: Select all

[all]
#dtoverlay=vc4-fkms-v3d
##cpu clock
arm_freq=1500
##disable CPU throttling

However, I still have the same problem. Running iperf3 (with affinity 2), I still get this:

Code: Select all

Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-20.00  sec  1.34 GBytes   574 Mbits/sec  0.000 ms  0/990358 (0%)  sender
[  5]   0.00-20.26  sec   228 MBytes  94.4 Mbits/sec  0.113 ms  825293/990358 (83%)  receiver
CPU Utilization: local/sender 45.4% (4.0%u/41.4%s), remote/receiver 11.7% (0.7%u/11.0%s)

Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5] (sender statistics not available)
[  5]   0.00-20.26  sec   228 MBytes  94.4 Mbits/sec  0.113 ms  825293/990358 (83%)  receiver
CPU Utilization: local/receiver 11.7% (0.7%u/11.0%s), remote/sender 0.0% (0.0%u/0.0%s)
iperf 3.6
Linux NAS 4.19.97-v7l+ #1294 SMP Thu Jan 30 13:21:14 GMT 2020 armv7l
Any help super appreciated!
Those results are so bad that I suspect a network cabling problem. The issue could also be your switch, even if it appears to be compatible with other computers.

viewtopic.php?t=249861

Re: RPi4 - serious (UDP) network issues

Posted: Mon May 25, 2020 5:45 am
by soundcheck
I'd agree with ejolson.

Re: RPi4 - serious (UDP) network issues

Posted: Mon May 25, 2020 7:52 am
by sukhusukhu
Is there a way I can confirm where the problem is? (Sorry, still fairly new at this.)
I just bought these cables; they are all CAT6 from Amazon. My router is a Linksys AC1200 dual-band WiFi router (the blue one).
I have no other devices connected to the router except the Pi4 and my Mac.

UPDATE:
I restarted my Mac after mounting the NFS share and I am getting a very decent 100Mbps R/W now. I saw on another thread on this forum that restarting your Mac after mounting solves the speed problem, and surprisingly it worked. I am uploading to the NAS right now at 15-20MB/sec over WiFi, so very decent.

However, iperf3 is still showing 85% datagram loss. I think there might be something wrong with the iperf3 utility, based on a very old thread here:
https://github.com/esnet/iperf/issues/296

Let me know if you have any insights!

Re: RPi4 - serious (UDP) network issues

Posted: Tue May 26, 2020 12:24 pm
by soundcheck
Look: the default NFS transport is TCP, and TCP usually won't lose data. You have surely hit the wrong thread
with your problems.

That's not what this thread is about. We're also not talking about 100MBit/s.

We're simply talking about the Gbit UDP capabilities and related flaws/weak spots of the RPi4.

Good luck to you.

Re: RPi4 - serious (UDP) network issues

Posted: Thu Jun 04, 2020 3:21 pm
by rodrigouroz
It seems I'm not alone.
I've just set up a brand-new RPi4 with DietPi to be used as a media server (qBittorrent, Plex, etc.), and NetData was driving me crazy with notifications about high UDP receive-buffer errors.
I started investigating and stumbled upon this thread.
Here are my results:

Code: Select all

root@mediaserver:/etc/netdata/health.d# iperf3 -s -V -A 2
iperf 3.6
Linux mediaserver 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Time: Thu, 04 Jun 2020 14:36:00 GMT
Accepted connection from 192.168.1.9, port 61437
      Cookie: 4lloemggcskio2hwtkdniobpsdliwweycxpo
[  5] local 192.168.1.3 port 5201 connected to 192.168.1.9 port 51800
Starting Test: protocol: UDP, 1 streams, 1448 byte blocks, omitting 0 seconds, 60 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  47.5 MBytes   399 Mbits/sec  0.016 ms  165574/199994 (83%)
[  5]   1.00-2.00   sec  49.2 MBytes   413 Mbits/sec  0.091 ms  167619/203247 (82%)
[  5]   2.00-3.00   sec  45.1 MBytes   378 Mbits/sec  0.051 ms  174951/207598 (84%)
[  5]   3.00-4.00   sec  46.7 MBytes   392 Mbits/sec  0.008 ms  178014/211813 (84%)
[  5]   4.00-5.00   sec  41.6 MBytes   349 Mbits/sec  0.081 ms  131703/161863 (81%)
[  5]   5.00-6.00   sec  41.6 MBytes   349 Mbits/sec  0.033 ms  160107/190198 (84%)
[  5]   6.00-7.00   sec  39.9 MBytes   335 Mbits/sec  0.029 ms  162319/191222 (85%)
[  5]   7.00-8.00   sec  38.0 MBytes   319 Mbits/sec  0.010 ms  182178/209712 (87%)
[  5]   8.00-9.00   sec  44.0 MBytes   369 Mbits/sec  0.036 ms  190598/222439 (86%)
[  5]   9.00-10.00  sec  46.4 MBytes   390 Mbits/sec  0.205 ms  140717/174344 (81%)
[  5]  10.00-11.00  sec  38.8 MBytes   325 Mbits/sec  0.137 ms  182387/210458 (87%)
[  5]  11.00-12.00  sec  38.7 MBytes   325 Mbits/sec  0.019 ms  175824/203859 (86%)
[  5]  12.00-13.00  sec  41.4 MBytes   347 Mbits/sec  0.401 ms  160960/190950 (84%)
[  5]  13.00-14.00  sec  34.3 MBytes   288 Mbits/sec  0.033 ms  173143/197986 (87%)
[  5]  14.00-15.00  sec  34.0 MBytes   285 Mbits/sec  0.031 ms  168230/192827 (87%)
[  5]  15.00-16.00  sec  33.7 MBytes   283 Mbits/sec  0.225 ms  180840/205278 (88%)
[  5]  16.00-17.00  sec  33.6 MBytes   282 Mbits/sec  0.028 ms  173650/197994 (88%)
[  5]  17.00-18.00  sec  31.7 MBytes   266 Mbits/sec  0.017 ms  172153/195107 (88%)
[  5]  18.00-19.00  sec  34.0 MBytes   285 Mbits/sec  0.033 ms  178091/202693 (88%)
[  5]  19.00-20.00  sec  33.6 MBytes   282 Mbits/sec  0.096 ms  116568/140880 (83%)
[  5]  20.00-21.00  sec  34.1 MBytes   286 Mbits/sec  0.017 ms  126862/151564 (84%)
[  5]  21.00-22.00  sec  32.7 MBytes   275 Mbits/sec  0.068 ms  147332/171046 (86%)
[  5]  22.00-23.00  sec  33.3 MBytes   279 Mbits/sec  0.038 ms  170886/195011 (88%)
[  5]  23.00-24.00  sec  32.2 MBytes   270 Mbits/sec  0.030 ms  173170/196506 (88%)
[  5]  24.00-25.00  sec  31.7 MBytes   266 Mbits/sec  0.030 ms  179055/201992 (89%)
[  5]  25.00-26.00  sec  30.2 MBytes   253 Mbits/sec  0.012 ms  153011/174891 (87%)
[  5]  26.00-27.00  sec  31.0 MBytes   260 Mbits/sec  0.031 ms  184823/207298 (89%)
[  5]  27.00-28.00  sec  29.6 MBytes   249 Mbits/sec  0.039 ms  184665/206126 (90%)
[  5]  28.00-29.00  sec  31.6 MBytes   265 Mbits/sec  0.044 ms  182062/204920 (89%)
[  5]  29.00-30.00  sec  33.0 MBytes   277 Mbits/sec  0.034 ms  172514/196389 (88%)
[  5]  30.00-31.00  sec  33.8 MBytes   283 Mbits/sec  0.173 ms  177997/202468 (88%)
[  5]  31.00-32.00  sec  33.1 MBytes   277 Mbits/sec  0.035 ms  178213/202164 (88%)
[  5]  32.00-33.00  sec  32.2 MBytes   270 Mbits/sec  0.160 ms  178906/202215 (88%)
[  5]  33.00-34.00  sec  33.2 MBytes   278 Mbits/sec  0.027 ms  132557/156569 (85%)
[  5]  34.00-35.00  sec  33.9 MBytes   284 Mbits/sec  0.069 ms  128378/152912 (84%)
[  5]  35.00-36.00  sec  34.2 MBytes   287 Mbits/sec  0.030 ms  94734/119525 (79%)
[  5]  36.00-37.00  sec  32.4 MBytes   272 Mbits/sec  0.018 ms  162394/185870 (87%)
[  5]  37.00-38.00  sec  34.4 MBytes   289 Mbits/sec  0.047 ms  153830/178769 (86%)
[  5]  38.00-39.00  sec  33.3 MBytes   279 Mbits/sec  0.031 ms  175650/199773 (88%)
[  5]  39.00-40.00  sec  34.3 MBytes   287 Mbits/sec  0.025 ms  95788/120601 (79%)
[  5]  40.00-41.00  sec  32.8 MBytes   276 Mbits/sec  0.056 ms  103223/127009 (81%)
[  5]  41.00-42.00  sec  34.1 MBytes   286 Mbits/sec  0.029 ms  59256/83934 (71%)
[  5]  42.00-43.00  sec  32.8 MBytes   275 Mbits/sec  0.038 ms  89864/113642 (79%)
[  5]  43.00-44.00  sec  34.3 MBytes   288 Mbits/sec  0.031 ms  91731/116583 (79%)
[  5]  44.00-45.00  sec  34.0 MBytes   285 Mbits/sec  0.028 ms  107417/132059 (81%)
[  5]  45.00-46.00  sec  32.8 MBytes   275 Mbits/sec  0.054 ms  77343/101079 (77%)
[  5]  46.00-47.00  sec  34.3 MBytes   288 Mbits/sec  0.095 ms  111883/136718 (82%)
[  5]  47.00-48.00  sec  31.9 MBytes   267 Mbits/sec  0.345 ms  138874/161949 (86%)
[  5]  48.00-49.00  sec  26.0 MBytes   219 Mbits/sec  0.038 ms  174392/193256 (90%)
[  5]  49.00-50.00  sec  28.7 MBytes   241 Mbits/sec  0.031 ms  147717/168492 (88%)
[  5]  50.00-51.00  sec  34.8 MBytes   291 Mbits/sec  0.325 ms  139960/165144 (85%)
[  5]  51.00-52.00  sec  34.2 MBytes   288 Mbits/sec  0.082 ms  165559/190327 (87%)
[  5]  52.00-53.01  sec  33.8 MBytes   281 Mbits/sec  0.257 ms  167404/191866 (87%)
[  5]  53.01-54.00  sec  32.9 MBytes   278 Mbits/sec  0.174 ms  163844/187687 (87%)
[  5]  54.00-55.00  sec  34.5 MBytes   289 Mbits/sec  0.048 ms  145127/170085 (85%)
[  5]  55.00-56.00  sec  32.4 MBytes   272 Mbits/sec  0.248 ms  151940/175385 (87%)
[  5]  56.00-57.00  sec  34.5 MBytes   290 Mbits/sec  0.043 ms  174498/199513 (87%)
[  5]  57.00-58.00  sec  33.9 MBytes   284 Mbits/sec  0.061 ms  147416/171929 (86%)
[  5]  58.00-59.00  sec  34.0 MBytes   285 Mbits/sec  0.023 ms  170167/194803 (87%)
[  5]  59.00-60.00  sec  34.2 MBytes   287 Mbits/sec  0.009 ms  155113/179892 (86%)
[  5]  60.00-60.09  sec  3.69 MBytes   346 Mbits/sec  0.006 ms  249/2918 (8.5%)
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5] (sender statistics not available)
[  5]   0.00-60.09  sec  2.06 GBytes   295 Mbits/sec  0.006 ms  9171430/10701341 (86%)  receiver
CPU Utilization: local/receiver 27.9% (2.8%u/25.1%s), remote/sender 0.0% (0.0%u/0.0%s)
iperf 3.6
Linux mediaserver 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l
And that's after setting the CPU governor to performance and the receive buffers to

Code: Select all

root@mediaserver:/etc/netdata/health.d# sysctl net.core | grep mem
net.core.optmem_max = 10240
net.core.rmem_default = 65536
net.core.rmem_max = 26214400
net.core.wmem_default = 65536
net.core.wmem_max = 26214400
I haven't isolated the CPU, though, as I'm wondering if this is something I can just live with.

Re: RPi4 - serious (UDP) network issues

Posted: Thu Jun 04, 2020 3:27 pm
by rodrigouroz
OK, I've found something, and even though I haven't fully understood what's going on, I thought it was relevant enough to share.
I have qBittorrent running with a capped speed. When the speed is capped, I see the errors mentioned in my previous comment.
If I remove the cap and let it do whatever it wants with the speed, the errors go away. Nothing. Zero.

I'm perplexed, but it definitely looks like a software issue. I'm going to try a different torrent client and see what happens.

Re: RPi4 - serious (UDP) network issues

Posted: Thu Jun 04, 2020 6:52 pm
by knute
rodrigouroz wrote:
Thu Jun 04, 2020 3:27 pm
OK, I've found something, and even though I haven't fully understood what's going on, I thought it was relevant enough to share.
I have qBittorrent running with a capped speed. When the speed is capped, I see the errors mentioned in my previous comment.
If I remove the cap and let it do whatever it wants with the speed, the errors go away. Nothing. Zero.

I'm perplexed, but it definitely looks like a software issue. I'm going to try a different torrent client and see what happens.
I'm curious whether qBittorrent is just throwing away packets that exceed the throttle you have set. Is there a handshake between qBittorrent and whatever it is connected to?