brasiliano
Posts: 3
Joined: Wed Jun 12, 2019 6:53 pm

IPsec performance issue with bidirectional traffic

Wed Jun 12, 2019 8:02 pm

Hello,

I have set up an IPsec tunnel between my Raspberry Pi 3B+ and an x86 box over the Ethernet port. The Pi is running Raspbian (Linux 4.19.42-v7+) and the x86 box has Ubuntu 18.04.

Code:

pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.19.42-v7+ #1219 SMP Tue May 14 21:20:58 BST 2019 armv7l GNU/Linux
pi@raspberrypi:~ $
The test scenario is as follows:
  1. I start an iperf3 client (TCP) on the Pi. (The Pi is the source node and the x86 is the sink node.)
    • The average throughput is around 103 Mbps.
  2. Then I start an iperf3 client (TCP) on the x86 node while the iperf3 client on the Pi is still running. (The x86 is the source node and the Pi is the sink node.)
    • The average throughput seen by the client running on the x86 is around 70 Mbps.
    • The average throughput seen by the client running on the Pi drops from 103 Mbps to around 5 Mbps.
    • The iperf3 client running on the x86 reports quite a large number of TCP retransmissions.
  3. Then I stop the iperf3 client running on the Pi.
    • The average throughput reported by the iperf3 client running on the x86 jumps from 70 Mbps to 76 Mbps.
    • The iperf3 client running on the x86 still reports quite a large number of TCP retransmissions.
As you can see, the transmission capacity of the Pi is severely affected when there is incoming traffic on the IPsec tunnel. Is this due to a bug in the Linux kernel or a misconfiguration in Raspbian?

By the way, the CPUs are idle 67% of the time when the two flows run concurrently.
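For what it's worth, an aggregate idle figure can hide a single saturated core. A quick back-of-the-envelope check (a sketch; it assumes the 3B+'s four cores):

```python
# Convert the aggregate idle percentage into "cores' worth" of work.
# Assumes the 3B+'s four cores; crypto and softirq work often end up
# pinned to one or two of them, so ~1.3 busy cores is consistent with
# a per-core bottleneck even though the system looks mostly idle.
cores = 4
idle_fraction = 0.67

busy_cores = cores * (1 - idle_fraction)
print(f"~{busy_cores:.2f} cores' worth of CPU in use")
```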

When I run both iperf3 flows outside the IPsec tunnel (the traffic is not encrypted), I see the following throughput:
  • The iperf3 client running on the Pi transmits at an average rate of 137 Mbps.
  • The iperf3 client running on the x86 transmits at an average rate of 194 Mbps.
  • The CPUs are idle 86% of the time when the two flows run concurrently.
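Part of the 137 Mbps vs. 103 Mbps gap is simply ESP framing overhead, but only a small part. A rough estimate (assuming ESP tunnel mode with AES-CBC/HMAC-SHA1 over IPv4 and a 1500-byte MTU; your actual cipher suite may differ):

```python
# Back-of-the-envelope goodput comparison: clear text vs. ESP tunnel mode.
# Assumptions: IPv4, MTU 1500, AES-CBC/HMAC-SHA1-96; padding is ignored.
ETH_OVERHEAD = 38            # preamble + Ethernet header + FCS + inter-frame gap
MTU = 1500
TCP_IP_HEADERS = 40          # IPv4 (20) + TCP (20), no options

# ESP tunnel-mode additions: outer IP + ESP header + IV + trailer + ICV
ESP_OVERHEAD = 20 + 8 + 16 + 2 + 12

def goodput_fraction(payload, packet_bytes):
    return payload / (packet_bytes + ETH_OVERHEAD)

# Clear text: a full-MTU packet carries MTU - 40 bytes of TCP payload.
plain = goodput_fraction(MTU - TCP_IP_HEADERS, MTU)

# IPsec: the inner packet is shrunk so the ESP packet still fits the MTU.
inner = MTU - ESP_OVERHEAD
ipsec = goodput_fraction(inner - TCP_IP_HEADERS, MTU)

print(f"plain goodput fraction: {plain:.3f}")
print(f"ipsec goodput fraction: {ipsec:.3f}")
print(f"loss from headers alone: {1 - ipsec / plain:.1%}")
```

The header tax alone only accounts for about 4%, so most of the remaining drop should be crypto CPU time on the Pi.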
Another interesting aspect is that TCP does a lot of retransmissions even when I run an experiment with a single iperf3 flow (over the IPsec tunnel) with the Pi as the sink node. Below is the output of the iperf3 client running on the x86 node.

Code:

[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  10.0 MBytes  84.1 Mbits/sec    0    482 KBytes       
[  4]   1.00-2.01   sec  9.65 MBytes  80.4 Mbits/sec    0    941 KBytes       
[  4]   2.01-3.00   sec  9.37 MBytes  79.2 Mbits/sec    0   1.39 MBytes       
[  4]   3.00-4.00   sec  8.14 MBytes  68.3 Mbits/sec  1272    860 KBytes       
[  4]   4.00-5.00   sec  7.95 MBytes  66.7 Mbits/sec  915    907 KBytes       
[  4]   5.00-6.00   sec  8.02 MBytes  67.3 Mbits/sec  1388    329 KBytes       
[  4]   6.00-7.00   sec  8.60 MBytes  72.3 Mbits/sec  442   95.0 KBytes       
[  4]   7.00-8.00   sec  8.57 MBytes  71.9 Mbits/sec   95   76.3 KBytes       
[  4]   8.00-9.00   sec  8.40 MBytes  70.5 Mbits/sec   46    100 KBytes       
[  4]   9.00-10.00  sec  8.60 MBytes  71.9 Mbits/sec   80   95.0 KBytes       
[  4]  10.00-11.00  sec  8.48 MBytes  71.5 Mbits/sec   52    111 KBytes       
[  4]  11.00-12.00  sec  8.42 MBytes  70.6 Mbits/sec  101   91.0 KBytes       
[  4]  12.00-13.00  sec  8.50 MBytes  71.3 Mbits/sec   95   68.2 KBytes       
[  4]  13.00-14.00  sec  8.62 MBytes  72.3 Mbits/sec   67   76.3 KBytes       
[  4]  14.00-15.00  sec  8.54 MBytes  71.6 Mbits/sec   72   88.3 KBytes       
[  4]  15.00-16.00  sec  8.00 MBytes  67.1 Mbits/sec  113   57.5 KBytes       
[  4]  16.00-17.00  sec  8.68 MBytes  72.8 Mbits/sec   95   60.2 KBytes       
[  4]  17.00-18.00  sec  8.47 MBytes  71.2 Mbits/sec   51   87.0 KBytes       
[  4]  18.00-19.00  sec  8.40 MBytes  70.4 Mbits/sec  106   52.2 KBytes       
[  4]  19.00-20.00  sec  8.39 MBytes  70.4 Mbits/sec    0    120 KBytes       
[  4]  20.00-21.00  sec  8.30 MBytes  69.7 Mbits/sec  144   53.5 KBytes       
[  4]  21.00-22.00  sec  8.16 MBytes  68.4 Mbits/sec   48   54.9 KBytes       
[  4]  22.00-23.00  sec  8.55 MBytes  71.6 Mbits/sec   75   91.0 KBytes       
[  4]  23.00-24.00  sec  8.25 MBytes  69.3 Mbits/sec   50    110 KBytes       
[  4]  24.00-25.00  sec  8.65 MBytes  72.3 Mbits/sec  128   64.2 KBytes       
[  4]  25.00-26.00  sec  7.61 MBytes  63.9 Mbits/sec   74   69.6 KBytes       
[  4]  26.00-27.00  sec  8.50 MBytes  71.4 Mbits/sec   57   87.0 KBytes       
[  4]  27.00-28.00  sec  8.12 MBytes  68.1 Mbits/sec   87   73.6 KBytes       
[  4]  28.00-29.00  sec  8.52 MBytes  71.5 Mbits/sec    0    132 KBytes       
[  4]  29.00-30.00  sec  8.47 MBytes  71.1 Mbits/sec   99   76.3 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   255 MBytes  71.3 Mbits/sec  5752             sender
[  4]   0.00-30.00  sec   253 MBytes  70.6 Mbits/sec                  receiver
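Those retransmission counts work out to roughly 3% of segments (a rough calculation; it assumes an MSS of about 1448 bytes, i.e. MTU 1500 with TCP timestamps — the MSS over the tunnel is smaller, so treat this as an order-of-magnitude figure):

```python
# Rough retransmission rate from the iperf3 sender summary above.
# Assumes an MSS of ~1448 bytes (MTU 1500 with TCP timestamps); the
# actual MSS over the tunnel is smaller, so this is only approximate.
transferred = 255 * 1024 * 1024   # bytes the sender reported
retransmits = 5752
mss = 1448

segments = transferred / mss
rate = retransmits / segments
print(f"~{segments:.0f} segments sent, retransmission rate ~{rate:.1%}")
```

A ~3% retransmission rate on a clean, directly wired link suggests the drops happen in the receive path on the Pi rather than on the network itself.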
Thanks for your help!
Brasiliano

epoch1970
Posts: 3333
Joined: Thu May 05, 2016 9:33 am
Location: Paris, France

Re: IPsec performance issue with bidirectional traffic

Thu Jun 13, 2019 6:40 am

The 3B+ has a gigabit link, but its maximum throughput is around 300 Mbps because the Ethernet controller sits behind a USB 2.0 bus.
You need to set up flow control on the switch or the Pi in order to pace down faster peer hosts.
"If there is no solution, it is because there is no problem." Les Shadoks, J. Rouxel

brasiliano
Posts: 3
Joined: Wed Jun 12, 2019 6:53 pm

Re: IPsec performance issue with bidirectional traffic

Thu Jun 13, 2019 11:50 am

Hi,

The underlying cause of the problem does not seem to be flow control, for the following reason.

When the Pi is sourcing traffic over the IPsec tunnel, its throughput actually increases when the x86 starts transmitting in clear text (not encrypted).
  • The Pi's throughput increases from 103 Mbps to 113 Mbps.
  • The x86's throughput is around 213 Mbps.
So it looks like the issue only arises when there is bidirectional traffic on the IPsec tunnel.

Thanks for your feedback!
