iperf3 is an open-source tool for measuring network bandwidth and analyzing connection performance. With iperf3, you can run real-time network speed tests between two devices.

Install iperf3

Install iperf3 on both the client and the server.

Debian/Ubuntu

sudo apt install iperf3

CentOS/RHEL

sudo dnf install iperf3

(On older CentOS/RHEL releases that do not ship dnf, use yum instead.)

Firewall Configuration

By default, iperf3 listens on port 5201 (TCP for the control connection, and TCP or UDP for the test traffic depending on the mode). Ensure this port is open on the server so the client can connect.
If using ufw as the firewall, open port 5201 with the following commands:

sudo ufw allow 5201/tcp
sudo ufw allow 5201/udp

If using iptables, add the following rules:

sudo iptables -A INPUT -p tcp --dport 5201 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 5201 -j ACCEPT

If using firewalld, run the following commands:

sudo firewall-cmd --add-port=5201/tcp --permanent
sudo firewall-cmd --add-port=5201/udp --permanent
sudo firewall-cmd --reload

Network Speed Testing

Once the installation and configuration are complete, you are ready to run the network speed test.

On the server, start iperf3 in listening (server) mode:

iperf3 -s

On the client, run the following command to test the speed to the server:

iperf3 -c 170.64.184.198

Here, 170.64.184.198 is the server’s IP address; replace it with the address of your own server.
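If you run these tests regularly, you may want to script them and pull out just the summary figure. Below is a minimal sketch that extracts the receiver-side bitrate from saved iperf3 output; the here-doc stands in for a real log captured by redirecting the client command to a file, and the field positions assume the plain-text output format shown below.

```shell
# Sample summary lines standing in for a real log captured with:
#   iperf3 -c <server-ip> > result.txt
cat > result.txt <<'EOF'
[  5]   0.00-10.00  sec   386 MBytes   324 Mbits/sec    0             sender
[  5]   0.00-10.09  sec   386 MBytes   321 Mbits/sec                  receiver
EOF

# Print the receiver-side bitrate (fields 7 and 8 of the receiver line)
awk '/receiver/ { print $7, $8 }' result.txt
# prints "321 Mbits/sec"
```

The receiver figure is usually the one to script against, since it reflects what actually arrived at the far end.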

Output shown on the server while the test runs:

-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 94.237.15.2, port 43620
[  5] local 170.64.184.198 port 5201 connected to 94.237.15.2 port 43626
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  10.8 MBytes  90.1 Mbits/sec                  
[  5]   1.00-2.00   sec  41.9 MBytes   351 Mbits/sec                  
[  5]   2.00-3.00   sec  43.4 MBytes   364 Mbits/sec                  
[  5]   3.00-4.00   sec  37.9 MBytes   317 Mbits/sec                  
[  5]   4.00-5.00   sec  43.0 MBytes   361 Mbits/sec                  
[  5]   5.00-6.00   sec  42.1 MBytes   353 Mbits/sec                  
[  5]   6.00-7.00   sec  38.6 MBytes   324 Mbits/sec                  
[  5]   7.00-8.00   sec  39.6 MBytes   332 Mbits/sec                  
[  5]   8.00-9.00   sec  43.1 MBytes   362 Mbits/sec                  
[  5]   9.00-10.00  sec  42.1 MBytes   353 Mbits/sec                  
[  5]  10.00-10.09  sec  4.00 MBytes   359 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.09  sec   386 MBytes   321 Mbits/sec                  receiver

Output shown on the client once the test finishes:

Connecting to host 170.64.184.198, port 5201
[  5] local 94.237.15.2 port 43626 connected to 170.64.184.198 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  14.8 MBytes   124 Mbits/sec    0   7.90 MBytes       
[  5]   1.00-2.00   sec  41.9 MBytes   351 Mbits/sec    0   8.10 MBytes       
[  5]   2.00-3.00   sec  43.2 MBytes   363 Mbits/sec    0   8.10 MBytes       
[  5]   3.00-4.00   sec  38.0 MBytes   319 Mbits/sec    0   8.10 MBytes       
[  5]   4.00-5.00   sec  42.9 MBytes   360 Mbits/sec    0   8.10 MBytes       
[  5]   5.00-6.00   sec  42.4 MBytes   355 Mbits/sec    0   8.10 MBytes       
[  5]   6.00-7.00   sec  39.1 MBytes   328 Mbits/sec    0   8.10 MBytes       
[  5]   7.00-8.00   sec  39.0 MBytes   327 Mbits/sec    0   8.10 MBytes       
[  5]   8.00-9.00   sec  43.0 MBytes   361 Mbits/sec    0   8.10 MBytes       
[  5]   9.00-10.00  sec  42.2 MBytes   354 Mbits/sec    0   8.10 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   386 MBytes   324 Mbits/sec    0             sender
[  5]   0.00-10.09  sec   386 MBytes   321 Mbits/sec                  receiver

iperf Done.

Test Result Analysis

The test results above show the network performance between the client (94.237.15.2) and the server (170.64.184.198) using the TCP protocol. Below is an analysis based on the provided output:

Summary of Results

  • Average Sender Throughput: 324 Mbits/sec
  • Average Receiver Throughput: 321 Mbits/sec
  • Total Data Transfer: 386 MBytes in roughly 10 seconds
  • Retransmissions: 0 (no packets needed to be retransmitted)
  • Congestion Window (Cwnd): Stable around 8.10 MBytes

Explanation of Results

1. Throughput

  • Initial Throughput: In the first second, the throughput only reached 124 Mbits/sec. This typically happens because a new connection requires time to “ramp up” (TCP slow start).
  • Stable Throughput: After a few seconds, the throughput stabilized in the range of 320-360 Mbits/sec. This indicates that the network has sufficient bandwidth to support traffic at this level.
  • Overall Average: The average throughput over 10 seconds was 324 Mbits/sec, which is a fairly good value for an internet connection or local network.
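As a sanity check on these numbers: iperf3 reports transfer in binary MBytes (2^20 bytes each) but bitrate in decimal Mbits/sec (10^6 bits per second), so the 386 MBytes sent in 10 seconds reproduces the reported sender average:

```shell
# 386 MBytes (binary, 2^20 bytes each) sent in 10 seconds,
# expressed in decimal Mbits/sec (10^6 bits per second):
awk 'BEGIN { printf "%.0f Mbits/sec\n", 386 * 1048576 * 8 / 10 / 1e6 }'
# prints "324 Mbits/sec" (323.8 rounded), matching the sender summary
```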

2. Retransmissions

  • Number of Retransmissions: 0
    • There were no retransmissions at all, indicating that the connection is very stable and there was no packet loss. This is a good indicator that the network between the client and server is functioning optimally.

3. Congestion Window (Cwnd)

  • Cwnd Value: Stabilized around 8.10 MBytes after the initial seconds.
    • This value shows that the TCP congestion control algorithm (e.g., Cubic or Reno) successfully managed data flow without causing bottlenecks.

4. Difference Between Sender and Receiver

  • Sender: 324 Mbits/sec
  • Receiver: 321 Mbits/sec
    • There is a slight difference between sender and receiver throughput (3 Mbits/sec). This is normal due to TCP protocol overhead, latency, or buffering on the receiver side.
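Expressed as a fraction of the sender rate, that gap is well under one percent:

```shell
# Sender/receiver gap as a percentage of the sender throughput
awk 'BEGIN { printf "%.1f%%\n", (324 - 321) / 324 * 100 }'
# prints "0.9%"
```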

Interpretation of Results

Network Performance

  • Limited Bandwidth: With a maximum throughput of around 324 Mbits/sec, these results indicate that your network or internet connection’s bandwidth is capped at this level. Most likely, this is the limit imposed by your ISP (Internet Service Provider) or network hardware (e.g., router or switch).
  • Connection Stability: Zero retransmissions indicate a very stable connection with no packet loss. (Jitter is only reported for UDP tests, so it does not appear in this TCP output.)
  • Scaling Capability: The climb from 124 Mbits/sec in the first second to a steady 320-360 Mbits/sec shows TCP slow start ramping up to the bandwidth the path can sustain.

Recommendations

  1. Check ISP Limitations
    • If you are using an internet connection, contact your ISP to confirm whether the bandwidth you are receiving matches the purchased package.
    • If bandwidth is limited by the ISP, consider upgrading your service package.
  2. Optimize Hardware
    • Ensure that routers, switches, and NICs support high throughput.
    • Use high-quality Ethernet cables (e.g., Cat6 or Cat6a) if using a wired network.
  3. Test with UDP
    • To check if bandwidth can be further increased, try running a test with the UDP protocol:
      iperf3 -c 170.64.184.198 -u -b 500M
      
    • This will help identify if the bottleneck is due to TCP flow control.
  4. Use Parallel Streams
    • If bandwidth is still limited, try using parallel streams to maximize utilization:
      iperf3 -c 170.64.184.198 -P 4
      
  5. Monitor Latency
    • Use ping to monitor latency to the server:
      ping -c 10 170.64.184.198
      
    • High latency reduces TCP throughput, since each round trip caps how much data the congestion window can deliver.
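The follow-up tests above can be gathered into one small script. This is only a sketch: SERVER defaults to the example address used in this article, and by default the script prints each command without running it; set DRY_RUN=0 to actually execute the tests.

```shell
#!/bin/sh
# Sketch: run the recommended follow-up tests in sequence.
# SERVER is a placeholder -- point it at your own server.
SERVER="${SERVER:-170.64.184.198}"
DRY_RUN="${DRY_RUN:-1}"   # default: only print the commands; set DRY_RUN=0 to execute

run() {
    echo "+ $*"
    if [ "$DRY_RUN" = "0" ]; then "$@"; fi
}

run iperf3 -c "$SERVER"              # baseline TCP test
run iperf3 -c "$SERVER" -u -b 500M   # UDP test at a 500 Mbits/sec target rate
run iperf3 -c "$SERVER" -P 4         # four parallel TCP streams
run ping -c 10 "$SERVER"             # latency check
```

Printing the command before executing it keeps a record of exactly which flags produced each result, which helps when comparing runs later.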

Performance Conclusion

  • Good for General Applications: A bandwidth of 324 Mbits/sec is sufficient for general applications such as browsing, HD video streaming, and transferring small to medium files.
  • Not Ideal for Intensive Applications: For intensive applications like 4K/8K streaming, large data backups, or high-throughput real-time communication, this bandwidth may be inadequate.

Conclusion

The test results show that the tested network has stable performance with an average throughput of 324 Mbits/sec. There were no retransmissions, indicating a highly reliable connection. However, this bandwidth may be limited by the ISP or network hardware.

For general applications, this performance is quite satisfactory. However, if higher bandwidth is required, consider optimizing hardware, upgrading the ISP package, or using techniques like parallel streams.