Setting up a 10Gb network card under Linux for working with raw Ethernet frames


I'm not sure whether this question fits the site rules, but here goes.

There is a 10Gb card (the brand is not important: the result was reproduced on two other, different cards) in a Debian Buster machine connected directly to the data-source machine by fiber. A ~9Gb/s data stream was set up, consisting of ~9000-byte Ethernet frames containing nothing except the sender's MAC, the recipient's MAC, a custom EtherType, and a packet counter.
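For reference, such a frame can be modeled in a few lines. The layout (6-byte destination MAC, 6-byte source MAC, 2-byte EtherType, then a 64-bit counter) is assumed from the description above, and the EtherType value used here is the IEEE "local experimental" one, not necessarily what was actually on the wire:

```python
import struct

# Layout assumed from the question; 0x88B5 is the IEEE local-experimental
# EtherType, used here as a stand-in for the real custom value.
ETHERTYPE_EXPERIMENTAL = 0x88B5

def build_frame(dst: bytes, src: bytes, counter: int, frame_len: int = 9000) -> bytes:
    """6B dst MAC + 6B src MAC + 2B EtherType + 8B counter + zero padding."""
    header = dst + src + struct.pack("!H", ETHERTYPE_EXPERIMENTAL)
    body = struct.pack("!Q", counter)
    return header + body + b"\x00" * (frame_len - len(header) - len(body))

def parse_counter(frame: bytes) -> int:
    """Extract the 64-bit counter that follows the 14-byte Ethernet header."""
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype != ETHERTYPE_EXPERIMENTAL:
        raise ValueError(f"unexpected EtherType 0x{ethertype:04x}")
    (counter,) = struct.unpack_from("!Q", frame, 14)
    return counter
```

Note that a ~9000-byte frame only fits if the interface MTU is raised well above the default 1500, which matches the `mtu 12000` in the ifconfig output below.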

Measurements with the wireshark, tshark, and tcpdump utilities (although they all work through the same library anyway) show that roughly every second packet is missed. My crude self-written capture code shows about the same. Meanwhile, under Windows 7 the same Wireshark (with a large allocated buffer) catches all packets.

Checking the network bandwidth with the iperf3 utility over TCP gives a stable 9.8Gb/s.
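The iperf3 check amounts to running a server on one side and a client on the other; a rough sketch (the address and duration are placeholders, not the actual setup):

```shell
# On the receiving machine:
iperf3 -s

# On the sending machine (10.0.0.2 is a placeholder), 30-second TCP test:
iperf3 -c 10.0.0.2 -t 30
```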

# uname -a

Linux debian 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2 (2019-08-28) x86_64 GNU/Linux

# ifconfig enp1s0

enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 12000
        inet  netmask  broadcast
        inet6 fe80::b2c5:54ff:feff:f37f  prefixlen 64  scopeid 0x20<link>
        ether b0:c5:54:ff:f3:7f  txqueuelen 3000  (Ethernet)
        RX packets 1501482032  bytes 13216245041216 (12.0 TiB)
        RX errors 0  dropped 1501472982  overruns 0  frame 0
        TX packets 75  bytes 7809 (7.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 34  memory 0x90400000-90410000  
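The dropped value that ifconfig prints is read from the kernel's per-interface statistics. A minimal sketch of pulling it out of /proc/net/dev, assuming the standard column layout:

```python
def rx_dropped_from_proc(proc_net_dev: str, ifname: str) -> int:
    """Return the RX 'drop' counter for one interface from the text of
    /proc/net/dev.  After 'iface:' the receive columns are: bytes,
    packets, errs, drop, fifo, frame, compressed, multicast."""
    for line in proc_net_dev.splitlines():
        line = line.strip()
        if line.startswith(ifname + ":"):
            fields = line.split(":", 1)[1].split()
            return int(fields[3])  # the RX 'drop' column
    raise KeyError(ifname)
```

The same number is also exposed as `/sys/class/net/enp1s0/statistics/rx_dropped`, which is convenient for watching the counter grow while traffic is flowing.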

# ethtool -g enp1s0

Ring parameters for enp1s0:
Pre-set maximums:
RX:     1365
RX Mini:    0
RX Jumbo:   0
TX:     2048
Current hardware settings:
RX:     1365
RX Mini:    0
RX Jumbo:   0
TX:     2048
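Two ethtool knobs worth knowing about here: enlarging the RX ring, and the driver's own statistics. In this particular output the pre-set RX maximum (1365) is already the current setting, so the ring cannot be made any bigger on this card; the commands below are a sketch for hardware that does have headroom:

```shell
# Raise the RX ring toward the card's pre-set maximum (no effect here,
# since current RX already equals the maximum of 1365):
ethtool -G enp1s0 rx 1365

# Driver/NIC-level counters, including per-queue drops, updated live:
ethtool -S enp1s0 | grep -i drop
```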

# ethtool -k enp1s0

Features for enp1s0:
rx-checksumming: on
tx-checksumming: on
    tx-checksum-ipv4: on
    tx-checksum-ip-generic: off [fixed]
    tx-checksum-ipv6: off [fixed]
    tx-checksum-fcoe-crc: off [fixed]
    tx-checksum-sctp: off [fixed]
scatter-gather: on
    tx-scatter-gather: on
    tx-scatter-gather-fraglist: on
tcp-segmentation-offload: on
    tx-tcp-segmentation: on
    tx-tcp-ecn-segmentation: off [fixed]
    tx-tcp-mangleid-segmentation: off
    tx-tcp6-segmentation: off [fixed]
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-gso-partial: off [fixed]
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: off [fixed]
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed] 

I don't know the network stack well and am frankly out of my depth when it comes to network tuning, so I'm asking the community for help, or at least a direction in which to dig. This line is of particular interest: RX errors 0 dropped 1501472982 overruns 0 frame 0


At this point the matter is closed: I placed too much trust in the utilities (wireshark, tshark, tcpdump) combined with the "dropped 1501472982" line from ifconfig. What these "dropped" packets actually are (the counters also appear in the output of the programs mentioned) has to be worked out case by case, but they are not irretrievably lost packets.
In my case, it was enough to increase these kernel parameters:
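The exact list of parameters did not survive in this copy of the answer. As a hedged sketch only: the kernel settings usually raised for high-rate packet capture are the socket receive buffers and the backlog queue; the names below are real sysctl keys, but the values are illustrative, not the author's:

```shell
# Illustrative values, not the ones from the original answer:
sysctl -w net.core.rmem_max=268435456        # max SO_RCVBUF a socket may request
sysctl -w net.core.rmem_default=268435456    # default socket receive buffer
sysctl -w net.core.netdev_max_backlog=10000  # per-CPU queue between driver and stack
```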


On the existing data stream, the self-written code (a slightly modified version of this ) now catches every packet; an attempt to measure the maximum throughput gave ~9.9Gb/s.
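The loss-detection part of such self-written code can be sketched as follows (Python used for illustration; the counter offset assumes the frame layout described in the question):

```python
import struct

ETH_HDR_LEN = 14  # dst MAC (6) + src MAC (6) + EtherType (2)

def count_gaps(frames):
    """Count packets missed in a capture by looking for jumps in the
    64-bit counter stored right after the Ethernet header (layout
    assumed from the question's description)."""
    missed = 0
    prev = None
    for frame in frames:
        (counter,) = struct.unpack_from("!Q", frame, ETH_HDR_LEN)
        if prev is not None and counter > prev + 1:
            missed += counter - prev - 1
        prev = counter
    return missed

# Obtaining the frames themselves would look roughly like this
# (requires root; 0x0003 is ETH_P_ALL):
#   import socket
#   s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0003))
#   s.bind(("enp1s0", 0))
#   frames = (s.recv(65535) for _ in range(1_000_000))
```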
