Originally Posted by TheFu
i218
I have no issues at all with i210, i211 or Intel PRO/1000 drivers. These are usually igb and e1000e drivers.
In theory, the i218 device should be using the e1000e driver.
https://www.intel.com/content/www/us...-products.html
So you'll want to validate that. Lots of different ways - inxi -Nxx, lshw, lspci, ... Pay attention to the exact 3rd+ level versioning of the adapter and the driver. Really don't want to manually build and load a driver if that can be avoided. It would almost be better if the NIC was faulty. $25 for a new NIC makes this go away.
Sorry for the delay. I got sidetracked with other issues. I began exploring the driver issue and also tried a new NIC: a cheap StarTech ST1000SPEXI, which uses the i210 chipset. Unfortunately, according to iperf3 it didn't make a difference. That adapter does appear to be using the igb driver.
lshw
Code:
*-network
description: Ethernet interface
product: I210 Gigabit Network Connection
vendor: Intel Corporation
physical id: 0
bus info: pci@0000:04:00.0
logical name: enp4s0
version: 03
serial: e8:ea:6a:09:5a:ec
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=igb driverversion=5.6.0-k duplex=full firmware=3.25, 0x800005cd ip=10.1.10.20 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:37 memory:fa200000-fa2fffff ioport:c000(size=32) memory:fa300000-fa303fff memory:fa100000-fa1fffff
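Following the driver-validation advice above, here's a quick sketch (assumes `ethtool` is installed; `enp4s0` is the interface name from the lshw output) to get a second opinion on what the driver and link actually report:

```shell
#!/bin/sh
# Interface name taken from the lshw output above -- adjust for your box.
IFACE="${IFACE:-enp4s0}"

# Driver, driver version, and firmware bound to the interface.
ethtool -i "$IFACE" 2>/dev/null || echo "no ethtool info for $IFACE"

# Negotiated speed/duplex straight from sysfs; ~11 Mbit/s throughput
# would also fit a link that silently fell back to 10 Mbit/s.
for f in speed duplex; do
    if [ -r "/sys/class/net/$IFACE/$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "/sys/class/net/$IFACE/$f")"
    fi
done
```

lshw already reports 1 Gbit/s full duplex here, so this mostly serves to confirm that the link really negotiated gigabit and that igb is the driver in play.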
ifconfig
Code:
enp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.10.20 netmask 255.255.0.0 broadcast 10.1.255.255
inet6 fe80::5b6d:1f8f:31ea:34c1 prefixlen 64 scopeid 0x20<link>
ether e8:ea:6a:09:5a:ec txqueuelen 1000 (Ethernet)
RX packets 1302083 bytes 1796853540 (1.7 GB)
RX errors 0 dropped 10 overruns 0 frame 0
TX packets 625493 bytes 63957974 (63.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xfa200000-fa2fffff
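The ifconfig output above shows 10 dropped RX packets. As a sketch (interface name again from this machine), the kernel's per-interface counters can be watched during an iperf3 run to see whether errors climb with traffic:

```shell
#!/bin/sh
# Interface name from the ifconfig output above -- adjust as needed.
IFACE="${IFACE:-enp4s0}"
STATS="/sys/class/net/$IFACE/statistics"

# Print the error/drop counters that matter for a flaky NIC or cable.
if [ -d "$STATS" ]; then
    for c in rx_dropped rx_errors rx_crc_errors rx_missed_errors tx_errors; do
        if [ -r "$STATS/$c" ]; then
            printf '%-18s %s\n' "$c" "$(cat "$STATS/$c")"
        fi
    done
else
    echo "no statistics for $IFACE"
fi
```

If rx_errors or rx_crc_errors grow while traffic flows, that points at the cable or switch port rather than the driver.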
Code:
iperf3 -V -b 1Gbps -c 10.1.10.50
iperf 3.7
Linux lic2u 5.8.0-53-generic #60~20.04.1-Ubuntu SMP Thu May 6 09:52:46 UTC 2021 x86_64
Control connection MSS 1460
Time: Tue, 25 May 2021 14:24:01 GMT
Connecting to host 10.1.10.50, port 5201
Cookie: kjzq2ygg5zytxko5ylywbxsrhul63r2qoarf
TCP MSS: 1460 (default)
Target Bitrate: 1000000000
[ 5] local 10.1.10.20 port 38898 connected to 10.1.10.50 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.54 MBytes 13.0 Mbits/sec 0 61.3 KBytes
[ 5] 1.00-2.00 sec 1.35 MBytes 11.3 Mbits/sec 0 61.3 KBytes
[ 5] 2.00-3.00 sec 1.38 MBytes 11.5 Mbits/sec 0 61.3 KBytes
[ 5] 3.00-4.00 sec 1.50 MBytes 12.6 Mbits/sec 0 61.3 KBytes
[ 5] 4.00-5.00 sec 1.35 MBytes 11.3 Mbits/sec 0 61.3 KBytes
[ 5] 5.00-6.00 sec 1.47 MBytes 12.3 Mbits/sec 0 61.3 KBytes
[ 5] 6.00-7.00 sec 1.35 MBytes 11.3 Mbits/sec 0 61.3 KBytes
[ 5] 7.00-8.00 sec 1.47 MBytes 12.3 Mbits/sec 0 61.3 KBytes
[ 5] 8.00-9.00 sec 1.35 MBytes 11.3 Mbits/sec 0 61.3 KBytes
[ 5] 9.00-10.00 sec 1.35 MBytes 11.3 Mbits/sec 0 61.3 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 14.1 MBytes 11.8 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 13.8 MBytes 11.6 Mbits/sec receiver
CPU Utilization: local/sender 1.2% (0.1%u/1.1%s), remote/receiver 0.0% (0.0%u/0.0%s)
snd_tcp_congestion cubic
iperf Done.
That is even slower than a NUC10 I have connected over wifi:
Code:
NUC10 running Wifi:
[ 4] local 10.1.10.205 port 60992 connected to 10.1.10.50 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 28.5 MBytes 238 Mbits/sec
[ 4] 1.00-2.00 sec 31.1 MBytes 261 Mbits/sec
[ 4] 2.00-3.00 sec 31.5 MBytes 264 Mbits/sec
[ 4] 3.00-4.01 sec 34.9 MBytes 292 Mbits/sec
[ 4] 4.01-5.00 sec 35.8 MBytes 301 Mbits/sec
[ 4] 5.00-6.00 sec 34.4 MBytes 288 Mbits/sec
[ 4] 6.00-7.00 sec 35.4 MBytes 297 Mbits/sec
[ 4] 7.00-8.00 sec 32.9 MBytes 276 Mbits/sec
[ 4] 8.00-9.01 sec 32.0 MBytes 268 Mbits/sec
[ 4] 9.01-10.00 sec 34.6 MBytes 291 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 331 MBytes 278 Mbits/sec sender
[ 4] 0.00-10.00 sec 331 MBytes 278 Mbits/sec receiver
CPU Utilization: local/sender 18.6% (2.3%u/16.2%s), remote/receiver 0.0% (0.0%u/0.0%s)
According to this, it would be better to just put a wifi card into the Samba server. To verify, I copied a 250GB file from another Windows server to the NUC10 (running Win 10 as well) and got a pretty consistent 50 MBytes/sec.
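One back-of-the-envelope check on the wired run above (my arithmetic, not from the post): for a window-limited TCP stream, throughput ≈ cwnd / RTT. With cwnd pinned at 61.3 KBytes and ~11.5 Mbit/s observed:

```shell
# Estimate the effective round-trip time implied by the iperf3 numbers.
awk 'BEGIN {
    cwnd_bits = 61.3 * 1024 * 8      # congestion window from iperf3, in bits
    rate_bps  = 11.5 * 1000 * 1000   # observed bitrate, in bits per second
    printf "implied RTT ~ %.1f ms\n", 1000 * cwnd_bits / rate_bps
}'
# prints: implied RTT ~ 43.7 ms
```

Tens of milliseconds of effective round-trip time on a gigabit LAN, where sub-millisecond is normal, would suggest something like interrupt moderation, a power-management quirk, or a sick link layer rather than a raw bandwidth cap; a plain `ping` between the two hosts would confirm or rule that out quickly.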