Originally Posted by
mike4ubuntu
OMG, the performance of sftp is even worse. Can't get more than 4MB/s. At least I could get 11MB/s with Samba. Judging from all of the replies on other forums, it just seems that that's about the best that Samba can do. Is it just a factor of SAMBA code performance, or is there a setting that really caps the bandwidth?
I don't have samba on 20.04, but on 18.04, this version works:
Code:
samba 2:4.7.6+dfsg~ amd64
I see 65-75 MBps transferring files FROM Windows to Ubuntu using SMB v2.1, so something isn't right on your systems; I don't know what it is. With 20.04, a number of samba defaults changed. At the same time, Win10 changed some of its defaults, and they didn't all line up, so it doesn't always install and just work.
I get better performance with NFS, but that is harder to use with Windows. I understand NFS is part of some Win10 versions; it was in the Enterprise and Ultimate Win7 releases, but not Home or Professional. Getting the Windows UserID ---> Unix uid/gid mappings set up was always a hassle.
sshfs is dog slow, always. I only use it when a complex program/tool is on one system and the data it needs is on another system that isn't reachable over NFS. Moving that complex setup to the system that has the data would be too much effort, so sshfs gives me access while I have lunch or sleep. That doesn't help Windows users. Too bad.
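When I do reach for it, the round trip looks something like this - the host and paths here are made-up examples, printed as a cheat sheet rather than run:

```shell
# sshfs quick reference; "datahost" and the paths are hypothetical examples.
SSHFS_NOTES=$(cat <<'EOF'
mkdir -p ~/mnt/project
sshfs -o reconnect,ServerAliveInterval=15 user@datahost:/srv/project ~/mnt/project
# ... run the complex tool against ~/mnt/project, go eat lunch ...
fusermount -u ~/mnt/project
EOF
)
printf '%s\n' "$SSHFS_NOTES"
```

The reconnect options keep the mount alive across network hiccups on those overnight runs.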
sftp is natively supported by nearly all Unix file managers. Just put sftp://..... as the URL into the file manager. Because it is encrypted, there is overhead, but nothing as bad as sshfs. That doesn't help Windows users. Too bad.
scp and rsync feel a little faster than sftp. Again, if I don't have NFS access to either side of the storage, I'll use those to transfer: scp for files inside one directory that aren't already on the other side, and rsync when an entire directory structure is involved. That doesn't help Windows users. Too bad.
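The two idioms side by side - "box2" and the paths are invented examples, just to show the shape of each command:

```shell
# scp vs rsync usage sketch; host and paths are made-up examples.
XFER_NOTES=$(cat <<'EOF'
scp *.mkv user@box2:/data/incoming/               # a few files, one directory
rsync -avP /data/videos/ user@box2:/data/videos/  # whole tree; skips files already there, resumable
EOF
)
printf '%s\n' "$XFER_NOTES"
```

rsync's -avP (archive, verbose, partial+progress) is the usual combination for big trees over a flaky link.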
Windows has WinSCP as an SCP and SFTP client. It integrates with Windows Explorer - full drag-n-drop - and has a 2-pane layout for source and target. Most end-users would figure this out.
I stopped using plain FTP around 2002 - security. If I need to make a file accessible to a few friends, I'll bury it in a web server I control and send them the exact HTTPS link. Then I'll use 'at' to delete the file automatically in 1-2 weeks. I use 'at' to schedule future events all the time; right now, 14 things are scheduled for this week.
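A sketch of that cleanup trick - the drop path is an invented example, and the actual 'at' submission is shown in a comment so nothing gets queued here:

```shell
# Hypothetical drop location; only the date calculation actually runs.
FILE=/var/www/html/drop/9f3a1c/Home_Movie-2004.mkv
# Queue the automatic delete (commented out in this sketch):
#   echo "rm -f $FILE" | at now + 14 days
#   atq          # list pending jobs; atrm <n> cancels one
WHEN=$(date -d "now + 14 days" +%F)   # GNU date: the day the job would fire
echo "file disappears on $WHEN"
```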
To share a file, even a huge file, with one of my Linux buddies, I'll use Magic Wormhole. It is in the repos or there is a snap package if you only want to touch storage in your HOME directory (which I seldom desire). I use the APT package. Snagged a 31G set of files a few weeks ago from a friend. Left it running overnight.
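Both ends of a wormhole transfer, for anyone who hasn't seen it - the code string below is an invented example; the sender's terminal prints the real one-time code:

```shell
# magic-wormhole usage sketch; "7-crossover-clockwork" is a made-up code.
WH_NOTES=$(cat <<'EOF'
sender:    wormhole send Home_Movies.tar           # prints a one-time code
receiver:  wormhole receive 7-crossover-clockwork  # type the sender's code
EOF
)
printf '%s\n' "$WH_NOTES"
```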
Anyway - when it comes to performance issues, break apart the problem and work backwards. Fix issues in order.
- Check the network performance - iperf3
- Check the disk and file system performance - fio/bonnie++
- Check the samba protocol performance - increase logging to see any issues.
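Concretely, that triage might look like the cheat sheet below - the IP is the one from my mount output further down; the fio job and log paths are examples, and all of it assumes the tools are installed:

```shell
# Layer-by-layer triage; IP from this thread, everything else is an example.
TRIAGE=$(cat <<'EOF'
1) network: iperf3 -s                      # on the Ubuntu box
            iperf3 -c 172.22.22.14 -t 10   # from the other box; GigE should show ~940 Mbit/s
2) disk:    fio --name=seqwrite --rw=write --bs=1M --size=1g --directory=/Data/K
3) samba:   set "log level = 2" in [global] of smb.conf, then
            sudo smbcontrol smbd reload-config && tail -f /var/log/samba/log.smbd
EOF
)
printf '%s\n' "$TRIAGE"
```

If step 1 can't saturate the wire, no samba tuning will fix the transfer speed.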
Blaming Samba when the disk is about dead isn't exactly fair, especially when others are getting 65+ MB/sec (520Mbps) transfers.
Code:
$ time cp Home_Movie-2004.mkv /Data/K/
real 0m17.190s
user 0m0.007s
sys 0m1.722s
$ \du -s Home_Movie-2004.mkv
1913392 Home_Movie-2004.mkv
$ df -Th .
Filesystem Type Size Used Avail Use% Mounted on
//172.22.22.14/K cifs 74G 19G 55G 26% /Data/K
$ mount |grep /Data/K
//172.22.22.14/K on /Data/K type cifs (rw,relatime,vers=2.1,cache=strict,username=tf,uid=1000,forceuid,gid=1000,forcegid,addr=172.22.22.14,file_mode=0664,dir_mode=0775,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,bsize=1048576,echo_interval=60,actimeo=1)
Using Windows Exploder, saving files to CIFS storage, I see about 50% of the GigE connection in Task Manager, which is 62.5 MBps. Neither disk involved is an SSD: a WD Blue inside the Linux system and a true WD Black in the Windows box. WD Black drives are quality and fast in my experience, and the 5-yr warranty doesn't hurt either.
Using the numbers in my actual transfer commands above, we can do some calculations:
1913392 KB (du -s reports 1 KiB blocks, not bytes) / 17.19 seconds = roughly 114 MB/sec ... using clock time. That really isn't fair, because disk caching in RAM makes it seem faster than it is. Plus, on a GigE network, 125 MB/s is the theoretical max possible without using link aggregation.
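Double-checking that arithmetic with awk - du -s counts 1 KiB blocks by default, and MB/s here means the decimal megabytes that network links are rated in:

```shell
# Re-derive the transfer rate from the du/time numbers above.
awk -v kb=1913392 -v secs=17.19 \
    'BEGIN { printf "%.1f MB/s\n", kb * 1024 / secs / 1e6 }'   # prints 114.0 MB/s
```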
Going in the opposite direction, it took 24.524s ... so that works out to 79.33 MB/sec. I cheated and stored the file on an SSD this time. Redoing it to the WD-Blue ... was about the same 25.8s.
Transferring lots of small files is drastically slower. That's the per-file overhead adding up - creating each file, allocations, metadata updates, and a round trip for every one.
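You can feel that overhead even locally. Here's a throwaway sketch (everything lands in mktemp directories, so it's safe to run) comparing one 50 MB file against the same 50 MB split into 500 files; over a network share the gap gets far wider:

```shell
# Local demo of per-file copy overhead; all files live under mktemp dirs.
SRC=$(mktemp -d); DST=$(mktemp -d)
dd if=/dev/zero of="$SRC/one_big" bs=1M count=50 status=none     # one 50 MB file
mkdir "$SRC/many"                                                # same 50 MB as 500 files
for i in $(seq 1 500); do head -c 102400 /dev/zero > "$SRC/many/f$i"; done
t0=$(date +%s%N); cp "$SRC/one_big" "$DST/";     t1=$(date +%s%N)
echo "1 big file : $(( (t1 - t0) / 1000000 )) ms"
t0=$(date +%s%N); cp -r "$SRC/many" "$DST/many"; t1=$(date +%s%N)
echo "500 small  : $(( (t1 - t0) / 1000000 )) ms"
N=$(ls "$DST/many" | wc -l); rm -rf "$SRC" "$DST"
```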
Hopefully, this has provided some ideas for next steps. I'm happy to post my smb.conf stuff, if that would help. I mount Windows storage using autofs, though, so my client-side setup won't be too helpful as a direct template. Also, I ran all the commands from the Linux side because I know how to capture transfer data there, but not on Windows - grabbing random screenshots during the xfer just doesn't seem scientific.