Pathetic smbfs/cifs Performance and Good gvfs Performance - Interesting Observations
Background:
For years I've been experiencing absolutely pathetic smbfs/cifs performance (10 MB/sec throughput) on my home network, which runs a mix of 32- and 64-bit Ubuntu systems. I typically try to fix this once or twice a year, give up, and resort to using FTP to ship large files around the network.
Today I once again tried to crack this nut, without success, but with some interesting findings...
Findings:
0) I again tried using smbfs/cifs, with the same results as always. I tried setting wsize and rsize larger (per many threads on this topic), without success. Nothing new here: smbfs and cifs suck as always.
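For reference, this is the shape of the fstab entry and manual mount I was experimenting with; //server/share, /mnt/share, and the credentials file path are placeholders, not my real configuration:

```shell
# fstab entry (one line); rsize/wsize are the read/write buffer size
# options that the threads on this topic suggest raising:
# //server/share  /mnt/share  cifs  credentials=/etc/cifs-creds,rsize=65536,wsize=65536  0  0

# Equivalent manual mount, handy for trying different sizes without
# editing fstab each time:
sudo mount -t cifs //server/share /mnt/share \
     -o credentials=/etc/cifs-creds,rsize=65536,wsize=65536
```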
1) My first interesting finding was when I tried mounting via Nautilus with gvfs instead of via fstab. When I use gvfs, I get very solid performance, in the range of 65-75 MB/sec for large files (about what I would expect, given hard disk limitations).
2) I tried, without success, to see the mount parameters that gvfs uses, in the hope that all I need to do is replicate them in my fstab configuration. Despite a couple of hours of poking through /proc and /var/log, and trying gvfs-info, I was not able to determine them.
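Part of the reason the poking around fails, I believe, is that gvfs does not go through the kernel at all. The usual place to check effective mount options only shows kernel-level cifs mounts:

```shell
# List kernel cifs mounts with their effective options; mount.cifs
# mounts show their rsize/wsize here, but gvfs mounts never appear,
# because (as I understand it) gvfs speaks SMB entirely in userspace
# via libsmbclient rather than through the kernel cifs module.
grep cifs /proc/mounts || echo "no kernel cifs mounts found"
```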
3) Interesting Finding #1: I then tried watching things in Wireshark, and this is when things got interesting. If I am interpreting things correctly, the wsize and rsize settings map to "Write AndX Response" messages in the file sharing protocol, and mount.cifs appears to limit wsize and rsize to a maximum of 4096: setting them lower than that changed the messages captured in Wireshark, but setting them above 4096 did not. Nothing in the documentation mentions this limit, though.
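For anyone who wants to reproduce this, a display filter on the SMB1 Write AndX command (0x2f) isolates the relevant messages. This is a sketch; eth0 is a placeholder for your capture interface, and tshark needs capture privileges:

```shell
# Capture only SMB Write AndX traffic; -Y applies a Wireshark display
# filter, and smb.cmd == 0x2f matches the SMB1 Write AndX command.
tshark -i eth0 -Y 'smb.cmd == 0x2f'
```

The same `smb.cmd == 0x2f` filter works in the Wireshark GUI filter bar, which is where I was watching the byte counts.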
4) Interesting Finding #2: After observing the strange mount.cifs behavior, I tried mounting via gvfs through Nautilus and observed traffic indicating that gvfs sets rsize/wsize to 65536. Perhaps this is why performance is so much better with gvfs.
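Incidentally, the gvfs mount doesn't have to go through Nautilus; it can be driven from the command line, which makes A/B testing against mount.cifs easier. A sketch, with server and share as placeholders:

```shell
# Mount the share through gvfs (same backend Nautilus uses); the mount
# then shows up under ~/.gvfs on older gvfs versions, so ordinary tools
# like cp can be timed against it.
gvfs-mount smb://server/share
```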
Notes:
Regarding items 3 and 4 above, I observe messages in Wireshark that read, "Write AndX Response, FID: 0x..., XXX bytes" where XXX is a max of 4096 for mount.cifs, and 65536 (not 65535) for gvfs.
Questions:
1) What's going on here? What does one have to do to get decent smb performance (besides going through Nautilus to use gvfs)?
2) Is there any way to force wsize and rsize to larger values than 4096 for mount.cifs?