
Thread: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

  1. #1
    Join Date
    Feb 2009
    Beans
    4

    Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    Background:

    For years I've been getting absolutely pathetic smbfs/cifs performance (about 10 MB/s throughput) on my home network, which runs a mix of 32- and 64-bit Ubuntu systems. Once or twice a year I try to fix this, give up, and resort to FTP to ship large files around the network.

    Today I once again tried to crack this nut, without success, but with some interesting findings...


    Findings

    0) I again tried smbfs/cifs, with the same results as always. I tried setting wsize and rsize larger (as suggested in many threads on this topic), without success. Nothing new here: smbfs and cifs suck, as always.
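
    For reference, here's the sort of fstab entry I was experimenting with (the server, share, mount-point, and credentials-file names are made up):

    Code:
    # /etc/fstab -- example cifs line; hypothetical names and credentials file
    //server/share  /mnt/share  cifs  credentials=/root/.smbcredentials,rsize=65536,wsize=65536  0  0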

    1) My first interesting finding came when I tried mounting via Nautilus with gvfs instead of via fstab. When I use gvfs, I get very solid performance, in the range of 65-75 MB/s for large files (about what I would expect, given hard-disk limitations).

    2) I tried, without success, to discover the mount parameters that gvfs uses, in the hope that all I'd need to do is replicate them in my fstab configuration. Despite a couple of hours of poking through /proc and /var/log, and trying gvfs-info, I was not able to determine them.
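
    For what it's worth, here's roughly where I looked. A kernel cifs mount shows up in /proc/mounts along with its options, but a gvfs share does not, which I take to mean gvfs talks to the server from userspace rather than doing a kernel mount:

    Code:
    grep cifs /proc/mounts    # kernel cifs mounts (and their options) show up here
    mount -t cifs             # same information, via mount
    ps aux | grep gvfs        # the gvfs daemons -- all I could find for the gvfs side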

    3) Interesting Finding #1: I then tried watching things in Wireshark, and this is when things got interesting. If I am interpreting things correctly, the wsize and rsize settings map to the byte counts in the "Write AndX Response" messages in the file-sharing protocol. Setting them below 4096 changed the messages captured in Wireshark, but setting them above 4096 did not, so I infer that mount.cifs caps wsize and rsize at 4096. The documentation says nothing about this, though.
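
    In case anyone wants to reproduce the capture, this is roughly what I ran (eth0 is a placeholder for your interface; 0x2f is the SMB Write AndX command code, if I have it right):

    Code:
    # capture on the SMB ports and show only Write AndX traffic
    sudo tshark -i eth0 -f "tcp port 445 or tcp port 139" -R "smb.cmd == 0x2f"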

    4) Interesting Finding #2: After observing this strange mount.cifs behavior, I mounted via gvfs through Nautilus and observed traffic indicating that gvfs sets rsize/wsize to 65536. Perhaps this is why its performance is so much better.


    Notes:

    Regarding items 3 and 4 above, I see messages in Wireshark that read "Write AndX Response, FID: 0x..., XXX bytes", where XXX maxes out at 4096 for mount.cifs and at 65536 (not 65535) for gvfs.
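
    A back-of-the-envelope calculation suggests why the 4096 cap would hurt this much: if each write must be acknowledged before the next is sent, throughput is roughly wsize divided by the round-trip time. Assuming a made-up but plausible 0.4 ms LAN round trip:

    Code:
    # throughput ~= wsize / RTT (bytes per second), if writes go one at a time
    echo "4096  / 0.0004" | bc    # ~10 MB/s  -- about what I see with mount.cifs
    echo "65536 / 0.0004" | bc    # ~164 MB/s -- big enough that disk/wire become the limit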



    Questions:

    1) What's going on here? What does one have to do to get decent smb performance (besides going through Nautilus to use gvfs)?

    2) Is there any way to force wsize and rsize to larger values than 4096 for mount.cifs?
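
    One knob I've seen mentioned elsewhere, but have not verified myself, is the cifs module's CIFSMaxBufSize parameter, which supposedly caps the buffer size the module will use:

    Code:
    # untested sketch: reload the cifs module with a larger buffer cap
    sudo umount /mnt/share
    sudo modprobe -r cifs
    sudo modprobe cifs CIFSMaxBufSize=130048
    # modinfo cifs lists the parameter and its accepted range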
    Last edited by user667; September 20th, 2010 at 02:35 AM. Reason: Grammar

  2. #2
    Join Date
    Aug 2006
    Beans
    Hidden!
    Distro
    Ubuntu 9.04 Jaunty Jackalope

    Re: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    Exactly the same issue here. Thanks for your findings.

    Read speeds:
    mount: 2.6 MB/s
    gvfs: 25 MB/s

    I seem to remember performance being quite decent earlier, so I'm inclined to think a fairly recent update caused this issue.

    My findings:
    #1 I only suffer from this issue on my desktop and not my laptop, though they both run lucid x64, so this issue may be hardware-related?

    Board: Gigabyte MA-790FXT UD5P Motherboard + Phenom II 1090T
    LAN: 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
    Last edited by Jaapie; September 21st, 2010 at 09:25 AM. Reason: Added hardware

  3. #3
    Join Date
    May 2006
    Beans
    225
    Distro
    Ubuntu 10.10 Maverick Meerkat

    Re: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    I'm seeing the same and I'm not sure what's to blame. Windows machines can access the share at full Gigabit speed, and while much slower, gvfs is at least still superior to mounting the cifs share. I'm seeing 50 MB/s both ways with gvfs, and about 30/15 MB/s with mount. Card is an Intel Corporation 82572EI Gigabit Ethernet Controller (Copper) on the e1000e driver.

  4. #4
    Join Date
    Feb 2009
    Beans
    4

    Re: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    I've had this issue with multiple NICs. I really don't think it is a NIC issue.

  5. #5
    Join Date
    Apr 2005
    Beans
    4

    Re: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    Well, I have the exact opposite results: gvfs is slooooooow (~6 MB/s) and the cifs kernel module (mount.cifs) gets a decent speed (~55 MB/s), though it should be better still (the I/O subsystem can achieve at least 100 MB/s).
    Out of curiosity, how did you perform the test? dd? What kernel are you using? What Samba version is on the other side? Thanks.
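
    For comparison, this is roughly how I measure on my side (file names are placeholders; drop the page cache between read runs or the numbers lie):

    Code:
    # write test: push 1 GB to the share; dd reports throughput when it finishes
    dd if=/dev/zero of=/mnt/share/testfile bs=1M count=1024
    # read test: flush caches first, then pull a large existing file back
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    dd if=/mnt/share/bigfile of=/dev/null bs=1M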

  6. #6
    Join Date
    Sep 2010
    Location
    Indian Capital City
    Beans
    916
    Distro
    Ubuntu 14.04 Trusty Tahr

    Re: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    Did you have a chance to go through this:
    http://ubuntuforums.org/showthread.php?t=1429532

    I was just searching through the forum for an answer to my own issue and came across your thread and the one posted above. Thought it could help.
    When you have eliminated the impossible, whatever remains, however improbable, must be the truth !!
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Mark it [SOLVED] if the issue has been resolved

  7. #7
    Join Date
    May 2006
    Beans
    225
    Distro
    Ubuntu 10.10 Maverick Meerkat

    Re: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    I tried the various wsize and rsize settings they suggested and remounted the share, but no change in speed whatsoever! Switching to NFS wouldn't be a true solution.
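
    For the record, I didn't trust -o remount to renegotiate the options, so each attempt was a full unmount/mount cycle, roughly like this (placeholder names):

    Code:
    sudo umount /mnt/share
    sudo mount -t cifs //server/share /mnt/share -o username=me,rsize=65536,wsize=65536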

  8. #8
    Join Date
    May 2006
    Beans
    225
    Distro
    Ubuntu 10.10 Maverick Meerkat

    Re: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    Some new information: I opened up Wireshark and saw many, many QUERY_PATH_INFO transmissions. It's very strange, because they reference all sorts of locations; maybe it's something Nautilus is doing? I notice this whenever I navigate through the share, but not while transferring files.
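
    If anyone wants to check for the same thing, this is roughly the filter I used (0x05 should be TRANS2_QUERY_PATH_INFORMATION, if I'm reading the protocol docs right; eth0 is a placeholder):

    Code:
    # show only the Trans2 QUERY_PATH_INFO metadata queries
    sudo tshark -i eth0 -R "smb.trans2.cmd == 0x05"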

    I'm using the e1000e driver for my NIC, perhaps this could be part of the problem?
    Last edited by Enigmatic; October 15th, 2010 at 12:16 AM.

  9. #9
    Join Date
    May 2006
    Beans
    225
    Distro
    Ubuntu 10.10 Maverick Meerkat

    Re: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    Tried NFS: I get 70 MB/s both ways. Guess it's time to dump cifs until it gets fixed.
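
    For anyone wanting to try the same, roughly the fstab line I used (server and export path are placeholders):

    Code:
    # /etc/fstab -- example nfs line; hypothetical names
    server:/export/share  /mnt/share  nfs  defaults  0  0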

  10. #10
    Join Date
    May 2008
    Location
    Bucharest, Romania
    Beans
    63
    Distro
    Ubuntu

    Re: Pathetic (SLOW) smbfs/cifs Performance and Good gvfs Performance: Some Findings

    It's not CIFS-related, it's Nautilus. To test it, just copy the files using the cp command from a terminal; you will get a 10x boost in speed. You can also use DoubleCommander.
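
    A quick way to compare the two paths (file names are placeholders): time the copy from the shell, then drag the same file in Nautilus and compare the wall-clock times.

    Code:
    # copy through the mounted share from the shell; time reports the duration
    time cp /mnt/share/bigfile /tmp/bigfile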
