I tried all sorts of things to solve this issue - it was happening in SMB as well.
My resolution was to move to the 32-bit server - it now works as expected, and as it is the PAE kernel, I can use more than 4GB - what's not to like?
I'm having good results with the following configuration:
4 IBM System x machines, all with Ubuntu 12.04.1 64-bit
Server, /etc/exports:
/home/cfduser/OpenFOAM 192.168.0.19(rw,sync,root_squash,no_subtree_check) 192.168.0.21(rw,sync,root_squash,no_subtree_check) 192.168.0.23(rw,sync,root_squash,no_subtree_check)
Clients, /etc/fstab:
192.168.0.17:/home/cfduser/OpenFOAM /home/cfduser/OpenFOAM/ nfs4 _netdev,auto 0 0
#192.168.0.17:/home/cfduser/OpenFOAM /home/cfduser/OpenFOAM/ nfs _netdev,nfsvers=3,proto=tcp,noac,auto 0 0
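For completeness, the standard commands to put these files into effect on Ubuntu 12.04 (nfs-kernel-server on the server, nfs-common on the clients) are roughly:

```shell
# On the server: re-read /etc/exports without restarting the NFS daemon
sudo exportfs -ra
# Show what is currently being exported, with options
sudo exportfs -v

# On each client: mount everything listed in /etc/fstab
sudo mount -a
# Confirm the share is mounted and which NFS version was negotiated
mount | grep nfs
```

These need root and a running NFS server, so treat them as a how-to fragment rather than something to copy-paste blindly.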
-----------------------------------------------------
Writing a dd test on a local directory I get about 90 MB/s. We have all 4 nodes on one HP 1910 switch.
If I run the dd test on an NFS directory, like
time dd bs=1M count=128 if=/dev/zero of=/home/cfduser/OpenFOAM/speedtest2 conv=fdatasync
I get a speed of about 20 MB/s.
Not so good, not so bad... (considering conv=fdatasync).
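As a sanity check on those numbers: gigabit Ethernet tops out around 125 MB/s, so NFS figures in the tens of MB/s are plausible. The throughput dd reports is simply bytes written divided by elapsed time; the 6.4 s below is an assumed elapsed time for illustration, not a measured value.

```shell
# Recompute dd's throughput figure by hand (illustrative numbers).
bytes=$((128 * 1024 * 1024))   # bs=1M count=128 -> 134217728 bytes
seconds=6.4                    # hypothetical elapsed time from dd's output
# dd reports decimal megabytes per second, i.e. bytes / seconds / 1e6
rate=$(awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.1f MB/s", b / s / 1e6 }')
echo "$rate"                   # around 21 MB/s for these inputs
```

With conv=fdatasync the elapsed time includes flushing to disk, so this is closer to sustained write speed than a pure network figure.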
What are your values?