Hi,
I have an HP ProLiant ML350 G6 with two Intel Xeon E5606 CPUs and 64GB of RAM.
I have connected 6x Western Digital Black 2TB drives to the SATA connectors on the motherboard.
I made a raidz ZFS pool with all six drives, but I'm having a problem with the write performance.
If I run this:

Code:
dd if=/dev/zero of=testfile conv=fdatasync bs=384k count=1k

the speed jumps up and down quite a bit between runs. And shouldn't I get more write speed with six drives in raidz, or is this normal? Here's what repeated runs look like (with bs=100k):

Code:
root@SERVER:/tank# dd if=/dev/zero of=testfile conv=fdatasync bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 5.22462 s, 20.1 MB/s
root@SERVER:/tank# dd if=/dev/zero of=testfile conv=fdatasync bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.73841 s, 60.3 MB/s
root@SERVER:/tank# dd if=/dev/zero of=testfile conv=fdatasync bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.41069 s, 74.3 MB/s
root@SERVER:/tank# dd if=/dev/zero of=testfile conv=fdatasync bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.30766 s, 80.2 MB/s
root@SERVER:/tank# dd if=/dev/zero of=testfile conv=fdatasync bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.97408 s, 53.1 MB/s
root@SERVER:/tank# dd if=/dev/zero of=testfile conv=fdatasync bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.20755 s, 86.8 MB/s
root@SERVER:/tank# dd if=/dev/zero of=testfile conv=fdatasync bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.44293 s, 72.7 MB/s
root@SERVER:/tank# dd if=/dev/zero of=testfile conv=fdatasync bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 3.13647 s, 33.4 MB/s
If I run the same dd command on an ext4-formatted drive I get around 40-50 MB/s.
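For what it's worth, a 100 MB test with 100k blocks is small enough that ZFS's write aggregation and caching dominate the numbers, which could explain the run-to-run swings. A sketch of the same test with a larger block size and more data (the target path is an assumption, adjust to where the pool is mounted):

```shell
# Write 1 GiB in 1 MiB blocks, flushing to disk at the end, so the
# result reflects sustained throughput rather than cache behaviour.
dd if=/dev/zero of=/tank/testfile bs=1M count=1024 conv=fdatasync

# Clean up the test file afterwards.
rm -f /tank/testfile
```

Running this a few times and comparing against the 100k numbers would show whether block size is part of the picture.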
And when I try to copy files from a faster drive to the zpool I get terrible performance. It's usually around 15 MB/s, pauses sometimes, then goes faster for a while, then pauses again, and so on.
This is the sector size of every disk: Sector size (logical/physical): 512 bytes / 512 bytes
This is the block size of every disk (blockdev --getbsz /dev/sdb1): 4096
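With 512-byte logical/physical sectors an ashift of 9 should be fine, but it's worth confirming what the pool was actually created with, since a mismatch hurts write performance. A sketch, assuming the pool is named tank:

```shell
# Show the ashift the vdevs were created with
# (9 = 512-byte sectors, 12 = 4 KiB sectors).
zdb -C tank | grep ashift
```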
They seem to run in 3.0Gb/s mode: SATA Version is: SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s)
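The 3.0 Gb/s link itself shouldn't be a bottleneck here, but it's worth checking each disk individually for link speed and logged errors, since one flaky drive can stall a whole raidz vdev. A sketch; the device names are an assumption, adjust to match the system:

```shell
# Check negotiated SATA link speed and recent error log on each pool disk.
for d in /dev/sd[b-g]; do
    echo "== $d =="
    smartctl -i "$d" | grep 'SATA Version'
    smartctl -l error "$d" | tail -n 3
done
```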
This pool is going to be used with rTorrent. When I download a torrent that I know I can get high speed from, it can go up to 90 MB/s, then it just hangs and drops down to 10 MB/s for a while, then climbs back up to around 30 MB/s, where it stays until it's done.
I have a 1Gbit connection, btw. If I try it on my other HP drive (hot-swap 900GB 10k rpm, non-RAID) I get about the same top speed, but it stays up there until it's done and doesn't hang at all.
Does anyone know why the pool hangs like that? Is it the drives, or maybe the controller? Or is it something with ZFS itself?
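One way to narrow down whether it's a single drive or the pool as a whole is to watch per-disk activity while the stall happens. A sketch, assuming the pool is named tank:

```shell
# Print per-vdev read/write bandwidth every second; during a stall,
# one disk staying busy while the others sit idle usually points at
# that disk (or its cable/port) rather than at ZFS itself.
zpool iostat -v tank 1
```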