Originally Posted by mrsteveman1
Hmmmm
Anything in the kernel log? (dmesg)
Nothing I can see. http://en.pastebin.ca/166483 has the output.
Originally Posted by fjgaude
What does this show:
Code:
sudo mdadm -D /dev/mdx
where 'x' is the number of your array.
You are using mdadm to create the array?
Code:
netkom@iSCSI1:~$ sudo mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Mon Nov 9 19:50:19 2009
Raid Level : raid5
Array Size : 937705728 (894.27 GiB 960.21 GB)
Used Dev Size : 312568576 (298.09 GiB 320.07 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Nov 9 21:51:29 2009
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : eaa61cc1:39b63f39:63ebcc44:4ad1de25 (local to host iSCSI1)
Events : 0.4
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 1 1 active sync /dev/sda1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
Yes, I'm using mdadm. And testing performance again gives
Code:
root@iSCSI1:/home/netkom# time dd if=/dev/zero of=/dev/md0 bs=1024 count=1M && time sync
1048576+0 records in
1048576+0 records out
1073741824 bytes (1,1 GB) copied, 76,6583 s, 14,0 MB/s
real 1m16.662s
user 0m0.380s
sys 0m13.540s
real 0m0.003s
user 0m0.000s
sys 0m0.000s
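As an aside, dd's reported rate above excludes whatever is still sitting in the page cache, which is why the separate `time sync` was needed. A sketch of the same kind of measurement with the flush folded into dd's own timing (writing to a scratch file so nothing on the array is touched; the path is just an example):

```shell
# conv=fdatasync makes dd flush to disk before reporting, so the
# MB/s figure already includes sync time (no separate `time sync`).
# /tmp/ddtest.bin is an arbitrary scratch path, 64 MiB of zeros.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest.bin
```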
Out of curiosity I created a RAID 0 with just one disk. As far as I understand, the numbers shouldn't change too much. Here is the result:
Code:
root@iSCSI1:/home/netkom# mdadm --create /dev/md0 -l0 -n1 /dev/sda1 --force
mdadm: /dev/sda1 appears to be part of a raid array:
level=raid5 devices=4 ctime=Mon Nov 9 19:50:19 2009
Continue creating array? yes
mdadm: array /dev/md0 started.
root@iSCSI1:/home/netkom# dd if=/dev/zero of=/dev/md0 bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1,1 GB) copied, 51,9947 s, 20,7 MB/s
root@iSCSI1:/home/netkom# mdadm --manage /dev/md0 -S
mdadm: stopped /dev/md0
root@iSCSI1:/home/netkom# dd if=/dev/zero of=/dev/sda1 bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1,1 GB) copied, 17,3929 s, 61,7 MB/s
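Part of the raid0-vs-raw gap may simply be the tiny block size: bs=1024 issues a million separate 1 KiB writes, and each one passes through the md layer. Comparing block sizes on a plain file shows that effect in isolation (the scratch path is illustrative):

```shell
# Same 64 MiB written twice: once as 65536 x 1 KiB requests,
# once as 64 x 1 MiB requests. conv=fdatasync flushes before dd
# reports, so page-cache effects don't mask the difference.
dd if=/dev/zero of=/tmp/bs_test bs=1k count=65536 conv=fdatasync
dd if=/dev/zero of=/tmp/bs_test bs=1M count=64 conv=fdatasync
rm -f /tmp/bs_test
```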
All that was in dmesg at that time was:
Code:
[ 1633.318140] md: md0 stopped.
[ 1633.318171] md: unbind<sdb3>
[ 1633.318178] md: export_rdev(sdb3)
[ 1633.318453] md: unbind<sdd1>
[ 1633.318460] md: export_rdev(sdd1)
[ 1633.318474] md: unbind<sdc1>
[ 1633.318478] md: export_rdev(sdc1)
[ 1633.318490] md: unbind<sda1>
[ 1633.318495] md: export_rdev(sda1)
[ 1692.164933] md: bind<sda1>
[ 1692.196039] md0: setting max_sectors to 128, segment boundary to 32767
[ 1692.196051] raid0: looking at sda1
[ 1692.196055] raid0: comparing sda1(312568576) with sda1(312568576)
[ 1692.196059] raid0: END
[ 1692.196061] raid0: ==> UNIQUE
[ 1692.196063] raid0: 1 zones
[ 1692.196064] raid0: FINAL 1 zones
[ 1692.196069] raid0: done.
[ 1692.196071] raid0 : md_size is 312568576 blocks.
[ 1692.196074] raid0 : conf->hash_spacing is 312568576 blocks.
[ 1692.196076] raid0 : nb_zone is 1.
[ 1692.196079] raid0 : Allocating 8 bytes for hash.
[ 1835.615064] md: md0 stopped.
[ 1835.615094] md: unbind<sda1>
[ 1835.615103] md: export_rdev(sda1)
So I'm lost here... I couldn't find any evidence on the net that write performance is this bad with software RAID.
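One knob that often helps RAID5 write speed is the md stripe cache: a larger cache lets md gather full stripes before writing and skip the read-modify-write cycle. A hedged sketch (the sysfs file only exists on hosts with a RAID5/6 md array, and 8192 is just a value to experiment with, at the cost of some memory):

```shell
# Only meaningful where an md RAID5/6 array exposes this sysfs file.
CACHE=/sys/block/md0/md/stripe_cache_size
if [ -w "$CACHE" ]; then
    cat "$CACHE"           # kernel default is 256 (pages per device)
    echo 8192 > "$CACHE"   # enlarge the cache, then re-run the dd test
else
    echo "no md RAID5/6 stripe cache on this host"
fi
```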
Thanks