PDA

View Full Version : [ubuntu] NAS/home server-build



damorgue
March 6th, 2011, 07:13 PM
I intend to build a NAS/home server. I have decided to use either Debian or Ubuntu, and to build it now rather than wait for btrfs, although I might switch to it at a later point, but that is irrelevant for now. 64-bit versions of whatever I pick, obviously. The objective is to use mdadm raid5, possibly with LVM on top of that.

Question 1:
Any major differences that I should care about between Debian and Ubuntu?

Question 2: Went with Ubuntu 10.04 LTS
Any major differences between 10.04 LTS, 10.10 and the soon to be released 11.04? The differences, other than long term support, seem really small, like kernel versions and small revisions.

Parts List:
AMD Athlon II X2 240e >>> 2.8GHz dual core with ECC support
ASUS M4A78LT-M LE >>> GigE, 6xSATA, ECC support
Kingston KVR1333D3E9S/2GEF >>> 2x2GB, ECC, DDR3
PSU, case etc

I will most likely get a good NIC and a managed switch later on for bonding/LACP/link aggregation/teaming/trunking/802.3ad

Disks: (already have)
300GB disk or 8GB USB stick
2TB WD EARS

Question 3:
Any suggestion for a suitable drive that is not 4 times as expensive as normal desktop drives, like the WD RE disks? Went with the new 3-platter EARS
Recently manufactured WD EARS cannot enable TLER. http://hardforum.com/showthread.php?t=1590200

Below is the plan of what I should do. I need help with the parts in BOLD and if you have any comments on the rest I would greatly appreciate it.

Set disks to IDE mode

Q4: Change TLER/CCTL/ERC?
Depends on which drive I pick. NOT needed for software raid with md
Q5: Change head-parking settings to disable it and avoid the LCC problem?
Depends on which drive I pick. wdidle3.exe in the case of WD EARS. Did that
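One way to verify the head-parking fix took effect is to watch the Load_Cycle_Count SMART attribute; if its raw value stops climbing rapidly while the drive is idle, wdidle3 worked. A sketch (the device name is an example; run as root):

```shell
# Show the current load cycle count for one drive
smartctl -A /dev/sdb | grep -i load_cycle
# Re-run after an hour of idle; the raw value should barely move
```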

Once done, set disks back to AHCI mode

Q6: Install 64-bit Ubuntu Desktop on the 300GB drive, leave the default format settings and let the installer create swap etc. by itself on the 300GB disk.
I intend to run the setup as follows...
/dev/sda 300GB
/dev/sdb 2TB
/dev/sdc 2TB
/dev/sdd 2TB
...where the 300GB is used for the OS, programs, swap etc.
and the latter 3 are used in a mdadm raid5 array "/dev/md0"

Q7: Partition using parted or fdisk, MBR or GPT?
parted seems to be the best. Partitioned correctly with parted

Q8: All of this? http://ubuntuforums.org/showpost.php?p=10366991&postcount=9
Align at a sector divisible by 8 in the case of an Advanced Format disk; some recommend starting at sector 64, others 2048?
Went with 2048 sectors
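For reference, a minimal parted sequence that yields a 2048-sector-aligned GPT partition (a sketch; /dev/sdb and the end point are examples, run as root):

```shell
parted /dev/sdb mklabel gpt
# Start at 1MiB (sector 2048 with 512-byte sectors)
parted -a optimal /dev/sdb mkpart primary 1MiB 2000GB
# Verify alignment: the partition's start should print as 2048
parted /dev/sdb unit s print
```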

After they have been formatted, I am planning to do this:
mdadm --create /dev/md0 --verbose --chunk=_ --level=6 --raid-devices=4 /dev/sd[xxxx]1

Q9: Create an LV using LVM, or a filesystem like xfs, ext2, ext3, ext4 etc?
mke2fs,
or mkfs.xfs,
or another filesystem,
or something with LVM
(specify a block size of 4k). Mount /dev/md0 at /STORAGE
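If ext4 goes directly on the md device, the stride and stripe-width hints follow from the chunk size and the number of data disks. A sketch of the arithmetic, assuming a 512KB chunk and a 4-disk raid6 (two data disks):

```shell
# Assumed values: 512 KB chunk, 4-disk RAID 6 -> 2 data disks
CHUNK_KB=512
BLOCK_KB=4          # ext4 block size of 4 KiB
DATA_DISKS=2        # RAID 6 with n disks has n-2 data disks
STRIDE=$((CHUNK_KB / BLOCK_KB))
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))
echo "mkfs.ext4 -b 4096 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH /dev/md0"
```

Running the echoed command formats the array; after that, mount it with `mount /dev/md0 /STORAGE` and add a matching fstab entry.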

Q10: What to do with '/etc/mdadm/mdadm.conf'?
The settings have to be saved there to load them at startup, right?
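Yes; on Ubuntu the array definition can be appended to the config and the initramfs refreshed so the array assembles at boot. A sketch (run as root):

```shell
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u    # make the early boot environment aware of the array
cat /proc/mdstat       # confirm the array is assembled
```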

Q11: Adding another 2TB disk to the array?
mdadm --add /dev/md0 /dev/sd_ #where _ is the new disk
mdadm --grow /dev/md0 --raid-devices=_ #where _ is the new number of disks in the array
Resize the filesystem across the new reshaped array?
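Roughly, yes. With ext4 directly on the md device, the full sequence would look like this (a sketch; device names and the member count are examples, run as root):

```shell
mdadm --add /dev/md0 /dev/sde1          # the new disk's partition
mdadm --grow /dev/md0 --raid-devices=5  # reshape to 5 members
# Wait for the reshape to finish before resizing:
watch cat /proc/mdstat
resize2fs /dev/md0                      # grow ext4 to fill the reshaped array
```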

Q12: Set up regular scrubbing of the array, cronjob perhaps?
Add this in cron.weekly:
echo check > /sys/block/md0/md/sync_action
I found out there is already such a scrub done monthly by default.
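On Debian and Ubuntu the packaged job is /etc/cron.d/mdadm, which runs checkarray on the first Sunday of each month. A manual scrub via sysfs looks like this (sketch, run as root):

```shell
echo check > /sys/block/md0/md/sync_action   # start a scrub
cat /sys/block/md0/md/mismatch_cnt           # inspect the result afterwards
echo idle > /sys/block/md0/md/sync_action    # abort a running scrub if needed
```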

Q13: Share md0 using samba, cifs, nfs or something else?
Went with samba, works fine
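For anyone following along, a minimal smb.conf share for the mount point, assuming the array is mounted at /STORAGE (the share name and user are examples):

```
[storage]
   path = /STORAGE
   browseable = yes
   read only = no
   valid users = damorgue
```

Then add the Samba user with `sudo smbpasswd -a damorgue` and restart the smbd service.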

Q14: Set up link aggregation?

https://help.ubuntu.com/community/UbuntuBonding
Or:
https://help.ubuntu.com/community/LinkAggregation
Or:
http://manpages.ubuntu.com/manpages/lucid/man4/lagg.4freebsd.html
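For 10.04 with the ifenslave package installed, an /etc/network/interfaces sketch for 802.3ad bonding (interface names and addresses are examples; the switch ports must also be configured for LACP):

```
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
```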

Q15: What needs to have 'sudo' in front of it, i.e. administrator rights? I guess I'll notice as I go

damorgue
April 7th, 2011, 06:02 PM
Ok, I went ahead and have tried a lot since then; here is my current situation.

I went for raid6 instead of raid5 to solve the problem of UREs during rebuild with these cheap drives. Speaking of drives, I went with the WD20EARS-0MVWB0, the new 3-platter version of the 2TB. Below is a bench of one of them; they all perform the same.
http://img4.imageshack.us/img4/9315/singledisk31ncq.png

I have 4 of these 2TB disks that I have created a raid6 array out of. It might seem stupid to put only four disks in raid6, but I plan to add more. I actually already have more of them, but I wish to create the array and then move the data from a couple of other 2TB disks to the array, after which I can add the now-empty disks to the array.

This setup is expected to perform as follows:
Read: 90% * average read speed of one disk * number of disks
0.9 * 100 * 4 = 360 MB/s

Write: 90% * average write speed of one disk * (number of disks - 2)
0.9 * 100 * 2 = 180 MB/s

Is this expectation correct?
Is 90% a reasonable efficiency figure to account for the losses in a RAID array?
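The expectation above reduces to simple arithmetic; a sketch, where the 90% efficiency figure and the 100 MB/s per-disk speed are the assumptions from this post:

```shell
EFF=90        # assumed array efficiency, percent
SPEED=100     # assumed average speed of one disk, MB/s
N=4           # disks in the array
READ_MBS=$(( EFF * SPEED * N / 100 ))          # read: all disks contribute
WRITE_MBS=$(( EFF * SPEED * (N - 2) / 100 ))   # write: raid6 loses two disks to parity
echo "expected read ~${READ_MBS} MB/s, write ~${WRITE_MBS} MB/s"
```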

I have tried creating the array with several different settings:
Chunk sizes (aka stripe sizes) from 64KB to 512KB
Read-ahead set to 65536 instead of the default value
stripe_cache_size set to everything from 256 to 32768
Max NCQ queue depth set to values between 1 and 31
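For reference, those knobs map to commands like these (a sketch; the values are ones tried above, and the device names are examples; run as root):

```shell
blockdev --setra 65536 /dev/md0                   # read-ahead, in 512-byte sectors
echo 8192 > /sys/block/md0/md/stripe_cache_size   # raid5/6 stripe cache, in pages
echo 1 > /sys/block/sdb/device/queue_depth        # per-disk NCQ queue depth
```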

I have a lot of graphs from my testing, but I can summarize it like this:
Setting the max queue depth to 1 on the disks in the array increases performance by a couple of percent. Read-ahead increased performance slightly, stripe cache increased it slightly further, and a chunk size of 256 or 512 seems to perform best. The differences between all these settings are, however, negligible. I am far from the performance some other people are getting from similar arrays, as seen below.
http://img855.imageshack.us/img855/9330/512chunk1ncq32768cache.png

I am quite sure that I have partitioned the disks correctly since
A) the partitions begin at sector 2048, and since
B) a raid0 array created with the same disks performs as it should, reaching average reads and writes of about 350 MB/s.
http://img146.imageshack.us/img146/2012/raid0256chunk31ncqtry2.png
This is close to 0.9 * 100 * 4, which is the expected performance. One can thus conclude that the partitions are aligned.

What have I missed?
Is there anything else I can do to increase performance?

If a mod reads this, feel free to move it to another category like Server, since I have done everything but the performance tests in the terminal. That might be more appropriate.

damorgue
April 7th, 2011, 11:17 PM
Are my expectations way off?
If the expected read performance is:
0.9 * (n-2) * (read speed of one disk)
then the array performs quite excellently.

Edit: It would seem that both read and write perform at 0.9 * (n-2) * (speed) with raid6; stupid me.