-
HOWTO: Linux Software Raid using mdadm
1) Introduction:
Recently I went out and bought myself a second hard drive with the purpose of setting up a performance raid (raid0). It took me days and a lot of messing about to get sorted, but once I figured out what I was doing I realised that it's actually relatively simple, so I've written this guide to share my experiences :) I went for raid0 because I'm not too worried about losing data, but if you wanted to set up a raid 1, raid 5 or any other raid type then a lot of the information here would still apply.
2) 'Fake' raid vs Software raid:
When I bought my motherboard (the ASRock ConRoeXFire-eSATA2), one of the big selling points was an onboard raid; however, some research revealed that rather than being a true hardware raid controller, this was in fact more than likely what is known as 'fake' raid. I think Wikipedia explains it quite well: http://en.wikipedia.org/wiki/Redunda...ependent_disks
Quote:
Hybrid RAID implementations have become very popular with the introduction of inexpensive RAID controllers, implemented using a standard disk controller and BIOS (software) extensions to provide the RAID functionality. The operating system requires specialized RAID device drivers that present the array as a single block based logical disk. Since these controllers actually do all calculations in software, not hardware, they are often called "fakeraids", and have almost all the disadvantages of both hardware and software RAID.
After realising this, I spent some time trying to get this fake raid to work - the problem is that although the motherboard came with drivers that let Windows see my two 250 GB drives as one large 500 GB raid array, Ubuntu just saw the two separate drives and ignored the 'fake' raid completely. There are ways to get this fake raid working under Linux, but if you are presented with this situation then my advice is to abandon the onboard raid controller and go for software raid instead. I've seen arguments as to why software raid is faster and more flexible, but I think the best reason is that software raid is far easier to set up! :)
3) The Basics of Linux Software Raid:
For the basics of raids try looking on Wikipedia again: http://en.wikipedia.org/wiki/Redunda...ependent_disks. I don't want to discuss it myself because it's been explained many times before by people who are far more qualified to explain it than I am. I will however go over a few things about software raids:
Linux software raid is more flexible than hardware raid or true raid because rather than forming raid arrays between identical disks, the raid arrays are created between identical partitions. As far as I understand, if you are using hardware raid between (for example) two disks, then you can either create a raid 1 array between those disks, or a raid 0 array. Using software raid, however, you could create two sets of identical partitions on the disks, and form a raid 0 array between two of those partitions and a raid 1 array between the other two. If you wanted to, you could probably even create a raid array between two partitions on the same disk! (not that you would want to!)
The process of setting up a raid array is simple:
- Create two identical partitions
- Tell the software what the name of the new raid array is going to be, what partitions we are going to use, and what type of array we are creating (raid 0, raid 1 etc...)
Once we have created this array, we then format and mount it in a similar way to the way we would format a partition on a physical disk.
4) Which Live CD to use:
You want to download and burn the alternate install Ubuntu cd of your choosing, for example, I used:
Code:
ubuntu-6.10-alternate-amd64.iso
If you boot up the Ubuntu desktop live CD, you will need to install mdadm before you can access any software raid arrays:
Code:
sudo apt-get update
sudo apt-get install mdadm
Don't worry too much about this for now - you will only need this if you ever use the Ubuntu desktop cd to fix your installation, the alternate install cd has the mdadm tools installed already.
5) Finally, lets get on with it!
Boot up the installer
Boot up the alternate install CD and run through the text based installation until you reach the partitioner, and select "Partition Manually".
Create the partitions you need for each raid array
You now need to create the partitions which you will (in the next step) turn into software raid arrays. I recommend using the space at the start, or if your disks are identical, the end of your disks. That way once you've set one disk up, you can just enter exactly the same details for the second disk. The partitioner should be straightforward enough to use - when you create a partition which you intend to use in a raid, you need to change the type to "Linux RAID Autodetect".
How you partition your installation is up to you, however there are a few things to bear in mind:
- If (like me) you are going for a performance raid, then you will need to create a separate /boot partition, otherwise grub won't be able to boot - it doesn't have the drivers needed to access raid 0 arrays. It sounds simple, but it took me so long to figure out.
- If, on the other hand, you are doing a server installation (for example) using raid 1 / 5 and the goal is reliability, then you probably want the computer to be able to boot up even if one of the disks is down. In this situation you need to do something different with the /boot partition again. I'm not sure how it works myself, as I've never used raid 1, but you can find some more information in the links at the end of this guide. Perhaps I'll have a play around and add this to the guide later on, for completeness sake.
- If you are looking for performance, then there isn't much point in creating a raid array for swap space. The kernel can manage multiple swap spaces by itself (we will come onto that later).
- Again, if you are looking for reliability however, then you may want to build a raid partition for your swap space, to prevent crashes should one of your drives fail. Again, look for more information in the links at the end.
On my two identical 250 GB drives, I created two 1 GB swap partitions, two +150 GB partitions (to become a raid0 array for my /home space), and two +40 GB partitions (to become a raid0 array for my root space), all inside an extended partition at the end of my drives. I then also created a small 500 MB partition on the first drive, which would become my /boot space. I left the rest of the space on my drives for ntfs partitions.
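If you are doing this by hand rather than in the installer, the partition-type step above can be sketched from a shell. This is a hypothetical example (not from the original post): the device names and the exact sizes are up to you.

```shell
# Mark a partition as "Linux raid autodetect" (type fd) using fdisk.
# /dev/sda and /dev/sdb are example disks - adapt to your own drives.
sudo fdisk /dev/sda
# inside fdisk:
#   n    create a new partition (set the size you want)
#   t    change the partition type
#   fd   the hex code for "Linux raid autodetect"
#   w    write the partition table and exit
sudo fdisk /dev/sdb   # repeat with identical sizes on the second disk
```

The important part is that the two partitions you pair up are the same size; mdadm will otherwise shrink the array to the smaller member.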
Assemble the partitions as raid devices
Once you've created your partitions, select the "Configure software raid" option. The changes to the partition table will be written to the disk, and you will be allowed to create and delete raid devices - to create a raid device, simply select "create", select the type of raid array you want to create, and select the partitions you want to use. Remember to check which partition numbers you are going to use in which raid arrays - if you forget, hit <Esc> a few times to bring you back to the partition editor screen where you can see what's going on.
Tell the installer how to use the raid devices
Once you are done, hit finish - you will be taken back to the partitioner where you should see some new raid devices listed. Configure these in the same way you would other partitions - set their mount points, and decide on their filesystem types.
Finish the installation
Once you are done setting up these raid devices (and any swap / boot partitions you decided to keep as non-raid), the installation should run smoothly.
6) Configuring Swap Space
I mentioned before that the linux kernel automatically manages multiple swap partitions, meaning you can spread swap partitions across multiple drives for a performance boost without needing to create a raid array. A slight tweak may be needed however; each swap partition has a priority, and if you want the kernel to use both at the same time, you need to set the priority of each swap partition to be the same. First, type
Code:
cat /proc/swaps
to see your current swap usage. Mine outputs the following:
Code:
Filename     Type       Size    Used   Priority
/dev/sda5    partition  979956  39080  -1
/dev/sdb5    partition  979956  0      -2
As you can see, the second swap partition isn't being used at the moment, and won't be until the first one is full. I want a performance gain, so I need to fix this by setting the priority of each partition to be the same. Do this in /etc/fstab, by adding pri=1 as an option to each of your swap partitions. My /etc/fstab file now looks like this:
Code:
# /dev/sda5
UUID=551aaf44-5a69-496c-8d1b-28a228489404 none swap sw,pri=1 0 0
# /dev/sdb5
UUID=807ff017-a9e7-4d25-9ad7-41fdba374820 none swap sw,pri=1 0 0
7) How to do things manually
As I mentioned earlier, if you ever boot into your installation with a live cd, you will need to install mdadm to be able to access your raid devices, so it's a good idea to at least roughly know how mdadm works. http://man-wiki.net/index.php/8:mdadm has some detailed information, but the important options are simply:
Code:
-A, --assemble
    Assemble a pre-existing array that was previously created with --create.
-C, --create
    Create a new array. You only ever need to do this once; if you try to create arrays with partitions that are already part of other arrays, mdadm will warn you.
--stop
    Stop an assembled array. The array must be unmounted before this will work.
When using --create, the options are:
Code:
mdadm --create md-device --chunk=X --level=Y --raid-devices=Z devices
-c, --chunk=
    Specify the chunk size in kibibytes. The default is 64.
-l, --level=
    Set the raid level; options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp, faulty.
-n, --raid-devices=
    Specify the number of active devices in the array.
for example:
Code:
mdadm --create /dev/md0 --chunk=4 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
will create a raid0 array /dev/md0 formed from /dev/sda1 and /dev/sdb1, with chunk size 4.
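As a quick sanity check (my addition, not part of the original steps), you can confirm the new array is up and healthy before formatting it:

```shell
cat /proc/mdstat              # lists active arrays and their member partitions
sudo mdadm --detail /dev/md0  # raid level, chunk size and per-device state
```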
When using --assemble, the usage is simply:
Code:
mdadm --assemble md-device component-devices
for example
Code:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
which will assemble the raid array /dev/md0 from the partitions /dev/sda1 and /dev/sdb1.
Alternatively you can use:
Code:
mdadm --assemble --scan
and it will assemble any raid arrays it can detect automatically.
Lastly,
Code:
mdadm --stop /dev/md0
will stop the assembled array md0, so long as it's not mounted.
If you wish you can set the partitions up yourself manually using fdisk and mdadm from the command line. Either boot up a desktop live cd and apt-get mdadm as described before, or boot up the alternate installer and hit escape until you see a list of the different stages of installation - the bottom one should read "execute shell" - which will drop you at a shell with fdisk, mdadm, mkfs etc... available.
Note that if you ever need to create another raid partition, you create filesystems on it in exactly the same way you would a normal physical partition. For example, to create an ext3 filesystem on /dev/md0 I would use:
Code:
mkfs.ext3 /dev/md0
And to create a swap space on /dev/sda7 I would use:
Code:
mkswap /dev/sda7
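Once a filesystem exists on the array, mounting it works just like any other block device. A sketch, with /mnt/raid as an example mount point of my choosing:

```shell
sudo mkdir -p /mnt/raid        # create a mount point (example path)
sudo mount /dev/md0 /mnt/raid  # mount the array's filesystem
df -h /mnt/raid                # confirm the size and free space
```

Add a matching line to /etc/fstab if you want it mounted at every boot.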
Lastly, mdadm has a configuration file located at
Code:
/etc/mdadm/mdadm.conf
This file is usually automatically generated, and mdadm will probably work fine without it anyway. If you're interested, http://man-wiki.net/index.php/5:mdadm.conf has some more information.
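If you do want the file populated (for example so that --assemble --scan can find your arrays at boot), one common approach is to append the scan output to it. A sketch, assuming your arrays are currently assembled:

```shell
# Appends one ARRAY line per assembled array (device name, level, UUID)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

Check the file afterwards for duplicate ARRAY lines if you run this more than once.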
And that's pretty much it. As long as you have mdadm available, you can create / assemble raid arrays out of identical partitions. Once you've assembled the array, treat it the same way you would a partition on a physical disk, and you can't really go wrong! :)
I hope this has helped someone! At the moment I've omitted certain aspects of dealing with raids with redundancy (like raid 1 and raid 5), such as rebuilding failed arrays, simply because I've never done it before. Again, I may have a play around and add some more information later (for completeness), or if anyone else running a raid 1 wants to contribute, it would be most welcome.
Other links
The Linux Software Raid Howto:
http://tldp.org/HOWTO/Software-RAID-HOWTO.html
This guide refers to a package "raidtools2" which I couldn't find in the Ubuntu repositories - use mdadm instead, it does the same thing.
Quick HOWTO: Linux Software Raid
http://www.linuxhomenetworking.com/w..._Software_RAID
Using mdadm to manage Linux Software Raid arrays
http://www.linuxdevcenter.com/pub/a/...2/05/RAID.html
Ubuntu Fake Raid HOWTO In the community contributed documentation
https://help.ubuntu.com/community/Fa...ght=%28raid%29
I hope this guide helps people understand and set up their raids :)
-
Re: HOWTO: Linux Software Raid using mdadm
Thanks very much for this - it worked flawlessly!
-
Re: HOWTO: Linux Software Raid using mdadm
Great howto! Please consider adding this to the community documentation wiki. There is currently nothing in there (at least that I could find) on the use of mdadm.
-MG
-
Re: HOWTO: Linux Software Raid using mdadm
Excellent information and really useful, as I am about to set up a RAID 5 myself and very much needed some pointers. I already have a working install of Feisty so it looks like I will have to install mdadm and go from there. I'll let you know if it works out ;)
-
Re: HOWTO: Linux Software Raid using mdadm
so i followed your guide, and i did the manual configuration because the minimal server installer CD was giving me problems.
so I created /dev/md0 and used mdadm --create to create a RAID0.
I made use of your note about RAID0 and boot partitions, so I created one on my first disk and made sure the two disks matched and all.
The install went perfectly fine, but when I reboot i get:
Code:
Starting up ...
Loading, please wait...
mdadm: No devices listed in conf file were found.
and nothing happens.
where is the config file stored? how do i get it to reassemble my raid disks on boot?
thanks!
-
Re: HOWTO: Linux Software Raid using mdadm
My feisty install hangs at the exact same spot when one of the discs of my raid5 array is missing. (which I don't understand, since it should still be able to run on the 2 remaining drives, since that's the whole point of raid5) You might try using the instructions on accessing your array via the desktop live cd, to make sure the array is indeed there and not corrupted somehow. If it's corrupted, with a raid0 array, you'd be missing your whole root fs, and the kernel would have nothing else to load.
-
Re: HOWTO: Linux Software Raid using mdadm
I just discovered that if you let it sit, it'll timeout and dump you to the initramfs prompt, and from there you should be able to see exactly why it's not booting correctly.
My guess is that mdadm is trying to build the array before udev has found all the devices, so your raid0 array isn't initializing. And therefore, you have no / filesystem.
At least this is what I gleaned from the bug reports about similar symptoms.
-
Re: HOWTO: Linux Software Raid using mdadm
Thanks for this howto!! This is great. I wish I knew that Feisty alternate-install (x86-64) CD didn't have an installer option for software RAID before I nuked my newly installed LTSP server :( Oh well. A RAID-1/5 HOWTO would be awesome. Maybe I'll try and write one up, although I'm not very experienced at writing HOWTOs.
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
gnychis
so i followed your guide, and i did the manual configuration because the minimal server installer CD was giving me problems.
so I created /dev/md0 and used mdadm --create to create a RAID0.
I made use of your note about RAID0 and boot partitions, so I created one on my first disk and made sure the two disks matched and all.
The install went perfectly fine, but when I reboot i get:
Code:
Starting up ...
Loading, please wait...
mdadm: No devices listed in conf file were found.
and nothing happens.
where is the config file stored? how do i get it to reassemble my raid disks on boot?
thanks!
bahhh... i fixed my problem by removing 'savedefault' from the grub config
-
Re: HOWTO: Linux Software Raid using mdadm
Question:
I have 2 disks atm which I want to make into one software raid (RAID0)
If at a later time I get a third new disk, can I append this to the same raid without losing any data in the current raid?
-
Re: HOWTO: Linux Software Raid using mdadm
I have a Raid5 setup with 3 250GB ATA discs. Two of the disks are Maxtor 250GB and one is a 250GB Seagate. The Maxtor discs are a bit bigger than the Seagate.
But...
There is something wrong with my setup. Once I reboot my system, the array is broken. mdadm says two of the three discs are broken? Where should I start searching for the error?
Here's my system
AMD Duron 700MHz
512M SDRAM
200GB System
3x250GB RAID
2x Promise PCI ATA controller
Ubuntu 7.04 Server.
The mobo has its own HPT730 RAID controller. (Don't remember the model)
And sorry for my bad english.
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
Klingon
I have 2 disks atm which I want to make into one software raid (RAID0)
If at a later time I get a third new disk can I append this to the same raid without loosing any data in the current raid?
Although I haven't done it myself, I think you can. Check out http://ubuntuforums.org/showthread.p...&highlight=md0
Cocodude
-
Re: HOWTO: Linux Software Raid using mdadm
Receiving an error saying device in use when running this command?
Quote:
mdadm --create /dev/md0 --chunk=4 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
Run the command with --verbose to get more detail on the "device busy" error.
-
Re: HOWTO: Linux Software Raid using mdadm
I have gutsy server version and can't get software RAID to work. mdadm isn't even installed. If I try to apt-get it, I get:
root@server:/usr/bin# apt-get install mdadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package mdadm is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package mdadm has no installation candidate
apt-get update works fine when I set everything to New Zealand servers (the Australian repository is always down - thanks for that Optus). I tried manually downloading mdadm and doing an install but make isn't even installed on gutsy. Installing make doesn't work either because of other dependencies, and further down the rabbit hole I go.
What am I doing wrong?
-
Re: HOWTO: Linux Software Raid using mdadm
Hi All
i want to setup ubuntu 6.10 with software raid on a Sun Qube 3
is there a little howto for help?
the qube does not support virtual tty's
hope to get it alive
how can a raid be rebuilt easily after a crash?
thanks tom
germany
-
Re: HOWTO: Linux Software Raid using mdadm
Please don't forget the all-important step of adding your new md array to the fstab file. This will ensure your array is mounted at bootup. I thought this was worth mentioning again (the original post somewhat glossed over it).
here is a perfect example
-
Re: HOWTO: Linux Software Raid using mdadm
I am going to say thank you because you have pointed me in the right direction.
I am using an ASRock m/b and was trying to use the on board RAID. If I do that I see no drives in Ubuntu.
Using the Non-RAID settings of the m/b at least allows Ubuntu to see the disks. I have not been able to get installed yet but I am heading in the right direction.
I will update when I have got things working
-
Re: HOWTO: Linux Software Raid using mdadm
Further to my earlier comments it is an issue with Dapper.
I was trying to use Dapper because it is the LTS version, but Dapper only sees the first SATA drive. No matter how many drives there are or where they are plugged in, only the first available drive is shown.
I know I can see them all in Gutsy - trouble is at the moment I cannot get the Live CD to install. I get the first menu up but then it sits and goes nowhere.
I am at this very moment (on another machine) downloading the alternate CD so I might update this post later.
Bye for now!!
-
Re: HOWTO: Linux Software Raid using mdadm
Once again thanks to kragen for this thread.
However......
I wish it were that simple. I am now on my umpteenth attempt at setting up a server with a mirrored RAID.
My history so far.
I want to set up a web server with a RAID so that I can have some measure of security for the sites I am going to host. So initially I thought I will use DAPPER as this is the LTS offering. Only to find that Dapper cannot cope with more than 1 SATA drive - it does not seem to recognise the others. What ho, never mind. I look at Plesk website and they offer support on Gutsy as well.
So I download the image from the Ubuntu website for AMD64. I took the standard version. Ran the CD which gave me the option to install and it just sat there looking at me. I tried all the other options on the menu - the only one that did something was the check CD option.
Never mind, maybe it corrupted on its way to the CD, so burnt another copy. Same - no options would go anywhere.
Well maybe it corrupted during the download. Downloaded another copy from a different mirror. Burnt the CD - just the same. (if anyone wants a CD I have a few spare!!!)
I found the alternate version for the AMD64 desktop on the website and downloaded. Burnt the image to disk. Ran the installer. It worked!!!
Next I got to the partitioner to find that I had a disk missing. Checked all connections and restarted the installer. I found all my disks. Then set about partitioning.
OK I have three disks. 1 x 80Gb which I am using to install OS and normal stuff. 2 x 500Gb which are going to be mirrored for the sites.
This is where I differ from kragen in his (I assume his and not her - apologies if I have wrong gender) comments.
and this is where I am going to be slightly critical. Kragen sets up how he is going to partition his disk. 1Gb for swap, 150Gb for RAID / home, 40Gb for root, some space for boot. My criticism is that for someone who has never used a partitioner there is insufficient detail.
for example. Go to disk 1. set up partition by a) stating the format to be used, b) saying how it is to be formatted, c) giving it the mount point and so on.
On my system I have set partitions on the disks I am using for RAID as being Physical volume for RAID and I do this on both but then I have to set a RAID partition giving the same details - or so I think.
When I have worked out precisely what I have had to do I will create a new thread - referencing kragen .
I have twice got past the point of partitioning the disks only to have the install fail further down the line. When I have nailed the little ...... I will wax lyrical once again.
Bye for now
-
Re: HOWTO: Linux Software Raid using mdadm
I am trying to put together a raid1 configuration using two drives one of which is 200g and the other is 250g. I have one using the whole drive ending on cylinder 23241 and then created two partitions on the second, one ending at 23241 also and the other taking up the rest of the drive.
First off, should I even be trying this with two different drives (the only physical difference according to hdparm is a buffer of 8MB on one and 2MB on t'other)?
Secondly, and my problem now, is that mdadm stalls at 21.9% and slowly brings the computer to a freeze while doing a resync. Can anyone help me?
-
Re: HOWTO: Linux Software Raid using mdadm
Next time someone just remind me to check my logs. There must have been 200 read/write errors on one of the drives . . . that'll do it.
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
mgrusin
Great howto! Please consider adding this to the community documentaion wiki. There is currently nothing in there (at least that I could find) on the use of mdadm.
-MG
There is now: I started writing this
https://help.ubuntu.com/community/In...n/SoftwareRAID
It's not complete, but that's coming.
-
Re: HOWTO: Linux Software Raid using mdadm
This has helped me more than once. I switched motherboards and didn't realize that it changed hda, etc., to sda, etc., so after a lot of tooling around with it, this guide helped me figure that out!
-
Re: HOWTO: Linux Software Raid using mdadm
I have software RAID, different levels (0 or 1), on several partitions, one of which is empty. I want to install on the empty one, but I can't see how to do this without nuking the whole array with the installer. If I chroot with the alternate CD, I see my md devices, likewise if I install mdadm on the desktop CD, but neither installer will show the md devices.
-
Re: HOWTO: Linux Software Raid using mdadm
I am not sure I understand - do you already have a version of ubuntu installed on partition/s in your box?
or are you simply booting from the live CD and are wary of continuing the install to your HD? and are now trying to add the mdadm package at this stage?
the mdadm you "install" to the desktop Live CD is actually installed to ram as the CD is not altered during the live CD bootup - to create raid devices and mirrors at this stage the alternate install CD is needed
what I think you are trying to do is install ubuntu from a liveCD and simultaneously maintain the integrity of the other data and OS's on your box by installing ubuntu to a spare partition - being careful - right?
let us know if this is the case?
regards
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
crtlbreak
I am not sure I understand - do you already have a version of ubuntu installed on partition/s in your box?
or are you simply booting from live CD and are weary to continue the install to your HD? and are now trying to add the mdadm package at this stage?
the mdadm you "install" to the desktop Live CD is actually installed to ram as the CD is not altered during the live CD bootup - to create raid devices and mirrors at this stage the alternate install CD is needed
what I think you are trying to do is install ubuntu from a liveCD and simultaneously maintain the integrity of the other data and OS's on your box by installing ubuntu to a spare partition - being careful - right?
let us know if this is the case?
regards
Yes, that is exactly what I want to do. I have Ubuntu installed, but I want to do a parallel installation, so I could boot into either. I want to install a separate OS without messing up the current one.
I understand that when I install mdadm on a live CD, it only exists in RAM. I was just hoping that I could trick the installer into recognizing the md devices.
-
Re: HOWTO: Linux Software Raid using mdadm
grub will manage the separate installations quite fine if they are on different partitions - also not sure if you are installing the same OS version eg 8.04 hardy heron twice but on different partitions - I think that might need a bit more research?
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
crtlbreak
grub will manage the separate installations quite fine if they are on different partitions - also not sure if you are installing the same OS version eg 8.04 hardy heron twice but on different partitions - I think that might need a bit more research?
I'll deal with grub when I come to that. I'm not at all worried about grub. I put /boot on its own partition so I can do this sort of thing.
I want / on a RAID0 device. /boot, /home, and / for the already installed OS are on their own RAID devices (RAID 0 or 1). My roadblock is that the Ubuntu installer will only install to physical partitions (e.g. /dev/sda3) but not general block devices (e.g. /dev/md2), OR it will install to a softraid device but requires destroying the disk. I can't find a workaround, but it seems like a dumb limitation, so I'm hoping there is one.
I have Hardy installed, and I want to install Gutsy.
This is for testing the upgrade procedure to Hardy, because I encountered a problem and I want to see if it's repeatable. After coming across this problem, it is now just as much for my own curiosity as it is for investigating the upgrade. I can't understand why such a limitation would be built into the installer. What if I nuked my OS? I would have to nuke my disk (along with my /home) to reinstall with softraid as well?
-
Re: HOWTO: Linux Software Raid using mdadm
from the "alternate" installation CD the installer will read your MD devices and you can choose to "do not format the partition" under the installation process. If you are installing on to a newly created partition/LVM/blockdevice then your existing partitions and data integrity should be unaffected if the correct selections have been made under the installation process.
you are in the classic situation where trying to solve or investigate one issue has created another far greater and more complex issue to try and resolve - how many times have we all been there before??:rolleyes::rolleyes:
a possible suggestion is to use an alternate physical disk rather than mess up your existing system
I will always apply my five rules
5 Rules of life
- Have you made a backup or can you recover?
- If its not broken - try and fix it till its broken as it probably needed replacing anyway
- Do not try and destroy one working object in the possibility of repairing another - you will end up with two broken objects.
- Paddy's law says that Murphy was an optimist.
- No major changes on a Friday - especially after lunchtime
The rules are best applied in order - I think you are possibly past rule 1 entering in to rule 2? :-\"#-o
-
random RAID mdadm daemon checks?
I decided to switch from SuSE to Ubuntu, and I am configuring my first production MySQL server with 8.04.
I am using software RAID 1.
The machine is not yet deployed, it's sitting in the office.
Suddenly, i hear loud fan noise, even though there are no tasks running, and I check the load average: 2.8! Using "top" I notice that
md2_resync process is the culprit.
Then I look at /proc/mdstat and see:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[0] sdb3[1]
135347520 blocks [2/2] [UU]
[============>........] check = 62.9% (85253248/135347520) finish=7.9min speed=125122K/sec
So apparently, mdadm daemon decided to check the array.
How can I disable or control the schedule of these "checks" without having to disable the mdadm daemon altogether in /etc/init.d?
It's not acceptable for a production database machine to suddenly start a maintenance process that saturates its I/O bandwidth.
-
Re: random RAID mdadm daemon checks?
I've never seen my array do that. Are you sure there wasn't a hw prob with one of the drives?
-
Re: HOWTO: Linux Software Raid using mdadm
There is a lot of messing about with Ubuntu when it comes to setting up raid, and for only 2 HDDs you won't really see the difference. I hope in the next Ubuntu version they make it as simple as some of the other distros.
A name like "Raiding Rabbit" would be cool
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
AmJaD
for only 2 HDDz you wont really see the difference. i hope in the next ubuntu version they make it as simple as some of the other distros.
Which distros have simple/automatic support for RAID?
Also, in the process of configuring/testing that my RAID1 would boot degraded (by simply unplugging one drive), I found that the array booted up and loaded the OS, apps, etc. about 25% faster than 1 drive alone. Not sure if that indicates an actual speed advantage to RAID1 (though it's supposed to intelligently distribute read requests between both drives) or if the degraded array simply ran slower than a 'normal' 1-drive system.
-
Re: random RAID mdadm daemon checks?
Quote:
Originally Posted by
bsmith1051
I've never seen my array do that. Are you sure there wasn't a hw prob with one of the drives?
I doubt that, for several reasons:
1) When it does a rebuild, it looks like
[>....................] recovery = 0.1%
in my case, it was
[>....................] check = 0.1%
2) if there were a hardware problem, it would not try to check or even rebuild; it would simply mark the disk as failed [U_].
3) I saw this thread http://lists.clug.org.za/pipermail/c...er/024059.html :
> Yes, current software RAID has an option to check the arrays periodically.
> Apparently you have this enabled. I'm not sure that checking involves
> re-syncing though. Perhaps there was some sort of problem found? If so it
> should be in your logs.
In my logs:
Aug 2 20:24:39 db2 -- MARK --
Aug 2 20:44:39 db2 -- MARK --
Aug 2 21:04:39 db2 -- MARK --
Aug 2 21:24:39 db2 -- MARK --
Aug 2 21:44:39 db2 -- MARK --
Aug 2 22:04:39 db2 -- MARK --
Aug 2 22:24:39 db2 -- MARK --
Aug 2 22:44:39 db2 -- MARK --
Aug 2 23:04:39 db2 -- MARK --
Aug 2 23:24:39 db2 -- MARK --
Aug 2 23:44:39 db2 -- MARK --
Aug 3 00:04:39 db2 -- MARK --
Aug 3 00:24:39 db2 -- MARK --
Aug 3 00:44:39 db2 -- MARK --
Aug 3 01:04:39 db2 -- MARK --
Aug 3 01:06:01 db2 kernel: [187105.414819] md: data-check of RAID array md0
Aug 3 01:06:01 db2 kernel: [187105.414824] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Aug 3 01:06:01 db2 kernel: [187105.414826] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
No I/O errors prior to this "data-check", so it seems quite sudden.
I wonder if this check is an addition in the newest version of mdadm (I am using Ubuntu 8.04 server with mdadm v2.6.3, 20th August 2007).
I cannot find more information on this, however.
-
[SOLVED] random RAID mdadm daemon checks?
Quote:
Originally Posted by
alecm3
I doubt that, for several reasons:
1) When it does a rebuild, it looks like
[>....................] recovery = 0.1%
in my case, it was
[>....................] check = 0.1%
2) if there were a hardware problem, it would not try to check or even rebuild; it would simply mark the disk as failed [U_].
3) I saw this thread
http://lists.clug.org.za/pipermail/c...er/024059.html :
> Yes, current software RAID has an option to check the arrays periodically.
> Apparently you have this enabled. I'm not sure that checking involves
> re-syncing though. Perhaps there was some sort of problem found? If so it
> should be in your logs.
In my logs:
Aug 2 20:24:39 db2 -- MARK --
[...]
Aug 3 01:04:39 db2 -- MARK --
Aug 3 01:06:01 db2 kernel: [187105.414819] md: data-check of RAID array md0
Aug 3 01:06:01 db2 kernel: [187105.414824] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Aug 3 01:06:01 db2 kernel: [187105.414826] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
No I/O errors prior to this "data-check", so it seems quite sudden.
I wonder if this check is an addition in the newest version of mdadm (I am using Ubuntu 8.04 server with mdadm v2.6.3, 20th August 2007).
I cannot find more information on this, however.
This problem is listed as Ubuntu bug:
https://bugs.launchpad.net/ubuntu/+s...ux/+bug/212684
There is a cron script
/etc/cron.d/mdadm
which runs /usr/share/mdadm/checkarray on the first Sunday of each month (which happened yesterday).
root@db2:/etc/cron.d# more mdadm
#
# cron.d/mdadm -- schedules periodic redundancy checks of MD devices
#
# Copyright © martin f. krafft <madduck@madduck.net>
# distributed under the terms of the Artistic Licence 2.0
#
# $Id$
#
# By default, run at 01:06 on every Sunday, but do nothing unless the day of
# the month is less than or equal to 7. Thus, only run on the first Sunday of
# each month. crontab(5) sucks, unfortunately, in this regard; therefore this
# hack (see #380425).
6 1 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet
Apparently, this cron job has killed the system for quite a lot of people:
http://ubuntuforums.org/showthread.php?t=748418 etc
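For anyone else hitting this, a few ways to rein the check in (a sketch based on the Debian/Ubuntu mdadm packaging of that era; paths and array names may differ on your system):

```shell
# Cancel a data-check currently running on md2 (as root):
echo idle > /sys/block/md2/md/sync_action

# Throttle checks/resyncs so they can't saturate I/O (value is KB/sec per disk):
echo 10000 > /proc/sys/dev/raid/speed_limit_max

# Skip the monthly run entirely by commenting out the job in /etc/cron.d/mdadm,
# or cancel in-progress checks on all arrays via the packaged helper:
/usr/share/mdadm/checkarray --cancel --all
```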
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
bsmith1051
Not sure if that indicates an actual speed advantage to RAID1 (though it's supposed to intelligently distribute read requests between both drives) or if the degraded array simply ran slower than a 'normal' 1-drive system.
Reads on RAID 1 with N disks can be up to N times faster than on a single disk, since read requests can be distributed across the mirrors. Writes are the same speed or slightly slower.
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
bsmith1051
Which distros have simple/automatic support for RAID?
Well, there is Fedora. That was painless when it came to setting up RAID, but for some reason I just didn't like it.
Also, with RAID 1... that's more mirroring (hard drive backup). I don't really see the point on Linux desktops (I mean, it's not like you're going to get attacked by a Win32 worm, is it?).
RAID 0 is more about performance.
I was running Vista with 2 HDDs, but I really noticed the difference when I had 4 HDDs together (RAM bus speed plays a big part as well).
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
bsmith1051
Which distros have simple/automatic support for RAID?
:-k :-k
I think simple & automatic are not necessarily the RAID and *nix approach to anything. Don't get me wrong, I am not being critical of your comments or of the Linux world; it's just the general approach of "rigid, manual and locked down" in the *nix world versus automatic everything in the MS world.
Just my observations. :D
regards
-
Re: random RAID mdadm daemon checks?
Quote:
Originally Posted by
alecm3
.... Ubuntu, and I am configuring my first production MySQL server with 8.04. ....
Dear alecm3
Are you using Ubuntu Server? The respective kernels of the desktop and server editions are different, with the server kernel tweaked more towards server workloads than desktop use.
Regards
-
Re: random RAID mdadm daemon checks?
Quote:
Originally Posted by
crtlbreak
Dear alecm3
Are you using Ubuntu Server? I know the respective kernels of desktop and server are different with server being more tweaked to perform server functionality as opposed to desktop.
Regards
Yes, I am using 8.04.1 Hardy Heron Server...
-
Re: HOWTO: Linux Software Raid using mdadm
This is doing my head in. I created my array and can see it in mdstat, but I can't get any partitions created on it. When I fdisk /dev/md0, create the partition and write it, I get the following -
Code:
Disk /dev/md0: 1000.2 GB, 1000210300928 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00048f36
Device Boot Start End Blocks Id System
/dev/md0p1 1 121601 976760001 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
Rebooting has no effect. The partition does not exist except inside fdisk! I checked /dev/disk/by-id/ and only the device itself (plus all the physical disks) was there. If I open fdisk again the partition looks fine, but once I exit there's no sign of it.
/proc/mdstat
Code:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active linear sda1[0] sdb1[1]
976767872 blocks 64k rounding
unused devices: <none>
/etc/mdadm/mdadm.conf
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This file was auto-generated on Sat, 16 Aug 2008 14:15:06 +0800
# by mkconf $Id$
fdisk -l
Code:
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d73fb
Device Boot Start End Blocks Id System
/dev/sda1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d73fb
Device Boot Start End Blocks Id System
/dev/sdb1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sdc: 40.0 GB, 40020664320 bytes
255 heads, 63 sectors/track, 4865 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x23eb23ea
Device Boot Start End Blocks Id System
/dev/sdc1 * 1 4678 37576003+ 83 Linux
/dev/sdc2 4679 4865 1502077+ 5 Extended
/dev/sdc5 4679 4865 1502046 82 Linux swap / Solaris
Disk /dev/sdd: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0c32b920
Device Boot Start End Blocks Id System
/dev/sdd1 1 38913 312568641 83 Linux
Disk /dev/md0: 1000.2 GB, 1000210300928 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00048f36
Device Boot Start End Blocks Id System
/dev/md0p1 1 121601 976760001 83 Linux
Disk /dev/sde: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x44fdfe06
Device Boot Start End Blocks Id System
/dev/sde1 1 38913 312568641 83 Linux
Disk /dev/sdf: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x41ffc810
Device Boot Start End Blocks Id System
/dev/sdf1 1 38913 312568641 83 Linux
Disk /dev/sdg: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x44fdfe06
Device Boot Start End Blocks Id System
/dev/sdg1 1 38913 312568641 83 Linux
Help!
Allan
-
Re: HOWTO: Linux Software Raid using mdadm
Never mind - I'm an idiot. I was trying to partition the RAID device rather than just formatting it.
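For anyone else who trips over this: an md device can carry a filesystem directly, so there is no need for a partition table on it. A minimal sketch (ext3 and the mount point are just example choices):

```shell
# Create a filesystem directly on the array; no fdisk needed:
sudo mkfs.ext3 /dev/md0

# Mount it somewhere (example mount point):
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid
```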
-
Re: HOWTO: Linux Software Raid using mdadm
I followed the howto and set up a 2x500 GB RAID 0 for storage.
Now I'm moving what I want to keep there, then I plan to format my system disc and reinstall.
But the thought just hit me: how do I add the RAID array to the new system? Is it as simple as I hope it is?
-
Re: HOWTO: Linux Software Raid using mdadm
You would need to obtain the alternate install CD, not the live CD version. During the install process, at disk configuration, you will need to choose "custom" rather than just the defaults. You will then have the option to customise LVM, set up RAID 0, 1, 5, etc.
Then the rest you should be quite used to.
-
Re: HOWTO: Linux Software Raid using mdadm
Oh, that's too bad, since I'm in my new system now.
And oh, I didn't use the actual howto on page one, but one linked to in the post, the one at http://www.linuxhomenetworking.com/w..._Software_RAID
Although I don't know the difference.
How do I go about adding the discs to the system?
Hopefully without destroying what's on them...
Edit:
Is it a bad idea to just recreate the RAID array with
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
create the mdadm.conf configuration file, and
create a mount point for it?
So that I can access the data written on both discs (striped).
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
crtlbreak
You would need to obatin the alternate install CD I not he liveCD version. During the install process at disk config you will need to choose "custom" rather than just the defaults.You will then have the option to customise LVM, set up raid 0, 1, 5 etc.
Then the rest you should be quite used to.
Well, yeah, but there's not much of an "etc." there. E.g. level 10 is not available, so one has to do it "by hand."
-
Re: HOWTO: Linux Software Raid using mdadm
Problems solved, everything is now fine.
The command I was looking for was
sudo mdadm -A /dev/md0 /dev/sdb1 /dev/sdc1
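To make that assembly stick across reboots on the new install, the usual follow-up (hedged; check your mdadm version's man page) is to record the array in mdadm.conf and refresh the initramfs:

```shell
# Assemble the existing array from its members:
sudo mdadm -A /dev/md0 /dev/sdb1 /dev/sdc1

# Record the array so it is assembled automatically at boot:
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u

# Then add a line like this to /etc/fstab (filesystem and mount point are examples):
#   /dev/md0  /mnt/raid  ext3  defaults  0  2
```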
-
Re: random RAID mdadm daemon checks?
I'm trying to install Ubuntu 8.04 desktop following the instructions here: http://how2forge.org/install-ubuntu-...ftware-raid-10. Everything seems to be fine, but soon after I leave the partitioner and go into the "Installing system" window I get an error pop-up that says "The installer needs to remove operating system files from the install target, but was unable to do so. The install cannot continue."
Does anyone have any clue as to what might be happening?
Edit: I've done some poking around and it seems that the problem starts even earlier -- the partitioner does not see /dev/sdi, /dev/sdj, or /dev/sdk (which I have my /dev/md1, /dev/md2, and /dev/md3 aliased to). The only way I'm able to make the partitioner see the RAID partitions is to have them aliased (using the -f option) to /dev/sd[e-g], which are the SD, CF and MMS card readers on my system. SOS!!!
-
Re: HOWTO: Linux Software Raid using mdadm
I think that page actually has the answer for you
" ... Then create the file system on the RAID array. Format it now because the partitioner in the installer doesn't know how to modify or format RAID arrays. I used XFS file system, because XFS has great large file performance. Then you will create an alias for the RAID array with the link command because the Ubuntu installer won't find devices starting with "md"."
Does that seem logical?
Just to be sure - it is RAID 10 you are requiring, with 4 separate devices?
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
crtlbreak
I think that page actually has the answer for you
" ... Then create the file system on the RAID array. Format it now because the partitioner in the installer doesn't know how to modify or format RAID arrays. I used XFS file system, because XFS has great large file performance. Then you will create an alias for the RAID array with the link command because the Ubuntu installer won't find devices starting with "md"."
Does that seem logical?
Just to be sure - it is RAID 10 you are requiring with 4 seperate devices?
I've done *all* those steps and it did not work ... and yes, it's raid 10 with 4 identical drives. It looks like this is a known problem with a workaround (and, perhaps, a solution in future releases). One can either use a workaround (which I did not try) or alternate install CD.
Back to my story -- I thought I was stuck between a rock and a hard place, b/c alternate install CD was not detecting my keyboard (and live CD was not detecting my raid) ... eventually I got this situation resolved by making alternate CD recognize my keyboard correctly. Everything went smoothly from then on.
-
Re: HOWTO: Linux Software Raid using mdadm
Gents,
Here is an excellent manual on setting up a raid array:
http://www.linuxconfig.org/Linux_Software_Raid_1_Setup
Here it states that you can't boot if your / or /boot is raid 5.
Secondly, nobody seems to know how to kill an md array once created. The trouble is that your RAID drives have information about the RAID array on them, so to keep RAID arrays from 'magically' (via udev) showing up, you need to temporarily rename the mdadm command. I renamed mine to mdadm.x.
The situation is that my server's root file system started to fail, and I was down to one disk. For whatever reason that one disk had ext3 errors on it, but the disk that was not showing up in the array was good, so I needed to e2fsck the bad array to fix the ext3 file system and create a new temporary second drive until my new server was up (i.e. baling wire & tape).
The plan is to create 2 arrays, copy data from one to the other (so I don't copy the ext3 FS error), then delete one array, change my new mirror drive to the correct array, and place both back on my server. md0 in this case is my old drive, md1 the new, and the array on my server I am trying to fix is md0 (which is why md0 is my old drive).
I created two separate arrays on my development box with the old drive and my new mirror, each created with one device missing:
sudo mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1
(I did that for md0 & md1), then created a new ext3 fs on my new mirror, then mounted & copied the data from md1 to md0. But now I want to kill md1, and all attempts to do that failed... until I renamed mdadm to mdadm.x.
So go through the steps to fail your drives out:
mdadm.x /dev/md1 -f /dev/sdb1
mdadm.x /dev/md1 -r /dev/sdb1
Then stop the array:
mdadm.x -S /dev/md1
Then remove the drives from the system (I used a USB-IDE adapter... much easier). Then add back the new drive, which has become /dev/sdb1, and re-create the md0 array using /dev/sdb1:
mdadm.x --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1
Then fail the drive & move it to the server.
A further note, for those of you who want a picture of the structure of software RAID: think of the hard drive as a container. In that container you make partitions (not always, but for the most part you do), and these partitions are also containers. Normally you would put a file system straight into a partition container, but software RAID adds one more level: you put a software RAID container inside the partition container, and then finally the file system, which is yet another type of container... it holds information on the data stored on the drive. Not sure if that really helped or not...
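For what it's worth, renaming mdadm shouldn't be necessary to kill an array for good: mdadm can erase the RAID superblock from each member itself, after which there is nothing left for udev to auto-assemble. A hedged sketch (device names are examples):

```shell
# Stop the array:
sudo mdadm --stop /dev/md1

# Wipe the md superblock from each former member (destroys the array metadata):
sudo mdadm --zero-superblock /dev/sdb1
```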
-
Re: HOWTO: Linux Software Raid using mdadm
Question for all you RAID knowledgeable peeps out there. I just finished creating my first RAID5 array out of 3 x 1TB WD Caviar drives. The problem was that I had a lot of data that I could not get rid of before creating the array. So what I did is that on each drive I created two partitions, say sda1 (100G) and sda2 (900G). I spread my existing data over the 3x100G partitions and was left with 3x900G blank partitions.
I created a new RAID5 array on the 3x900G partitions and put an lvm logical volume on top of it running ext3.
I then transferred all the data from the 3x100G partitions into the RAID array.
So far so good.
Now what I want to do is expand my RAID array to include the 3x100G partitions, which are now empty.
Can that be done easily? Is there a way to EXPAND the underlying partitions that make up the RAID array (i.e. make each 900G partition 1TB)? Or do I just add the 3x100G partitions as separate devices to the existing array (so that the array will have 6 partitions underneath it)?
Thanks for your help,
Shifty
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
ShiftyPowers
Question for all you RAID knowledgeable peeps out there. I just finished creating my first RAID5 array out of 3 x 1TB WD Caviar drives. The problem was that I had a lot of data that I could not get rid of before creating the array. So what I did is that on each drive I created two partitions, say sda1 (100G) and sda2 (900G). I spread my existing data over the 3x100G partitions and was left with 3x900G blank partitions.
I created a new RAID5 array on the 3x900G partitions and put an lvm logical volume on top of it running ext3.
I then transferred all the data from the 3x100G partitions into the RAID array.
So far so good.
Now what I want to do is expand my RAID array to include the 3x100G paritions which are now empty.
Can that be done easily? Is there a way to EXPAND the underlying partitions that make up the raid array (i.e. make each 900G partition, 1TB)? or do I just add the 3x100G partitions as separate drives to the existing array (so that the array will have 6 partitions underneath it)?
Thanks for your help,
Shifty
I think you can only expand partitions at the end. As in:
Code:
[900GB(preserved)][100GB(destroyed)] -> [1000GB]
You wouldn't want this:
Code:
[100GB(preserved)][900GB(destroyed)] -> [1000GB]
I know you won't want to add the 100 GB partitions to the existing array: the usable size per device in an array is determined by the smallest device.
Since you wisely used LVM, why not make a new RAID 5 array out of the 100G partitions, make it a physical volume under LVM, add it to your existing volume group, extend the logical volume, and expand the filesystem?
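That LVM route would look roughly like this; the volume group and logical volume names (myvg, mylv) and the member partition names are made-up placeholders, so substitute your own:

```shell
# New RAID5 across the three freed 100G partitions (example member names):
sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

# Turn it into an LVM physical volume and add it to the existing volume group:
sudo pvcreate /dev/md1
sudo vgextend myvg /dev/md1

# Grow the logical volume into the new space, then grow the ext3 filesystem:
sudo lvextend -l +100%FREE /dev/myvg/mylv
sudo resize2fs /dev/myvg/mylv
```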
-
Re: HOWTO: Linux Software Raid using mdadm
I was having problems creating my array. I would get this error message:
Code:
user@machine:~$ sudo mdadm --create /dev/md0 --level=1 /dev/sdb1 /dev/sdc1
mdadm: error opening /dev/md0: No such device or address
But the /dev/md0 node was getting created. Turns out my problem was the "md" kernel module was not installed. I ran this:
And that fixed my issue.
I hope this helps someone.
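The exact command got dropped from the post above, but checking for and loading the relevant modules would look something like this (module names vary by kernel; this is an assumption, not the poster's actual fix):

```shell
# See whether the md driver and the RAID1 personality are loaded:
lsmod | grep -E 'md_mod|raid1'

# Load them if missing:
sudo modprobe md_mod
sudo modprobe raid1
```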
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
therufus
I have gutsy server version and can't get software RAID to work. mdadm isn't even installed. If I try to apt-get it, I get:
root@server:/usr/bin# apt-get install mdadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package mdadm is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package mdadm has no installation candidate
apt-get update works fine when I set everything to New Zealand servers (the Australian repository is always down - thanks for that Optus). I tried manually downloading mdadm and doing an install but make isn't even installed on gutsy. Installing make doesn't work either because of other dependencies, and further down the rabbit hole I go.
What am I doing wrong?
Is this still true? Do I have to point to New Zealand mirrors to get mdadm? How do I change my mirror to point to New Zealand?
Thanks
-
Re: HOWTO: Linux Software Raid using mdadm
Nope, no need to point to New Zealand. All you need to do is to install using *Alternate* CD (text-based install).
-
Re: HOWTO: Linux Software Raid using mdadm
Can anyone tell me if this guide still is valid for Ubuntu 8.04 ?
Thanks.
-
Re: HOWTO: Linux Software Raid using mdadm
Hi Melsen
yes - this guide is valid for 8.04 Hardy.
I have a RAID1 setup (redundancy purposes) which is well managed by mdadm.
Regards
-
Re: HOWTO: Linux Software Raid using mdadm
I am setting up a RAID 6 array today, and in doing so I realized that when I was brand new to Ubuntu and switched a Windows software RAID server over to Ubuntu with mdadm, I didn't specify partitions; I created the array with the entire disks... it worked... and has been working since... but I noticed that nobody is doing it that way in any of the tutorials I've been browsing through. Anyone know why? Are there any advantages/disadvantages to doing it either way?
What I'm talking about is when you do "mdadm --create /dev/md0" and specify the drives to create the array with. All the tutorials show using /dev/sda1 /dev/sdb1 /dev/sdc1, etc. -- they're creating the array using a Linux raid autodetect partition from each drive. The way I did it, when I didn't know any better, was to specify not a partition but the whole disk, like /dev/sda, /dev/sdb, /dev/sdc. Either way I want to use the whole disk for the RAID array, so is there an advantage to going through the trouble of creating partitions on each disk and then creating the array from those partitions instead of the disks themselves? It seems to have worked okay on that server I did it on.
-
Re: HOWTO: Linux Software Raid using mdadm
Kissell, one advantage I can think of is that I can make my /boot partition RAID1 (so that if a disk or two fail I can still start the system). Then for / partition I can use RAID0 for speed and for /home I can use RAID10 for redundancy & speed ... that's not quite what I do, but the idea is to have different RAID levels for different partitions.
Also, with RAIDing entire disks (as far as I understand) you're limited to RAID1 (mirroring) unless you're using a controller of some kind ... but, I gather, most folks here are into software RAIDs ...
-
Re: HOWTO: Linux Software Raid using mdadm
That's a good point about the boot partition, I've never done that, and I am out of hard disk slots in this particular box, and the root file system drive/partition is non-redundant in any way (been a concern of mine)... so I could replace it with another drive (if i'd use partitions in raid creation) to both expand my raid size and add redundancy to my root file system. Thanks
Oh, but I did create a RAID5/6 with the whole drives, without partitions... It's been running all year... the only "problem" I've seen is that Ubuntu creates/mounts a "Software RAID Drive" inside "Computer" in the GUI, and it's inaccessible... I can still mount the volume and use it, but it also creates this unusable "Software RAID Drive" on its own, which is confusing to people, because you can't not have it, and if you click on it, it doesn't open the RAID volume. I'm 99% sure it's doing this because I created the RAID with the whole drives instead of partitions. I've reinstalled the OS several times (different versions) and it kept doing it; I even rebuilt the array (using drives, not partitions) and it still does it... but I've created another server as a backup to this one and used partitions when creating that array, and this problem didn't happen... not a big deal at all...
Anyway, I was hoping maybe there was a more technical reason... the server where I created the array without partitions has some issues; sometimes it kicks a drive out of the array for apparently no reason, rebuilds itself, and the drives test out fine... so I was hoping maybe this wasn't a hardware problem and was potentially a problem with how I created the RAID? Probably not, huh? But I'm grasping at solutions on that one.
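If you ever rebuild with partition-based members, the layout the tutorials assume is one full-size partition of type fd (Linux raid autodetect) per disk before running --create. A sketch with hypothetical device names:

```shell
# One full-disk partition of type 'fd' on each member (repeat per disk):
echo ',,fd' | sudo sfdisk /dev/sdb

# Then build the array from partitions rather than whole disks:
sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]1
```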
-
Re: HOWTO: Linux Software Raid using mdadm
Can I change an active RAID 10 array to a RAID 5 without losing data?
This is my mdstat
Code:
md3 : active raid10 sda7[0] sdd7[3] sdc7[2] sdb7[1]
1410089088 blocks 64K chunks 2 near-copies [4/4] [UUUU]
[===============>.....] resync = 76.0% (1071866432/1410089088) finish=49.4min speed=113904K/sec
md2 : active raid5 sda6[0] sdd6[3] sdc6[2] sdb6[1]
73231872 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md0 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
682624 blocks [4/4] [UUUU]
unused devices: <none>
massabuntu@massabuntu-desktop:~$
I want to change the level of /dev/md3 from raid10 to raid5; is it possible?
Thanks.
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
impossibilechecisiaquesto
Can i change an active RAID 10 array to a RAID 5 without losting data??
No... the way the data is stored is completely different in RAID 5 vs. 10. You would have to back everything up, rebuild the array as RAID 5, and then restore. Or build a new RAID 5 array out of another set of drives and then copy everything over.
RAID 10 is RAID 1 and 0 combined... a stripe across mirrored pairs of drives. RAID 5 is striped with parity.
-
Re: HOWTO: Linux Software Raid using mdadm
Yeah, I know :(
I was only hoping for some penguin magic.
So, I move the data off, recreate the RAID array, and move the data back?
Do I have to format the current partitions, or can I just build a different RAID level on them?
-
Re: HOWTO: Linux Software Raid using mdadm
Can I also use mdadm to create a raid0 array for use with Windows Vista? (dual boot)
-
Re: HOWTO: Linux Software Raid using mdadm
I've never heard of a way to use mdadm to create an array that would be usable in Windows. I'd also like this; not necessary, but it would be nice to have the option. It'd also be nice to be able to use Microsoft-created software RAID arrays inside Linux, without having to destroy the data and recreate them.
Unfortunately, the Microsoft OS has its own way of doing software RAID, so it is not compatible with mdadm. Although, theoretically, I don't see any reason an app couldn't be made for the Microsoft OS that would let you use mdadm RAID'd disks inside Windows. Don't wait around for me to do it, because I don't need it; it's more of a novelty for me, but it should be theoretically possible.
-
How to migrate a root filesystem into raid 1?
I've acquired a new HDD and decided to make my Ubuntu server more robust by using RAID 1 on the root filesystem (together with /boot). No, I don't want to install the system from scratch - I just need to _migrate_ an existing one onto RAID 1. Preferably I would not like to use the live CD - I simply don't have a CD drive on the server. I can use a mini CD from the local net via PXE boot if that is necessary.
I use Ubuntu 8.04 with the OpenVZ kernel.
My root partition is small (20 GB) and the empty extra hard drive is much bigger. The root partition doesn't use LVM, and I believe it shouldn't, since I had problems using LVM on top of RAID 1 for /boot and /.
I have physical access to the host, but very uncomfortable access. (The hard drive is already installed.)
What should I do to migrate the filesystem onto RAID?
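The usual pattern for migrating a live root onto RAID 1 (a hedged outline, not tested on OpenVZ kernels; device names are examples) is to build the array degraded on the new disk, copy the system over, switch the bootloader to it, and only then add the old disk:

```shell
# 1. Partition the new disk, then create a degraded RAID1 with the old disk 'missing':
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

# 2. Put a filesystem on it and copy the running root across:
sudo mkfs.ext3 /dev/md0
sudo mount /dev/md0 /mnt
sudo cp -ax / /mnt

# 3. Update /mnt/etc/fstab, mdadm.conf, the initramfs and GRUB to boot from
#    /dev/md0, then reboot onto the array.

# 4. Re-partition the old disk to match and add it; the mirror then resyncs:
sudo mdadm --add /dev/md0 /dev/sda1
```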
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
mgrusin
Great howto! Please consider adding this to the community documentation wiki. There is currently nothing in there (at least that I could find) on the use of mdadm.
-MG
Someone has added what looks like a thorough howto for RAID installation using the server edition.
https://help.ubuntu.com/9.04/serverguide/C/advanced-installation.html
-
Re: HOWTO: Linux Software Raid using mdadm
This may seem like a dumb question, but while using MDADM, I'd like to make sure that I can mirror a current 160G sata drive to another 160G sata drive. Neither device is the boot filesystem. Both of them are currently formatted ext3. One has about 50G of stuff on it, the other is empty.
All I want to do is make a RAID1 array out of the two of them, but I don't want to bother copying this data somewhere else right now beforehand (unless I need to). So, if someone can confirm for me that the instructions listed on the first page (or here) for RAID1 will not damage the original data, that would be awesome. Thanks!
-
Re: HOWTO: Linux Software Raid using mdadm
Quote:
Originally Posted by
hashbrowns
This may seem like a dumb question, but while using MDADM, I'd like to make sure that I can mirror a current 160G sata drive to another 160G sata drive. Neither device is the boot filesystem. Both of them are currently formatted ext3. One has about 50G of stuff on it, the other is empty.
All I want to do is make a RAID1 array out of the two of them, but I don't want to bother copying this data somewhere else right now beforehand (unless I need to). So, if someone can confirm for me that the instructions listed on the first page (
or here) for RAID1 will not damage the original data, that would be awesome. Thanks!
The instructions you linked to are for raidtools, so they would not go well with the "while using mdadm" part of your request. I would locate mdadm-specific instructions on how to accomplish what you want.
You would be much better off backing up your data first before creating the RAID, even if it's possible to create the RAID leaving the 50GB of data in place (which I doubt, as mdadm needs to put a superblock somewhere on the drive, which the existing partition(s) will likely not allow... but I'm no expert here).
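If hashbrowns does want the mirror without parking the data elsewhere, the commonly described (but back-up-first!) trick is the degraded-array approach: build RAID1 on the *empty* disk with the other member 'missing', copy the data onto the array, then add the original disk, letting the resync overwrite it. A hedged sketch with example device names:

```shell
# RAID1 with only the empty drive; the populated drive stays untouched for now:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
sudo mkfs.ext3 /dev/md0

# Copy the 50G of data from the old drive onto the array:
sudo mkdir -p /mnt/old /mnt/new
sudo mount /dev/sda1 /mnt/old
sudo mount /dev/md0 /mnt/new
sudo cp -a /mnt/old/. /mnt/new/

# Once verified, add the old drive; its contents are overwritten by the resync:
sudo umount /mnt/old
sudo mdadm --add /dev/md0 /dev/sda1
```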
-
Re: HOWTO: Linux Software Raid using mdadm
thanks for this great how to.. it was really helpful :)
-
1 Attachment(s)
Re: HOWTO: Linux Software Raid using mdadm
I've got 4 320GB Seagate 7200.11 drives and have the following RAIDs (using mdadm) across them:
Code:
/dev/md0 /boot RAID1
/dev/md1 / RAID10
/dev/md2 /home RAID10
I'm monitoring the status of my RAIDs via conky (screenshot below) ... I've noticed a strange thing that after about a day of running /dev/sdb fails for both of my RAID10 devices:
Code:
cyberkost@raidbox:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid10 sdb3[4](F) sdc3[3] sda3[1] sdd3[2]
552779776 blocks 256K chunks 2 far-copies [4/3] [_UUU]
md1 : active raid10 sda2[1] sdc2[3] sdb2[4](F) sdd2[2]
67103232 blocks 64K chunks 2 far-copies [4/3] [_UUU]
md0 : active raid1 sda1[1] sdc1[3] sdb1[0] sdd1[2]
1052160 blocks [4/4] [UUUU]
unused devices: <none>
This is also signalled by the HDD activity LED being constantly on (though no audible HDD activity). If I try to add /dev/sdb[2,3] partitions back to their respective arrays, I get:
Code:
cyberkost@raidbox:~$ sudo mdadm --add /dev/md1 /dev/sdb2
[sudo] password for kost:
mdadm: Cannot open /dev/sdb2: Device or resource busy
The HDD activity LED goes off after about a day or two and I can add the partition back after that. Alternatively, I can just reboot the machine and although it takes a while (2-3 hours) to come up, I can add the /dev/sdb[2,3] partitions back to their RAID devices when start-up is completed (the machine comes up with /dev/sdb[2,3] missing).
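In case it helps anyone hitting the same "Device or resource busy" error: a member marked (F) in /proc/mdstat is failed but still attached to the array, and it normally has to be removed before it can be re-added. A sketch, reusing the array and partition names from the output above:

```shell
# The (F) flag means the member is failed but still attached; remove
# it from the array first, then add it back to trigger a resync.
sudo mdadm /dev/md1 --remove /dev/sdb2
sudo mdadm /dev/md1 --add /dev/sdb2

# If the removal itself also reports "busy", the kernel or the drive
# may still be stuck mid-reset; in that case the device usually can't
# be touched until the drive recovers or the box is rebooted.
cat /proc/mdstat
```

That would explain why the add only works after a day or two (or a reboot): the drive has to come back before md will let go of it.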
Originally I thought that the HDD corresponding to /dev/sdb was going bad, or that the corresponding SATA cable/channel on the motherboard had become faulty, but after reading post1 and the post2 it references I started to suspect that the upgrade to Jaunty 9.04 is what messed up my mdadm RAID.
My suspicions are further deepened by the following facts:
1. sudo smartctl -a /dev/sdb says the drive is fine (and the output for /dev/sdb looks VERY similar to that for the other 3 drives)
2. I first noticed the problem shortly after the upgrade to 9.04, I have not had a problem for a year prior to that.
3. /proc/mdstat used to list /dev/sdaX /dev/sdbX /dev/sdcX /dev/sddX for all 3 RAID devices prior to the upgrade (ABCD order). It now has ACBD for /dev/md[0,1] and BCAD for /dev/md2.
4. Looking at mdadm.conf, I find the UUID constructs rather suspicious -- the last two blocks are the same for all of /dev/md[0,1,2], while the first two are different:
Code:
cyberkost@raidbox:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=20eef32f:87dc5eba:e368bf24:bd0fce41
ARRAY /dev/md1 level=raid10 num-devices=4 UUID=bacf8d29:0b16d983:e368bf24:bd0fce41
ARRAY /dev/md2 level=raid10 num-devices=4 UUID=67665d86:1cc558d5:e368bf24:bd0fce41
# This file was auto-generated on Thu, 01 Jan 2009 00:37:24 +0000
# by mkconf $Id$
I'd naively expect those UUID fields to be the same across all 3 RAID devices (e.g., if each block were a reference to a particular physical HDD) or to be completely different (e.g., if each block were a reference to a particular superblock/partition).
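If it reassures you at all: I believe the shared last blocks are expected rather than a sign of corruption. With the old 0.90 superblock format, mdadm stores a hash of the HOMEHOST in the latter half of the array UUID, so all arrays created on the same machine share those final blocks. You can compare what's actually on each member against mdadm.conf with something like (device name is just an example):

```shell
# Print the superblock of one member partition; compare its "UUID"
# line against the ARRAY entries in /etc/mdadm/mdadm.conf.
sudo mdadm --examine /dev/sdb2

# Or dump a verbose scan summary for all detected arrays at once.
sudo mdadm --examine --scan --verbose
```

If the on-disk UUIDs match the config file, the UUID layout itself isn't your problem.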
5. Lastly, I see that mdadm.conf has the date of my upgrade to Ubuntu 9.04 Jaunty Jackalope:
Code:
cyberkost@raidbox:~$ ls -la /etc/mdadm/mdadm.conf
-rw-r--r-- 1 root root 874 2009-04-25 23:09 /etc/mdadm/mdadm.conf
... and no, there's no backup file left :(
I checked whether following the advice offered in post2 would help, but it seems it will not, because the command returns the same configuration that's already in my 9.04 mdadm.conf:
Code:
cyberkost@raidbox:~$ sudo mdadm --examine --scan --config=mdadm.conf
[sudo] password for kost:
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=20eef32f:87dc5eba:e368bf24:bd0fce41
ARRAY /dev/md1 level=raid10 num-devices=4 UUID=bacf8d29:0b16d983:e368bf24:bd0fce41
ARRAY /dev/md2 level=raid10 num-devices=4 UUID=67665d86:1cc558d5:e368bf24:bd0fce41
PLEASE HE-E-E-E-ELP!!!
-
Re: HOWTO: Linux Software Raid using mdadm
kragen: thanks for a nice guide
-
Re: HOWTO: Linux Software Raid using mdadm
Great guide. I wish somebody would write a GUI frontend for mdadm; the only way I've found to set up RAID with a GUI is via the Linux installer.