
Thread: HOWTO: Linux Software Raid using mdadm

  1. #91
    Join Date
    Mar 2005
    Location
    Sweden, Uppsala
    Beans
    944
    Distro
    Ubuntu 12.04 Precise Pangolin

    Re: HOWTO: Linux Software Raid using mdadm

Hey, I have two questions for now.

    A) Under 5) it says:
    "The partitioner should be straightforward enough to use - when you create a partition which you intend to use in a raid, you need to change the type to "Linux RAID Autodetect"."

    I can't find that option. Is it the same as setting the RAID flag in GParted?

    B) I have problems with sda, sdb, sdc, sdd, sde shifting around between boots. Can I (or do I need to) refer to my partitions in mdadm by UUID, or does mdadm recognise the disks in the array automatically?
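
    For what it's worth, mdadm can address arrays by UUID rather than by device letters, which is the usual fix for shifting sdX names. A sketch of how that might look (the UUID below is made up, and /dev/md0 is just an example):

    Code:
    # print the running arrays with their UUIDs
    sudo mdadm --detail --scan
    # e.g.  ARRAY /dev/md0 metadata=1.2 UUID=3aaa0122:29827cfa:5331ad66:ca767371

    # pin that line into /etc/mdadm/mdadm.conf so the array is assembled
    # by UUID at boot, whatever letters the member disks get
    ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
    On Ubuntu you would then run sudo update-initramfs -u so the new config is picked up at boot.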

    /Cheers
    /Azyx

    Ubuntu 16.04LTS 64bit, 16.04 Lubuntu 32-bit on eeePCs and OSX on a G4 800MHz iMac (iLamp). I think I have an W7 on one of my Virtualbox machine under 16.04LTS?

  2. #92
    Join Date
    Mar 2005
    Location
    Sweden, Uppsala
    Beans
    944
    Distro
    Ubuntu 12.04 Precise Pangolin

    Re: HOWTO: Linux Software Raid using mdadm

    Quote Originally Posted by AllGamer View Post
    you mean the partitions?
    like /dev/sdbp1?

    the entire drive is used for the raid.

    anyways i think i might have found what was causing the problem
    As I understand it, mdadm uses partitions even when the whole disk is one big partition, rather than using whole disks directly as ZFS prefers.

    /Cheers
    /Azyx

    Ubuntu 16.04LTS 64bit, 16.04 Lubuntu 32-bit on eeePCs and OSX on a G4 800MHz iMac (iLamp). I think I have an W7 on one of my Virtualbox machine under 16.04LTS?

  3. #93
    Join Date
    Nov 2010
    Beans
    103

    Re: HOWTO: Linux Software Raid using mdadm

    Quote Originally Posted by TheR View Post
    Try to wipe out beginning of disk with dd.

    update...

    (I totally forgot I posted a question here)

    So, yes, after many trials and errors, I decided to wipe each drive again with dd and re-zero the start of each disk. That seems to have finally cleaned up the mess LVM left behind.

    I've got the RAID5 up and running, and it now mounts automatically every time the machine boots, without any errors.

    I added the RAID5 volume's UUID to /etc/fstab so that it mounts itself on boot.
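
    For anyone following along, the wipe and the fstab entry could look something like this (device names, mount point and UUID here are only examples, not from my system; dd is destructive, so triple-check the target device first):

    Code:
    # zero the first MiB of each member disk to kill leftover LVM/RAID metadata
    sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1
    # (mdadm --zero-superblock /dev/sdX1 is the more targeted way to clear old md metadata)

    # after re-creating the array, find its filesystem UUID
    sudo blkid /dev/md0

    # example /etc/fstab line so it mounts on every boot
    UUID=0b2a7b3c-1111-2222-3333-444455556666  /srv/raid5  ext4  defaults  0  2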



    On a side note:

    Now that I have both RAID5 via mdadm and hardware RAID5 (cheap softRAID controllers) running in the same machine, I can say with certainty that the controller-based softRAID version of RAID5 performs faster.

    The speed difference is only noticeable on sustained, massive data transfers - like when you transfer all the content from one RAID5 to another RAID5.

    Otherwise you won't really be able to appreciate the difference for regular Samba file sharing or web server access.

    In my setup, each of the 4 separate RAID5 sets was composed of 4 Seagate 1.5TB 7200rpm drives, for a total of 16 HDDs of the same brand and speed:

    4 are attached to the motherboard (Intel ICH9) running mdadm RAID5
    4 are attached to an rr1740 using the manufacturer's drivers + RAID management software
    4 are attached to an rr264x1 using the manufacturer's drivers + RAID management software
    4 are attached to an rr264x4 using the manufacturer's drivers + RAID management software

    The ones on the softRAID controllers consistently completed massive data transfers (writes committed to disk) faster than the mdadm RAID5 setup,

    which has now been reassigned to mirror the rr1740 instead, as a backup against catastrophic failure.




    *** Off Topic ***

    the rest of the server spec:

    8GB DDR2 Corsair stock speed
    stock on-board intel video
    Intel Pentium D 820 (2.8 GHz Dual Core LGA775) stock speed
    on-board intel 100 Mbps NIC (as backup connection)
    2 x Intel PRO/1000 MT Dual Port Server Adapter (9k jumbo frame capable)

    That makes 4 separate 1 Gbps connections. I set them up as 2x2 (9k jumbo frames enabled via Cisco managed switches), so the two ports in each NIC work together as 2000 Mbps (2 Gbps), with the 2nd NIC acting as failover/load balancing in case anything happens to the first.

    Of the 4 RAID setups, only 2 are accessible to users; the other 2 are used to mirror them.

    In a way, you can picture it as a RAID1 mirror of this server's RAID5 arrays.

    And on a separate server there is yet another redundant RAID5 backup of this server's RAID5 data.

    The only non-redundant part of this server is the PSU, which I had to replace just a few days ago due to overload. That was no surprise, as it was originally a 600W PSU running all that hardware plus all the cooling fans.

    The new PSU is a 1200W unit, which gives ample room for expansion with approx. 350W free, versus 250W over budget on the old one.

    So the old PSU was pretty decent to have been able to push 250W over its limit for 2 years straight, 24/7.

    The power failure only damaged one of the RAID5 arrays, which was easily fixed by running a verification scan with the HighPoint management software.

    Incidentally, the partial death of the 600W PSU was not its own fault; the real problem was a dying battery in the APC 1500VA UPS that was running the whole thing.

    According to the APC diagnostics, it died prematurely due to excessive heat, which I wouldn't deny; that server room is like a dry sauna.
    Last edited by AllGamer; April 25th, 2011 at 09:43 PM.

  4. #94
    Join Date
    Sep 2011
    Beans
    2

    Re: HOWTO: Linux Software Raid using mdadm

    Nice! Are there any other guides, a RAID-1/5 HOWTO for example?





  5. #95
    Join Date
    Mar 2005
    Location
    Sweden, Uppsala
    Beans
    944
    Distro
    Ubuntu 12.04 Precise Pangolin

    Re: HOWTO: Linux Software Raid using mdadm

    Under point 7 it says:
    ......

    Code:
    mdadm --create /dev/md0 --chunk=4 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
    will create a raid0 array /dev/md0 formed from /dev/sda1 and /dev/sdb1 with chunk size 4.
    .........


    I have problems with /dev/sd[a,b]1 changing between boots. Is it possible to use UUIDs instead of the /dev/sd[a,b]1 partition names?

    /Cheers

    PS. I have seen the man page http://linuxmanpages.com/man8/mdadm.8.php but I have trouble understanding where you should put, for instance, -u/--uuid. I have trouble understanding man pages.
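
    The -u/--uuid option belongs to assemble mode, placed among the options rather than after the device list. A sketch of both situations (the UUID shown is made up):

    Code:
    # find the array's UUID
    sudo mdadm --detail /dev/md0 | grep UUID

    # later, assemble the array by UUID regardless of sda/sdb letters
    sudo mdadm --assemble --scan --uuid=3aaa0122:29827cfa:5331ad66:ca767371
    For --create itself you still have to name devices, but the stable /dev/disk/by-id/ symlinks can be used instead of /dev/sdX names, and those do not shift between boots.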
    Last edited by Azyx; November 5th, 2011 at 12:13 PM. Reason: UID-->UUID
    /Azyx

    Ubuntu 16.04LTS 64bit, 16.04 Lubuntu 32-bit on eeePCs and OSX on a G4 800MHz iMac (iLamp). I think I have an W7 on one of my Virtualbox machine under 16.04LTS?

  6. #96
    Join Date
    Oct 2006
    Beans
    27

    Re: HOWTO: Linux Software Raid using mdadm

    Quote Originally Posted by dedede12 View Post
    Nice! Are there any other guides, a RAID-1/5 HOWTO for example?




    A RAID5 or RAID6 HOWTO would be huge....

    Here's where I am tonight. I have two RAID arrays on one system. The first is RAID1 and holds /boot, swap, / and /home. The second is RAID6 and holds everything else. It currently has 4 3TB drives. I have 3 more sitting in the tower and want to add them to the RAID6 array.

    I was using Disk Utility to try to add these three drives, but it doesn't seem to want to do it with the array up and running. When I take the array down, I'm denied access to the array controls in "Edit Components", so I can't tell it to add drives.

    I removed the partitions from the 3 drives (initialized as GPT). Then I went into "Edit Components". I see the three drives; I am told they each have no partitions and 3TB available for use.

    I go to add one drive: I tick the box to add it and click "Expand". I authenticate, then get this message:

    Error expanding RAID Array.
    An error occurred while performing an operation on "hitchcock RAID6" (6.0 TB RAID-6 Array): The operation failed

    Detail information says:

    Error expanding array: helper script exited with exit code 1:
    mdadm: added /dev/sdc1
    mdadm: Need to backup 6144K of critical section..

    I now see the new drive in the RAID6 array as a spare. Well... I now see all three drives that I want to activate sitting there as spares.

    What am I missing here? If I can't figure this out, I guess it will be a reboot into a runlevel-1 root shell to use mdadm. Any pointers from those of you who are familiar with using Disk Utility and RAID arrays? I'm at the bang-my-head-on-the-wall point.

  7. #97
    Join Date
    Oct 2006
    Beans
    27

    Re: HOWTO: Linux Software Raid using mdadm

    I'm going to answer my own query. I finally figured out what was going on. Digging further into the error message output, I kept coming back to the issue of bitmap removal. Why was this necessary? And removal from where? I had an epiphany 30 minutes ago reading the mdadm man page:

    http://linux.die.net/man/8/mdadm

    Which then led me to search online and read this page which has some good information about speeding up working with Linux software raid:

    http://www.coderetard.com/2011/02/01...nd-re-syncing/

    What I wound up doing was removing the write-intent bitmap from the array with this shell command:

    Code:
    sudo mdadm --grow --bitmap=none /dev/md127
    I was then able to expand the array using Disk Utility, though I could have done it from a terminal with this command:

    Code:
    sudo mdadm --grow /dev/md127 --raid-devices=7

    Once I'm done with the reshaping I'll put the bitmap back after figuring out what it should optimally be. The RAID6 array is reshaping right now with the three new 3TB drives added. In a few hours I will seal it all back up. In the meantime I'm off to Fry's Electronics to return the extra pieces of hardware that I don't need.
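
    For completeness, the post-reshape cleanup could look something like this (a sketch; md127 and ext4 match this setup, and the bitmap parameters are left at mdadm's defaults):

    Code:
    # wait for the reshape to finish
    cat /proc/mdstat

    # put an internal write-intent bitmap back on the array
    sudo mdadm --grow --bitmap=internal /dev/md127

    # then grow the filesystem into the new space (assuming ext4 on md127)
    sudo resize2fs /dev/md127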

    The final configuration of the media center server will be an 8GB DDR3 machine with an AMD Phenom II X4 quad-core processor. It will have a RAID1 mirror of 2x2TB drives for /boot, swap, / and /home, and 7x3TB drives for the RAID6 array. 4 of those will be on the mobo along with the boot mirror; the other 3 will be on a 4-port SATA card. 6 drives are mounted in the 3.5" drive bays, and the other 3, plus the PATA DVD burner, are in the 5.25" bays. 4 big fans, plus 1 giant fan and a 1200-watt power supply. Hopefully I'm now done tinkering with it and can get back to software and music.

  8. #98
    Join Date
    Sep 2012
    Beans
    8

    Re: HOWTO: Linux Software Raid using mdadm

    Hi,

    this is a great idea, but unfortunately I am not able to get it to work.
    My idea is to have one Linux system alongside one Windows 8 system. I don't want dual boot; I just want to boot into one of the systems by choosing the boot priority in UEFI (BIOS).

    This is what I tried. I created the partitions with GParted in the linuxmint-13-cinnamon-dvd-64bit.iso live system, which looked like the screens attached. Then I did the following (md0 for root and md1 for /home):
    Code:
    sudo mdadm --create /dev/md0 --chunk=4 --level=0 --raid-devices=2 /dev/sda5 /dev/sdb5 
    mdadm: /dev/sda5 appears to contain an ext2fs file system
        size=4096000K  mtime=Thu Jan  1 00:00:00 1970
    mdadm: /dev/sdb5 appears to contain an ext2fs file system
        size=4096000K  mtime=Thu Jan  1 00:00:00 1970
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    and

    Code:
    sudo mdadm --create /dev/md1 --chunk=4 --level=0 --raid-devices=2 /dev/sda6 /dev/sdb6 
    mdadm: /dev/sda6 appears to contain an ext2fs file system
        size=10240000K  mtime=Thu Jan  1 00:00:00 1970
    mdadm: /dev/sdb6 appears to contain an ext2fs file system
        size=10240000K  mtime=Thu Jan  1 00:00:00 1970
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md1 started.
    After that I started "Install Linux Mint" to install the system. But first I did
    Code:
    sudo mkfs.ext4 /dev/md0
    Code:
    sudo mkfs.ext4 /dev/md1
    Code:
    sudo mkfs.ext4 /dev/sda1
    I chose manual install and changed md0, md1 and sda1: md0 and md1 to be formatted as ext4 again, with mount points / and /home; sda1 I formatted as ext2 with mount point /boot. It looked roughly like the screen attached.







    When I do it like this I get

    Code:
    Executing 'grub-install /dev/sda1' failed.
    
    This is a fatal error
    Could you guys please help me out?
    Last edited by mintfrish; September 25th, 2012 at 10:31 AM.

  9. #99
    Join Date
    Aug 2009
    Beans
    Hidden!
    Distro
    Xubuntu

    Re: HOWTO: Linux Software Raid using mdadm

    Quote Originally Posted by mintfrish View Post

    Code:
    Executing 'grub-install /dev/sda1' failed.
    
    This is a fatal error
    Could you guys please help me out ?
    Not sure about the rest of the problem, but sudo grub-install /dev/sda would be the correct command, not sda1. GRUB is installed to the disk's MBR, not to a partition.
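
    As a side note, for setups where /boot or the root array spans two disks, a commonly recommended extra step is to install GRUB to every disk's MBR so the machine can still boot if one drive dies (a sketch; device names are examples for a two-disk setup):

    Code:
    sudo grub-install /dev/sda
    sudo grub-install /dev/sdb
    sudo update-grub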

  10. #100
    Join Date
    Sep 2012
    Beans
    8

    Re: HOWTO: Linux Software Raid using mdadm

    Okay, step by step please. I simplified the partitions a little so that I have only one mdadm RAID, /dev/md0, for root, without a separate /home partition.

    After manipulating things with GParted, fdisk -l now gives me this:

    Code:
    mint@mint ~ $ sudo fdisk -l
    
    Disk /dev/sda: 16.0 GB, 16001269760 bytes
    255 heads, 63 sectors/track, 1945 cylinders, total 31252480 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00017dbe
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048      309247      153600   83  Linux
    /dev/sda2          309248    22247423    10969088   83  Linux
    /dev/sda3        22247424    31252479     4502528    5  Extended
    /dev/sda5        22249472    31150079     4450304   82  Linux swap / Solaris
    
    Disk /dev/sdb: 16.0 GB, 16001269760 bytes
    255 heads, 63 sectors/track, 1945 cylinders, total 31252480 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000d958a
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            2048    21940223    10969088   83  Linux
    /dev/sdb2        21940224    31252479     4656128    5  Extended
    /dev/sdb5        21942272    30842879     4450304   82  Linux swap / Solaris
    
    Disk /dev/sdc: 31.6 GB, 31641829376 bytes
    255 heads, 63 sectors/track, 3846 cylinders, total 61800448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc3072e18
    So my next step in this case would be:

    Code:
    mdadm --create /dev/md0 --chunk=4 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb1
    And right after that I should be able to run the installation? I would choose manual installation and set md0 as the root mount point. I leave everything else alone, except that I choose /dev/sda as the 'device for boot loader installation', and that's it?

    ..continued:

    Code:
    mdadm: /dev/sda2 appears to contain an ext2fs file system
        size=10969088K  mtime=Thu Jan  1 00:00:00 1970
    mdadm: /dev/sdb1 appears to contain an ext2fs file system
        size=10969088K  mtime=Thu Jan  1 00:00:00 1970
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    Code:
    sudo fdisk -l
    
    Disk /dev/sda: 16.0 GB, 16001269760 bytes
    255 heads, 63 sectors/track, 1945 cylinders, total 31252480 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00017dbe
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048      309247      153600   83  Linux
    /dev/sda2          309248    22247423    10969088   83  Linux
    /dev/sda3        22247424    31252479     4502528    5  Extended
    /dev/sda5        22249472    31150079     4450304   82  Linux swap / Solaris
    
    Disk /dev/sdb: 16.0 GB, 16001269760 bytes
    255 heads, 63 sectors/track, 1945 cylinders, total 31252480 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000d958a
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            2048    21940223    10969088   83  Linux
    /dev/sdb2        21940224    31252479     4656128    5  Extended
    /dev/sdb5        21942272    30842879     4450304   82  Linux swap / Solaris
    
    Disk /dev/sdc: 31.6 GB, 31641829376 bytes
    255 heads, 63 sectors/track, 3846 cylinders, total 61800448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc3072e18
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1   *        2048    61800447    30899200    c  W95 FAT32 (LBA)
    
    Disk /dev/md0: 22.5 GB, 22464675840 bytes
    2 heads, 4 sectors/track, 5484540 cylinders, total 43876320 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 4096 bytes / 8192 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table
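
    One note on the output above: "Disk /dev/md0 doesn't contain a valid partition table" is harmless if the filesystem goes directly onto md0, which is what the installer will do. Before installing, a quick sanity check might look like this (a sketch):

    Code:
    # confirm both members are active in the array
    cat /proc/mdstat
    sudo mdadm --detail /dev/md0

    # optionally check for a stale filesystem signature on the array
    sudo blkid /dev/md0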
    Last edited by mintfrish; September 25th, 2012 at 07:05 PM.
