
Thread: Ubuntu 12.04 won't see raid 0

  1. #11
    Join Date
    Jan 2013
    Beans
    29

    Re: Ubuntu 12.04 won't see raid 0

    Also, in regards to buying a dedicated controller for raid... do I have to get one that is certified for Ubuntu? I don't think so, since the raid controller creates the raid at a lower layer than the one the Ubuntu system is running on, and so Ubuntu will just be presented with the raid as one disk...

  2. #12
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Ubuntu 12.04 won't see raid 0

    Quote Originally Posted by offerlam View Post
    It fails in the sense that I don't know what I'm doing... can you give me a clear, good guide to set up a raid 5 using software raid and 4 disks in a single partition, which I can then install Ubuntu Server on...

    I have created raids and all that, but I'm not sure how to partition them. Actually I was expecting that you had to make your raid and then install Ubuntu on it and that was it...

    I know the subject says raid 0, but that was with the HP controller which can't provide raid 5 - with software raid I can...
    Give me a few hours max, and I will post something to help you get going.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  3. #13
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Ubuntu 12.04 won't see raid 0

    OK, let's start. First of all, this is only a discussion and you can ask further questions if you don't understand something.

    One of the main things you need to consider is that SW raid doesn't work like HW raid or fakeraid. With HW raid and fakeraid you create one or more raid arrays and then partition them further. With SW raid the arrays are assembled from partitions by the OS itself, so you need a physical partition on each disk for every mount point you plan to have; each one has to be a separate array device (md device). That might require more detailed advance planning on your side.
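
    Just to illustrate that last point (nothing you need to type now, only an illustrative sketch with example device names), once such a system is running you can see the arrays the OS has assembled and which partitions belong to them:
    Code:
    cat /proc/mdstat                 # lists the md arrays and their member partitions
    sudo mdadm --detail /dev/md0     # detailed state of one array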

    Another interesting option to use is LVM, the Logical Volume Manager. It allows you to use one or more physical devices as the base for a Volume Group and the Logical Volumes you create. The LVs are like partitions and you can expand and shrink them dynamically without formatting. Expanding is completely online, while shrinking has to be done offline, but you still don't format the LV (and don't lose any data). It has to be offline only to shrink the filesystem.

    I don't know how you feel about LVM, but many people use it on servers, especially if you want to split the system into many partitions and can't decide the size of each one at install time. You can create them smaller and grow them online later as you need.
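
    For example (just a sketch with made-up VG/LV names, assuming an ext4 filesystem), growing a logical volume later while it stays mounted would look roughly like this:
    Code:
    sudo lvextend -L +50G /dev/vg0/srv   # add 50GiB to the LV 'srv' in volume group 'vg0'
    sudo resize2fs /dev/vg0/srv          # grow the ext4 filesystem into the new space, online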

    If you want to go with plain simple mdadm and one big / partition with everything on it, I would recommend at least having a small 1GiB /boot partition to make sure it's at the beginning of the disks. Also, I'm not sure /boot can be on raid5, so it's best to have a small raid1 device for it.

    This simple setup would be:
    1. For 3TB disks you have to use a gpt table because msdos supports only up to 2.2TB.
    2. On gpt disks you need a small bios_grub partition so that grub2 can install correctly.
    3. All partitions (mount points) need a separate physical partition on the disks.

    I would start with the Ubuntu desktop live CD and prepare the partitions from live mode in advance. It has GUI tools, but I prefer using parted in the terminal because it lets you work in the unit you want.

    Let's assume your four disks will be /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd. For all of them, using the terminal and parted, prepare the partitions first:
    Code:
    sudo parted /dev/sda
    mklabel gpt #(new gpt table)
    unit MiB #(change unit to MiB)
    print #(to check the total MiB size of the disk, look below for the calculation)
    mkpart GRUB 1 2
    set 1 bios_grub on
    mkpart BOOT 2 1026
    set 2 raid on
    mkpart ROOT 1026 X
    set 3 raid on
    mkpart SWAP X X+2048
    When you are creating the ROOT and SWAP partitions, you will need to calculate the X value in advance depending on the total MiB size of the disk. It's best to leave approx 20MiB at the end, just in case some of the disks are not identical in MiBs. So, the X value would be approx (total MiB of /dev/sda) - 2068.

    I used 2068MiB to leave 2048MiB (2GiB) for swap at the end of each disk and approx 20MiB unused.

    Note that you calculate X only for sda. On the other three disks you use exactly the same partition start/end points in mkpart; you don't calculate it again. This is so that the partitions are identical on all the disks, with matching start/end points.
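
    As a worked example (the total size here is just an assumed number, check your own print output): if print reports a total of 2861022MiB, then X = 2861022 - 2068 = 2858954, so the last two commands would be:
    Code:
    mkpart ROOT 1026 2858954
    mkpart SWAP 2858954 2861002
    That leaves 2048MiB for swap and approx 20MiB unused at the end.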

    After you do the above on all four disks, you have the partitions ready. Boot the machine with the server CD and when you reach the partitioning step select Manual. When the list with partitions opens it will already show the created partitions. You can go directly to the Configure Software RAID option at top, and create the md devices.

    The first device will be md0, level raid1, number of devices 4. From the menu that shows, select sda2, sdb2, sdc2 and sdd2. In text menus you select with the Space bar, not Enter. After you have the four partitions selected as members, select OK.
    Then make the second device, md1, level raid5, number of devices 4, members sda3, sdb3, sdc3 and sdd3. That's it, exit the SW raid configuration.
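
    Just for reference, those two menu steps are roughly equivalent to what you would do with mdadm from a live CD (you don't need to run this when using the installer menus; device names as in the example above):
    Code:
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
    sudo mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3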

    Now in the partitions list you will see your new raid devices (usually on top). Select the md0 1GiB device you made for /boot, in the Use As select ext4, mount point /boot, finish.
    Then select the md1 device, again Use As ext4, mount point / and finish.
    Select one by one sda4, sdb4, sdc4 and sdd4 and set them Use As to swap area. That will use all four partitions as swap, no need to raid them.

    That's it.
    Important NOTE: Do not select or format the first partition on any of the disks. The partition needs to be RAW, without any filesystem, that's how grub2 uses it.

    When the end of the install is reached you should see a message that grub-install will install grub2 to all disks. That is usually done automatically so that the server can boot regardless of which disk fails. If you have grub2 on only one disk, the server can't boot if that disk fails, even though raid5 can keep working with a single failed disk.
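
    If it ever turns out grub2 went to only one disk, you can add it to the others from the installed system later (just a sketch, using the example disk names):
    Code:
    sudo grub-install /dev/sdb
    sudo grub-install /dev/sdc
    sudo grub-install /dev/sdd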

    That should get you going. If you want to implement LVM, the above doesn't apply, you need slightly different partitioning of the disks and different options during the install.

    This is just guidance; you can decide to have a small / partition too, and then have large /home and /var partitions, which would mean more physical partitions (remember, one on each disk for every md device that you want as a separate mount point).

    When reading all this it might look complicated, but when you have the process on a monitor in front of you it's actually rather intuitive. The main thing is to plan the md devices (separate mount points) in advance, and whether or not you will use LVM because it changes things.

    Any questions, fire off...
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  4. #14
    Join Date
    Jan 2013
    Beans
    29

    Re: Ubuntu 12.04 won't see raid 0

    Hey Darkod,

    Great stuff!!! You seem to know your stuff... and I can really use this. I may have erred when I didn't boot the live CD and create the partitions first...

    When comparing software raid and hardware raid, do you know the pros and cons?

    /Casper

  5. #15
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Ubuntu 12.04 won't see raid 0

    Compared to fakeraid built into the BIOS/board, the pros are all on mdadm's side, because fakeraid is also a type of SW raid and it ties you to the board model. If the board dies, you can hardly recover your data.

    A proper HW card has some advantage because it has its own CPU and memory, so I expect it to be a little faster in the parity calculations. But on the other hand it ties you to a specific HW model too. I'm not sure the data is recoverable even if you buy the same card again, or how long that process would take.

    With mdadm, you can literally move the disks to another machine in 5 minutes and have the raid up and running. This is because the OS assembles the raid, not any HW component. Also, it's very flexible and you can easily change raid5 to raid6 in the future, for example, without any data loss. The reshape of the array will take time, but you don't need to move the data off, destroy the raid5, make a new raid6, and move the data back. That would take even longer.
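
    The raid5 to raid6 change would look roughly like this (only a sketch, assuming you first add a fifth disk with a matching partition; /dev/sde3 here is a made-up name):
    Code:
    sudo mdadm --add /dev/md1 /dev/sde3      # add the new member partition first
    sudo mdadm --grow /dev/md1 --level=6 --raid-devices=5 --backup-file=/root/md1-reshape.bak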
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  6. #16
    Join Date
    Jan 2013
    Beans
    29

    Re: Ubuntu 12.04 won't see raid 0

    Quote Originally Posted by darkod View Post
    Compared to fakeraid built into the BIOS/board, the pros are all on mdadm's side, because fakeraid is also a type of SW raid and it ties you to the board model. If the board dies, you can hardly recover your data.

    A proper HW card has some advantage because it has its own CPU and memory, so I expect it to be a little faster in the parity calculations. But on the other hand it ties you to a specific HW model too. I'm not sure the data is recoverable even if you buy the same card again, or how long that process would take.

    With mdadm, you can literally move the disks to another machine in 5 minutes and have the raid up and running. This is because the OS assembles the raid, not any HW component. Also, it's very flexible and you can easily change raid5 to raid6 in the future, for example, without any data loss. The reshape of the array will take time, but you don't need to move the data off, destroy the raid5, make a new raid6, and move the data back. That would take even longer.
    Valid points...

    what are your thoughts on LVM vs Software raid?

    The LVM thing sounded interesting, and I suppose an LVM volume is just as safe as a raid 1, 5, 6, 10 or whatever, as long as it's not raid 0.

  7. #17
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Ubuntu 12.04 won't see raid 0

    Actually, LVM is not a replacement for raid. On a server, you would usually create one big raid device and use it as the physical volume for LVM. In that case, you don't create physical partitions for each mount point; you have one large raid device.

    So, in the above example, when you select the md1 large device, in the Use As you don't select ext4. Instead you select physical volume for LVM.

    Then in the partitioning list, at the top you will see the option Configure LVM (where Configure SW raid was), and you create one Volume Group and at least one Logical Volume. If you want more separate mount points, you create an LV for each. When you are finished with that, back in the partitions list you will see the new LV devices. You then select them one by one and set their Use As to ext4 and the needed mount points.

    Imagine it as mdadm assembling the big raid5 array from your four big partitions, but you don't use it "directly". Instead that md1 device serves as the foundation for the VG (LVM), and the LVs are what you use directly, with mount points.
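
    On the command line that whole layer would look roughly like this (a sketch with made-up VG/LV names and sizes; in the installer you do the same thing through the menus):
    Code:
    sudo pvcreate /dev/md1                # the raid5 array becomes the LVM physical volume
    sudo vgcreate vg0 /dev/md1            # volume group on top of it
    sudo lvcreate -L 100G -n root vg0     # one LV per planned mount point
    sudo lvcreate -L 500G -n home vg0
    sudo mkfs.ext4 /dev/vg0/root          # then format and mount each LV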
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  8. #18
    Join Date
    Jan 2013
    Beans
    29

    Re: Ubuntu 12.04 won't see raid 0

    OK... I think I will try it with the LVM thing...

    Another thing... when I make my raid 5 and choose 4 disks, 0 disks for hot standby, it still only gives me 9TB when I make the raid... since I have 3TB on each disk, in my world it should have been 12TB?

  9. #19
    Join Date
    Nov 2009
    Location
    Catalunya, Spain
    Beans
    14,560
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Ubuntu 12.04 won't see raid 0

    No, in raid5 one disk's worth of space goes to parity data. That's why it can keep working if one disk fails. The total usable space of N disks in raid5 is N-1 disks. 9TB is correct.
    Darko.
    -----------------------------------------------------------------------
    Ubuntu 18.04 LTS 64bit

  10. #20
    Join Date
    Jan 2013
    Beans
    29

    Re: Ubuntu 12.04 won't see raid 0

    Quote Originally Posted by darkod View Post
    No, in raid5 one disk's worth of space goes to parity data. That's why it can keep working if one disk fails. The total usable space of N disks in raid5 is N-1 disks. 9TB is correct.
    I know it's N-1, but I could swear I have set up a raid 5 with more disks on a SAN before... but I guess I must be remembering wrong then...

    I will give this another go sometime next week when I'm at work and have the time... thanks so far...
