
Thread: why is UEFI installation of ubuntu 24.04 lts behaving different for mdadm during inst

  1. #1
    Join Date
    Jan 2018
    Beans
    5

    why is UEFI installation of ubuntu 24.04 lts behaving different for mdadm during inst

    I am trying to set up mdadm RAID1 for the OS drive on Ubuntu 24.04 LTS, and I am not sure what is going on with UEFI: the installer adds a mount point to one of the drives even though I want no mounted partition on it, so that I can create the RAID1 across the two drives.


    Please see the screenshot below.


    << website won't let me upload the screenshot >>


    So I went ahead and set it up anyway, and this is how it looks:



    Code:
    root@ubuntu:~# lsblk
    NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
    sr0          11:0    1   16M  0 rom   
    xvda        202:0    0   16G  0 disk  
    ├─xvda1     202:1    0  763M  0 part  /boot/efi
    └─xvda2     202:2    0 15.3G  0 part  
      └─md0       9:0    0 15.2G  0 raid1 
        └─md0p1 259:0    0 15.2G  0 part  /
    xvdb        202:16   0   16G  0 disk  
    ├─xvdb1     202:17   0  763M  0 part  
    └─xvdb2     202:18   0 15.3G  0 part  
      └─md0       9:0    0 15.2G  0 raid1 
        └─md0p1 259:0    0 15.2G  0 part  /
    xvdc        202:32   0   32G  0 disk  
    xvde        202:64   0   32G  0 disk  
    xvdf        202:80   0   32G  0 disk  
    xvdg        202:96   0   32G  0 disk

    Code:
    root@ubuntu:~# mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Fri Sep 13 07:18:25 2024
            Raid Level : raid1
            Array Size : 15984640 (15.24 GiB 16.37 GB)
         Used Dev Size : 15984640 (15.24 GiB 16.37 GB)
          Raid Devices : 2
         Total Devices : 2
           Persistence : Superblock is persistent
    
    
           Update Time : Fri Sep 13 07:50:20 2024
                 State : clean 
        Active Devices : 2
       Working Devices : 2
        Failed Devices : 0
         Spare Devices : 0
    
    
    Consistency Policy : resync
    
    
                  Name : ubuntu-server:0
                  UUID : f3d46673:304110ea:067c80bd:d0415d2b
                Events : 87
    
    
        Number   Major   Minor   RaidDevice State
           0     202        2        0      active sync   /dev/xvda2
           1     202       18        1      active sync   /dev/xvdb2
    Code:
    root@ubuntu:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs           194M  1.1M  193M   1% /run
    efivarfs        1.0G  1.5M 1023M   1% /sys/firmware/efi/efivars
    /dev/md0p1       15G  4.4G  9.8G  31% /
    tmpfs           970M     0  970M   0% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    /dev/xvda1      762M  6.2M  756M   1% /boot/efi
    tmpfs           194M   12K  194M   1% /run/user/1000
    
    root@ubuntu:~# fdisk -l /dev/xvda
    Disk /dev/xvda: 16 GiB, 17179869184 bytes, 33554432 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 7BAC44AC-0D7C-4532-AB09-6175EEDB9FC5
    
    
    Device       Start      End  Sectors  Size Type
    /dev/xvda1    2048  1564671  1562624  763M EFI System
    /dev/xvda2 1564672 33552383 31987712 15.3G Linux filesystem
    
    root@ubuntu:~# fdisk -l /dev/xvdb
    Disk /dev/xvdb: 16 GiB, 17179869184 bytes, 33554432 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: B63B505F-3EE0-47AC-826C-62381344E3A5
    
    
    Device       Start      End  Sectors  Size Type
    /dev/xvdb1    2048  1564671  1562624  763M EFI System
    /dev/xvdb2 1564672 33552383 31987712 15.3G Linux filesystem

    As you can see, /boot/efi is still mounted only from the first partition of the first disk, /dev/xvda.


    Is this not problematic? I think I will just avoid the UEFI boot firmware altogether.


    Is this expected behavior, and will it cause any issues later on?
    When setting up RAID1 I want both drives to be exactly the same, and right now EFI will not let that happen.


    For example, if the drive holding /boot/efi is the one that fails, I hope this won't be a problem when relying on the protection of RAID1, but it seems essentially set up for failure from the beginning.
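    A rough, untested sketch of the workaround I'm assuming is possible, using the device names above: keep the second drive's ESP as a manual copy of the first and register it with the firmware (the copy would need refreshing after GRUB updates).

    Code:
    # Format the unused ESP on the second disk and copy the active ESP onto it
    sudo mkfs.vfat -F 32 /dev/xvdb1
    sudo mkdir -p /mnt/efi2
    sudo mount /dev/xvdb1 /mnt/efi2
    sudo cp -a /boot/efi/. /mnt/efi2/
    sudo umount /mnt/efi2

    # Add a second UEFI boot entry pointing at the copy, so the firmware can
    # fall back to xvdb if xvda dies
    sudo efibootmgr --create --disk /dev/xvdb --part 1 \
         --label "ubuntu (disk 2)" --loader '\EFI\ubuntu\shimx64.efi'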


    Please share your comments.

  2. #2
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: why is UEFI installation of ubuntu 24.04 lts behaving different for mdadm during

    I'd appreciate it if you'd edit the first post and wrap all the terminal output in forum 'code tags'. Code tags use a monospaced font so the columns line up. The Advanced Editor (or Adv Reply) has a code-tag button "#" for this purpose. Of course, you can also use the bold, italics, or quote tag buttons and then manually change the tag to "code" at the start and end of the block.

    I'd be afraid of making a mistake and misinterpreting the output otherwise.

    Why don't you want partitions for your RAID devices? Without partitions, some disk tools won't work. The best practice for RAID setups with mdadm or LVM is to use partitions so you can set the exact same size across all the devices; that way, if one of those devices needs to be replaced, you don't have to find the exact same model of HDD. By using partitions, you can set an exact partition size regardless of the actual HDD used.
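    For example, something along these lines (hypothetical /dev/sdX and /dev/sdY) creates identically sized RAID partitions on two disks of possibly different capacities and builds the array from the partitions rather than the raw disks; a sketch, not a recipe:

    Code:
    # Identical 1 TiB partitions on both disks, regardless of each disk's exact size
    sudo sgdisk --new=1:0:+1T --typecode=1:FD00 /dev/sdX
    sudo sgdisk --new=1:0:+1T --typecode=1:FD00 /dev/sdY

    # Build the RAID1 from the partitions, not the whole disks
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1
    sudo mkfs.ext4 /dev/md0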

    mdadm can only really be used for data areas, not for the OS. If you want RAID1 for the OS, there are only two ways that I know of to accomplish it:
    a) Use HW-RAID with a reputable LSI HBA.
    b) Use LVM: set up the OS using LVM and, after the installation is finished, add another partition to the VG and use lvconvert to make the two physical partitions mirrored. LVM RAID is sort of ugly when looking at the disk and LV layout, but it does work; a rough sketch of the commands is below.
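    A sketch of the lvconvert step in (b), assuming the VG/LV names shown in the output further down (vg-00 and lv-0) and /dev/vdb3 as the partition added from the second disk; adjust to the actual names:

    Code:
    # Make the second disk's partition an LVM PV and add it to the existing VG
    sudo pvcreate /dev/vdb3
    sudo vgextend vg-00 /dev/vdb3

    # Convert the root LV into a 2-way RAID1 mirror using the new PV
    sudo lvconvert --type raid1 --mirrors 1 vg-00/lv-0 /dev/vdb3

    # Watch Cpy%Sync to see the mirror resync progress
    sudo lvs -a -o +devices vg-00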

    For some reference:
    mdadm RAID1 on partitions:
    Code:
    $ more /proc/mdstat 
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
    md2 : active raid1 sdb2[1] sdd2[0]
          1338985536 blocks [2/2] [UU]
    
    $ lsblk 
    NAME                              TYPE  FSTYPE              SIZE FSAVAIL FSUSE% LABEL       MOUNTPOINT
    sdb                               disk                      1.8T                            
    ├─sdb1                            part                    125.5M                            
    ├─sdb2                            part  linux_raid_member   1.3T                            
    │ └─md2                           raid1 ext4                1.3T                R2-Array    
    └─sdb3                            part  ext4                586G                Back2       
    sdd                               disk                      1.8T                            
    ├─sdd1                            part                    125.5M                            
    ├─sdd2                            part  linux_raid_member   1.3T                            
    │ └─md2                           raid1 ext4                1.3T                R2-Array    
    └─sdd3                            part  ext4                586G                Back1       
    
    $ sudo fdisk -l /dev/sdb
    Disk /dev/sdb: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: WDC WD20EFRX-68A
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: EBB45227-D7AA-4DEA-9413-D794D995C37E
    
    Device          Start        End    Sectors   Size Type
    /dev/sdb1          34     257039     257006 125.5M Microsoft basic data
    /dev/sdb2      257040 2678228279 2677971240   1.3T Microsoft basic data
    /dev/sdb3  2678228280 3907024064 1228795785   586G Microsoft basic data
    
    $ sudo fdisk -l /dev/sdd
    Disk /dev/sdd: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: WDC WD20EFRX-68A
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: 4D1B8BF0-5379-4EC4-A4AE-2A2FF5BAC6DB
    
    Device          Start        End    Sectors   Size Type
    /dev/sdd1          34     257039     257006 125.5M Microsoft basic data
    /dev/sdd2      257040 2678228279 2677971240   1.3T Microsoft basic data
    /dev/sdd3  2678228280 3907024064 1228795785   586G Microsoft basic data
    And for an LVM RAID1. This is on a different system, inside a VM:
    Code:
    $ lsblk 
    NAME                       SIZE TYPE FSTYPE      LABEL MOUNTPOINT
    vda                         15G disk                   
    ├─vda1                       1M part                   
    ├─vda2                     768M part ext4              /boot
    └─vda3                    13.2G part LVM2_member       
      ├─vg--00-lv--0_rmeta_0     4M lvm                    
      │ └─vg--00-lv--0          10G lvm  ext4              /
      ├─vg--00-lv--0_rimage_0   10G lvm                    
      │ └─vg--00-lv--0          10G lvm  ext4              /
      └─vg--00-lv--swap          1G lvm  swap              [SWAP]
    vdb                         15G disk                   
    ├─vdb1                       1M part                   
    ├─vdb2                     768M part                   
    └─vdb3                    13.2G part LVM2_member       
      ├─vg--00-lv--0_rmeta_1     4M lvm                    
      │ └─vg--00-lv--0          10G lvm  ext4              /
      └─vg--00-lv--0_rimage_1   10G lvm                    
        └─vg--00-lv--0          10G lvm  ext4              /
    
    $ sudo fdisk -l /dev/vda
    Disk /dev/vda: 15 GiB, 16106127360 bytes, 31457280 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: CCD1DD1C-1587-4B4E-AAB3-8FC49103FC2B
    
    Device       Start      End  Sectors  Size Type
    /dev/vda1     2048     4095     2048    1M BIOS boot
    /dev/vda2     4096  1576959  1572864  768M Linux filesystem
    /dev/vda3  1576960 29358079 27781120 13.2G Linux filesystem
    
    $ sudo fdisk -l /dev/vdb
    Disk /dev/vdb: 15 GiB, 16106127360 bytes, 31457280 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 6AD324FD-9704-4973-AD17-DF8258AA0888
    
    Device       Start      End  Sectors  Size Type
    /dev/vdb1     2048     4095     2048    1M BIOS boot
    /dev/vdb2     4096  1576959  1572864  768M Linux filesystem
    /dev/vdb3  1576960 29358079 27781120 13.2G Linux filesystem
    And some LVM information:
    Code:
    $ sudo pvs
      PV         VG    Fmt  Attr PSize   PFree
      /dev/vda3  vg-00 lvm2 a--  <13.25g 2.24g
      /dev/vdb3  vg-00 lvm2 a--  <13.25g 3.24g
    
    $ sudo vgs
      VG    #PV #LV #SN Attr   VSize  VFree
      vg-00   2   2   0 wz--n- 26.49g 5.48g
    
    $ sudo lvs
      LV      VG    Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      lv-0    vg-00 rwi-aor--- 10.00g                                    100.00          
      lv-swap vg-00 -wi-ao----  1.00g
    I'm not using UEFI inside the VM. The first two partitions in this setup aren't mirrored; only the / file system is. The swap LV isn't mirrored either.
    Code:
    $ dft
    Filesystem               Type  Size  Used Avail Use% Mounted on
    /dev/mapper/vg--00-lv--0 ext4  9.8G  6.1G  3.2G  67% /
    /dev/vda2                ext4  739M  313M  373M  46% /boot
    I find the lsblk output (I use lots of specific display options in an alias to get the output above; see the sketch just below) to be ugly with LVM RAID1. I set up this server about two years ago just as a place to play with LVM RAID. I did remove the 2nd disk from the running VM early on and it kept going, but if the disk wasn't put back at boot, the OS refused to boot. With mdadm, over the decades, I've booted lots of times with a failed or missing HDD in the RAID set. When I moved to SSDs for the OS, I stopped using RAID: failure rates, at least for quality SSDs, are extremely low, and the complexity of RAID just isn't worth it to me.
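    The alias mostly just picks output columns; something like this gives roughly the same layout as the listings above (a guess from the column headers, not necessarily the exact options):

    Code:
    alias lsblk='lsblk -o NAME,TYPE,FSTYPE,SIZE,FSAVAIL,FSUSE%,LABEL,MOUNTPOINT'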

    Anyway, you have some data points for consideration as you figure out your deployment. Hope it helps in some small way.

  3. #3
    Join Date
    Jan 2018
    Beans
    5

    Re: why is UEFI installation of ubuntu 24.04 lts behaving different for mdadm during

    Quote Originally Posted by TheFu View Post

    I'm not using UEFI inside the VM. The first two partitions in this setup aren't mirrored; only the / file system is. The swap LV isn't mirrored either.
    I tried it with legacy BIOS and it worked fine (output below); the UEFI firmware is the one with the issue.
    Ubuntu/Canonical need to check this issue out, or how can I report it?

    Here is how it looks on legacy BIOS, which is how it should look (see also the grub-install note after the output):

    Code:
    root@ubuntu:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs           792M  1.1M  791M   1% /run
    /dev/md0p2      7.8G  2.4G  5.0G  33% /
    tmpfs           3.9G     0  3.9G   0% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           792M   12K  792M   1% /run/user/0
    root@ubuntu-zfs:~# cat /proc/mdstat 
    Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
    md0 : active raid1 xvda2[0] xvdb2[1]
          16764928 blocks super 1.2 [2/2] [UU]
          
    unused devices: <none>
    
    root@ubuntu:~# lsblk
    NAME        MAJ:MIN RM SIZE RO TYPE  MOUNTPOINTS
    sr0          11:0    1  16M  0 rom   
    xvda        202:0    0  16G  0 disk  
    ├─xvda1     202:1    0   1M  0 part  
    └─xvda2     202:2    0  16G  0 part  
      └─md0       9:0    0  16G  0 raid1 
        ├─md0p1 259:0    0   8G  0 part  [SWAP]
        └─md0p2 259:1    0   8G  0 part  /
    xvdb        202:16   0  16G  0 disk  
    ├─xvdb1     202:17   0   1M  0 part  
    └─xvdb2     202:18   0  16G  0 part  
      └─md0       9:0    0  16G  0 raid1 
        ├─md0p1 259:0    0   8G  0 part  [SWAP]
        └─md0p2 259:1    0   8G  0 part  /
    xvdc        202:32   0  32G  0 disk  
    xvde        202:64   0  32G  0 disk  
    xvdf        202:80   0  32G  0 disk  
    xvdg        202:96   0  32G  0 disk  
    
    root@ubuntu:~# fdisk -l /dev/xvda
    Disk /dev/xvda: 16 GiB, 17179869184 bytes, 33554432 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: F1D47DD3-656B-49BE-AAD4-7783BFD1591C
    
    
    Device     Start      End  Sectors Size Type
    /dev/xvda1  2048     4095     2048   1M BIOS boot
    /dev/xvda2  4096 33552383 33548288  16G Linux filesystem
    
    root@ubuntu:~# fdisk -l /dev/xvdb
    Disk /dev/xvdb: 16 GiB, 17179869184 bytes, 33554432 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: E4B91552-B82E-412E-9583-7219F1EF2A73
    
    
    Device     Start      End  Sectors Size Type
    /dev/xvdb1  2048     4095     2048   1M BIOS boot
    /dev/xvdb2  4096 33552383 33548288  16G Linux filesystem
    
    root@ubuntu:~# mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Sun Sep 15 11:18:02 2024
            Raid Level : raid1
            Array Size : 16764928 (15.99 GiB 17.17 GB)
         Used Dev Size : 16764928 (15.99 GiB 17.17 GB)
          Raid Devices : 2
         Total Devices : 2
           Persistence : Superblock is persistent
    
    
           Update Time : Tue Sep 17 22:17:32 2024
                 State : clean 
        Active Devices : 2
       Working Devices : 2
        Failed Devices : 0
         Spare Devices : 0
    
    
    Consistency Policy : resync
    
    
                  Name : ubuntu-server:0
                  UUID : 1f59c9e3:2d345d82:0bff2d0a:d64fcc63
                Events : 61
    
    
        Number   Major   Minor   RaidDevice State
           0     202        2        0      active sync   /dev/xvda2
           1     202       18        1      active sync   /dev/xvdb2
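    One note on the legacy-BIOS layout above: both disks got a BIOS boot partition, but if the installer only wrote GRUB's boot code to the first disk, the second can be made bootable manually; a sketch with the device names above:

    Code:
    # Write GRUB's BIOS boot code to xvdb as well, so it can boot if xvda fails
    sudo grub-install /dev/xvdb

    # Optionally make the device list persistent across GRUB package updates
    sudo dpkg-reconfigure grub-pc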

  4. #4
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: why is UEFI installation of ubuntu 24.04 lts behaving different for mdadm during

    Quote Originally Posted by ubernoobie View Post
    Ubuntu/Canonical need to check this issue out, or how can I report it?
    How to report a bug in Ubuntu Server:
    https://documentation.ubuntu.com/ser.../report-a-bug/ is what Google found. Nobody here works for Canonical.
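    From that page, the usual route is apport's ubuntu-bug tool run against the package you think is at fault; for installer behaviour that would presumably be subiquity (my assumption, adjust the package name if apport disagrees):

    Code:
    # File a bug report against the server installer
    ubuntu-bug subiquity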

  5. #5
    Join Date
    Jan 2018
    Beans
    5

    Re: why is UEFI installation of ubuntu 24.04 lts behaving different for mdadm during

    So from what you have seen, what do you think?
    Why is UEFI handling mdadm RAID1 for the boot drive wrong?

    I mean, I'm not sure if this is a bug, but it is definitely worth knowing whether that behavior is intended or a mistake, because the RAID is useless if the drive with the boot partition dies; that's it for the whole array.

    Agree or not?

  6. #6
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: why is UEFI installation of ubuntu 24.04 lts behaving different for mdadm during

    Quote Originally Posted by ubernoobie View Post
    So from what you have seen, what do you think?
    I put up with UEFI; I don't know much about it other than that it solves a problem I've never had. More and more of those "solutions" have been happening over the last 10 years, often driven by MSFT problems seeking solutions that are forced on the entire industry.

    Quote Originally Posted by ubernoobie View Post
    Why is UEFI handling mdadm RAID1 for the boot drive wrong?
    I didn't know that mdadm could be used for booting under Ubuntu. The last time I tried it was with CentOS over 10 years ago, which made the Ubuntu Server installer look really bad for disk setup in comparison. Perhaps it isn't tested?

    Quote Originally Posted by ubernoobie View Post
    I mean, I'm not sure if this is a bug, but it is definitely worth knowing whether that behavior is intended or a mistake, because the RAID is useless if the drive with the boot partition dies; that's it for the whole array.

    Agree or not?
    I don't know. If you feel it is a bug, then you should open a bug report and provide the exact steps to reproduce it. I know the Ubuntu Desktop installer broke many things in 24.04, like removing LVM support, which made those flavors completely useless to me. The server installer didn't appear to me to have really changed, but I only installed one instance and didn't bother with mdadm.
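    If you do file it, the installer keeps its logs on the installed system (assuming a standard subiquity install); attaching those is the main thing to include along with the steps:

    Code:
    # Installer logs kept on the installed system; attach these to the bug report
    ls /var/log/installer/
    sudo tar czf installer-logs.tgz /var/log/installer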

    In short, I'm not really much use for your questions; I just tried to show a workaround that doesn't use mdadm.
