Not all RAID devices are supported. In Linux, LSI RAID cards (and the rebranded versions used by HP/Dell/IBM) are the normal HW-RAID HBAs in servers.
You need to understand that vendors don't write Linux drivers for all their devices. In general, the cheaper the device, the less well it is supported by Linux. Blame the vendors. Sometimes **really**, **really** popular devices will eventually get Linux support because someone in the community has the skill and time to port code or find code for a similar chip. It is often a personal itch for that person. Then they post their driver and how to compile it on GitLab for the world to use - at their own risk.
For home Linux users, SW-RAID has been the rule for almost 20 years. It is more flexible, just as fast thanks to modern CPUs, and allows easy migration to different systems since it isn't tied to specific hardware.
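For example, with two spare disks, mdadm sets up a SW-RAID1 mirror in a couple of commands. This is only a rough sketch - the device names (/dev/sdb, /dev/sdc) and the md device name are placeholders, so adapt them to your hardware:
Code:
$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
$ cat /proc/mdstat                          # watch the initial sync
$ sudo mkfs.ext4 /dev/md0                   # then format and mount as usual
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf    # make the array persistent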
Lastly, there's something called "Fake-RAID", which is provided by motherboards. It has all the limitations of HW-RAID, but still uses the CPU to do most of the work. I've always avoided Fake-RAID, since if the motherboard fails, the data is generally gone. Inaccessible, unless the exact same model and revision of the motherboard can be found. The same applies to HW-RAID. Enterprises can keep a spare on-site for when the RAID card fails. Most home users do not.
If you are going to dual-boot, maybe the best option is to have a small OS boot device (non-RAID) and hopefully find a driver that will support the RAID array post-boot. Google found this https://unix.stackexchange.com/quest...oller-on-linux which says the marvell-88se9230 chip is already supported by drivers in the kernel.
Beware that "support" and "boot support" are very different things in Linux. In general, if you see the RAID controller's BIOS screen BEFORE GRUB, then you can treat the array like any HDD. If special drivers need to be loaded at OS boot time to see the hardware, then you probably cannot boot from it.
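A quick way to check whether the kernel already has a driver bound to a storage controller is lspci. The output below is only illustrative - the vendor/device IDs and the driver name will differ on your hardware:
Code:
$ lspci -nnk | grep -A 3 -i 'raid\|sata'
01:00.0 RAID bus controller [0104]: (vendor and device here) [xxxx:xxxx]
        Kernel driver in use: ahci      <-- a bound driver means the running OS can see it
        Kernel modules: ahci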
If you want RAID1 for the OS (not the EFI/FAT32 boot partition), you can install using LVM to a single device, then use LVM's lvconvert later to add a RAID1 mirror. That works for everything except the EFI and /boot/ partitions; there's a rough sketch of the conversion further down. I have a 22.04 server set up this way. It isn't using EFI to boot, however. Don't know if this will help, but here's the storage layout using LVM with RAID1 for /.
Code:
$ lsblk
NAME                      SIZE TYPE FSTYPE      LABEL MOUNTPOINT
vda                        15G disk
├─vda1                      1M part
├─vda2                    768M part ext4              /boot
└─vda3                   13.2G part LVM2_member
  ├─vg--00-lv--0_rmeta_0    4M lvm
  │ └─vg--00-lv--0         10G lvm  ext4              /
  ├─vg--00-lv--0_rimage_0  10G lvm
  │ └─vg--00-lv--0         10G lvm  ext4              /
  └─vg--00-lv--swap         1G lvm  swap              [SWAP]
vdb                        15G disk
├─vdb1                      1M part
├─vdb2                    768M part
└─vdb3                   13.2G part LVM2_member
  ├─vg--00-lv--0_rmeta_1    4M lvm
  │ └─vg--00-lv--0         10G lvm  ext4              /
  └─vg--00-lv--0_rimage_1  10G lvm
    └─vg--00-lv--0         10G lvm  ext4              /
and the fdisk version (simplified to remove all the LVM objects, which are better seen above):
Code:
$ sudo fdisk -l
Disk /dev/vda: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CCD1DD1C-1587-4B4E-AAB3-8FC49103FC2B

Device        Start      End  Sectors  Size Type
/dev/vda1      2048     4095     2048    1M BIOS boot
/dev/vda2      4096  1576959  1572864  768M Linux filesystem
/dev/vda3   1576960 29358079 27781120 13.2G Linux filesystem

Disk /dev/vdb: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 6AD324FD-9704-4973-AD17-DF8258AA0888

Device        Start      End  Sectors  Size Type
/dev/vdb1      2048     4095     2048    1M BIOS boot
/dev/vdb2      4096  1576959  1572864  768M Linux filesystem
/dev/vdb3   1576960 29358079 27781120 13.2G Linux filesystem
....
The df -Th output:
Code:
$ df -Th
Filesystem               Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg--00-lv--0 ext4  9.8G  4.6G  4.7G  50% /
/dev/vda2                ext4  739M  290M  395M  43% /boot
The full LVM layout using LVM tools:
Code:
$ sudo pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/vda3  vg-00 lvm2 a--  <13.25g 2.24g
  /dev/vdb3  vg-00 lvm2 a--  <13.25g 3.24g
$ sudo vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  vg-00   2   2   0 wz--n- 26.49g 5.48g
$ sudo lvs
  LV      VG    Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-0    vg-00 rwi-aor--- 10.00g                                      100.00
  lv-swap vg-00 -wi-ao----  1.00g
Note that lv-0 is RAID1; lv-swap isn't on RAID. I don't see the point in RAID for swap. Since this server is used to play with different minimal setups, I didn't give it much storage. It doesn't have any GUI. No GUI means less than 5GB of storage is needed. That's actually very bloated compared to some other distros' server installs.
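For reference, the lvconvert step I mentioned near the top looks roughly like this. It's only a sketch - I'm assuming the second disk's LVM partition is /dev/vdb3 and the VG/LV names match the layout above; adjust for your own system:
Code:
$ sudo pvcreate /dev/vdb3                           # prep the 2nd disk's partition as a PV
$ sudo vgextend vg-00 /dev/vdb3                     # add it to the existing VG
$ sudo lvconvert --type raid1 -m 1 vg-00/lv-0       # convert the linear LV for / into a RAID1 mirror
$ sudo lvs -a -o name,copy_percent,devices vg-00    # watch the mirror sync to 100%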
Installing with LVM isn't trivial. Canonical's installers make doing it a pain, so I set up the LVM storage how I want it before doing the install to disk, using a custom script that's only for my systems and needs. Everyone has different needs for their different systems. The 22.04 server shown above is very simple and not what I use in production for LVM layouts.
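Just to give an idea of what "set up the storage first" means, here's a hypothetical, stripped-down version that roughly mirrors the layout above. It assumes a BIOS-boot (non-EFI) disk and sgdisk; it is not the script I actually use:
Code:
$ sudo sgdisk -n1:0:+1M   -t1:ef02 /dev/vda      # tiny BIOS boot partition
$ sudo sgdisk -n2:0:+768M -t2:8300 /dev/vda      # /boot
$ sudo sgdisk -n3:0:0     -t3:8e00 /dev/vda      # rest of the disk for LVM
$ sudo pvcreate /dev/vda3
$ sudo vgcreate vg-00 /dev/vda3
$ sudo lvcreate -L 10G -n lv-0    vg-00          # future /
$ sudo lvcreate -L 1G  -n lv-swap vg-00          # swap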
Anyway, hopefully, I've provided some other ideas for consideration. You might want to consider using ZFS if you are just starting out in Linux. There are many great things about it, including RAIDz and RAIDz2. Alas, Windows doesn't know anything about ZFS.
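If you do go the ZFS route, creating a RAIDz2 pool is basically a one-liner. The pool name and device names below are just placeholders:
Code:
$ sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde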