I'd appreciate it if you'd edit the first post and wrap all the terminal output in forum 'code' tags. That uses a monospaced font, so columns will line up. The Advanced Editor (or Adv Reply) has a code-tag button "#" for this purpose. Of course, you can also use the bold, italics, or quote tag buttons and manually change the tag word to "code" at the start and end of the code block.
I'd be afraid of making a mistake and misinterpreting the output otherwise.
Why don't you want partitions for your RAID devices? Without partitions, some disk tools won't work. The best practice for RAID setups with mdadm or LVM is to use partitions so you can set the exact same size across all the devices; that way, if one of those devices needs to be replaced, you don't have to hunt down the exact same model of HDD. With partitions, you can set an exact size regardless of the actual HDD used.
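For example, here's a rough sketch of carving identically sized RAID partitions on two disks (the device names and the +1300G size are placeholders, not taken from any real setup):
Code:
$ sudo sgdisk --new=1:0:+1300G --typecode=1:fd00 --change-name=1:raid-member /dev/sdX
$ sudo sgdisk --new=1:0:+1300G --typecode=1:fd00 --change-name=1:raid-member /dev/sdY
Giving an explicit size like +1300G, rather than letting the partition run to the end of the disk, is what lets a slightly different-sized replacement drive drop in later.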
mdadm can only really be used for data areas, not for the OS. If you want RAID1 for the OS, there are only two ways that I know of to accomplish that:
a) Use HW-RAID with a reputable LSI HBA
b) Use LVM - set up the OS using LVM, and after the installation is finished, add another partition to the VG and use lvconvert to mirror the LV across the two physical partitions (sketched below). LVM RAID is sorta ugly when looking at the disk and LV layout, but it does work.
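That conversion in b) looks roughly like this (a sketch only, with placeholder partition/VG/LV names; read man lvconvert before trying it on a real system):
Code:
$ sudo pvcreate /dev/sdY3                     # the new partition to mirror onto
$ sudo vgextend vg-00 /dev/sdY3               # add it to the existing VG
$ sudo lvconvert --type raid1 -m1 vg-00/lv-0  # mirror the root LV onto the second PV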
For some reference:
mdadm RAID1 on partitions:
Code:
$ more /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[1] sdd2[0]
      1338985536 blocks [2/2] [UU]
$ lsblk
NAME      TYPE  FSTYPE            SIZE   FSAVAIL FSUSE% LABEL    MOUNTPOINT
sdb       disk                    1.8T
├─sdb1    part                    125.5M
├─sdb2    part  linux_raid_member 1.3T
│ └─md2   raid1 ext4              1.3T                  R2-Array
└─sdb3    part  ext4              586G                  Back2
sdd       disk                    1.8T
├─sdd1    part                    125.5M
├─sdd2    part  linux_raid_member 1.3T
│ └─md2   raid1 ext4              1.3T                  R2-Array
└─sdd3    part  ext4              586G                  Back1
$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EBB45227-D7AA-4DEA-9413-D794D995C37E
Device         Start        End    Sectors   Size Type
/dev/sdb1         34     257039     257006 125.5M Microsoft basic data
/dev/sdb2     257040 2678228279 2677971240   1.3T Microsoft basic data
/dev/sdb3 2678228280 3907024064 1228795785   586G Microsoft basic data
$ sudo fdisk -l /dev/sdd
Disk /dev/sdd: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 4D1B8BF0-5379-4EC4-A4AE-2A2FF5BAC6DB
Device         Start        End    Sectors   Size Type
/dev/sdd1         34     257039     257006 125.5M Microsoft basic data
/dev/sdd2     257040 2678228279 2677971240   1.3T Microsoft basic data
/dev/sdd3 2678228280 3907024064 1228795785   586G Microsoft basic data
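For completeness, an array like md2 above is normally built on those two partitions with something along these lines (a sketch from memory, not the exact commands I ran; the mdadm.conf path shown is the Debian/Ubuntu location):
Code:
$ sudo mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdd2
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # so it assembles at boot
$ sudo update-initramfs -u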
And here's an LVM RAID1 setup. This is on a different system, inside a VM:
Code:
$ lsblk
NAME                      SIZE   TYPE FSTYPE      LABEL MOUNTPOINT
vda                       15G    disk
├─vda1                    1M     part
├─vda2                    768M   part ext4              /boot
└─vda3                    13.2G  part LVM2_member
  ├─vg--00-lv--0_rmeta_0  4M     lvm
  │ └─vg--00-lv--0        10G    lvm  ext4              /
  ├─vg--00-lv--0_rimage_0 10G    lvm
  │ └─vg--00-lv--0        10G    lvm  ext4              /
  └─vg--00-lv--swap       1G     lvm  swap              [SWAP]
vdb                       15G    disk
├─vdb1                    1M     part
├─vdb2                    768M   part
└─vdb3                    13.2G  part LVM2_member
  ├─vg--00-lv--0_rmeta_1  4M     lvm
  │ └─vg--00-lv--0        10G    lvm  ext4              /
  └─vg--00-lv--0_rimage_1 10G    lvm
    └─vg--00-lv--0        10G    lvm  ext4              /
$ sudo fdisk -l /dev/vda
Disk /dev/vda: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CCD1DD1C-1587-4B4E-AAB3-8FC49103FC2B
Device      Start      End  Sectors  Size Type
/dev/vda1    2048     4095     2048    1M BIOS boot
/dev/vda2    4096  1576959  1572864  768M Linux filesystem
/dev/vda3 1576960 29358079 27781120 13.2G Linux filesystem
$ sudo fdisk -l /dev/vdb
Disk /dev/vdb: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 6AD324FD-9704-4973-AD17-DF8258AA0888
Device      Start      End  Sectors  Size Type
/dev/vdb1    2048     4095     2048    1M BIOS boot
/dev/vdb2    4096  1576959  1572864  768M Linux filesystem
/dev/vdb3 1576960 29358079 27781120 13.2G Linux filesystem
And some LVM information:
Code:
$ sudo pvs
  PV        VG    Fmt  Attr PSize   PFree
  /dev/vda3 vg-00 lvm2 a--  <13.25g 2.24g
  /dev/vdb3 vg-00 lvm2 a--  <13.25g 3.24g
$ sudo vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  vg-00   2   2   0 wz--n- 26.49g 5.48g
$ sudo lvs
  LV      VG    Attr       LSize  Pool Origin Data%  Meta% Move Log Cpy%Sync Convert
  lv-0    vg-00 rwi-aor--- 10.00g                                     100.00
  lv-swap vg-00 -wi-ao----  1.00g
I'm not using UEFI inside the VM. The first 2 partitions in this setup aren't mirrored; just the / file system is. The swap LV isn't mirrored either.
Code:
$ dft
Filesystem               Type Size Used Avail Use% Mounted on
/dev/mapper/vg--00-lv--0 ext4 9.8G 6.1G  3.2G  67% /
/dev/vda2                ext4 739M 313M  373M  46% /boot
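If you want to see which LVs are actually mirrored and where each copy lives, asking lvs to show the hidden sub-LVs lays it out (just an illustration; pick whatever fields you like, and your output will differ):
Code:
$ sudo lvs -a -o name,segtype,devices,copy_percent vg-00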
I find the lsblk output (I use lots of specific display options in an alias to get the output above) to be ugly with LVM RAID1. I set up this server about 2 yrs ago just as a place to play with LVM RAID. I did remove the 2nd disk from the running VM early on and it kept going, but if the disk wasn't put back at boot, the OS refused to boot. With mdadm over the decades, I've booted lots of times with a failed or missing HDD in the RAID setup. When I moved to SSDs for the OS, I stopped using RAID. Failure rates, at least for quality SSDs, are extremely low, and the complexity of RAID just isn't worth it to me.
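If you do go with mdadm and a member disk fails, the replacement flow is basically this (placeholder device names; the partition on the new disk just needs to be at least as large as the old one):
Code:
$ sudo mdadm /dev/md2 --fail /dev/sdd2 --remove /dev/sdd2   # kick the bad member out
# partition the replacement disk to the same size, then:
$ sudo mdadm /dev/md2 --add /dev/sdX2
$ cat /proc/mdstat                                          # watch the resync progress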
Anyway, you have some data points for consideration as you figure out your deployment. Hope it helps in some small way.