Manual mdadm with LVM and Fedora Installer

Using a Fedora 36 Live USB on a legacy BIOS machine, I set up the following partition layout with gdisk, identical on three separate 500 GB drives:

Code:

Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048            4095    1024.0 KiB  EF02  BIOS boot partition
   2            4096          618495    300.0 MiB   FD00  Linux RAID
   3          618496        34172927    16.0 GiB    FD00  Linux RAID
   4        34172928       976773134    449.5 GiB   FD00  Linux RAID
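For reference, the same layout could be scripted with sgdisk, roughly like this (assuming the three drives really are /dev/sda, /dev/sdb and /dev/sdc and may be wiped):

Code:

for d in /dev/sda /dev/sdb /dev/sdc; do
    sgdisk --zap-all "$d"                                              # destroy any existing partition table
    sgdisk -n 1:2048:4095 -t 1:ef02 -c 1:"BIOS boot partition" "$d"
    sgdisk -n 2:4096:618495 -t 2:fd00 -c 2:"Linux RAID" "$d"
    sgdisk -n 3:618496:34172927 -t 3:fd00 -c 3:"Linux RAID" "$d"
    sgdisk -n 4:34172928:976773134 -t 4:fd00 -c 4:"Linux RAID" "$d"
done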

Then I set up software RAID with mdadm as follows:

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]4

mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sd[abc]3

mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sd[abc]2
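While the arrays are syncing, the resync progress and array health can be checked with:

Code:

cat /proc/mdstat              # resync/recovery progress for every md device
mdadm --detail /dev/md0       # level, state and member devices of one array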

Next I created physical volumes on the RAID devices:

pvcreate /dev/md0

pvcreate /dev/md1

pvcreate /dev/md2

which gives me:

Code:

pvdisplay

— Physical volume —
PV Name /dev/md0
VG Name VGarray
PV Size 898.68 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 230062
Free PE 0
Allocated PE 230062
PV UUID 9sGOj2-CeLj-EOhY-Rgv4-baVJ-W2Mo-eR2Ice

“/dev/md1” is a new physical volume of “15.98 GiB”
— NEW Physical volume —
PV Name /dev/md1
VG Name
PV Size 15.98 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 7LO45Z-sNrU-AXTK-nv4I-7BCy-wQnU-UtGMCC

“/dev/md2” is a new physical volume of “199.00 MiB”
— NEW Physical volume —
PV Name /dev/md2
VG Name
PV Size 199.00 MiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID DVnpdm-C9rg-Wu0V-HE7Q-GRnN-B7aG-kZOZ77

Then I went on to create the volume group:

Code:

vgcreate VGarray /dev/md0

vgdisplay

— Volume group —
VG Name VGarray
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <898.68 GiB
PE Size 4.00 MiB
Total PE 230062
Alloc PE / Size 230062 / <898.68 GiB
Free PE / Size 0 / 0
VG UUID Ie0G3S-HvEq-2Zia-rpyC-PIww-czUy-Px03TQ

and logical volumes:

Code:

lvcreate -L 50G VGarray -n lvroot

lvcreate -L 20G VGarray -n lvvar

lvcreate -l 100%FREE VGarray -n lvhome

that results in:

Code:

lvdisplay

— Logical volume —
LV Path /dev/VGarray/lvroot
LV Name lvroot
VG Name VGarray
LV UUID 5cuQDv-fuVg-7dVa-EtQ2-jDFS-QTAH-i79qtS
LV Write Access read/write
LV Creation host, time localhost-live, 2022-07-10 09:26:31 -0400
LV Status available
# open 0
LV Size 50.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:2

— Logical volume —
LV Path /dev/VGarray/lvvar
LV Name lvvar
VG Name VGarray
LV UUID U3OoH8-vQ4I-eM5P-HBZ8-LP7z-Mo1Q-g189Xr
LV Write Access read/write
LV Creation host, time localhost-live, 2022-07-10 09:27:27 -0400
LV Status available
# open 0
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:3

— Logical volume —
LV Path /dev/VGarray/lvhome
LV Name lvhome
VG Name VGarray
LV UUID vEW9Jp-C5xZ-ce23-iF1L-e2sb-ckHL-g0e6ji
LV Write Access read/write
LV Creation host, time localhost-live, 2022-07-10 09:30:48 -0400
LV Status available
# open 0
LV Size <828.68 GiB
Current LE 212142
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:4
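The same can be checked more compactly with the short LVM reporting commands:

Code:

pvs    # physical volumes and the VG they belong to
vgs    # volume group size and free space
lvs    # logical volumes and their sizes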

Lastly, I exported the RAID config to the config file:

Code:

mdadm --examine --scan >> /etc/mdadm.conf
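I believe an equivalent is to generate the ARRAY lines from the currently running arrays (rather than from the on-disk superblocks) and then check what landed in the file:

Code:

mdadm --detail --scan          # ARRAY lines taken from the assembled arrays
cat /etc/mdadm.conf            # verify what actually ended up in the config file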

When I run the Fedora Installer it can only see the RAID devices, with 0 bytes of free space, but none of the logical volumes. Why?

I would suggest that instead of creating 3 RAID arrays, you simply create 1, or at most 2, and then use LVM to split the space as desired. One RAID 5 array would give you the best arrangement since you are using 3 devices anyway.
RAID 1 gives you only one device's worth of space on 2 or more devices.
RAID 5 gives 2 (or more) devices' worth of space on 3 (or more) devices, so it would result in less lost space even when used for a single VG and 3 LVs.
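As a rough worked example with your 449.5 GiB RAID partitions:

Code:

# three 449.5 GiB members (partition 4 on each drive):
#   RAID 1 (3-way mirror): 1 x 449.5 GiB ~ 449.5 GiB usable
#   RAID 5 (3 members):    2 x 449.5 GiB ~ 899   GiB usable (matches your ~898.68 GiB PV)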

The installer issue is the fact that you already have RAID arrays and LVM LVs created.

The Fedora installer needs (for a legacy install) one partition for /boot that is ext4. Until the kernel starts booting, the BIOS is unable to use an LVM partition since the LVM modules are not loaded. The same goes for RAID: an mdadm-managed array cannot be manipulated nor used by the BIOS.

If you create one small ext4 partition of about 1 GB that is not in a RAID array or LVM, then tell the installer to use that as /boot, you may be able to do the install from there.
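Purely as an illustration (your current layout has no free space left, so partition 4 would have to be shrunk first, and partition number 5 here is just an example):

Code:

sgdisk -n 5:0:+1G -t 5:8300 -c 5:"boot" /dev/sda   # plain Linux partition, outside RAID/LVM
mkfs.ext4 -L boot /dev/sda5                        # format it ext4 and offer it to the installer as /boot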

You would also need to do a manual partitioning install so you could select what to put in each specified LV. Without telling it to use LVM and manually selecting the LVs, the installer sees all the space as already allocated.

However, since the RAID is managed by mdadm, and I have never tried doing RAID with the installer, I would have to try it and see if an install can be done that way.

I know that the OS can be loaded onto an LVM volume, but I am not sure if that volume can actually be part of a RAID array for the initial install, since I have never tried it. I have my / partition as LVM and my /home partition as LVM on RAID 6. (/boot is ext4 and /boot/efi is the ESP.)

The installer issue is the fact that you already have RAID arrays and LVM LVs created.

Yes, I have, and I would expect the installer to just use it.

The Fedora installer needs (for a legacy install) one partition for /boot that is ext4. Until the kernel starts booting, the BIOS is unable to use an LVM partition since the LVM modules are not loaded. The same goes for RAID: an mdadm-managed array cannot be manipulated nor used by the BIOS.

Well, for now I just want to be able to start the installation. The things you are describing are a step ahead. I am at the point where the Live USB has started. In the console I can list the RAID status from /proc/mdstat and list all the logical volumes, and everything is fine from the OS point of view. Then, still in the Live USB session, I start the installer. I even tried to do that from a root shell to make sure there is no problem with access rights.

After I had done the manual RAID and LVM config I rebooted the Live USB, then started the Installer again. This time I can see all the logical volumes, but they are still grayed out and report 0 B of free space, as below:

Now there is a little progress, but why is it still not able to assign a mount point, set a file system, or see any free space there?

I think the space is seen as already allocated and must be individually selected to be removed from the ‘Unknown’ status and then assigned to the new install location.

The ‘-’ button at the bottom left will remove it from the current status; then you select it again and use the ‘+’ button to assign the new status.

Note that only one of those 3 BIOS Boot partitions can actually be used for BIOS Boot (because of the BIOS and RAID).

It looks like the Fedora Installer kills the entire RAID during startup; almost immediately after the installer starts, dmesg shows:
[ 892.474625] md127: detected capacity change from 612352 to 0
[ 892.474634] md: md127 stopped.
[ 893.172403] md126: detected capacity change from 33519616 to 0
[ 893.172409] md: md126 stopped.
[ 894.411656] md125: detected capacity change from 1884672000 to 0
[ 894.411663] md: md125 stopped.
[ 895.482294] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Quota mode: none.

Then it stops the entire RAID. Why?!

The Anaconda installer apparently does not run mdadm. It seems it may be able to see the devices, but it cannot manage them. This is likely a limitation of Anaconda and the BIOS.

One thing to consider is this.
The OS can usually be easily reinstalled, so losing the device holding the OS is fairly easy to recover from.
The user's data, however, is usually critical or at least valuable to them.

Maybe you should simply install the OS on a single device (or try to select RAID during the install and have it mirrored). After the initial install, configure a RAID device for /home and any other data you feel is really critical; all of that can be done after the initial install. Once the RAID devices are set up following the initial boot, you can run sudo dnf update, and the updated kernel and initramfs image will be built with the needed RAID support for the next boot.
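A rough sketch of that post-install setup (the device names, the md number, and the VG/LV names here are only examples, not something I have tested on your exact layout):

Code:

# assume the OS went onto /dev/sda and the large partitions on the other
# two drives become a RAID 1 for /home
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdb4 /dev/sdc4
pvcreate /dev/md10
vgcreate VGhome /dev/md10
lvcreate -l 100%FREE -n lvhome VGhome
mkfs.ext4 /dev/VGhome/lvhome
mdadm --detail --scan >> /etc/mdadm.conf
echo '/dev/VGhome/lvhome /home ext4 defaults 0 2' >> /etc/fstab
# copy any existing /home content over before mounting the new volume there,
# then update so the next kernel/initramfs is built with the needed raid support
sudo dnf update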

Finally, I was able to proceed with the installation. Once all the storage preparation activities are done, a reboot is required; without a reboot the Installer won't see anything at all.
After the reboot I started the Installer, which again stops the RAID. Then I just brought the RAID back up with:

mdadm --assemble --scan
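If the logical volumes do not come back on their own after that, I believe they can be reactivated with:

Code:

vgchange -ay VGarray     # activate the volume group on the reassembled array
lvscan                   # confirm the logical volumes are ACTIVE again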

In the Installer I checked all the drives that are part of the RAID and used the ‘Advanced Custom (Blivet-GUI)’ mode, and all the logical volumes show up on the list.

So far I had to put /boot on a bare RAID1 device, because when I tried to do that with a logical volume I got the error ‘/boot file system cannot be of type lvmlv’. I have to read more about this topic.

Anyway, something is clearly wrong with the Installer when it comes to such a scenario with an already prepared storage structure.