ZFS on Fedora 31 device identification problem

I'm using zfs-fuse on Fedora 31. When creating my pool I didn't use the unique IDs for the drives, so instead of what I set up:
sda: zfs mirror member
sdb: system drive
sdc: zfs mirror member

I now have:
sda: zfs mirror member
sdb: zfs mirror member
sdc: zfs system drive

So ZFS sees missing drives and corrupt data, and I have a faulted pool.
Does anyone know how to tell ZFS that the drives it is supposed to be using for the existing pool are now sda and sdb? If I can get it to a working state, I can change the pool device identifiers to persistent names…but I need to get it to a working state first.
So far, googling gives a lot of results for replacing failed drives and such, but not for correcting misidentified drives. Any help or direction is appreciated.
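For context, the usual first thing to try in this situation is to re-scan for the pool using persistent device links instead of the cached sdX paths. This is a hedged sketch, not something I had tried yet at this point; the pool name is from my system, and the paths assume a typical udev layout:

```shell
# Export the faulted pool so the next import scans devices fresh.
# (If the pool never imported, the export may fail; that's expected.)
sudo zpool export storage

# Re-scan using persistent /dev/disk/by-id links rather than the
# cached sdX names; -d points the import scan at that directory.
sudo zpool import -d /dev/disk/by-id storage
```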

$ sudo zpool list
NAME     SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH   ALTROOT
storage     -      -     -    -      -  FAULTED  -

$ sudo zpool status
pool: storage
state: UNAVAIL
status: One or more devices could not be used because the label is missing
or invalid. There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool from
a backup source.
see: Hardware | Oracle
scan: none requested
config:

NAME        STATE    READ WRITE CKSUM
storage     UNAVAIL     0     0     0  insufficient replicas
  mirror-0  UNAVAIL     0     0     0  insufficient replicas
    sda     FAULTED     0     0     0  corrupted data
    sdc     UNAVAIL     0     0     0  corrupted data

$ lsblk -f
NAME FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sda zfs_member
├─sda1 zfs_member
├─sda2 zfs_member
├─sda3 zfs_member
└─sda4 zfs_member storage 5146455131065464718
sdb zfs_member
├─sdb1 zfs_member
└─sdb2 zfs_member storage 5146455131065464718
sdc
├─sdc1 vfat 10E6-B3E5 579.2M 3% /boot/efi
├─sdc2 ext4 19e1c835-54f8-47be-9b68-99fc9eecc0ea 678.4M 24% /boot
└─sdc3 LVM2_member ji6Ff8-TuQa-7Kkx-lrsS-apI4-lZKs-TNiGUc
├─fedora_localhost--live-root ext4 9e861f60-57b9-41b0-a3c5-13d945a65b1f 46.2G 27% /
├─fedora_localhost--live-swap swap 3d8176ce-dd26-4f5f-ba15-fbce222eaa2a [SWAP]
└─fedora_localhost--live-home ext4 3616ca91-7059-42ea-9669-e44c263a83c5 111.8G 15% /home

I resolved the issue. I booted the system with the third (non-ZFS) drive unplugged, so the two ZFS drives regained their original sdX labels. After successfully importing the pool, I exported it and re-imported it using unique identifier labels. After that I was able to restart the PC with the third drive connected again, and all was well.
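The steps above, sketched as commands for anyone hitting the same problem. The pool name is from this thread; the `-d /dev/disk/by-id` step is how the identifiers become persistent, so adjust paths to your system:

```shell
# With the third (non-ZFS) drive unplugged, the mirror members come
# back up under their original sdX names and the pool imports cleanly.
sudo zpool import storage

# Export, then re-import pointing the scan at /dev/disk/by-id so the
# pool records persistent device paths instead of bare sdX names.
sudo zpool export storage
sudo zpool import -d /dev/disk/by-id storage

# Confirm the vdevs now show by-id names before reconnecting the
# third drive and rebooting.
sudo zpool status storage
```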
