There seems to be a serious issue with the latest Silverblue updates: boot gets stuck on "a start job is running for /dev/disk/by-uuid".
I believe it's related to the newest kernel. I am running Silverblue 35, which is fine on kernel 5.16.20-200.fc35.x86_64, but after a recent update (enabled rpm-md repositories: fedora-35-updates fedora-35 vivaldi rpmfusion-free rpmfusion-nonfree rpmfusion-nonfree-updates rpmfusion-free-updates slack),
a systemctl reboot fails to continue past "a start job is running for /dev/disk/by-uuid/8d21faa4-0cac-45b9-9d86-2b54e2fe1f02".
So, how do I debug this?
It looks like it’s having trouble mounting a disk. If you have more than one disk, it’s possible that one of them might not be healthy.
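If you want to rule that out, one quick check is the per-device error counters from `btrfs device stats`; nonzero counters usually point at a failing member device. A minimal sketch, run here against inlined sample output in the shape the real tool prints (device names and counter values below are placeholders, not taken from your system):

```shell
# Sample 'btrfs device stats <mountpoint>' output; on a live system you
# would run the command itself instead of using this placeholder text.
stats="[/dev/nvme0n1p2].write_io_errs   0
[/dev/nvme0n1p2].read_io_errs    0
[/dev/nvme0n1p2].flush_io_errs   0
[/dev/nvme1n1p1].write_io_errs   0
[/dev/nvme1n1p1].read_io_errs    0
[/dev/nvme2n1p1].write_io_errs   0"

# Count counters whose value is nonzero; anything above 0 deserves a look.
bad=$(printf '%s\n' "$stats" | awk '$2 != 0 {n++} END {print n+0}')
echo "nonzero error counters: $bad"   # prints "nonzero error counters: 0"
```

SMART data from `smartctl -H` (smartmontools) on each drive is a useful companion check.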
Kernel 5.16 boots fine on it. It's a btrfs setup across three 2TB NVMe drives, and they're new, so a faulty drive seems doubtful.
dmesg | grep nvme
[ 1.726231] nvme nvme0: pci function 0000:03:00.0
[ 1.726427] nvme nvme1: pci function 10000:01:00.0
[ 1.726442] nvme 10000:01:00.0: PCI INT A: not connected
[ 1.726532] nvme nvme2: pci function 10000:02:00.0
[ 1.726547] nvme 10000:02:00.0: PCI INT A: not connected
[ 1.732761] nvme nvme0: missing or invalid SUBNQN field.
[ 1.732783] nvme nvme0: Shutdown timeout set to 8 seconds
[ 1.732783] nvme nvme1: missing or invalid SUBNQN field.
[ 1.732793] nvme nvme2: missing or invalid SUBNQN field.
[ 1.732798] nvme nvme1: Shutdown timeout set to 8 seconds
[ 1.732810] nvme nvme2: Shutdown timeout set to 8 seconds
[ 1.759557] nvme nvme0: 32/0/0 default/read/poll queues
[ 1.759791] nvme nvme1: 32/0/0 default/read/poll queues
[ 1.759902] nvme nvme2: 32/0/0 default/read/poll queues
[ 1.762881] nvme0n1: p1 p2
[ 1.763281] nvme2n1: p1
[ 1.763349] nvme1n1: p1
[ 1.770851] BTRFS: device label fedora_fedora devid 2 transid 28418 /dev/nvme1n1p1 scanned by systemd-udevd (728)
[ 1.771226] BTRFS: device label fedora_fedora devid 3 transid 28418 /dev/nvme2n1p1 scanned by systemd-udevd (773)
[ 1.772391] BTRFS: device label fedora_fedora devid 1 transid 28418 /dev/nvme0n1p2 scanned by systemd-udevd (780)
[ 3.293406] BTRFS info (device nvme0n1p2): flagging fs with big metadata feature
[ 3.293410] BTRFS info (device nvme0n1p2): disk space caching is enabled
[ 3.293411] BTRFS info (device nvme0n1p2): has skinny extents
[ 3.338887] BTRFS info (device nvme0n1p2): enabling ssd optimizations
[ 3.880891] BTRFS info (device nvme0n1p2): use zstd compression, level 1
[ 3.880895] BTRFS info (device nvme0n1p2): disk space caching is enabled
[ 6.209717] EXT4-fs (nvme0n1p1): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
[ 6.280648] BTRFS info (device nvme0n1p2): disk space caching is enabled
[ 6.280846] BTRFS info (device nvme0n1p2): disk space caching is enabled
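For what it's worth, the dmesg above shows all three btrfs member devices being scanned; the /dev/disk/by-uuid symlink for the volume only appears once every member has shown up, so on a hanging boot one of these scan lines may be missing. A quick way to count them from a captured log (the sample below inlines the three scan lines from the output above):

```shell
# Count how many members of the 'fedora_fedora' btrfs volume udev
# scanned; the root mount can only proceed once all of them appear.
log='[    1.770851] BTRFS: device label fedora_fedora devid 2 transid 28418 /dev/nvme1n1p1 scanned by systemd-udevd (728)
[    1.771226] BTRFS: device label fedora_fedora devid 3 transid 28418 /dev/nvme2n1p1 scanned by systemd-udevd (773)
[    1.772391] BTRFS: device label fedora_fedora devid 1 transid 28418 /dev/nvme0n1p2 scanned by systemd-udevd (780)'

count=$(printf '%s\n' "$log" | grep -c 'BTRFS: device label fedora_fedora')
echo "$count"   # prints "3"
```

On a live system the same grep can be pointed at `journalctl -b -1 -k` to inspect the previous (hung) boot, provided persistent journaling is enabled.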
Bump… does nobody have any other ideas?
Hi, I am sorry that I cannot provide any insightful information beyond saying that this is also affecting me.
I can boot with no problem into Silverblue 35, but Silverblue 36 gives me this problem more often than not, to the point that I have not been able to finish booting in order to obtain a previous boot log, so I can only attach a poor picture:
rpm-ostree deployment information:
Version: 36.20220511.0 (2022-05-11T00:48:12Z)
GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
RemovedBasePackages: firefox 100.0-2.fc36
LayeredPackages: akmod-nvidia xorg-x11-drv-nvidia
Okay, welp, at least that says I'm not the only one. Are you using multiple drives? I have 3x2TB SSDs configured. And yes, same here: F35 boots fine; F36 and Silverblue 36 give the same result.
Yes, indeed: I have an HDD (1TB), an SSD (250GB) and an NVMe (500GB).
My btrfs volume spans partitions on both the NVMe and the SSD. There are other partitions, but they are not mounted at boot.
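That multi-device layout may matter here: systemd's start job waits for the volume's by-uuid device link, which udev only creates once every btrfs member device has been scanned, so a member that is slow to appear (or missing from the initramfs) leaves the job waiting. A hedged sketch of the check, run against sample `btrfs filesystem show` output (the real command prints this shape; sizes and paths below are placeholders, and the UUID is copied from the hang message earlier in the thread):

```shell
# Sample 'btrfs filesystem show' output; on a live system, run the
# command itself and compare "Total devices" to the devid lines listed.
sample="Label: 'fedora_fedora'  uuid: 8d21faa4-0cac-45b9-9d86-2b54e2fe1f02
	Total devices 3 FS bytes used 1.00TiB
	devid    1 size 1.82TiB used 500.00GiB path /dev/nvme0n1p2
	devid    2 size 1.82TiB used 500.00GiB path /dev/nvme1n1p1
	devid    3 size 1.82TiB used 500.00GiB path /dev/nvme2n1p1"

expected=$(printf '%s\n' "$sample" | awk '/Total devices/ {print $3}')
present=$(printf '%s\n' "$sample" | grep -c 'devid')
echo "expected=$expected present=$present"   # prints "expected=3 present=3"
```

If `present` ever comes up short during early boot, that missing member, rather than the kernel itself, is what the start job is stuck on.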