Just a note. Clonezilla does not do a reinstall. It does a restore of what you previously cloned from “somewhere”.
You do not say whether the system has been updated, where the copy you restored came from, or anything else helpful. For all we know, the image could be 5 months old and could have come from a different machine, which could itself introduce errors.
I would suggest that, since the system does seem to boot from that cloned image, you first do a full upgrade:
sudo dnf clean all
sudo dnf upgrade
sudo dnf distro-sync
Then, if you get the same error again, we know you are fully upgraded and there is a known point to work from.
The error you report may be file-system or memory related, given the address shown and the reference to xfs.
I’ll have to see if I can even boot to a desktop, because after a hard reboot this is what’s on the console. Indeed, I used an image taken from a working workstation about a month ago. The linked thread shows the previous errors.
No, it will install to the virtual disk that runs the live session. The limitation is that it will only be a virtual install and must have adequate space in system RAM. Most systems can handle that much and more with a live system running.
Every error displayed there shows the same block, and there are over 1300 errors in the SMART log.
My suggestion is to immediately replace that drive.
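If you want to confirm what the SMART log shows before the replacement arrives, smartctl can report it directly. This is a sketch; `/dev/sda` here is a placeholder for whatever the failing drive actually is on your system:

```shell
# /dev/sda is a placeholder -- substitute your actual device.
sudo smartctl -H /dev/sda          # overall health self-assessment
sudo smartctl -l error /dev/sda    # the SMART error log discussed above
sudo smartctl -x /dev/sda          # full extended report, including attributes
```

On a drive with over 1300 logged errors you would expect `-l error` to show repeated entries for the same LBA, matching what your kernel log reports.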
Once you have the new drive, you can use ddrescue to copy the device if you are unable to mount the file system and copy the data normally. ddrescue can make an image of the device while skipping blocks that are unrecoverable, and it works on either the whole device or a single partition.
You might also consider running badblocks on that device to see the extent of the failure.
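If you do run badblocks, use the read-only mode so nothing is written to the failing drive. Again, `/dev/sda` is a placeholder for your device:

```shell
# Read-only scan (non-destructive); -s shows progress, -v lists each bad block.
sudo badblocks -sv /dev/sda > badblocks.txt
```

Keep in mind that even a read-only scan works the drive hard, so if recovering data is the priority, image it with ddrescue first and scan afterwards.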
Yes, I made an error in that smartctl command. I was thinking of -v as verbose, as in most commands, but smartctl uses -v differently. (That is what I get for not checking the man page before posting.)
I also tried to remove those bad blocks, but since it’s LVM with XFS I’m getting this:
e2fsck -l ../badblocks.txt /dev/mapper/fedora_localhost--live-home
e2fsck 1.45.6 (20-Mar-2020)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/mapper/fedora_localhost--live-home
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
e2fsck -b 32768 <device>
/dev/mapper/fedora_localhost--live-home contains a xfs file system labelled 'home'
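For what it’s worth, that failure is expected: e2fsck only understands ext2/ext3/ext4, and XFS has no equivalent of the `e2fsck -l` badblocks-list mechanism. The closest XFS check is a read-only xfs_repair run on the unmounted volume, sketched here against the same device path:

```shell
# -n means "no modify": check the XFS filesystem but change nothing.
# The volume must be unmounted first.
sudo xfs_repair -n /dev/mapper/fedora_localhost--live-home
```

That said, on a drive this far gone any extra reads carry risk, so it may be wiser to skip checks entirely until the data is imaged.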
You should never try to remove bad blocks from the system. They are marked bad because they are not usable and the system has already relocated any available data to a new location. Marking them as bad prevents the system from attempting to use them again.
The system automatically marks blocks as bad when needed. However, there is a limit to how many can be remapped, and judging by the number of SMART errors reported and the repeated errors on the same block in your log, I would guess that limit has already been exceeded.
You also cannot mark blocks bad unless the partition of interest has been mounted for writing.
The more you run that drive, the more damage occurs and the less likely data recovery will succeed. I suggest powering it off and waiting for the replacement, unless you have another drive with enough space for an image of that partition. If you have the space, start ddrescue now and let it create the image while you wait for the new drive.