Recommendations for installing Fedora 34 on SSD and HDD

What is a current (Fedora 34) recommended setup for a new system with both an SSD and a HDD?

I don’t want this to be specific to my setup (although I am putting together a new computer), so here are the broad-strokes assumptions I’d like recommendations for:

  • A modern SSD with decent size (from a couple of hundred gigabytes up to a terabyte or more) and a lifetime write capacity that’s not worth worrying about.
  • A HDD that’s bigger than (or at least as big as) the SSD.
  • Looking to benefit from the SSD’s speed as much as possible, while using the HDD for things that aren’t speed-critical, like holding media files and documents.
  • (Optionally) Leaving room to dual-boot the system with Windows, but not trying to work around an existing installation.

I would also like the why and the how as much as possible: what are the benefits/drawbacks, and what options are needed to make it happen?

I’ve done a lot of searching, and there have been questions about this before—not just here, but on Reddit, on FedoraForum, on non-Fedora communities, etc.—but everything I could find is quite old, quite specific to a particular situation, quite technical, or some combination of them. Here’s what I’ve been able to piece together so far:

One option is to just let Anaconda pick the defaults. As far as I can tell (correct me if I’m wrong!), this will:

  • Make a boot partition on the SSD
  • Combine the remaining SSD space and the entire HDD into a single Btrfs volume.
  • Distribute data more or less evenly across the two drives, without any consideration for the speed benefits of the SSD.

Another option is to manually choose the partitions. The general gist I get is:

  • Most (all?) of the install should go on the SSD. Certainly /boot and root go here. (Some people talk about moving /var or /var/log to a HDD partition?)
  • The HDD should be turned into its own partition(s) that you mount under /home, e.g. documents or downloads or music.

A third suggestion I met, but with little detail, was using (part of?) the SSD as a cache for the HDD.

  • I gather that this would mean you lose some capacity from the SSD, but you don’t have to plan which files go where—the cache ensures your most-used files are in fast storage.
  • Also apparently you have to watch out for possible corruption if writes to the cache and the HDD fall out of sync.

And lastly, there’s not a lot out there about dual-boot setups that doesn’t assume you’re trying to scrape whatever meagre space you can off of an existing Windows installation! I don’t know if I’m going to have a dual-boot myself, but if I do I’d like to plan for it now.

  • Seems like just splitting both drives 50-50 and letting each OS do its own things would be the most straightforward option?
  • But if the HDD gets used for media files etc., it’d be nice to be able to access that from either OS. In which case, what should it be? NTFS?

Oh, one last thing: What is even happening with swap these days? Do I need or want a swap partition?


IMO, this is a terrible idea. It is better to keep disks with drastically different performance separate.

A variation on this is what I would recommend. Since Fedora uses btrfs by default, just format each drive with a btrfs partition, then create the subvolumes you want in each place. For example, you might put subvolumes for /, /home, /var/log and /var/cache on your SSD, then put subvolumes for ~/Documents, ~/Videos and ~/Downloads on the HDD.

Since btrfs shares the space between the subvolumes, you can carve it up however you want.
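To make that concrete, here is a rough sketch, not a definitive recipe: the device name, label, mount point and mount options are all assumptions you would adjust for your own system. The commented commands create the subvolumes once, and the uncommented lines are the matching /etc/fstab entries:

```
# One-time setup, run as root (/dev/sda1 is an example device):
#   mkfs.btrfs -L bulk /dev/sda1
#   mount /dev/sda1 /mnt
#   btrfs subvolume create /mnt/Documents
#   btrfs subvolume create /mnt/Videos
#   btrfs subvolume create /mnt/Downloads
#   umount /mnt
#
# /etc/fstab entries to mount each subvolume where you want it:
LABEL=bulk  /home/user/Documents  btrfs  subvol=Documents,compress=zstd:1  0 0
LABEL=bulk  /home/user/Videos     btrfs  subvol=Videos,compress=zstd:1    0 0
LABEL=bulk  /home/user/Downloads  btrfs  subvol=Downloads,compress=zstd:1 0 0
```

Since all three subvolumes live in the same btrfs filesystem, they share the HDD's free space and you never have to decide sizes up front.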

Unless your SSD is extremely small or your use case highly specific, I don’t think this is the best approach.

NTFS is probably the best choice. Another option would be exFAT, but I would probably choose NTFS over exFAT for internal drives where you are sharing data between Windows and Linux.

Either way, make sure that you disable “Fast Startup” in Windows and do not let Windows hibernate. Otherwise, Windows will leave the drives in a “dirty” state.

It depends on your usage and how much RAM you have. Fedora has switched to zram (compressed swap space in memory) by default. I have since switched all my machines to this. The performance increase on machines with 8 GB or less RAM has been impressive.

The big problem with zram is that it doesn’t support hibernation, so if you want hibernation you will also need a swap partition/file.
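If you do want hibernation, a swap file is one way to add it alongside zram. This is only a sketch under assumptions: the 8 GiB size and the /swapfile path are examples, on btrfs the file must have copy-on-write disabled, and all of the commands need root. Getting resume-from-hibernate actually working also needs kernel parameters pointing at the swap location, which is beyond this sketch.

```
# truncate -s 0 /swapfile
# chattr +C /swapfile     # disable copy-on-write; required for swap on btrfs
# dd if=/dev/zero of=/swapfile bs=1M count=8192
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile
#
# /etc/fstab entry so the swap file persists across reboots:
/swapfile  none  swap  defaults  0 0
```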

EDIT: I should probably add that there isn’t really a “correct” way to combine an SSD and an HDD. It is all a matter of opinion and personal preference.


I guess I won’t answer all your questions, but here are some hints:

I own a Samsung 970 Evo Plus(!) 500 GB NVMe (which in some cases performs better than the current Samsung 980, even though the Evo is PCIe 3, meaning “newer” is not necessarily “better”).
You will benefit from fast boots (read access) and when shuffling around big data (DVDs and such).
Currently my use case is writing ~4 TB per month (so the NVMe will last for 6.25 years, which is anyway about the time to look at another box, since all the other components will have become old).

You should keep ~25 % of the SSD/NVMe unpartitioned (=> over-provisioning; “unpartitioned” space is separate from the free space of a partitioned disk).
You should do a secure erase from time to time (every ~2-3 years, usually when a new install happens).
My NVMe has a 1 GB /boot (ext4), and the rest (apart from the unpartitioned space) is a BTRFS pool with subvolumes for /, /home and /home/<username>/DATA (I’m the only user of this box).
I have no idea why I would need a subvolume for /var, etc.

The DATA subvolume carries all my important data, which usually doesn’t change much, e.g. a large folder Media with subfolders Music, Video, etc. in it (I adjusted ~/.config/user-dirs.dirs to point to the subfolder Music, …), plus big data like ISOs and such.

The DATA subvolume was originally a second disk.
A second disk with all the important data is handy in the case of new installs, which I usually do every ~2 years.
After a new install I just need to create the folder DATA under my home directory and then adjust /etc/fstab.

I have a second, aged SATA SSD in the box which carries a Windows 8.1 install and originally a swap partition, which has been inactive since Fedora 34.
I boot that Windows via VBOX (raw disk access), via GRUB, or via the BIOS boot menu.

That said, I would suggest the following:

  • keep your SSD (NVMe) for Linux
  • split the HDD for DATA, and maybe reserve the first partitions for an installed Windows (~50 GB).

Layout:

  1. Boot partition of Windows
  2. System partition of Windows
  3. DATA (mounted under Linux; btrfs or ext4)
  4. A partition for data exchange (NTFS, exFAT)

To lower writes on the SSD/NVMe:
adjust /var/tmp to be a tmpfs in /etc/fstab
symlink most of the folders under ~/.cache to /tmp, e.g. “ln -s /tmp ~/.cache/tracker3” (ditto for the Firefox cache files)
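As a concrete example of that cache trick (a sketch only; tracker3 is just one example directory, and the same pattern works for other caches):

```shell
# Point a cache directory at /tmp (a tmpfs on Fedora by default), so its
# writes land in memory instead of on the SSD. Remove any existing cache first.
mkdir -p "$HOME/.cache"
rm -rf "$HOME/.cache/tracker3"
ln -s /tmp "$HOME/.cache/tracker3"
readlink "$HOME/.cache/tracker3"   # → /tmp
```

Anything the application writes under that path now goes to /tmp and disappears on reboot, which is usually fine for caches.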

Think about a backup to an external disk.

Thanks for your input! I’ve been tinkering around with possibilities, but I’m out of time to play around with this for now. I wanted to share my experiences and my thanks, but the story isn’t over yet, and I’ve got more things to try. But here’s where I’m at so far…

Yeah, that’s about what I figured. But since it’s the default (or at least, I think it is), I thought maybe there was some smart behaviour in the OS that I just didn’t know about.

That sounds good to me. Unfortunately I’ve found Anaconda’s partitioning interface to be very unintuitive, and I’m not sure how to go about setting this up.

While I still want the question to be relatively system-neutral, in my specific case I have a 1 TB Samsung EVO 970 (not the Plus) and a 2 TB WD Blue 7200rpm. Anaconda reports their actual capacities as 931.51 GiB and 1.82 TiB, respectively, for a total of 2.73 TiB.

I chose “Custom” storage configuration, and then just to see what would happen, I chose “create automatically”. This gave me four mount points: /boot (1024 MiB ext4), /boot/efi (600 MiB EFI), root (2.73 TiB btrfs), and /home (also 2.73 TiB btrfs). Clearly, root and /home are the same logical volume. But evidently I don’t yet understand how btrfs (sub-)volumes work, because I thought that each mount point would be a separate sub-volume… but either they’re not, or that information is not presented by Anaconda. They’re both listed as the volume “fedora_newcomputername”.

Additionally, all mount points are listed as being on both drives, or at least they “may reside” on either one. (The non-btrfs ones have the option to modify the choice of devices, and it’s the associated dialog that says “may reside”.) But /boot/efi and /boot are assigned as sda and sdb respectively—correct me if I’m wrong, but doesn’t that mean they’re on the HDD? When I manually assign them to the SSD, they become nvme0n1p*.

So anyway, the default option seems to be confirmed as your “terrible idea”, with the additional problem that the boot volume(s) may well be on the slower drive. No thanks!

I then tried manually creating each one; I went with the same sizes/filesystems for /boot and /boot/efi, but I made root a btrfs that took about half the SSD (just in case I do decide on a dual-boot setup). Here’s where I next ran into trouble:

  1. The button to modify device allocations is not available for a btrfs mount point. Instead, you have to modify the volume it’s on.
  2. I created another btrfs mount point to fill the HDD; I called it something like /home/data (not a real name). This is the “put documents and downloads and everything on the HDD” partition. But by default, this shared the same volume as root, and so it too was assigned to the SSD.
  3. So I added a new volume that was assigned to the HDD. Unfortunately I was on the root mount point when I did this, so it was assigned to this new volume. And when I changed it back to the original volume, the now-unused volume was removed. I had to create it again after switching to the /home/data partition.
  4. I also discovered that, when creating a btrfs mount point, the size you put in appears to be ignored. Instead, you have to change the volume size to “fixed” and specify the size limit in there. But I don’t know whether there are any repercussions to giving it a fixed size.

Anyway, I’ll have to come back to this later. Like I said, I’m out of time for now. But here’s a few notes/questions I’d already written.


Could you explain? I only know what I’ve read, which only seems to cover the how and the why, not the why not.

Yeah, that’s the impression I was getting. Which is unfortunate, because it seems like (a) there ought to be some best practices for a “typical” user, and (b) the current Anaconda defaults are definitely not it.

As far as I can tell from doing some searching and reading, overprovisioning (OP) is not needed on current drives. Manufacturers include some OP space by default, and you can add user OP space to that, but from what I’ve read, it’ll only be of benefit if the drive gets close to full (in which case the extra OP basically serves to stop you actually filling up the drive—it looks full but still has that extra space reserved).

I’m curious: where do you get your 25% figure from? Samsung’s own Magician software seems to default to just 10% user OP (though I can’t be sure without installing it, and it’s Windows-only). I can’t find any other information from Samsung about OP on consumer drives, but there’s a white paper recommending it for data centres (which have very different use cases anyway)—and it only shows results for up to 20% user OP!

I really don’t think that’s going to be a problem. As you say, even under your 4 TB/month load, current drives are rated to last for years, and I don’t think my write needs are going to be anywhere near as high as yours.

Well, naturally. :relaxed:

I always use blivet-gui in anaconda for this. It has a more representative UI that is more logical to me. Have you tried that one? It will show you the two btrfs partitions separately and allow you to create and mount subvolumes in each.

The installer will allow you to select a single drive for install even when using automatic partitioning. Having the system installed to span both an SSD and an HDD is a really bad idea.

I suggest you do a new install, and just select the SSD for the install. The HDD can always be added later and mounted for data where you are not concerned about drive access speeds.

That’s my next stop. Tomorrow. (Yeah, I’m still here. There’s a certain disconnect between “perey needs to go now” and “perey has actually shut down the computer”… :wink:)

regarding OP:

there: “What are the advantages of increasing OP?”

My understanding is:
Even when the vendor sets a “hidden” OP, an additional user OP still brings a benefit. When the SSD gets filled you’re screwed, because the SSD’s write performance degrades… [1]
I should also mention that my NVMe (500 GB) has always been about half full. I don’t have that much data.
So I wasted my disk space a bit with my user OP…

[1]
And I currently don’t know if simply deleting files reverses the degradation, or if I would need a secure erase then. I haven’t been in such a position and won’t provoke one.
The same goes for unnecessary writes, e.g. under ~/.cache.

Yeah, that’s the white paper I meant! Their tests are “once data has been written over the entire [SSD]”, and only go up to 20% OP.

Right. So OP is basically protection against filling up your drive accidentally, yeah?

I’m no expert, so if I’ve got any of this wrong, please correct me.

  • The NAND cells in SSDs can’t actually be “overwritten” like good old magnetic media, in the sense that if there’s data in there, you have to clear it before you can write new data.
  • Clearing that data takes time—it’s not long, but it’s comparable to the time needed to write data in the first place, so it would more or less halve your write speed.
  • So, modifying a file in-place, or writing a new file to a previously used cell, would be slower than writing a file to an unused area.
  • The solution is to write everything to unused cells. When you modify a file, it makes a new copy with the changes.
  • This leaves old copies of data lying around. There are two parts to dealing with this:
    1. The file system marks the data as unused, without actually modifying it. This is basically what file systems have always done when you delete files (it’s why undeleting is possible).
    2. The SSD itself clears out the old data at some point. It’s smart and can do this when it’s otherwise idle.
  • The SSD needs to be told which cells are unused, though. Passing this information is called a “trim” (usually capitalised to TRIM for some reason). The file system can give a TRIM every time it frees up space (modifies or deletes a file), or else just send a bulk one once in a while. The latter option is the default; benefits of constant TRIMs are dubious.
  • If there aren’t enough empty cells to write data to, things get awkward. The SSD has to take the time to discard old data, thereby freeing up some blocks first. This is why writes on SSDs that are nearly full are slower—the “free” space is more likely to be peppered with chunks of old data.
    • Matters are complicated by the fact that only complete blocks can be erased, so things might slow down even further while a lot of data gets shuffled around (hello defragmentation, we didn’t miss you).
  • Overprovisioning makes sure there’s always a decent chunk of unused space. You’re not actually roping off part of the drive and saying “don’t use this”—the SSD itself will merrily swap blocks in and out of its reserve. You’re just leaving more space unavailable for active storage, so there’s likely to always be some nicely TRIM’med blocks sitting fresh and empty.

If by “secure erase” you mean the traditional “overwrite the blocks with zeroes or with random data”, then no, I wouldn’t think so. In fact that would just lead to even more writes going to the SSD, wouldn’t it? I think just deleting the files and making sure a TRIM is done would ensure that some space gets properly freed up.

NO !

secure erase for NVMe/SSD is something different, see:
https://wiki.archlinux.org/title/Solid_state_drive/Memory_cell_clearing

These devices get slower over time in any case, even with OP set, and a “secure erase” resets them to nearly factory write speed.
That’s why Samsung’s Magician lets you create a USB stick with a “secure erase” environment on it.


Ahh, thanks. I didn’t know that!

But I notice this sentence at the top of the page contradicts what I’ve read elsewhere: “TRIM only safeguards against file deletes, not replacements such as an incremental save.” I thought TRIM was used for any invalid data—deletes, modifications, the whole lot?

The Fedora graphical installer is perfect for this… Two drives: choose the Logical Volume Manager option, select both drives, then for users just add yourself as administrator => continue and install, nothing else… The result is an EFI FAT partition, a boot partition, and LVM2 volumes across both drives as block devices for root, swap and home: where, what size, …?
The installer will select the fastest drive as root and the other as home, with dynamic allocation of space as needed. A lot of the discussion about manually partitioning drives leads back to UNIX on PDP-11 hardware from DEC… update!

The default is for the filesystem to be BTRFS, and will use both drives as one volume unless you manually tell it how to separate the volumes.

The easiest way to keep things separate is to manually install (or even automatic) to only the SSD, then afterward configure the HDD as a separate volume and mount it at /home. Doing it this way will prevent having a BTRFS volume spanning both the SSD and the HDD.
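Concretely, that post-install step might look something like this (a sketch, not a definitive procedure: /dev/sdb and the label are assumed names, everything here needs root, and mkfs will destroy whatever is on the HDD):

```
# mkfs.btrfs -L data /dev/sdb
# mount /dev/sdb /mnt
# btrfs subvolume create /mnt/home
# umount /mnt
#
# Then copy over anything you need and add an /etc/fstab entry:
LABEL=data  /home  btrfs  subvol=home,compress=zstd:1  0 0
```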

In the past doing an LVM install had similar problems with spanning physical devices if you use more than one drive for the initial install.

Very careful config may allow the user to limit the volumes to a single device during the install, but it seems much simpler to do the install as one step to a single drive then do the extra drive config as an additional step.

I have not done the install on a bare system using LVM as I usually already have my /home volume configured and only need to mount it without defining a new filesystem there. Thanks for the update on how the installer is smart enough to identify the hardware and separate the use.


This askfedora post may be helpful in how to do it in anaconda (fedora’s installer).

I think you’re right. This is probably the best advice for general use, and since that’s what I asked for, I’m going to accept this as the solution—unless someone wants to go one better by giving instructions on how to set up the HDD afterwards? Specifically, how to format it as btrfs and add one or more subvolumes, then put them in fstab so they get mounted automatically?

In this specific case, that’s not what I did, naturally. :grin: Per @Dalto, I went ahead with a custom setup in blivet-gui:

  • On the SSD…
    • The boot partitions, copied from the defaults: /boot/efi (600 MiB EFI) and /boot (1024 MiB ext4).
    • A 500 GiB btrfs partition. That’s about half the drive, which leaves plenty of space if I do want to dual boot later. If I don’t, I can just resize it to take the rest of the space.
      • The default subvolume is mounted at /, and I added another subvolume for /home.
      • In hindsight I wish I’d set up another subvolume for /var. I’ve been reading up on the uses of subvolumes, like using btrfs snapshots for recovery or backup. OpenSUSE, at least, has / and /var separate so that rollbacks on the former don’t affect the latter.
  • On the HDD I just have one giant btrfs partition. Currently it has only the default subvolume, mounted at /home/shared. I’ve symlinked ~/Downloads, ~/Music and ~/Videos to directories in there.
    • If and when I set up dual boot, I’m going to give WinBtrfs a go. If that doesn’t work out, only then will I worry about adding an NTFS partition to share between OSs.

Comments? Remarks? Perey-you-idiot-you-overlooked-this-obvious-thing?

1 Like

What are you referring to as the default subvolume? Any subvolume can be set as the default. Do you mean you mounted the root of the partition at /? If so, this is a valid thing to do. It is how systems that use nested layouts work.

I prefer to not mount the root of the partition as / as I prefer a flat layout to a nested one. But that is a personal preference. I don’t like all my subvolumes to be available like that.

It is trivially easy to do this post-installation. In fact, you can make almost any change you want to your subvolume layout afterwards.

I think this approach makes very little sense on a btrfs filesystem. Make separate subvolumes for all of those and then mount them where you want them. You don’t even need to mount the root of the partition. Symlinking like that makes more sense when you are using a traditional filesystem. There is very little downside to having more subvolumes on a data disk like that. It is just more flexible. For example, do you really want the same snapshot policies for all those different types of data?

The default default, the top-level subvolume. (“A Btrfs filesystem has a default subvolume, which is initially set to be the top-level subvolume.”) Since I’m not doing anything fancy, I figured they were more or less interchangeable terms in my case.

I don’t really understand this whole “flat” vs “nested” thing. Or maybe it’s that I don’t get the relationship between subvolumes and the filesystem hierarchy. Can you explain it some more?

(Also, I didn’t realise that the top-level subvolume doesn’t have to be mounted at all, so thanks for that!)

EDIT: I may have answered my own questions, simply by poking around in Dolphin. So, I have two btrfs volumes—call them “Fast” (SSD) and “Files” (HDD). Fast has a subvolume for mounting as /home, called “FastHome”. In Dolphin, I can see both Fast and Files in the sidebar; and when I browse Fast, I see FastHome appear like any other directory.

So this is nested. If I’d called FastHome just “home”, it would be the /home directory. This happens automatically, without it having to have its own mount entry. (And then it’s possible to have sub-sub-volumes, like having “bob” inside “home”, which shows up as /home/bob.)

In a flat setup, there are no sub-sub-volumes—everything is a subvolume of the top-level volume, and the top-level volume isn’t mounted at all. This way you can’t browse into subvolumes except via their (explicit) mount points.

Is that all correct?

I’m aware it’s possible to change; I just don’t know how yet. :wink: So I wish I’d done it while I had the installer options there to do it for me.

Well, I can see downloads wanting a different policy to the others, sure. I did it this way because it made sense to me—which as you say is probably from knowing traditional filesystems better than I know btrfs.

I also did it because this way, I can share them between users. Can one subvolume be mounted at both /home/alice/Music and /home/bob/Music?

They are only interchangeable if someone knows you didn’t set a different default subvolume. In this case, that terminology is only valid in a conversation with yourself. :rofl:

That is a good summary, yes. The only thing that is slightly off is that in a flat layout, you can mount the root of the partition. You have to mount it, at least temporarily, to create new subvolumes.

In your setup, if you don’t care about the contents of /var, you can just delete /var and then sudo btrfs subvolume create /var. Since it is /var, you might need to boot off an ISO to do that since it is probably hard to delete /var otherwise.

If you are sharing them, then symlinks or bind mounts probably make more sense. I would still use separate subvolumes though.
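For the shared case, the bind-mount variant could be sketched in /etc/fstab like this (alice and bob are example users, and the paths assume the shared subvolume is already mounted at /home/shared):

```
/home/shared/Music  /home/alice/Music  none  bind  0 0
/home/shared/Music  /home/bob/Music    none  bind  0 0
```

Both users then see the same directory, without the dangling-link problem symlinks have when the HDD isn’t mounted yet.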
