A recommended method to run a command at startup?

I didn’t find a clear solution for this use case. I need to run a mergerfs command (requires sudo, to “merge” mounted drives) before the desktop environment starts, otherwise GTK apps will show errors. Would a @reboot crontab directive work?


A systemd service would be the most common way to run a command needing root access at boot time.


Ditto to what @dalto said.
You can also look at using /etc/fstab to do the mount.


I suppose that this wouldn’t work, since mergerfs (on my system) makes two physical hard drives appear as a single volume under /mnt/media. “Mount” may have been an incorrect term.


a user-scheduled job rather than a system-scheduled job (which would run as root).

I see, a system-scheduled job then!


That is a timer unit.

You would probably want a service unit or a mount unit depending on how mergerfs works.

It’s just this one-liner of a command:

sudo mergerfs -o allow_other,use_ino,cache.files=partial,dropcacheonclose=true,moveonenospc=true,category.create=mfs /mnt/media1:/mnt/media2 /mnt/media
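That one-liner maps naturally onto a systemd mount unit, as suggested above. As a sketch (the unit file name is derived from the mount point, which systemd requires, and the After=/WantedBy= targets are my assumptions, not from this thread), /etc/systemd/system/mnt-media.mount could look like:

```
[Unit]
Description=mergerfs pool at /mnt/media
After=local-fs.target

[Mount]
# What/Where/Type/Options mirror the mergerfs one-liner above.
What=/mnt/media1:/mnt/media2
Where=/mnt/media
Type=fuse.mergerfs
Options=allow_other,use_ino,cache.files=partial,dropcacheonclose=true,moveonenospc=true,category.create=mfs

[Install]
WantedBy=multi-user.target
```

It would then be enabled with sudo systemctl enable --now mnt-media.mount. Note that systemd mount units must be named after the escaped mount path (/mnt/media becomes mnt-media.mount), or systemd will refuse to start them.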

Hmmm. You need two physical devices to appear as one device.
Mergerfs is one way, but to me this seems a prime candidate for RAID 0, LVM, or a btrfs volume spanning the two devices. Then the system would see them as one device without needing to manage timing during boot.

mergerfs seemed the easiest way; I had it set up in 30 minutes without much experience. It’s important that one drive failing doesn’t cause all data to be lost. I also plan to add more drives soon, so it won’t remain a dual-HDD storage setup.

The drives are ext4-formatted; I will see whether reformatting to btrfs is worthwhile.

sudo mergerfs -o allow_other,use_ino,cache.files=partial,dropcacheonclose=true,moveonenospc=true,category.create=mfs /mnt/media1:/mnt/media2 /mnt/media

I think for fstab this would be something like:
/mnt/media1:/mnt/media2 /mnt/media fuse.mergerfs allow_other,use_ino,cache.files=partial,dropcacheonclose=true,moveonenospc=true,category.create=mfs,nofail 0 0
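Before installing a line like that, it can be worth sanity-checking it (this check is my suggestion, not from the thread; the sample entry below is a shortened version of the line above):

```shell
# Hypothetical pre-flight check: confirm a candidate fstab entry has the six
# whitespace-separated fields fstab expects:
# <fs> <mountpoint> <type> <options> <dump> <pass>
entry='/mnt/media1:/mnt/media2 /mnt/media fuse.mergerfs allow_other,use_ino,nofail 0 0'
printf '%s\n' "$entry" | awk 'NF == 6 { print "field count OK"; exit } { print "bad field count" }'
```

After editing the real /etc/fstab, sudo mount -a (or sudo findmnt --verify, from util-linux) exercises the file without a reboot, and the nofail option keeps a broken entry from blocking boot.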


fstab is indeed supported:

To have the pool mounted at boot or otherwise accessible from related tools use /etc/fstab .

# <file system>        <mount point>  <type>         <options>             <dump>  <pass>
/mnt/disk*:/mnt/cdrom  /mnt/pool      fuse.mergerfs  allow_other,use_ino   0       0

That statement leads directly to a suggestion for three or more drives using RAID 5, or two or more larger drives using RAID 1 or 10. Only those configurations, AFAIK, are fault tolerant and do not lose all the data on a single device failure.

LVM, RAID 0, and btrfs spanning multiple devices (physical or logical) will usually lose all data on a single device failure.
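To make the trade-off concrete, here is a hypothetical worked example (four 4 TB drives; the numbers are illustrative, not from this thread):

```shell
# Illustrative capacity/fault-tolerance arithmetic for four 4 TB drives.
n=4; size=4
echo "Pooled / RAID 0: $(( n * size )) TB usable, survives 0 drive failures"
echo "RAID 10:         $(( n * size / 2 )) TB usable, survives 1 failure per mirror pair"
echo "RAID 5:          $(( (n - 1) * size )) TB usable, survives 1 drive failure"
echo "RAID 6:          $(( (n - 2) * size )) TB usable, survives 2 drive failures"
```

The general pattern: one drive’s worth of capacity is spent per drive of fault tolerance (RAID 5/6), or half the total for mirroring (RAID 1/10).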

You’re not supposed to use sudo; you need to fix the permissions first.
Assuming your mount command is correct, this should work:

cat << EOF | sudo tee /etc/systemd/system/mergerfs.service > /dev/null
[Unit]
Description=mergerfs mount
After=local-fs.target

[Service]
Type=forking
ExecStart=/usr/bin/mergerfs -o allow_other,use_ino,cache.files=partial,dropcacheonclose=true,moveonenospc=true,category.create=mfs /mnt/media1:/mnt/media2 /mnt/media

[Install]
WantedBy=default.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now mergerfs

Maybe it is worth adding, to avoid misinterpretation, that LVM and btrfs can themselves both implement RAID at the software block level. So it is not RAID versus btrfs versus LVM. But it should be clear that software RAID is never as reliable as hardware RAID (a disadvantage that also affects mergerfs). As for btrfs, I am not sure how stable its RAID support is, or whether RAID 5/6 is fully implemented (or even planned). LVM, however, has matured over time and includes RAID 5/6.

Although it is already implicit in JV’s posts, I would like to point out that the goal of “two devices that appear as one” has nothing to do with fault tolerance; these are two different issues. If the space is merely merged for capacity and/or performance (as in a default volume group in btrfs or LVM, or in RAID 0), there is no fault tolerance. But if the volume group applies RAID 1/5/6, it tolerates the failure of one device (RAID 1/5) or two devices (RAID 6). The remaining question is how the volumes are merged and managed so that they appear as one.

In any case, surviving one failed drive implies RAID 1 with two drives or RAID 5 with three or more. I have no experience with mergerfs.

Also, if btrfs is not used (it is still not recommended for corporate/enterprise storage holding critical data), I suggest XFS instead of ext4, although this choice is not a critical issue either.

I don’t recommend using btrfs’ raid 5/6 support. If you want that, I would recommend using btrfs on top of lvm.

Agreed that both LVM and btrfs can do RAID.
According to this and other things I have read, I would never consider using either LVM or btrfs to create the RAID array. I would use mdadm RAID arrays with LVM on top (I already do this with ext4), and if I used btrfs it would sit on top of the LVM logical volumes. The link explains many of the disadvantages of using RAID directly from either LVM or btrfs.

While software RAID is not quite the same quality as hardware-controlled RAID, there is a significant difference in cost, since the controllers are often quite expensive, and to the average user the performance differences are insignificant. For most users, software RAID using mdadm is quite adequate.

I would not consider btrfs for critical data storage: not sufficiently mature, and its long-term support is questionable. Even SUSE stays with XFS for critical storage, although btrfs is its default for system partitions. I used LVM RAID 1 for non-SSD hard drives a long time ago; it had to handle one drive failure over its lifetime, but I always kept additional backups. Currently I only use snapshot-centered file/block solutions, with no RAID at the block level; that is easier to maintain and monitor.

I absolutely agree with your hardware/software argument. I just wanted to raise awareness of the considerations and issues around software implementations (there is much to evaluate when choosing one). The link you provided is another good example (LVM with RAID can mean two things with different advantages and disadvantages), although the article is a bit one-sided :wink:

I’m now mounting with mergerfs as follows in /etc/fstab:

# <file system>          <mount point>  <type>          <options>                                                                                             <dump>  <pass>
/mnt/media1:/mnt/media2  /mnt/media     fuse.mergerfs   allow_other,use_ino,cache.files=partial,dropcacheonclose=true,moveonenospc=true,category.create=mfs  0       0

Take great care when editing the file: I had one superfluous / character after either /mnt/media1:/mnt/media2 or /mnt/media, and as a result my computer didn’t boot (it hung at the encrypted-disk login screen). I used Fedora live media to remove the slash.
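One way to catch that kind of typo before rebooting (a sketch; the grep pattern and sample line are mine, not from the post) is to scan the candidate line for a doubled slash before it goes into /etc/fstab:

```shell
# Hypothetical typo check: flag a stray extra '/' in a candidate fstab line.
line='/mnt/media1:/mnt/media2 /mnt/media fuse.mergerfs allow_other,nofail 0 0'
if printf '%s' "$line" | grep -q '//'; then
  echo "suspicious '//' found, fix before rebooting"
else
  echo "no doubled slash"
fi
```

After installing the line, sudo mount -a (or sudo findmnt --verify) will surface most fstab mistakes while the system is still running, and adding nofail keeps a bad entry from hanging the boot.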
