Help an on again/off again Linux user take the plunge

So I’ve screwed around with Linux off and on for many years but always stuck to distros that essentially ensured everything would work OOB. I’m going to change that this time and push my own boundaries. So I think I’m going to install Arch, compile the kernel from source, add in all the packages I want, and leave anything else out. I’m certain things will be broken at first and that’s OK. I’ll have a month to build things before the laptop needs to be ready for school.

What I want help with is deciding on the various decisions I will need to make. I think I want to use KDE as my DE and ZFS as my FS. There are however some terms I know in passing but don’t understand like Wayland vs Xorg (and Sway?). If people can point me in the right directions, I’m fine with reading. The Arch wiki has been helpful already.

Which DE handles HiDPI the best? What pitfalls has the Linux crowd run into with this hardware?

You sure about that? I use ZFS on my FreeNAS and while it’s bulletproof, it’s not exactly forgiving of minor configuration errors and will happily eat your data if you’re not careful. Don’t mess with it…

It’s also extremely memory-intensive, being a copy-on-write filesystem. If it hasn’t flushed its cache before a shutdown or loses power before it can complete its writes, your data can be corrupted.

And then there’s the need for scrubbing.

It’s great for servers attached to UPSes or drives attached to battery-backed HBAs, but personally I would never be comfortable with it on a laptop.

If you wanted to experiment around with a cool filesystem that’s not plain old vanilla ext4, try btrfs.

I may be speaking out of turn here, there could be ZFS experts that will think this is nonsense. But I have learned to treat ZFS with respect, and in turn it has kept my data safe for almost 10 years. It just doesn’t seem like a great desktop/laptop file system, particularly with only one drive in the pool.

2 Likes

Not at all sure actually! I’m open to btrfs and I would love for you to elaborate on your experience with ZFS as mine is well… non-existent really. I’m looking to spin up my own server as well using FreeNAS SCALE but I decided to shelve that project in favor of Framework (only so much money to go around).

I know I wouldn’t be able to make full use of ZFS’ feature set but I like the idea of ZFS correcting any silent data corruption. I intend on purchasing that EG200 eGPU enclosure at some point and installing an SSD along with the GPU. Every time I would dock, I would want a snapshot or other system backup to commence.

Obviously I can’t use ECC but I think the CPU horsepower is there for ZFS and with 32GB of memory, I think I will have enough cache without resorting to swap.

1 Like

This thread title spoke to me, heyo

tldr long journey of mine: through a lot of iteration, I landed on Fedora/Sway. Sway is a tiling window manager for Wayland – it’s a clone of i3, which is for Xorg.

There’s plenty of literature out there on Wayland and Xorg – Wayland is supposed to replace Xorg. It still has its rough edges, but it’s mostly there. Mainly, IME, getting screensharing to work may take some effort. XWayland provides backwards compatibility for X apps to run on Wayland. Keep in mind you can have both Xorg and Wayland installed to experiment/switch between the two. And well, you can do that with DEs as well. Install Gnome, KDE, i3, Sway, etc. all at once and try them all out for yourself :slight_smile:

I’ve been using btrfs with snapshots, so I can vouch for that. Wayland and btrfs are Fedora defaults.

My unsolicited advice would be:

Just start :melting_face: One step at a time. Don’t overwhelm yourself. Little by little. The journey will probably be a long one as the rabbit hole can go very deep (but it’s worth it!). Since you have a month, and I’m assuming you need something stable for school, I’d say get to something stable in 2 weeks and then have 2 weeks to daily drive your setup to ensure stability. And please make sure you have backups and practice said backup strategy beforehand, especially if you’re prone to tinker.

Pitfalls on this hardware specific to Linux:

  • Currently? None, really. IME all the hardware just works :dancer:
    edit: Oh wait, I kinda lied/forgot. There are pitfalls to this device that may skew towards Linux, ahem: mainly around battery/sleep, but that’s improved dramatically.
    This is still a thing, though: Function (Fn) Keys Sticking + Fix for Linux

Linux software pitfalls:

  • Browser VAAPI video/hardware decoding support is still shoddy. That’s on browser maintainers, though. VAAPI hardware decoding works through mpv, VLC, etc.

Wayland pitfalls:

  • Browser/Electron app support is scattered.
  • Screensharing support is scattered.

GLHF! And if you need something specific answered for Fedora/Sway/this laptop, lmk. I have mostly everything detailed in a document (that I need to find the time to clean up to share). shrugs

4 Likes

Last I heard the recommendation was not to use ZFS on single disks, because you miss out on a lot of the data security features/techniques it uses/provides.

I personally am still not happy with BTRFS; I don’t think it’s mature enough yet. It’s only recently had its experimental status lifted, IIRC.

1 Like

I can only relate my experience of ZFS via FreeNAS/TrueNAS which is built on FreeBSD. I built my server in 2013-2014 and the recommendations back then were 8GB minimum. Back then that was a decent amount. To be more comfortable and increase performance I went for 16GB. That was a very decent amount and in ECC, fairly expensive.

ZFS will use as much memory as it can to increase performance; it uses it for the ARC (Adaptive Replacement Cache). The more memory it has, the faster it runs. Think how fast something would run if it only read or wrote to/from memory. As your ARC gets bigger, approaching the size of your drive pool, there’s a higher and higher chance that the recently-accessed file you want is in ARC (the ARC hit ratio approaches 100%).

This is my lightly used server with a “warm” ARC that’s “used to” my data access patterns. The only real work it did was an incremental backup at midnight, you can see the drop in the hit ratio as it’s forced to fetch a little data from disk.

This is all fine and good for servers, but keep in mind servers don’t have really intensive programs to run. They devote as many resources as they can to disk I/O and networking. It’s highly unlikely they would run a single program that would consume > 2GB of RAM. But on a desktop/laptop, even a web browser can consume more than 2GB of RAM. Add, say, gaming or a VM to that and suddenly you won’t have enough RAM left for a decent ARC. Performance can suffer.

Sure 32GB is a lot, but ZFS will easily consume all of it to increase performance. Servers often have 128 or 256GB of RAM, ZFS will eat as much of that as it can as well. Also, although compression would work, you probably shouldn’t use deduplication because it can be very CPU and memory intensive for very little benefit - a slight saving in drive space. Since drive space is cheap vs. CPU power and RAM capacity, just get a bigger drive and suffer the consequences of a few more GB used on the drive.
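That said, on Linux the ARC can be capped if you’d rather keep RAM free for applications. A sketch, assuming OpenZFS on Linux and an 8 GiB limit (the value is in bytes):

```shell
# Persistent cap: set the zfs_arc_max module parameter (in bytes).
# 8 GiB = 8 * 1024^3 = 8589934592
echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf

# Or adjust at runtime, without a reboot:
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```

The runtime change only lowers the target; already-cached data is evicted gradually rather than instantly.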

In regards to error detection/error correction, yes, ZFS can detect it. But without at least a second disk in a RAIDZ array there’s nothing it can do about it. It will detect it but cannot correct it.

A neat feature of ZFS is that it’s both a drive format and a software RAID controller. With only one disk though, there’s not much advantage. You get snapshots and ARC but no real self-healing error correction since you have no parity data on a single disk. I would not use ZFS on a single-disk desktop or laptop because you won’t have these advantages. Even two disks is not enough, you really need 4+ to use RAIDZ2 - if one of the disks fails, the possibility of another one failing while the pool rebuilds (a very intensive process) is not inconsiderable, and if a second drive fails in a RAIDZ you lose your data! You will have none of this protection on a laptop, not even with 2 disks.

ZFS is still in active development and every once in a while there’s an update. There’s always a warning that when you update, the flags on the drives get altered and there’s no going back - your data would be irretrievably lost if you went back to an older ZFS version that didn’t recognize these new flags. It seems scary but I’ve never had an issue.

I don’t know as much about btrfs. It supports snapshots and automatic backup of those snapshots, which is very neat. It can also detect errors because it checksums every data block as well as the metadata, but without a second drive it’s limited in what it can do. Like ZFS, it’s a copy-on-write filesystem which increases performance at the expense of memory, but it can corrupt data if the cache is not flushed before it loses power, so a sudden unexpected shutdown could be catastrophic to your data - if it hasn’t written the drive metadata properly, the whole drive could be corrupt.
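For the curious, that snapshot-and-backup workflow on btrfs looks roughly like this (a sketch; the paths are placeholders, /home is assumed to be a subvolume, and /mnt/backup a second btrfs filesystem):

```shell
# Take a read-only snapshot of the /home subvolume.
sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-snap1

# Replicate it to another btrfs filesystem (send requires read-only snapshots).
sudo btrfs send /home/.snapshots/home-snap1 | sudo btrfs receive /mnt/backup/

# Later snapshots can be sent incrementally against a shared parent:
sudo btrfs send -p /home/.snapshots/home-snap1 /home/.snapshots/home-snap2 \
    | sudo btrfs receive /mnt/backup/
```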

I’m sticking with boring old ext4 and backups for now though. Being so old, it’s quite stable. It does not incorporate modern filesystem ideas like versioning and file checking though.

ZFS on FreeBSD is old and stable; ZFS on Linux and btrfs are much newer, and although the developers would call them stable, neither one has the long-term track record yet to make everyone fully comfortable with it.

Still, btrfs seems more suited to a laptop than ZFS is.

2 Likes

If you want arch, EndeavourOS is the easy way in. The Apollo release and graphical installer had zero issues for me the last time I used it.

I went ext4 for my FS, just for simplicity, and plenty happy with it as always.

Wayland seems to be having some ugly regressions and teething problems lately. Had to go X11 for nVidia GPU compatibility.

KDE has come a long way to give a lot of functionality for not a lot of overhead. It handles hidpi okay, but not fantastically. Gtk and Python-based programs still seem to have a mind of their own, despite me trying GUI and text config to get them to be reasonably sized on the F.w internal display and my huge 1080p external monitor…

For the auto backup on EG200 dock, I would use a self rolled bash script based on rsync and a systemd trigger to run it, cause I’m old school and basic like that lol.
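A sketch of that approach (the mount point, script path, and unit name here are made up; adjust the excludes to taste):

```shell
#!/bin/bash
# /usr/local/bin/dock-backup.sh -- mirror $HOME to the dock's SSD.
set -euo pipefail

DEST=/mnt/dock-backup/home

# Bail out quietly if the dock's drive isn't actually mounted.
mountpoint -q /mnt/dock-backup || exit 0

# -a preserves permissions/ownership/times; --delete mirrors deletions.
rsync -a --delete --exclude='.cache/' "$HOME/" "$DEST/"
```

To fire it on dock, a oneshot systemd service installed with WantedBy=mnt-dock\x2dbackup.mount (systemd escapes the /mnt/dock-backup path to that unit name) will run the script each time the drive mounts.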

3 Likes

if you don’t quite get the difference between wayland and xorg you should probably do it the other way round, i.e. install something standard (arch+gnome or kde via archinstall is fine) and then maybe get rid of stuff you don’t need. personally if you want to use arch i’d recommend to use arch instead of derivatives. it took me 30 minutes to fuck up a manjaro setup in a vm i was trying out, after installing some stuff from the aur, you don’t want that to happen to you.

as for the fs, if the hard drive in your laptop is your single point of failure and you really care about data corruption (terrible idea, just do regular backups or use syncthing with other devices) then yes i can see why zfs or btrfs could sound appealing, but do you really want to mess things up with your filesystem? just stick to ext4 and don’t bother. keep in mind that zfs and btrfs won’t save you from hardware implosion, so they are not really replacements for backups.

as for hidpi and desktop environments. i’m running vanilla gnome at 1.5x scaling on wayland, which is the “default” option with gdm. you can’t reliably use fractional scaling on x11. on DEs, you can use sway or other minor ones if you want but there are always tradeoffs. someone in the forum mentioned that sway might even use as much power as gnome so the performance argument becomes a bit void.

a couple of notes on my setup:

  • all hardware works, including the fingerprint reader and the ambient light sensor.
  • browsers (firefox, chrome) and electron apps (slack, vscode, spotify) require ad-hoc configuration for scaling. at the time of this writing slack is still blurry while everything else i’m using is fine.
  • qt apps are weird. vlc has blurry fonts regardless of startup flags while telegram runs fine with QT_QPA_PLATFORM=wayland.
  • jetbrains apps don’t support wayland natively and are also blurry.
  • i mitigate all of the above using a 1440p external screen which of course has added benefits (a desk and so on) but it isn’t a solution, of course.
  • steam works fine and as do games, including proton. you might not have ray tracing but even some recent games run quite well.
  • you need to enable gpu acceleration in your browser in order to preserve battery life. firefox supports gpu acceleration fully but you need to activate it manually and it implies editing the .desktop file plus a couple of flags to customise (at the time of this writing).
  • chrome’s gpu acceleration support is hit and miss.

oh yeah. skip compiling your own kernel for the first setup. maybe do it later :slight_smile:

1 Like

This alone is enough to convince me not to go with ZFS as my FS. I have read elsewhere that ZFS RAM requirements are overblown and I believe that. I think ZFS was designed to use all available RAM as unused RAM capacity is wasted capacity. That does not mean that ZFS can’t run with less RAM and run efficiently but that’s neither here nor there. ZFS won’t do anything for me without running multiple disks. ext4 has snapshot ability I think and that will be sufficient for the cause. I might also try btrfs if I can properly utilize it, I’ll look into it further.

Perhaps I’m reading this in a way that you didn’t intend but to me this comes across as gatekeeping. Reading through your post it’s clear you didn’t read all the way through the thread as I stated how I was going to do backups below the main post.

This is another quote that can come across poorly. It sounds like you are telling me not to try anything because it will be too advanced anyway and I’m incapable of learning.

This was helpful. Thank you. Perhaps I interpreted the tone of your post wrong and if so, I’d like to hear more about your experiences with Linux and the Framework laptop.

I worry about causing weirdness by doing that but I would love a copy of that document you created, regardless of how messily formatted it is!

Yeah… :upside_down_face: I like to tinker a fair bit so backups are a must. If I can’t get my configuration stable before school starts I’ll likely install Windows while I tinker on an external drive so I can keep school assignments safe in case I completely screw up.

no, it’s not about being able to learn anything, it’s about whether it’s even worth learning. since you are more likely to lose data to a hardware fault (no redundancy etc.), the benefits of using zfs / btrfs to me aren’t worth the hassle.

1 Like

Perfectly reasonable stance to take

1 Like

One other tip I learned a long time ago and have been using ever since: Install the OS to one partition, create a second partition to hold your real /home, create a third partition for a second “stable” OS, and an optional swap partition if you want that. Then, symlink important folders in the home partition to each OS partition’s /home/. So for example /home/ghostlegion/Music would symlink to /media/home/ghostlegion/Music on the home partition. And make sure to automount /media/home using /etc/fstab entries. This allows you to reinstall an OS without blowing up your home directory. Having the second stable OS copy means you can fix the bleeding edge OS partition if you screw something up.

The trick is knowing what can be safely symlinked in from both OS’s… Some programs will complain about version mismatches, some config files really shouldn’t be shared. But just basic documents, music, etc are definitely no problem.
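A minimal sketch of that layout (the UUID is a placeholder, and the folder names are just the examples from above):

```shell
# /etc/fstab entry to automount the shared home partition:
#   UUID=<partition-uuid>  /media/home  ext4  defaults  0  2

# From each installed OS, point the real folders at the shared partition:
ln -s /media/home/ghostlegion/Music     /home/ghostlegion/Music
ln -s /media/home/ghostlegion/Documents /home/ghostlegion/Documents
```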

I also highly recommend replacing grub with rEFInd for your bootloader, especially in a two OS setup…

3 Likes

i used to do that. personally i think it’s useful if one has only one computer. if one has access to a NAS / backups and an easy way to restore a system, having more partitions is more of a hassle than anything else. the way i do it is symlinking the dotfiles i really need to a folder that i keep in sync across multiple devices (besides of course documents etc.).

besides, if both the root and home partitions are encrypted (which, i mean, yes of course) it easily adds complexity. it is still doable of course, but it gets more annoying.

1 Like

I’ve done a variation of this where I keep a copy of the live ISO on a dedicated partition and then add an entry to the bootloader for booting the aforementioned ISO. As a side-bonus, it keeps disk space down as well.

It also makes it easy to bug-test things on real hardware in a clean, vanilla configuration, and you’ve also then got the ISO readily on-hand for use in VMs.

Of course, if you’re actually compiling your own installation then this method isn’t exactly practical.

As far as I know, ext4 doesn’t have a native snapshot feature in the same sense that ZFS does. ext4 can use LVM snapshots, but that operates at a lower (block) level, not filesystem. ZFS snapshots are native to the filesystem itself, so you can do things like snapshotting a live filesystem (without freezing the filesystem).
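To make the contrast concrete, the two kinds of snapshot look like this (a sketch; the volume group, LV, and pool names are made up):

```shell
# LVM: a block-level snapshot of the LV backing an ext4 filesystem.
# Requires free extents in the volume group to hold changed blocks.
sudo lvcreate --snapshot --size 5G --name home_snap /dev/vg0/home

# ZFS: a filesystem-level snapshot -- instant, initially takes no
# space, and can be taken while the filesystem is live.
sudo zfs snapshot zpool/home@before-upgrade
```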

btrfs however does have this support. I’ve always shied away from it due to the rapid development and lack of good recovery tooling when I checked it out. The general saying around it was along the lines of “it worked perfectly until the day it didn’t and it ate my data”. Of course, everyone should have backups, etc etc. But it’s still a hassle. :slight_smile:

I will say, none of these filesystems provide self-healing abilities with a single disk, mostly because it would be somewhat pointless (if a drive is exhibiting read errors, it’s likely to only get worse, and parity data can only do so much). At that point I’d consider stability, speed, and tooling, which have traditionally led me to use ext4 or XFS (depending on the device, see here for some more info) for most of my systems, and ZFS for my NAS and some of my workstations.

ZFS does like to eat RAM when configured to, but those are largely performance optimizations to reduce the amount of disk reads necessary to return data. When there’s memory pressure or not much memory in the first place, it’ll simply hit the disk instead of cache. Simple as that (as you’ve noted). I think the bigger strength in ZFS’ snapshots is that they can be sent and synced between devices. So, for example, setting up backups to a NAS can be as simple as creating a snapshot on your machine, then sending it via ssh like this:

# find the newest snapshot already on the backup server (-H drops the header line)
LATEST=$(ssh backup-server zfs list -H -t snapshot -o name -s creation -r backups/workstation | tail -1)
NOW=$(date +"%Y%m%d_%H%M%S")
zfs snapshot -r zpool/home@backup-$NOW
# send only what changed since the remote's latest snapshot
# (-i wants just the @snapname part, not the remote dataset name)
zfs send -R -i "@${LATEST##*@}" zpool/home@backup-$NOW | ssh backup-server zfs recv -du backups/workstation

Critical to note here is that this is an incremental backup; no need to send unchanged data. I don’t know that doing something like this is possible with ext4/LVM, at least not without specialized tooling to send block-level incremental diffs.

If this is something that appeals to you, btrfs and ZFS are going to be your best bet. ZFS also has support for filesystem-level encryption and btrfs doesn’t, which you may or may not care about. The biggest win there is interoperability with BSD systems and the fact that you can selectively encrypt datasets.

All this to say: there’s good reasons to choose ZFS beyond the traditional parity stuff. :slight_smile:

2 Likes

In what way is this different or perhaps better than LUKS? Is it more performant? Or is it because ZFS is aware of encryption and other filesystems are unaware?

How is this better or worse than filesystem level? Again is it related to performance or does it enable greater selectivity in what gets backed up?

It’s different than LUKS in that it’s a native property of the filesystem itself, which allows for doing things like encrypting subvolumes. The performance characteristics vary depending on your workload, really, since ZFS and ext4 also behave differently as filesystems. Worth noting that ext4 apparently does have support for encryption natively through fscrypt. You can do neat stuff like encrypt per-user using a user password, which I think is particularly nice.
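For reference, the per-user ext4 encryption mentioned here looks roughly like this with the fscrypt tool (a sketch; the device path is a placeholder and the kernel/filesystem must support the encrypt feature):

```shell
# One time: enable the ext4 encryption feature on the filesystem.
sudo tune2fs -O encrypt /dev/nvme0n1p2

# Initialize fscrypt metadata, globally and on the mounted filesystem.
sudo fscrypt setup
sudo fscrypt setup /

# Encrypt a directory, unlocked automatically at login by the
# user's password via PAM.
fscrypt encrypt ~/private --source=pam_passphrase
```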

As for LVM vs filesystem snapshots, the main difference is that LVM snapshots are basically disk images, which means backing them up incrementally is a bit of a pain and requires special tooling. The other (more noticeable, imo) thing is that you can’t browse the snapshots in the same way ZFS and btrfs allow you to. They both let you represent snapshots as a directory on the filesystem, so you can browse them in a Time Machine style directory. This can be useful for grabbing files off an old snapshot in a jiffy… Conversely, those snapshots are limited to the filesystem, so you can’t do something like snapshot multiple filesystems simultaneously (if you have multiple volumes in the LVM volume group).

There are good reasons to use either one. I’d suggest you try some benchmarking yourself with various configurations to figure out what’s best for you. For what it’s worth, I opted for LUKS + LVM + ext4, as I wanted to set up TPM-based unlocking and support encrypted swap, so LUKS + LVM made the most sense. I want to use a vanilla kernel, so that restricts me to ext4, XFS, and btrfs.

Pretty much all of the features I’ve mentioned can be set up using any of these methods (even the integrity checking can be done at the LVM level using dm-integrity). It’s really about what works best for your use case and how easy it is to set up for your distro.

1 Like

@reanimus Ok, follow-up question time. It seems to me that it should be possible to use LUKS for encryption and follow this guide on setting up FDE, TPM, and Secure Boot while using BTRFS for snapshots/backup. If BTRFS can detect data errors, even if it can’t correct them, I should be able to navigate to the backup and restore the file to an uncorrupted state. Does this sound right to you?