Help an on again/off again Linux user take the plunge

If the backup is not on the same disk where the failure occurred, yes.

Given the way snapshots are implemented, if a given file hasn’t changed, the snapshot doesn’t store a separate copy of it. That is to say, if the corrupted file is one that hasn’t changed in a while, the snapshot’s copy will also be corrupt. If it’s a file that changes often, the snapshot will reference the older copy of the data and may be recoverable.
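To picture why the unchanged file is shared, here’s a toy Python model of copy-on-write snapshots (purely illustrative; real btrfs tracks extents and checksums, and all the names here are made up):

```python
from itertools import count

# "The disk": block id -> contents. Both the live filesystem and any
# snapshots are just mappings from file names to block ids.
blocks = {}
alloc = count()

def write_file(fs, name, data):
    # Copy-on-write: every write goes to a freshly allocated block;
    # existing blocks are never overwritten in place.
    block_id = next(alloc)
    blocks[block_id] = data
    fs[name] = block_id

live = {}
write_file(live, "stale.txt", "old data")   # a file that never changes
write_file(live, "busy.txt", "version 1")   # a file that changes often

snapshot = dict(live)   # taking a snapshot = copying the (tiny) mapping

write_file(live, "busy.txt", "version 2")   # busy.txt diverges afterwards

# Simulate bit rot in the single physical copy of the unchanged file:
blocks[live["stale.txt"]] = "CORRUPT"

print(blocks[snapshot["stale.txt"]])  # CORRUPT   - snapshot shares the bad block
print(blocks[snapshot["busy.txt"]])   # version 1 - the diverged file's old data is intact
```

The snapshot was never a second copy of `stale.txt`, so corruption of the shared block hits both views; only the file that diverged after the snapshot has an independent old version to fall back on.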

That being said, errors are usually due to one of two things: bit rot or drive failure. Bit rot is a one-off but incredibly rare in drives that are regularly used, whereas drive failure generally means there are going to be more errors on the way. If btrfs is indicating an error, it almost always means the drive is going to fail soon and needs to be replaced.

This is a crucial point. I just want to stress again that relying on the filesystem for “data integrity” on a laptop is a terrible idea 🙂 The priorities should be: 1. backups; 2. (many other things, e.g. don’t get it stolen, don’t leave it in places, don’t sit on it or put it in an oven, etc.); 3. use a filesystem with snapshots.

@marco As I stated in the first few posts, I’ll have an eGPU dock that also has a 3.5" drive bay, so I’ll be backing up regularly to an external drive. I’m not really that ignorant.

Honestly, the bigger utility of local filesystem snapshots is to protect you from user/software error, rather than hard drive failure. With snapshots of the root filesystem in place, you can apply software updates or try config changes without worrying about how to undo it; simply try your update/changes, and if it breaks, roll back to the snapshot.

Similarly with home snapshots, you can retrieve files you’ve accidentally deleted/corrupted.

@GhostLegion It’s not about ignorance. Like @reanimus said, snapshots serve a very different purpose from backups.

To be fair, I think it’s also common to conflate the two, if only because Time Machine is one of the most well-known and widely deployed backup solutions and it offers both snapshotting and backup functionality. But yes, the snapshotting and the backup serve different purposes. Snapshots protect you from user error, software bugs, and general hardware failure (e.g. power loss, CPU reset, etc.). Backups protect you from loss of storage (e.g. the hard drive dies, a virus eats your drive, or you just plain lose it).

Since you and @reanimus had to explain the difference, I suppose it is. Consider me more educated now. I’ll try to avoid conflating the terms in the future. I want protection from ransomware as well as the benefits previously mentioned.

Snapshots should (in most cases) provide ransomware protection, assuming the ransomware doesn’t take the effort of wiping the snapshots, since recovering would be similar to rolling back a bad config change or corruption.

Backups provide protection in either scenario, as you could simply wipe the drive and restore from a backup, though some ransomware may be programmed to wipe network mounts and the like as well.

I mostly meant FS-based snapshots, as opposed to the generic concept of snapshots. My point is about the false sense of security that btrfs might give compared to backups: yes, ransomware and the like are probably a good reason to use btrfs, but it doesn’t affect many people, while a lot of people experience hardware failures or theft.

Has there been a successful multi-system ransomware attack in the Linux ecosystem? Last time I looked there had not been, but maybe someone knows differently…

Do you mean something like LockBit Linux-ESXi?

Or do you mean publicly known attack incidents?

EDIT: @GhostLegion don’t listen to me–I’m a nincompoop.

On top of what others have pointed out, I just wanted to mention that AFAIK the integrity-checking powers of BTRFS/ZFS rely on these filesystems having direct access to the bare metal of the drive. In other words, adding a LUKS layer of encryption between the filesystem and the drive effectively neuters the bitrot detection.

Nonetheless, I’m using LVM+LUKS+Btrfs with encrypted root/boot/swap and hibernation. Btrfs snapshotting is a lot more flexible and elegant than LVM snapshots, IMO.

No? This isn’t true at all. The integrity checking works by reading the data from the underlying device and computing a checksum based off of it, then comparing it to the stored checksum in the filesystem metadata. This works whether the filesystem is being written directly to disk or being encrypted in the interim.
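A sketch of why the encryption layer is transparent to checksumming (a toy XOR stream stands in for LUKS and `zlib.crc32` stands in for btrfs’s crc32c; everything here is illustrative, not the real stack):

```python
import zlib

KEY = bytes(range(1, 17))   # toy key; XOR is a stand-in for LUKS, not real crypto

def encrypt(data):
    # What the "LUKS layer" does to data on its way to the disk.
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

decrypt = encrypt   # XOR is its own inverse

def fs_write(data):
    # The filesystem checksums the *plaintext*, then hands the data
    # to the encryption layer below it.
    return encrypt(data), zlib.crc32(data)

def fs_read(on_disk, stored_csum):
    data = decrypt(on_disk)                        # decrypt first...
    return data, zlib.crc32(data) == stored_csum   # ...then verify

on_disk, csum = fs_write(b"important bytes")

_, ok = fs_read(on_disk, csum)
print(ok)                    # True: intact data verifies fine through the crypto layer

rotted = bytearray(on_disk)
rotted[3] ^= 0x01            # one flipped bit on the "disk"
_, ok = fs_read(bytes(rotted), csum)
print(ok)                    # False: the corruption is still detected
```

The checksum is computed and verified on the plaintext side of the stack, so any reversible transform sitting between the filesystem and the disk leaves detection intact.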

By default, LUKS will hide certain capabilities the block device has from upper layers (the usual example being SSD TRIM support, for security reasons) but this can be disabled to allow btrfs/ZFS to use those capabilities as well. This has nothing to do with its ability to check data integrity, though.
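For reference, passing discards through is a one-word change in the LUKS setup, e.g. the `discard` option in `/etc/crypttab` (the device UUID below is a placeholder), or `--allow-discards` when opening the device manually with `cryptsetup`:

```
# /etc/crypttab: name  underlying-device                           keyfile  options
cryptroot  UUID=00000000-0000-0000-0000-000000000000  none     luks,discard
```

The security trade-off mentioned above still applies: with discards enabled, an attacker can see which blocks of the encrypted device are unused.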

The only real difference I’d say integrity-wise between LUKS and non-LUKS is that if an encrypted disk has a single bit flip with LUKS, then it will result in a varying amount of incorrect bytes being read from the disk (i.e. 1 encrypted bit flip = multiple decrypted bytes being incorrect). This StackExchange answer suggests it’s usually around 16 bytes getting corrupted, depending on the LUKS crypto parameters.
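To see that last point concretely, here’s a toy 16-byte block cipher (a Feistel network built from SHA-256; purely illustrative and not how LUKS’s AES-XTS works internally, though the block-granularity corruption is the same idea). Flipping one ciphertext bit garbles the entire decrypted block, while neighboring blocks are untouched:

```python
import hashlib

def round_fn(half, key, r):
    # Keyed round function: 8 bytes in, 8 bytes out.
    return hashlib.sha256(key + bytes([r]) + half).digest()[:8]

def encrypt_block(block, key, rounds=8):   # 16-byte blocks, like AES
    l, r = block[:8], block[8:]
    for i in range(rounds):
        l, r = r, bytes(a ^ b for a, b in zip(l, round_fn(r, key, i)))
    return l + r

def decrypt_block(block, key, rounds=8):
    l, r = block[:8], block[8:]
    for i in reversed(range(rounds)):
        l, r = bytes(a ^ b for a, b in zip(r, round_fn(l, key, i))), l
    return l + r

key = b"toy key, not LUKS"
plaintext = b"0123456789abcdefFEDCBA9876543210"   # two 16-byte blocks
ct = b"".join(encrypt_block(plaintext[i:i + 16], key) for i in (0, 16))

damaged = bytearray(ct)
damaged[5] ^= 0x01   # a single flipped bit in the first ciphertext block
pt = b"".join(decrypt_block(bytes(damaged)[i:i + 16], key) for i in (0, 16))

diff = sum(a != b for a, b in zip(pt[:16], plaintext[:16]))
print(diff)                        # most of the 16 bytes in block 0 come out garbled
print(pt[16:] == plaintext[16:])   # True: the second block is untouched
```

One flipped bit on the encrypted side decrypts to essentially random data for the whole affected block, which is the error amplification described above.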

Yup, thanks for the correction. I guess I was playing the game Telephone with some piece of FUD I’d heard about btrfs some time ago.
