RAID 1 and Debian 10

Recently I decided I wanted my own storage server again (and not a NAS). In the process, I wanted to ensure I’d have something that could handle hardware failures fairly robustly (I am hosting a lot of stuff at this point) and also something more expandable than a 4-bay RAID 5 array.

Sourcing hardware isn’t super hard any more (even if you’re trying to avoid Amazon a bit), and with Ryzen supporting ECC RAM by default, we can even ensure we’re fairly well protected from bit rot. But ensuring the RAM you order is actually ECC? Well, that’s another story. I ended up on Memory4Less to find actually reliable memory.

The major challenge was ensuring the OS drive would be a software RAID 1. md arrays are fairly easy to create, but making them actually bootable is another story.

I’ve lost the specific information on how I got the md array working, but it essentially comes down to keeping a copy of the EFI system partition on both drives. What notes I have left are here:

https://outflux.net/blog/archives/2018/04/19/uefi-booting-and-raid1/

https://implement.pt/2018/08/uefi-via-software-raid-with-mdadm-ubuntu-16-04/

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=925591
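The gist of those links can be sketched roughly as follows. This is a reconstruction, not my exact commands: the device names, partition numbers, and boot labels are all assumptions you’d need to adapt to your own hardware.

```shell
# Mirror the root partition across both drives with mdadm
# (assumes /dev/sda2 and /dev/sdb2 are the partitions to mirror):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Each drive keeps its own (unmirrored) EFI system partition on
# partition 1; after any update to the ESP, copy it to the second drive:
dd if=/dev/sda1 of=/dev/sdb1

# Register a boot entry for each drive's ESP so the machine can boot
# from either disk if the other fails (labels are made up here):
efibootmgr --create --disk /dev/sda --part 1 --label "debian-disk1" \
  --loader '\EFI\debian\grubx64.efi'
efibootmgr --create --disk /dev/sdb --part 1 --label "debian-disk2" \
  --loader '\EFI\debian\grubx64.efi'
```

The ESPs themselves can’t live on the md array (firmware doesn’t understand md), which is why they’re duplicated by hand rather than mirrored.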

Once the bootable RAID volume was configured, I needed two more things:

  1. Bonded network interfaces (to give me more local network bandwidth)

    • Here’s the config in /etc/network/interfaces:
    auto bond0
    
    iface bond0 inet static
        address 192.168.2.41
        netmask	255.255.255.0
        network	192.168.2.0
        gateway 192.168.2.1
        slaves	enp4s0	enp5s0
        bond-mode 6
    
    • Importantly, bond-mode 6 is balance-alb (adaptive load balancing), which spreads traffic across both NICs without requiring any special switch support.
  2. Creating a btrfs volume out of my storage disks.

    sudo mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
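Once the bond is up, it’s worth confirming it actually came up in the expected mode with both interfaces attached (assuming `bond0` as named in the config above):

```shell
# The kernel exposes bonding state here; look for
# "Bonding Mode: adaptive load balancing" and both slave interfaces:
cat /proc/net/bonding/bond0

# The bond should also show as UP with its static address:
ip addr show bond0
```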
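With `-d raid1 -m raid1`, both data and metadata are mirrored across the disks. A rough sketch of mounting and maintaining the volume afterwards (the mount point is an assumption):

```shell
# Mounting any member device pulls in the whole multi-device filesystem:
sudo mount /dev/sda /mnt/storage

# Confirm data and metadata are both stored as RAID1:
sudo btrfs filesystem usage /mnt/storage

# Periodically scrub to detect bit rot and repair it from the mirror copy:
sudo btrfs scrub start /mnt/storage
```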