Hi all!
I have a Debian stable server with two HDDs in an md RAID that contains an encrypted ext4 filesystem:
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 1G 0 part
│ └─md0 9:0 0 1023M 0 raid1 /boot
├─sda2 8:2 0 2.7T 0 part
│ └─md2 9:2 0 2.7T 0 raid1
│ └─mdcrypt 253:0 0 2.7T 0 crypt /
└─sda3 8:3 0 1M 0 part
sdb 8:16 0 2.7T 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md0 9:0 0 1023M 0 raid1 /boot
├─sdb2 8:18 0 2.7T 0 part
│ └─md2 9:2 0 2.7T 0 raid1
│ └─mdcrypt 253:0 0 2.7T 0 crypt /
└─sdb3 8:19 0 1M 0 part
I’d like to migrate that over to BTRFS to make use of deduplication and snapshots.
But I have no idea how to set it up, since BTRFS has its own RAID-1 implementation. Should I keep using the existing md array? Or should I take the drives out of the array, encrypt each one separately, and then set up the BTRFS RAID on top of that?
Or should I do something else entirely?
If I had to do encrypted btrfs RAID from scratch, I would probably:
btrfs device add /path/to/mapper /path/to/btrfs/mountpoint
btrfs balance start -mconvert=raid1 -dconvert=raid1 /path/to/btrfs/mountpoint
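For completeness, the full from-scratch sequence would look roughly like this (untested sketch, device and mapper names are just placeholders):

# encrypt the two data partitions separately (placeholder names sdX2/sdY2)
cryptsetup luksFormat /dev/sdX2
cryptsetup luksFormat /dev/sdY2
cryptsetup open /dev/sdX2 crypt1
cryptsetup open /dev/sdY2 crypt2

# create btrfs on the first mapper and mount it
mkfs.btrfs /dev/mapper/crypt1
mount /dev/mapper/crypt1 /mnt

# add the second mapper and convert metadata and data to RAID-1
btrfs device add /dev/mapper/crypt2 /mnt
btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt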
In that scenario, you would probably want to use a keyfile to unlock the second disk without re-entering a passphrase.
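Something along these lines (paths and names are made up, adjust to your setup):

# generate a random keyfile on the encrypted root filesystem
mkdir -p /etc/keys
dd if=/dev/urandom of=/etc/keys/crypt2.key bs=512 count=4
chmod 0400 /etc/keys/crypt2.key

# add the keyfile as an additional key to the second LUKS container
cryptsetup luksAddKey /dev/sdY2 /etc/keys/crypt2.key

# /etc/crypttab entry for the second disk (example)
# crypt2  UUID=<uuid-of-sdY2>  /etc/keys/crypt2.key  luks

If I remember correctly, Debian's cryptsetup-initramfs can copy the keyfile into the initramfs via KEYFILE_PATTERN, which you'd need if the second device is required to mount the root filesystem.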
Now, that's off the top of my head and seems kinda stupidly complicated to me. IIRC btrfs has a stable feature to convert ext4 to btrfs in place. The conversion shouldn't care what sits underneath (md, dm-crypt), so you could take your chances and just try that on your existing ext4 volume.
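From memory, the tool is btrfs-convert from btrfs-progs, run against the unmounted filesystem; roughly like this, using your /dev/mapper/mdcrypt from the layout above (check the man page before trusting this):

# from a live/rescue environment, with the filesystem unmounted
fsck.ext4 -f /dev/mapper/mdcrypt
btrfs-convert /dev/mapper/mdcrypt

# the conversion keeps a rollback image in the ext2_saved subvolume;
# once you're happy with the result, delete it to reclaim the space
mount /dev/mapper/mdcrypt /mnt
btrfs subvolume delete /mnt/ext2_saved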
(Edit: But to be absolutely clear: I would perform a backup first :D)
Thanks, sounds good. I need to keep the system running, so I'd first set up BTRFS on one disk, test it, and then add the other disk.
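Roughly what I have in mind, based on your suggestion (pure sketch with my device names from above, not tested):

# break the md mirror and free up sdb2
mdadm /dev/md2 --fail /dev/sdb2 --remove /dev/sdb2
mdadm --zero-superblock /dev/sdb2

# new LUKS container plus btrfs (single profile for now) on the freed disk
cryptsetup luksFormat /dev/sdb2
cryptsetup open /dev/sdb2 btrfscrypt
mkfs.btrfs /dev/mapper/btrfscrypt
mount /dev/mapper/btrfscrypt /mnt

# copy the data over and test booting from it
rsync -aAXH / /mnt/ --exclude={/proc/*,/sys/*,/dev/*,/run/*,/mnt/*}
# (plus updating fstab, crypttab, initramfs and the boot loader, omitted here)

Once that works, I'd repurpose sda2 the same way, btrfs device add it and balance to raid1 as you described.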