
Install Gentoo Linux on ZFS Notes

These notes contain snippets of information that can help with, or give a better perspective on, different ZFS install scenarios.

6 Drive RAIDZ2 Pool w/ Separate /boot and swap

Drives: sda, sdb, sdc, sdd, sde, sdf

Partition Layout for Each Drive:

/dev/sda1 = 250 MB /boot (8300)
/dev/sda2 = 32 MB BIOS Boot Partition (EF02)
/dev/sda3 = 4 GB swap (8200)
/dev/sda4 = Rest of Disk - ZFS (bf00)

Enable Legacy BIOS Bootable on each of the /dev/sd#1 devices.
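
With sgdisk, the Legacy BIOS Bootable flag is GPT attribute 2 on partition 1; set and verify it with something like the following (the attribute is part of the partition table, so setting it on /dev/sda before backing up the layout carries it to the other drives):

sgdisk --attributes=1:set:2 /dev/sda     # mark partition 1 as Legacy BIOS Bootable
sgdisk --attributes=1:show /dev/sda      # verify the attribute is set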

Save this layout:

sgdisk --backup=layout /dev/sda

Apply this layout to a drive (Repeat for each needed drive):

sgdisk --load-backup=layout /dev/sdb
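
A loop over the remaining drives (assumed here to be sdb through sdf) applies the layout and then gives each disk its own identifiers, since a restored backup carries over sda's GUIDs:

for d in sdb sdc sdd sde sdf; do
    sgdisk --load-backup=layout /dev/${d}    # clone sda's partition table
    sgdisk --randomize-guids /dev/${d}       # avoid duplicate disk/partition GUIDs
done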

Use mdadm raid 1 for the separate /boot and swap

/dev/md0 = /boot
/dev/md1 = swap

mdadm --create /dev/md0 --raid-devices=6 --level=1 --metadata=1.0 /dev/sd[a-f]1
mdadm --create /dev/md1 --raid-devices=6 --level=1 /dev/sd[a-f]3

<note important>We made our /boot array use metadata 1.0 because it stores the RAID superblock at the end of the partition, which extlinux needs in order to read the filesystem on the array members.</note>
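
Before formatting, you can confirm that both arrays assembled with all six members and that /boot got the intended metadata version:

cat /proc/mdstat          # md0 and md1 should each list 6 devices
mdadm --detail /dev/md0   # "Version : 1.0" confirms the superblock format extlinux can read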

Format the above block devices:

mkfs.ext2 -m1 /dev/md0
mkswap -f /dev/md1

Save the array information in /etc/mdadm.conf.

mdadm --examine --scan > /etc/mdadm.conf

Making the 6 drive RAIDZ2 pool:

zpool create -f -o ashift=12 -o cachefile= -O atime=off -O compression=lz4 -m none -R /mnt/gentoo tank raidz2 /dev/sd[a-f]4
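
A quick sanity check that the pool came up with all six members:

zpool status tank         # all six sd[a-f]4 devices should be ONLINE under raidz2-0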

Make sure to install mdadm inside your chroot:

emerge -ag mdadm

Installing extlinux onto your raid array

The following will install extlinux to each device in the array so that if one of your devices fails, extlinux will automatically attempt to boot from the next device.

mkdir /boot/extlinux
cd /boot/extlinux
extlinux --raid --install .

Write the extlinux gptmbr firmware onto each drive

dd if=/usr/share/syslinux/gptmbr.bin of=/dev/sda

Repeat for each device in the array.
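
The same step as a loop over all six drives:

for d in sda sdb sdc sdd sde sdf; do
    dd if=/usr/share/syslinux/gptmbr.bin of=/dev/${d}    # gptmbr.bin is 440 bytes, so only the MBR boot code area is overwritten
done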

Limiting ZFS ARC size

In bytes (these example values assume a machine with 8 GB of RAM):

2576980378  - 2.4 GB (30% of RAM)
3006477107  - 2.8 GB (35% of RAM)
2147483648  - 2 GB   (25% of RAM)
4294967296  - 4 GB   (50% of RAM)
8589934592  - 8 GB   (100% of RAM)

For example, to cap the ARC at 512 MB:

echo options zfs zfs_arc_max=536870912 >> /etc/modprobe.d/zfs.conf
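
The modprobe.d entry only takes effect the next time the zfs module is loaded; the live value can be checked (or changed) through sysfs:

cat /sys/module/zfs/parameters/zfs_arc_max                   # current cap in bytes (0 = use the ZFS default)
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max     # example: limit the ARC to 2 GB immediately
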
Encryption

Optional: Securely Delete Drive

You can first securely wipe your drive if you don't want anyone to be able to recover the old data that was on it before encryption. This can take more than 2 hours depending on the size of your HDD.

time shred -n0 -v -z /dev/sda

Format your drives

Create your encrypted container for your root

We will be using a passphrase for this installation.

cryptsetup --use-urandom luksFormat /dev/sda4

You can verify that your drives were formatted and that they have a key by running the following:

cryptsetup luksDump /dev/sda4

The default encryption settings are aes-xts-plain64 with a 256-bit master key and sha1 hashing.
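
If the defaults are not what you want, the cipher, key size, and hash can be chosen explicitly at format time; the specific values below are only an example, not part of this guide:

cryptsetup --use-urandom -c aes-xts-plain64 -s 512 -h sha512 luksFormat /dev/sda4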

Mount your encrypted container

Before going further, try to decrypt your drive using your passphrase:

cryptsetup luksOpen /dev/sda4 root

You should see your root container if you check your /dev/mapper directory:

ls /dev/mapper/
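
The listing should include the mapping you just opened (control is the device-mapper control node and is always present), for example:

control  root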

Find /boot UUID and then edit fstab

blkid /dev/sda1 -s UUID | cut -d " " -f 2

Copy this UUID= line and put it in fstab

nano /etc/fstab

I removed the root, floppy, and cdrom lines. Our root is on ZFS, so it doesn't need an entry here. Our swap is encrypted and will be handled by dmcrypt (OpenRC) or by systemd via crypttab, and our /boot is /dev/sda1 (referenced by its UUID rather than the device path). So the only partition we need to record in /etc/fstab is /boot.

My fstab looks as follows:

UUID="4443433f-5f03-475f-974e-5345fd524c34"               /boot           ext2            noatime         0 0

Create your zpool

Create your zpool which will contain your drives and datasets:

zpool create -f -o ashift=12 -o cachefile= -O compression=lz4 -O normalization=formD -m none -R /mnt/gentoo tank /dev/mapper/root
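
The bliss-boot entry further down boots root=tank/gentoo/root, so you will want a root dataset by that name; the layout below is just one possible arrangement:

zfs create tank/gentoo                          # container dataset
zfs create -o mountpoint=/ tank/gentoo/root     # root dataset (mounts under /mnt/gentoo because of -R)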

Add "cryptsetup" and "udev" use flags to make.conf

Edit your /etc/portage/make.conf again and add "cryptsetup" and "udev" to the USE variable. Then re-run the previous emerge command so the affected packages are rebuilt with the new flags. We didn't do this before because there would have been a circular dependency with the lvm2 package.

NOTE: This is important. If you don't re-emerge your packages with the cryptsetup flag, systemd will not have support for your encrypted drives and you will have problems at boot.
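
As a sketch of that step (the exact emerge invocation is an assumption, since the guide only says to re-run the earlier command):

# /etc/portage/make.conf
USE="... cryptsetup udev"

emerge -avuDN @world    # rebuild the packages whose USE flags changed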

Edit /etc/conf.d/dmcrypt (openrc)

First get the PARTUUID for your swap partition and then add it to your dmcrypt file. Every time you start your machine, this file will be read and your swap partition will be re-encrypted with a fresh random key from /dev/urandom.

blkid /dev/sda3 -s PARTUUID | cut -d " " -f 2
PARTUUID="5adf8e6f-fefb-4587-b585-737eaa397c2a"

Then edit /etc/conf.d/dmcrypt and add the following:

swap=swap
source='/dev/disk/by-partuuid/<PARTUUID>'
options='-c aes-xts-plain64 -s 256 -d /dev/urandom'

Edit crypttab (systemd)

First get the PARTUUID for your swap partition and then add it to your /etc/crypttab. Every time you boot, systemd will read this file and re-encrypt your swap partition with a fresh random key from /dev/urandom.

blkid /dev/sda3

Find the PARTUUID and copy it down. Then edit /etc/crypttab and add the following:

swap PARTUUID="[uuid you copied down]" /dev/urandom swap,cipher=aes-xts-plain64,size=256
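
After a reboot you can confirm the randomly keyed swap mapping exists (assuming the name swap from the line above):

lsblk -o NAME,TYPE /dev/sda3    # sda3 should show a child of type "crypt" named swap
swapon --show                   # once activated, the mapping should be listed here as swap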

Configuring bliss-boot

nano /etc/bliss-boot/config.py
blkid /dev/sda4 -s UUID | cut -d " " -f 2

('Gentoo', '3.14.26-KS.01', 1, 'vmlinuz', 'initrd', 'root=tank/gentoo/root enc_drives=UUID=[UUID of /dev/sda4] enc_type=pass quiet'),

Final Steps

rc-update add dmcrypt boot