
Advanced ebuild testing guide

This guide provides instructions for creating chroot test environments. It is helpful for those who would like to test their ebuilds in a basic, pollution-free environment.

Introduction

Chroots have been around for a long time; the first chroot system call was introduced in Version 7 Unix in 1979.[1] They are essential to the Gentoo installation process (those who have followed the Gentoo Handbook have worked in a chroot environment). The term "stage tarball" originated during Gentoo's development to help the Gentoo Release Engineering team define which tasks still needed completing in a chroot environment. Read up on the four levels of stage tarballs in the stage tarball article.

Configuration

Official Gentoo Stage3 tarballs

The officially generated stage3 tarballs from the Release Engineering project are perfect specimens for creating new chroots. They can generally be obtained from the following links:

Architecture Feature Download link
amd64 Multilib http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64/
x86 i686 http://distfiles.gentoo.org/releases/x86/autobuilds/current-stage3-i686/
Tip
Depending on how many chroots are created, they can start eating up disk space quickly. It is a good idea to enable deduplication when using a filesystem that supports it.

For the remainder of this guide, it is presumed the reader is using the btrfs filesystem for the partition containing the ebuild test chroots. If btrfs cannot be used, any Linux-friendly filesystem will work. Space is cheap these days!

After the current stage3 files have been downloaded, create a nicely laid out directory structure and a couple of subvolumes as a target for the tarballs to be extracted:

user $mkdir -p ~/chroots/base
user $sudo btrfs subvolume create ~/chroots/base/amd64
user $sudo btrfs subvolume create ~/chroots/base/x86

Now extract the tarballs to the appropriate directory. Be careful to preserve extended filesystem attributes and access control lists:

root #tar --extract --bzip2 --preserve-permissions --xattrs --acls --verbose --file /path/to/downloaded/stage3-amd64*.tar.bz2 --directory ~/chroots/base/amd64
root #tar --extract --bzip2 --preserve-permissions --xattrs --acls --verbose --file /path/to/downloaded/stage3-i686*.tar.bz2 --directory ~/chroots/base/x86

Now that the base chroots are created, this may be a stopping point for some readers. If the ebuild(s) that will be tested are not specific to a certain desktop profile only a few more steps are needed. Jump down to Mounting the Portage tree.

Snapshotting system profiles

Those who are testing ebuilds with graphical components have a bit more work to do in order to prepare a sound test environment. It is time to create snapshots for the relevant system profiles. Suppose the ebuild(s) being tested run on the GTK graphical framework. It would be wise at this point to create a couple more base snapshots, this time with the GNOME desktop in mind:

user $mkdir -p ~/chroots/base/desktop/gnome
user $sudo btrfs subvolume snapshot ~/chroots/base/amd64 ~/chroots/base/desktop/gnome/openrc
user $sudo btrfs subvolume snapshot ~/chroots/base/amd64 ~/chroots/base/desktop/gnome/systemd

Mounting the Portage tree

Next, these snapshots will need to be updated, but before that can be done the host machine's main Gentoo repository must be shared to them:

user $sudo mkdir -p ~/chroots/base/desktop/gnome/openrc/usr/portage
user $sudo mkdir -p ~/chroots/base/desktop/gnome/systemd/usr/portage
user $sudo mount --rbind /usr/portage ~/chroots/base/desktop/gnome/openrc/usr/portage
user $sudo mount --rbind /usr/portage ~/chroots/base/desktop/gnome/systemd/usr/portage
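With more than a couple of snapshots, the mkdir/mount pairs above get repetitive. They can be generated in a loop; a sketch, printed with echo as a dry run (drop the echo to actually execute, with root privileges):

```shell
# Bind-mount the host's Portage tree into each snapshot in one loop
# (dry run: echo prints each command; drop echo to execute as root).
for snap in openrc systemd; do
  target="$HOME/chroots/base/desktop/gnome/$snap/usr/portage"
  echo sudo mkdir -p "$target"
  echo sudo mount --rbind /usr/portage "$target"
done
```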

Chroot into each location using pychroot (dev-python/pychroot) and make sure each is up-to-date:

user $sudo pychroot ~/chroots/base/desktop/gnome/openrc
user $sudo pychroot ~/chroots/base/desktop/gnome/systemd

Be sure to source /etc/profile after entering the chroot!

root #source /etc/profile

Finally, use eselect to set the appropriate profile (base this on the snapshot's location) and rebuild the @world set. For example, for the desktop/gnome/systemd snapshot:

root #eselect profile set default/linux/amd64/13.0/desktop/gnome/systemd
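Since the snapshot layout above mirrors the profile names, the mapping from snapshot location to profile name can be scripted. A minimal sketch, assuming the ~/chroots/base/desktop/&lt;desktop&gt;/&lt;init&gt; layout used in this guide (the helper variables here are hypothetical):

```shell
# Derive an eselect profile name from a snapshot's location (hypothetical
# helper; assumes the ~/chroots/base/desktop/<desktop>/<init> layout above).
snapshot="$HOME/chroots/base/desktop/gnome/systemd"
desktop="$(basename "$(dirname "$snapshot")")"   # e.g. gnome
init="$(basename "$snapshot")"                   # e.g. systemd
profile="default/linux/amd64/13.0/desktop/$desktop/$init"
echo "$profile"
```

The resulting string can then be passed straight to eselect profile set inside the chroot.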

Mounting the test repository

The final step before testing can begin is to share the test repository with the chroot. This is done quickly and easily by recursively bind mounting the repository and then creating a repos.conf entry for it in the test chroot.

Open a terminal outside the chroot:

user $sudo mkdir -p ~/chroots/base/desktop/gnome/systemd/usr/local/overlay/test_repository
user $sudo mount --rbind /path/to/test/repo ~/chroots/base/desktop/gnome/systemd/usr/local/overlay/test_repository
FILE /etc/portage/repos.conf/test_repository: Create a simple repos.conf entry
[test]
location = /usr/local/overlay/test_repository
sync-type = git
sync-uri = <enter_URL>
auto-sync = no
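The repos.conf entry can also be written from the host instead of from inside the chroot. A sketch, where CHROOT is a hypothetical variable pointing at the snapshot in use:

```shell
# Write the repos.conf entry into the chroot from the host (sketch;
# CHROOT is a hypothetical variable -- point it at the snapshot in use).
CHROOT="${CHROOT:-./demo-chroot}"
mkdir -p "$CHROOT/etc/portage/repos.conf"
cat > "$CHROOT/etc/portage/repos.conf/test_repository" <<'EOF'
[test]
location = /usr/local/overlay/test_repository
sync-type = git
sync-uri = <enter_URL>
auto-sync = no
EOF
```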

Start testing ebuilds!

Custom stage3 tarballs

Have a currently running system that would be nice to use as a test environment? Use the tar command to compress it into a stage3 or stage4 tarball; just make sure to label it appropriately:

root #Coming soon...
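As a rough sketch of the idea: build an exclude list for the pseudo-filesystems and other paths that must not end up in the archive, then tar the root directory. The exclude list and target path below are assumptions, not an official stage4 procedure:

```shell
# Build an exclude list for paths that must not end up in the tarball
# (an assumption -- extend it to match the system), then archive from /.
cat > /tmp/stage4-excludes.txt <<'EOF'
./proc
./sys
./dev
./run
./tmp
./usr/portage
EOF
# As root, from the root directory:
# tar --create --bzip2 --preserve-permissions --xattrs --acls \
#     --exclude-from=/tmp/stage4-excludes.txt \
#     --file /mnt/backup/stage4-$(date +%Y%m%d).tar.bz2 .
```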

Snapshottable filesystems (btrfs, zfs)

When using a filesystem that has the capability to create snapshots, it is possible to quickly generate chroot test environments.

Btrfs

To make a 'chroot' snapshot of the currently running system with btrfs, issue:

root #btrfs subvolume snapshot <root subvolume> <destination location>

Then simply run the mount commands for the appropriate virtual filesystems.
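These are the usual virtual-filesystem mounts from the Gentoo Handbook, written out here as a dry run (echo prints each command; drop the echo to execute as root, and CHROOT is a hypothetical variable for the destination location):

```shell
# Virtual-filesystem mounts for a chroot, as a dry run (drop echo to
# execute as root; CHROOT is a hypothetical destination variable).
CHROOT="${CHROOT:-/mnt/chroot}"
echo mount --types proc /proc "$CHROOT/proc"
echo mount --rbind /sys "$CHROOT/sys"
echo mount --rbind /dev "$CHROOT/dev"
```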

Zfs

If you want to take a snapshot of a single dataset in zfs you would do the following:

root #zfs snapshot <dataset>@<label you want to name this>

If you want to take a snapshot of every dataset under a particular dataset, you can do:

root #zfs snapshot -r <dataset>@<label you want to name this>

ZFS snapshots are read-only. If you want to create a writeable dataset, use the "clone" command on a snapshot. Example:

If you have a "tank/gentoo/chroot" dataset that you want to make clones from, first take a snapshot to save a read-only copy, then make as many clones from it as desired:

root #zfs snapshot tank/gentoo/chroot@testingBase
root #zfs clone tank/gentoo/chroot@testingBase tank/gentoo/container1
root #zfs clone tank/gentoo/chroot@testingBase tank/gentoo/container2
root #zfs clone tank/gentoo/chroot@testingBase tank/gentoo/container3

If after a while you decide that you want to delete the chroot, all of its snapshots, and all of its dependent clones, you can do that in one command:

root #zfs destroy -R tank/gentoo/chroot

Containers

LXC

https://github.com/globalcitizen/lxc-gentoo

https://github.com/specing/lxc-gentoo

See also

  • Stable request — the procedure for moving an ebuild from testing to stable.

References