h1. Setting up systemd-nspawn VMs

This page describes how to use systemd-nspawn to create VMs for development and testing. This page is a guide, *not* step-by-step instructions. *If you just copy+paste commands without actually reading the instructions, you will BREAK YOUR OWN NETWORKING and I will not be held responsible.*

{{toc}}

h2. One-time supervisor host setup

h3. Install systemd-nspawn and image build tools

<pre>sudo apt install systemd-container debootstrap
</pre>

@systemd-container@ packages systemd-nspawn and friends. @debootstrap@ is used to build VM images.

"Install Ansible":https://dev.arvados.org/projects/arvados/wiki/Hacking_prerequisites#Install-Ansible the same way we do for development. I'm fobbing you off to that page so you know what version of Ansible we're standardized on.

h3. Enable systemd network services

Unsurprisingly, systemd-nspawn integrates well with other systemd components. The easiest way to get your VMs networked is to install systemd's network services:

<pre>sudo systemctl enable --now systemd-networkd systemd-resolved
</pre>

Note that systemd-networkd only manages interfaces it has configuration for. On Debian the default configuration should play nice with NetworkManager. systemd-resolved and NetworkManager also cooperate.

If you refuse to do this, refer to the "Networking Options of systemd-nspawn":https://www.freedesktop.org/software/systemd/man/latest/systemd-nspawn.html#Networking%20Options to evaluate alternatives.

h3. NAT and firewall

systemd-networkd runs a DHCP server that provides private addresses to the virtual machines. You will need to configure your firewall to allow these DHCP requests, and to NAT traffic from those interfaces. These steps are specific to the host firewall; if yours isn't documented below, feel free to add it.

h4. ufw

For NAT, make sure these lines in @/etc/ufw/sysctl.conf@ are all set to @1@:

<pre>net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1
</pre>

If you changed any, restart ufw. Then these are the rules you need:

<pre><code class="sh">for iface in vb-+ ve-+ vz-+; do
  sudo ufw allow in on "$iface" proto udp to 0.0.0.0/0 port 67,68 comment "systemd-nspawn DHCP"
  sudo ufw route allow in on "$iface"
done
</code></pre>

h3. Filesystem

systemd-nspawn stores both images and containers under @/var/lib/machines@. It works with any filesystem, but if the filesystem is btrfs, it can optimize various operations with snapshots, etc. "Here's a blog post outlining some of the gains":https://idle.nprescott.com/2022/systemd-nspawn-and-btrfs.html. I would recommend any deployment, and especially production deployments, have a btrfs filesystem at @/var/lib/machines@. Since this is likely to grow large, a dedicated partition is a good idea too.

h3. Resolving VM names

You can configure your host system to resolve the names of running VMs so you can easily SSH into them, open them in your browser, write them in Ansible inventories, etc. Edit @/etc/nsswitch.conf@, find the @hosts@ line, and make sure that @mymachines@ appears before any @dns@ or @resolve@ entries. See "nss-mymachines(8)":https://www.freedesktop.org/software/systemd/man/latest/nss-mymachines.html.
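For example, on a stock Debian install with systemd-resolved enabled, the edited line might look like the one below. Your existing entries may differ; the important part is that @mymachines@ comes before @resolve@ and @dns@:

<pre>hosts: files mymachines resolve [!UNAVAIL=return] dns
</pre>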
h2. Build a systemd-nspawn container image

The Arvados source includes an Ansible playbook to create an image from scratch with @debootstrap@. Write this inventory file as @nspawn-image.yml@ and edit the vars as you like:

<pre><code class="yaml">ungrouped:
  vars:
    # The name of the VM image to create.
    image_name: "{{ debootstrap_suite }}"
    # The codename of the release to install.
    debootstrap_suite: stable
    # The mirror to install the release from.
    # The commented-out setting below is appropriate for Ubuntu.
    debootstrap_mirror: "http://deb.debian.org/debian"
    #debootstrap_mirror: "http://archive.ubuntu.com/ubuntu"
    # The name of the user account to create in the VM.
    # This sets it to the name of the user running Ansible.
    image_username: "{{ ansible_user_id }}"
    # SSH public key string or URL.
    image_authorized_keys: "FIXME"
    # A hash of the user's password. The default is no password.
    # See <https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module>
    image_passhash: "!"
    # Other settings for the created user.
    image_gecos: ""
    image_shell: /usr/bin/bash
  hosts:
    localhost: {}
</code></pre>

With your Ansible virtualenv activated, run:

<pre><code class="sh">ansible-playbook -K -i nspawn-image.yml arvados/tools/ansible/build-debian-nspawn-vm.yml
</code></pre>

If this succeeds, you have @/var/lib/machines/MACHINE@ with a base install and configuration.

h3. Consider Cloning

This is probably a good time to mention: you should think of these machine subdirectories as more like VM disks than Docker images. If you simply boot your new VM and start making changes to it, those changes will be permanent. If you want an ephemeral VM, you need to explicitly ask for that. Personally I prefer never to boot this bootstrapped VM directly. Instead I run @machinectl clone BASE_NAME MACHINE@, and then treat @BASE_NAME@ like an "image" that I never touch, and @MACHINE@ more like a traditional stateful VM.
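Here's a minimal sketch of that workflow, assuming a base image named @stable@ and a working VM named @dev1@ (substitute your own names):

<pre><code class="sh"># Clone the pristine base image into a new stateful VM, then boot it.
sudo machinectl clone stable dev1
sudo machinectl start dev1

# Alternatively, boot a throwaway snapshot of the base image;
# --ephemeral discards all changes when the container exits.
sudo systemd-nspawn --machine stable --ephemeral --boot
</code></pre>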
h2. Configure the VM

VMs are configured using the file at @/etc/systemd/nspawn/MACHINE.nspawn@. The defaults are pretty good and you don't have to write much. The main thing you'll want to do is tell it how to resolve DNS, and consider other networking:

<pre><code class="ini">[Exec]
ResolvConf=bind-uplink

[Network]
# If you want multiple VMs to be able to talk to each other,
# put them all in the same zone:
#Zone=YOURZONE

[Files]
# If you want to make things on the host available in the VM,
# do that here:
Bind=/dev/fuse
#BindReadOnly=/home/YOU/SUBDIR
</code></pre>

Refer to "systemd.nspawn":https://www.freedesktop.org/software/systemd/man/latest/systemd.nspawn.html for all the options.

h2. Privilege a Container

If you want to run FUSE, Docker, or Singularity inside your VM, that requires additional privileges. We have an Ansible playbook to automate that too. To grant privileges for all these services, with your Ansible virtualenv activated, run:

<pre><code class="sh">ansible-playbook -e container_name=MACHINE arvados/tools/ansible/privilege-nspawn-vm.yml
</code></pre>

You can exclude some privileges by setting @SERVICE_privileges=absent@. For example, if you don't intend to run Singularity in this VM:

<pre><code class="sh">ansible-playbook -e "container_name=MACHINE singularity_privileges=absent" arvados/tools/ansible/privilege-nspawn-vm.yml
</code></pre>

See the comments at the top of source:tools/ansible/privilege-nspawn-vm.yml for details.

h2. Interacting with VMs

"machinectl":https://www.freedesktop.org/software/systemd/man/latest/machinectl.html is the primary command to interact with both containers and the underlying disk images:

<pre><code class="sh">machinectl start MACHINE
machinectl stop MACHINE
machinectl shell YOU@MACHINE
machinectl clone MACHINE1 MACHINE2
machinectl remove MACHINE [MACHINE2 ...]
</code></pre>

Refer to the man page for full details. Note that running containers run under the <code>systemd-nspawn@MACHINE</code> systemd service, and you can interact with it using all the usual tools. (Try <code>journalctl -u systemd-nspawn@MACHINE</code>.)
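For example (with @MACHINE@ standing in for whatever name @machinectl list@ reports for your VM):

<pre><code class="sh">machinectl list                          # show all running VMs
machinectl status MACHINE                # runtime details for one VM
systemctl status systemd-nspawn@MACHINE  # the underlying systemd service
</code></pre>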