Setting up systemd-nspawn VMs

This page describes how to use systemd-nspawn to create VMs for development and testing. This page is a guide, not step-by-step instructions. If you just copy+paste commands without actually reading the instructions, you will BREAK YOUR OWN NETWORKING and I will not be held responsible.

One-time supervisor host setup

Install systemd-nspawn and image build tools

sudo apt install systemd-container debootstrap

systemd-container packages systemd-nspawn and friends. debootstrap is used to build VMs.

Install Ansible the same way we do for development. I'm fobbing you off to that page so you know what version of Ansible we're standardized on.

Enable systemd network services

Unsurprisingly, systemd-nspawn integrates well with other systemd components. The easiest way to get your VMs networked is to enable systemd's network services:

sudo systemctl enable --now systemd-networkd systemd-resolved

Note that systemd-networkd only manages interfaces it has been configured to manage. On Debian the default configuration should play nice with NetworkManager, and systemd-resolved cooperates with NetworkManager as well.
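Once both services are up, you can sanity-check which interfaces they manage. Both commands ship with systemd:

networkctl list
resolvectl status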

If you refuse to do this, refer to the Networking Options of systemd-nspawn to evaluate alternatives.

NAT and firewall

systemd-networkd runs a DHCP server that provides private addresses to the virtual machines. You will need to configure your firewall to allow these DHCP requests, and to NAT traffic from those interfaces. These steps are specific to the host firewall; if yours isn't documented below, feel free to add it.

ufw

For NAT, make sure these lines in /etc/ufw/sysctl.conf are all set to 1:

net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1

If you changed any, restart ufw. Then these are the rules you need:

for iface in vb-+ ve-+ vz-+; do
  sudo ufw allow in on "$iface" proto udp to 0.0.0.0/0 port 67,68 comment "systemd-nspawn DHCP"
  sudo ufw route allow in on "$iface"
done
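If the rules were added cleanly, they should show up when you list the firewall state:

sudo ufw status verbose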

Filesystem

systemd-nspawn stores both images and containers under /var/lib/machines. It works with any filesystem, but if the filesystem is btrfs, it can optimize various operations with snapshots, etc. Here's a blog post outlining some of the gains.

I recommend that any deployment, especially a production deployment, have a btrfs filesystem at /var/lib/machines. Since this directory is likely to grow large, a dedicated partition is a good idea too.
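As a sketch, assuming btrfs-progs is installed and /dev/sdX1 is a placeholder for a spare partition you can dedicate to this:

sudo mkfs.btrfs /dev/sdX1
echo '/dev/sdX1 /var/lib/machines btrfs defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /var/lib/machines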

Resolving VM names

You can configure your host system to resolve the names of running VMs so you can easily SSH into them, open them in your browser, write them in Ansible inventories, etc. Edit /etc/nsswitch.conf, find the hosts line, and make sure that mymachines appears before any dns or resolve entries. See nss-mymachines.
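For example, a hosts line that satisfies this might look like the following (adapted from the nss-mymachines man page; your other entries may differ):

hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns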

Build a systemd-nspawn container image

The Arvados source includes an Ansible playbook to create an image from scratch with debootstrap. Write this variables file as nspawn-image.yml and edit the values as you like:

# The name of the VM image to create.
image_name: "{{ debootstrap_suite }}" 
# The codename of the release to install.
debootstrap_suite: stable
# The mirror to install the release from.
# The commented-out setting below is appropriate for Ubuntu.
debootstrap_mirror: "http://deb.debian.org/debian" 
#debootstrap_mirror: "http://archive.ubuntu.com/ubuntu" 

# The name of the user account to create in the VM.
# This sets it to the name of the user running Ansible.
image_username: "{{ ansible_user_id }}" 
# SSH public key string or URL.
image_authorized_keys: "FIXME" 
# A hash of the user's password. The default is no password.
# See <https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module>
image_passhash: "!" 
# Other settings for the created user.
image_gecos: "" 
image_shell: /usr/bin/bash

With your Ansible virtualenv activated, run:

ansible-playbook -K -e @nspawn-image.yml arvados/tools/ansible/build-debian-nspawn-vm.yml

If this succeeds, you have /var/lib/machines/IMAGE_NAME, named for the image_name you configured, with a base install and configuration.
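You can confirm the new image is visible to the machine tools with:

machinectl list-images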

Consider Cloning

This is probably a good time to mention that you should think of these machine subdirectories more like VM disks than Docker images. If you simply boot your new VM and start making changes to it, those changes are permanent. If you want an ephemeral VM, you need to ask for that explicitly. Personally I prefer never to boot this bootstrapped VM directly. Instead I run machinectl clone BASE_NAME MACHINE, then treat BASE_NAME like an "image" that I never touch, and MACHINE more like a traditional stateful VM.
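For example, with an image built as stable (both names here are hypothetical):

machinectl clone stable dev1

From here on, dev1 is the MACHINE you configure and boot in the sections below, while stable stays pristine.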

Configure the VM

VMs are configured with the file /etc/systemd/nspawn/MACHINE.nspawn. The defaults are pretty good, so you don't have to write much. The main things you'll want to do are tell the VM how to resolve DNS and consider other networking options:

[Exec]
ResolvConf=bind-uplink

[Network]
# If you want multiple VMs to be able to talk to each other,
# put them all in the same zone:
#Zone=YOURZONE

[Files]
# If you want to make things on the host available in the VM,
# do that here:
Bind=/dev/fuse
#BindReadOnly=/home/YOU/SUBDIR

Refer to systemd.nspawn for all the options.
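The .nspawn file is read when the container starts, so restart a running VM after editing it:

machinectl stop MACHINE
machinectl start MACHINE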

Privilege a Container

If you want to run FUSE, Docker, or Singularity inside your VM, that requires additional privileges. We have an Ansible playbook to automate that too. To grant privileges for all these services, with your Ansible virtualenv activated, run:

ansible-playbook -e container_name=MACHINE arvados/tools/ansible/privilege-nspawn-vm.yml

You can exclude some privileges by setting SERVICE_privileges=absent. For example, if you don't intend to run Singularity in this VM:

ansible-playbook -e "container_name=MACHINE singularity_privileges=absent" arvados/tools/ansible/privilege-nspawn-vm.yml

See the comments at the top of source:tools/ansible/privilege-nspawn-vm.yml for details.

Interacting with VMs

machinectl is the primary command to interact with both containers and the underlying disk images:

machinectl start MACHINE
machinectl stop MACHINE
machinectl shell YOU@MACHINE

machinectl clone MACHINE1 MACHINE2
machinectl remove MACHINE [MACHINE2 ...]

Refer to the man page for full details. Note that each running container runs under the systemd-nspawn@MACHINE systemd service, which you can interact with using all the usual tools. (Try journalctl -u systemd-nspawn@MACHINE.)
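For example, to inspect the service unit or arrange for a VM to start automatically at boot:

sudo systemctl status systemd-nspawn@MACHINE
sudo machinectl enable MACHINE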
