Crunch2 installation » History » Revision 12

Tom Clegg, 08/02/2016 02:08 PM


Crunch2 installation

(DRAFT -- when ready, this will move to doc.arvados.org→install)

Set up a crunch-dispatch service

Currently, dispatching containers via SLURM is supported.

Install crunch-dispatch-slurm on a node that can submit SLURM jobs. This can be any node appropriately configured to connect to the SLURM controller node.

sudo apt-get install crunch-dispatch-slurm

Create a privileged Arvados API token for use by the dispatcher. If you have multiple dispatch processes, you should give each one a different token.

apiserver:~$ cd /var/www/arvados-api/current
apiserver:/var/www/arvados-api/current$ sudo -u webserver-user RAILS_ENV=production bundle exec script/create_superuser_token.rb
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz

Save the token on the dispatch node, in /etc/sv/crunch-dispatch-slurm/env/ARVADOS_API_TOKEN.
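The steps above can be sketched as follows. This is only a sketch: the directory layout follows the runit env-dir convention used on this page, and the token value is a placeholder for the one printed by create_superuser_token.rb.

```shell
# Create the runit env directory and store the token in it.
# The token below is a placeholder -- substitute the real one.
sudo mkdir -p /etc/sv/crunch-dispatch-slurm/env
printf '%s' "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" \
  | sudo tee /etc/sv/crunch-dispatch-slurm/env/ARVADOS_API_TOKEN >/dev/null
# Keep the token unreadable to other users.
sudo chmod 600 /etc/sv/crunch-dispatch-slurm/env/ARVADOS_API_TOKEN
```

chpst -e (used in the run script below) exports each file in the env directory as an environment variable named after the file, so the file must contain only the token itself.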

Example runit script (/etc/sv/crunch-dispatch-slurm/run):

#!/bin/sh
set -e
exec 2>&1

export ARVADOS_API_HOST=uuid_prefix.your.domain

exec chpst -e ./env -u crunch crunch-dispatch-slurm

Example runit logging script (/etc/sv/crunch-dispatch-slurm/log/run):

#!/bin/sh
set -e
[ -d main ] || mkdir main
exec svlogd -tt ./main

Ensure the crunch user on the dispatch node can run Docker containers on SLURM compute nodes via srun or sbatch. Depending on your SLURM installation, this may require that the crunch user exist, with the same UID, GID, and home directory, on the dispatch node and on all SLURM compute nodes.

For example, this should print "OK" (possibly after some extra status/debug messages from SLURM and Docker):

crunch@dispatch:~$ srun -N1 docker run busybox echo OK
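A quick way to check account consistency is to compare the crunch account's identity on the dispatch node with what a compute node reports. This is a sketch only; it assumes it is run on the dispatch node and that srun reaches a representative compute node.

```shell
# Compare the crunch account's uid/gid on the dispatch node with what a
# compute node reports through SLURM. Any difference here is likely to
# break jobs submitted as the crunch user.
local_id=$(id crunch)
remote_id=$(srun -N1 id crunch)
if [ "$local_id" = "$remote_id" ]; then
  echo "crunch account consistent: $local_id"
else
  echo "MISMATCH: dispatch [$local_id] vs compute [$remote_id]" >&2
  exit 1
fi
```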

Install crunch-run on all compute nodes

sudo apt-get install crunch-run

Enable cgroup accounting on all compute nodes

(This requirement isn't new for crunch2/containers, but it seems to be a FAQ. The Docker install guide mentions it's optional and performance-degrading, so it's not too surprising if people skip it. Perhaps we should say why/when it's a good idea to enable it?)

Check https://docs.docker.com/engine/installation/linux/ for instructions specific to your distribution.

For example, on Ubuntu:
  1. Update /etc/default/grub to include:
    GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1" 
    
  2. sudo update-grub
  3. Reboot
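After the reboot, it is worth confirming that the running kernel actually picked up the new parameters. A sketch, checking for the same parameter names added to /etc/default/grub above:

```shell
# Confirm the running kernel was booted with cgroup memory/swap accounting.
if grep -q 'cgroup_enable=memory' /proc/cmdline \
   && grep -q 'swapaccount=1' /proc/cmdline; then
  echo "cgroup memory accounting enabled"
else
  echo "kernel parameters missing -- check grub config and reboot" >&2
fi
```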

Configure Docker

Unchanged from current docs.

Configure SLURM cgroups

In setups where SLURM uses cgroups to impose resource limits, Crunch can be configured to run its Docker containers inside the cgroup assigned by SLURM. (With the default configuration, Docker runs all containers in the "docker" cgroup, which means SLURM's resource limits apply only to the crunch-run and arv-mount processes, while the container itself has a separate set of limits imposed by Docker.)

To configure SLURM to use cgroups for resource limits, add to /etc/slurm-llnl/slurm.conf:

TaskPlugin=task/cgroup

Add to /etc/slurm-llnl/cgroup.conf:

CgroupMountpoint=/sys/fs/cgroup
ConstrainCores=yes
ConstrainDevices=yes
ConstrainRAMSpace=yes
ConstrainSwapSpace=yes

(See slurm.conf(5) and cgroup.conf(5) for more information.)

Add the -cgroup-parent-subsystem=memory option to /etc/arvados/crunch-dispatch-slurm/config.json on the dispatch node:

{
  "CrunchRunCommand": ["crunch-run", "-cgroup-parent-subsystem=memory"]
}

The choice of subsystem ("memory" in this example) must correspond to one of the resource types enabled in cgroup.conf. Limits for other resource types will also be respected: the specified subsystem is singled out only to let Crunch determine the name of the cgroup provided by SLURM.
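To see which cgroup a SLURM job step actually lands in, the relevant line of /proc/self/cgroup can be inspected from inside a job. A sketch; the /slurm/... path shown in the test below is hypothetical and depends on your cgroup.conf:

```shell
# Print the memory-subsystem cgroup path of the current process.
# Run inside a SLURM job step (e.g. via srun) to see the cgroup that
# SLURM's limits apply to. /proc/self/cgroup lines have the form
# hierarchy-id:subsystems:path (cgroup v1).
awk -F: '$2 ~ /(^|,)memory(,|$)/ {print $3}' /proc/self/cgroup
```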

Restart crunch-dispatch-slurm to load the new configuration. Under runit, sending TERM stops the process and the supervisor immediately starts it again with the new settings.

root@dispatch:~# sv term /etc/sv/crunch-dispatch-slurm

Test the dispatcher

On the dispatch node, monitor the crunch-dispatch logs.

dispatch-node$ tail -F /etc/sv/crunch-dispatch-slurm/log/main/current

(TODO: Add example startup logs from crunch-dispatch-slurm)

On a shell VM, install a Docker image for testing.

user@shellvm:~$ arv keep docker busybox

(TODO: Add example log/debug messages)

On a shell VM, run a trivial container.

user@shellvm:~$ arv container_request create --container-request '{
  "name":            "test",
  "state":           "Committed",
  "priority":        1,
  "container_image": "busybox",
  "command":         ["true"],
  "output_path":     "/out",
  "mounts": {
    "/out": {
      "kind":        "tmp",
      "capacity":    1000
    }
  }
}'
Measures of success:
  • Dispatcher log entries will indicate it has submitted a SLURM job. (TODO: Add example logs.)
  • Before the container finishes, SLURM's squeue command will show the new job in the list of queued/running jobs. (TODO: Add squeue output, showing how containers look there.)
  • After the container finishes, arv container list --limit 1 will indicate the outcome:
    {
      ...
      "exit_code": 0,
      ...
      "state": "Complete",
      ...
    }
    
