Crunch2 installation¶
(DRAFT -- when ready, this will move to doc.arvados.org→install)
Set up a crunch-dispatch service¶
Currently, dispatching containers via SLURM is supported.
Install crunch-dispatch-slurm on a node that can submit SLURM jobs. This can be any node appropriately configured to connect to the SLURM controller node.
$ sudo apt-get install crunch-dispatch-slurm
Create a privileged Arvados API token for use by the dispatcher. If you have multiple dispatch processes, you should give each one a different token.
apiserver:~$ cd /var/www/arvados-api/current
apiserver:/var/www/arvados-api/current$ sudo -u webserver-user RAILS_ENV=production bundle exec script/create_superuser_token.rb
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
Configure crunch-dispatch-slurm to authenticate to your API server using the token you just generated:
$ sudo mkdir -p /etc/arvados
$ sudo install -d -o root -g crunch -m 0770 /etc/arvados/crunch-dispatch-slurm
Edit /etc/arvados/crunch-dispatch-slurm/config.json to look like this:
{
  "Client": {
    "APIHost": "zzzzz.arvadosapi.com",
    "AuthToken": "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"
  }
}
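Because this file contains a privileged token, you may want to restrict who can read it. A minimal sketch, assuming the dispatcher runs as (or in the group of) the same crunch user used for the directory above:
$ sudo chown root:crunch /etc/arvados/crunch-dispatch-slurm/config.json
$ sudo chmod 0640 /etc/arvados/crunch-dispatch-slurm/config.json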
Ensure the crunch user on the dispatch node can run Docker containers on SLURM compute nodes via srun or sbatch. Depending on your SLURM installation, this may require that the crunch user exist -- and have the same UID, GID, and home directory -- on the dispatch node and all SLURM compute nodes.
For example, this should print "OK" (possibly after some extra status/debug messages from SLURM and Docker):
crunch@dispatch:~$ srun -N1 docker run busybox echo OK
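If that test fails with user-lookup or permission errors, one quick sanity check (a sketch, not a full diagnosis) is to compare the crunch account locally and on a compute node and confirm the UID/GID output matches:
crunch@dispatch:~$ id
crunch@dispatch:~$ srun -N1 id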
Install crunch-run on all compute nodes¶
$ sudo apt-get install crunch-run
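On a cluster with more than a handful of nodes you will probably do this with your configuration management tool; as a minimal sketch, assuming passwordless ssh and sudo and the hypothetical hostnames compute0 through compute3:
$ for node in compute0 compute1 compute2 compute3; do
    ssh "$node" sudo apt-get -y install crunch-run
  done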
Enable cgroup accounting on all compute nodes¶
(This requirement isn't new for crunch2/containers, but it seems to be a FAQ. The Docker install guide mentions it's optional and performance-degrading, so it's not too surprising if people skip it. Perhaps we should say why/when it's a good idea to enable it?)
Check https://docs.docker.com/engine/installation/linux/ for instructions specific to your distribution.
For example, on Ubuntu:
- Update /etc/default/grub to include: GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
- Run sudo update-grub
- Reboot
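After the reboot, one way to verify that memory and swap accounting are active (a hedged check; these paths apply to cgroup v1 systems, which is what Docker used at the time of writing):
$ grep memory /proc/cgroups
$ ls /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes
The first command should show the memory controller as enabled; the memory.memsw.* files are typically present only when swap accounting is on.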
Configure Docker¶
Unchanged from current docs.
Configure SLURM cgroups¶
In setups where SLURM uses cgroups to impose resource limits, Crunch can be configured to run its Docker containers inside the cgroup assigned by SLURM. (With the default configuration, Docker runs all containers in the "docker" cgroup, which means the SLURM resource limits only apply to the crunch-run and arv-mount processes, while the container itself has a separate set of limits imposed by Docker.)
To configure SLURM to use cgroups for resource limits, add to /etc/slurm-llnl/slurm.conf:
TaskPlugin=task/cgroup
Add to /etc/slurm-llnl/cgroup.conf:
CgroupMountpoint=/sys/fs/cgroup
ConstrainCores=yes
ConstrainDevices=yes
ConstrainRAMSpace=yes
ConstrainSwapSpace=yes
(See slurm.conf(5) and cgroup.conf(5) for more information.)
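Once the SLURM daemons have been restarted with these settings, you can confirm the controller picked up the task plugin, for example (output formatting varies by SLURM version):
$ scontrol show config | grep TaskPlugin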
Add the -cgroup-parent-subsystem=memory option to /etc/arvados/crunch-dispatch-slurm/config.json on the dispatch node:
{ "CrunchRunCommand": ["crunch-run", "-cgroup-parent-subsystem=memory"] }
The choice of subsystem ("memory" in this example) must correspond to one of the resource types enabled in cgroup.conf. Limits for other resource types will also be respected: the specified subsystem is singled out only to let Crunch determine the name of the cgroup provided by SLURM.
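Putting this together with the Client section above, the whole /etc/arvados/crunch-dispatch-slurm/config.json might look like the following sketch (only the keys shown in this guide; your file may contain others):
{
  "Client": {
    "APIHost": "zzzzz.arvadosapi.com",
    "AuthToken": "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"
  },
  "CrunchRunCommand": ["crunch-run", "-cgroup-parent-subsystem=memory"]
}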
Restart crunch-dispatch-slurm to load the new configuration.
root@dispatch:~# sv term /etc/sv/crunch-dispatch-slurm
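On hosts where the dispatcher is managed by systemd rather than runit (the journalctl example below assumes a crunch-dispatch-slurm.service unit), the equivalent would be:
root@dispatch:~# systemctl restart crunch-dispatch-slurm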
Test the dispatcher¶
On the dispatch node, start monitoring the crunch-dispatch-slurm logs:
dispatch-node$ sudo journalctl -fu crunch-dispatch-slurm.service
On a shell VM, run a trivial container.
user@shellvm:~$ arv container_request create --container-request '{
"name": "test",
"state": "Committed",
"priority": 1,
"container_image": "arvados/jobs:latest",
"command": ["echo", "Hello, Crunch!"],
"output_path": "/out",
"mounts": {
"/out": {
"kind": "tmp",
"capacity": 1000
}
},
"runtime_constraints": {
"vcpus": 1,
"ram": 8388608
}
}'
Measures of success:
- Dispatcher log entries will indicate it has submitted a SLURM job.
2016-08-05_13:52:54.73665 2016/08/05 13:52:54 Monitoring container zzzzz-dz642-hdp2vpu9nq14tx0 started
2016-08-05_13:53:04.54148 2016/08/05 13:53:04 About to submit queued container zzzzz-dz642-hdp2vpu9nq14tx0
2016-08-05_13:53:04.55305 2016/08/05 13:53:04 sbatch succeeded: Submitted batch job 8102
- Before the container finishes, SLURM's squeue command will show the new job in the list of queued/running jobs.
$ squeue --long
Fri Aug 5 13:57:50 2016
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)
   8103   compute zzzzz-dz   crunch  RUNNING       1:56 UNLIMITED      1 compute0
The job's name corresponds to the UUID of the container that fulfills this container request. You can get more information about it by running, e.g., scontrol show job Name=<UUID>.
- When the container finishes, the dispatcher will log that, with the final result:
2016-08-05_13:53:14.68780 2016/08/05 13:53:14 Container zzzzz-dz642-hdp2vpu9nq14tx0 now in state "Complete" with locked_by_uuid ""
2016-08-05_13:53:14.68782 2016/08/05 13:53:14 Monitoring container zzzzz-dz642-hdp2vpu9nq14tx0 finished
- After the container finishes, arv container list --limit 1 will indicate the outcome:
{
  ...
  "exit_code":0,
  "log":"a01df2f7e5bc1c2ad59c60a837e90dc6+166",
  "output":"d41d8cd98f00b204e9800998ecf8427e+0",
  "state":"Complete",
  ...
}
You can use standard Keep tools to view the job's output and logs from their corresponding fields. For example, to see the logs:
$ arv keep ls a01df2f7e5bc1c2ad59c60a837e90dc6+166
./crunch-run.txt
./stderr.txt
./stdout.txt
$ arv keep get a01df2f7e5bc1c2ad59c60a837e90dc6+166/stdout.txt
2016-08-05T13:53:06.201011Z Hello, Crunch!