
h1. Crunch2 installation 

 (DRAFT -- when ready, this will move to doc.arvados.org→install) 

 {{toc}} 

 

 h2. Set up a crunch-dispatch service 

 Currently, dispatching containers via SLURM is supported. 

 Install crunch-dispatch-slurm on a node that can submit SLURM jobs. This can be any node appropriately configured to connect to the SLURM controller node. 

 <pre><code class="shell"> 
 $ sudo apt-get install crunch-dispatch-slurm 
 </code></pre> 
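
If you want to confirm that this node really can talk to the SLURM controller, the standard SLURM client commands are a quick check (this assumes the SLURM client tools are already installed and configured on this node):

<pre><code class="shell">
$ sinfo
$ squeue
</code></pre>

If either command hangs or reports an error, fix the SLURM client configuration before continuing.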

 Create a privileged Arvados API token for use by the dispatcher. If you have multiple dispatch processes, you should give each one a different token. 

 <pre><code class="shell"> 
 apiserver:~$ cd /var/www/arvados-api/current 
 apiserver:/var/www/arvados-api/current$ sudo -u webserver-user RAILS_ENV=production bundle exec script/create_superuser_token.rb 
 zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz 
 </code></pre> 

Make sure crunch-dispatch-slurm runs with the @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ environment variables set, using the token you just generated:

<pre><code class="shell">
$ sudo mkdir /etc/systemd/system/crunch-dispatch-slurm.service.d
$ sudo install -m 0600 /dev/null /etc/systemd/system/crunch-dispatch-slurm.service.d/api.conf
$ sudo editor /etc/systemd/system/crunch-dispatch-slurm.service.d/api.conf
</code></pre>

Edit the file to look like this:

<pre>
[Service]
Environment=ARVADOS_API_HOST=zzzzz.arvadosapi.com
Environment=ARVADOS_API_TOKEN=zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
</pre>
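
Systemd only picks up drop-in files when units are (re)loaded, so after editing @api.conf@ you will typically need to reload unit files and restart the service (this assumes the package installed a @crunch-dispatch-slurm@ systemd unit):

<pre><code class="shell">
$ sudo systemctl daemon-reload
$ sudo systemctl restart crunch-dispatch-slurm
</code></pre>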

 Ensure the @crunch@ user on the dispatch node can run Docker containers on SLURM compute nodes via @srun@ or @sbatch@. Depending on your SLURM installation, this may require that the @crunch@ user exist -- and have the same UID, GID, and home directory -- on the dispatch node and all SLURM compute nodes. 

For example, this should print "OK" (possibly after some extra status/debug messages from SLURM and Docker):

 <pre> 
 crunch@dispatch:~$ srun -N1 docker run busybox echo OK 
 </pre> 
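
As a further check of the UID/GID requirement, compare @id@ output on the dispatch node and on a compute node (a quick sketch, assuming the dispatch user is @crunch@):

<pre><code class="shell">
crunch@dispatch:~$ id
crunch@dispatch:~$ srun -N1 id
</code></pre>

Both commands should report the same uid, gid, and group list.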


 


 h2. Install crunch-run on all compute nodes 

 <pre><code class="shell"> 
 sudo apt-get install crunch-run 
 </code></pre> 
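
To spot-check that the binary is actually reachable from SLURM jobs, something like this should print the installed path on a compute node (the exact path depends on the package; @-N1@ only tests a single node):

<pre><code class="shell">
crunch@dispatch:~$ srun -N1 which crunch-run
</code></pre>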

 h2. Enable cgroup accounting on all compute nodes 

 (This requirement isn't new for crunch2/containers, but it seems to be a FAQ. The Docker install guide mentions it's optional and performance-degrading, so it's not too surprising if people skip it. Perhaps we should say why/when it's a good idea to enable it?) 

 Check https://docs.docker.com/engine/installation/linux/ for instructions specific to your distribution. 

 For example, on Ubuntu: 
 # Update @/etc/default/grub@ to include: <pre> 
 GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1" 
 </pre> 
 # @sudo update-grub@ 
 # Reboot 
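
After rebooting, you can verify that memory and swap accounting are active. With the usual cgroup v1 layout, the @memory@ line in @/proc/cgroups@ should show @1@ in the @enabled@ column, and the @memory.memsw.*@ files only appear when swap accounting is on (paths may differ on your distribution):

<pre><code class="shell">
$ grep memory /proc/cgroups
$ ls /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes
</code></pre>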

 h2. Configure Docker 

 Unchanged from current docs. 

 h2. Configure SLURM cgroups 

In setups where SLURM uses cgroups to impose resource limits, Crunch can be configured to run its Docker containers inside the cgroup assigned by SLURM. (With the default configuration, Docker runs all containers in the "docker" cgroup, which means the SLURM resource limits only apply to the crunch-run and arv-mount processes, while the container itself has a separate set of limits imposed by Docker.)

 To configure SLURM to use cgroups for resource limits, add to @/etc/slurm-llnl/slurm.conf@: 

 <pre> 
 TaskPlugin=task/cgroup 
 </pre> 

 Add to @/etc/slurm-llnl/cgroup.conf@: 

 <pre> 
 CgroupMountpoint=/sys/fs/cgroup 
 ConstrainCores=yes 
 ConstrainDevices=yes 
 ConstrainRAMSpace=yes 
 ConstrainSwapSpace=yes 
 </pre> 

 (See slurm.conf(5) and cgroup.conf(5) for more information.) 
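
SLURM daemons read these files at startup, so after changing @slurm.conf@ and @cgroup.conf@ you will generally need to restart them on the controller and compute nodes. The service names depend on your packaging (the Debian/Ubuntu @slurm-llnl@ packages may use a single @slurm-llnl@ service); a typical sequence is:

<pre><code class="shell">
controller$ sudo systemctl restart slurmctld
compute$ sudo systemctl restart slurmd
</code></pre>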

 Add the @-cgroup-parent-subsystem=memory@ option to @/etc/arvados/crunch-dispatch-slurm/config.json@ on the dispatch node: 

 <pre> 
 { 
   "CrunchRunCommand": ["crunch-run", "-cgroup-parent-subsystem=memory"] 
 } 
 </pre> 

 The choice of subsystem ("memory" in this example) must correspond to one of the resource types enabled in @cgroup.conf@. Limits for other resource types will also be respected: the specified subsystem is singled out only to let Crunch determine the name of the cgroup provided by SLURM. 
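
To see what cgroup SLURM actually assigns to a job -- which is what crunch-run looks up via the chosen subsystem -- you can inspect @/proc/self/cgroup@ from inside a job step. The path shown below is typical of SLURM's cgroup plugin, but the hierarchy numbers and layout vary:

<pre><code class="shell">
crunch@dispatch:~$ srun -N1 cat /proc/self/cgroup | grep memory
9:memory:/slurm/uid_1000/job_8102/step_0
</code></pre>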

 Restart crunch-dispatch-slurm to load the new configuration. 

 <pre> 
dispatch-node$ sudo systemctl restart crunch-dispatch-slurm
 </pre> 

 h2. Test the dispatcher 

 On the dispatch node, start monitoring the crunch-dispatch-slurm logs: 

 <pre><code class="shell"> 
 dispatch-node$ sudo journalctl -fu crunch-dispatch-slurm.service 
 </code></pre> 

 On a shell VM, run a trivial container. 

 <pre><code class="shell"> 
 user@shellvm:~$ arv container_request create --container-request '{ 
   "name":              "test", 
   "state":             "Committed", 
   "priority":          1, 
   "container_image": "arvados/jobs:latest", 
   "command":           ["echo", "Hello, Crunch!"], 
   "output_path":       "/out", 
   "mounts": { 
     "/out": { 
       "kind":          "tmp", 
       "capacity":      1000 
     } 
   }, 
   "runtime_constraints": { 
     "vcpus": 1, 
     "ram": 8388608 
   } 
 }' 
 </code></pre> 

 Measures of success: 
 * Dispatcher log entries will indicate it has submitted a SLURM job. 
   <pre>2016-08-05_13:52:54.73665 2016/08/05 13:52:54 Monitoring container zzzzz-dz642-hdp2vpu9nq14tx0 started 
 2016-08-05_13:53:04.54148 2016/08/05 13:53:04 About to submit queued container zzzzz-dz642-hdp2vpu9nq14tx0 
 2016-08-05_13:53:04.55305 2016/08/05 13:53:04 sbatch succeeded: Submitted batch job 8102 
 </pre> 
 * Before the container finishes, SLURM's @squeue@ command will show the new job in the list of queued/running jobs. 
   <pre>$ squeue --long 
 Fri Aug    5 13:57:50 2016 
   JOBID PARTITION       NAME       USER      STATE         TIME TIMELIMIT    NODES NODELIST(REASON) 
    8103     compute zzzzz-dz     crunch    RUNNING         1:56 UNLIMITED        1 compute0 
</pre> The job's name corresponds to the UUID of the container that fulfills this container request. You can get more information about it by running, e.g., @scontrol show job Name=<UUID>@.
 * When the container finishes, the dispatcher will log that, with the final result: 
   <pre>2016-08-05_13:53:14.68780 2016/08/05 13:53:14 Container zzzzz-dz642-hdp2vpu9nq14tx0 now in state "Complete" with locked_by_uuid "" 
 2016-08-05_13:53:14.68782 2016/08/05 13:53:14 Monitoring container zzzzz-dz642-hdp2vpu9nq14tx0 finished 
 </pre> 
 * After the container finishes, @arv container list --limit 1@ will indicate the outcome: <pre> 
 { 
  ... 
  "exit_code":0, 
  "log":"a01df2f7e5bc1c2ad59c60a837e90dc6+166", 
  "output":"d41d8cd98f00b204e9800998ecf8427e+0", 
  "state":"Complete", 
  ... 
 } 
</pre> You can use standard Keep tools to view the job's output and logs from their corresponding fields. For example, to see the logs:
   <pre>$ arv keep ls a01df2f7e5bc1c2ad59c60a837e90dc6+166 
 ./crunch-run.txt 
 ./stderr.txt 
 ./stdout.txt 
 $ arv keep get a01df2f7e5bc1c2ad59c60a837e90dc6+166/stdout.txt 
 2016-08-05T13:53:06.201011Z Hello, Crunch! 
 </pre>