Bug #22185

fix tordo compute image to support cgroup limits with singularity

Added by Peter Amstutz 19 days ago. Updated 13 days ago.

Status: Resolved
Priority: Normal
Assigned To: Tom Clegg
Category: Deployment
Story points: -

Subtasks 1 (0 open, 1 closed)

Task #22195: Review 22185-singularity-cgroups-v1 (Resolved, Tom Clegg, 10/12/2024)
#1

Updated by Tom Clegg 19 days ago

  • Assigned To set to Tom Clegg
#2

Updated by Peter Amstutz 19 days ago

Tom will figure out exactly what is wrong and how to fix it, and will work with ops to update Packer and deploy a new image if necessary.

#3

Updated by Tom Clegg 18 days ago

tordo's compute nodes run cgroups in "hybrid" mode: the cgroup2 filesystem is mounted at /sys/fs/cgroup/unified alongside the v1 hierarchies, so both interfaces are available. But they fail crunch-run's test for cgroups v2 memory/cpu limit support because cgroup.controllers is empty:

root@ip-10-253-254-63:~# grep cgroup2 /proc/mounts 
cgroup2 /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0
root@ip-10-253-254-63:~# cat /sys/fs/cgroup/unified/cgroup.controllers 

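(The empty output above is the point: the cgroup2 mount advertises no controllers.) For illustration, here is a minimal Go sketch of a probe of this general shape, assuming the check works roughly like this rather than being crunch-run's actual code: find a cgroup2 mount in /proc/mounts and read the controllers it advertises.

package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cgroup2Controllers returns the controllers advertised by the first
// cgroup2 mount listed in /proc/mounts, or nil if there is none.
func cgroup2Controllers() ([]string, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return nil, err
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 3 && fields[2] == "cgroup2" {
			buf, err := os.ReadFile(filepath.Join(fields[1], "cgroup.controllers"))
			if err != nil {
				return nil, err
			}
			// On these nodes cgroup.controllers is empty, so this
			// returns an empty slice and the support check fails.
			return strings.Fields(string(buf)), nil
		}
	}
	return nil, scanner.Err()
}

func main() {
	controllers, err := cgroup2Controllers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cgroup2 controllers:", controllers)
}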
Singularity is in fact capable of enforcing limits:

root@ip-10-253-254-63:~# singularity exec --containall --cleanenv --pwd= --memory 123456789 --cpus 1  docker://busybox:uclibc echo ok
INFO:    Using cached SIF image
ok
root@ip-10-253-254-63:~# singularity exec --containall --cleanenv --pwd= --memory 1234 --cpus 1  docker://busybox:uclibc echo ok
INFO:    Using cached SIF image
Killed

AFAICT this means Singularity is using the cgroups v1 interface:

root@ip-10-253-254-63:~# grep -Ew 'memory|cpu' /proc/self/cgroup 
6:cpu,cpuacct:/user.slice
4:memory:/user.slice/user-1000.slice/session-5.scope

Per the Singularity docs: "If your system is using cgroups v1 then you can only use the CLI resource limit flags or --apply-cgroups when running containers as the root user."

So, when the cgroups v2 support check fails and we're running as root, we can still enable limits based on cgroups v1 support, provided the relevant controller names appear in /proc/self/cgroup.
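Here's a rough Go sketch of that fallback; the function names and overall shape are illustrative assumptions, not crunch-run's actual API:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// cgroup1HasControllers reports whether every named controller appears
// on a cgroups v1 hierarchy in /proc/self/cgroup. Each line looks like
// "hierarchy-ID:controller-list:cgroup-path"; the v2 entry has
// hierarchy-ID 0 and an empty controller list, so it never matches.
func cgroup1HasControllers(want ...string) bool {
	f, err := os.Open("/proc/self/cgroup")
	if err != nil {
		return false
	}
	defer f.Close()
	found := map[string]bool{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		parts := strings.SplitN(scanner.Text(), ":", 3)
		if len(parts) < 3 || parts[0] == "0" {
			continue
		}
		// e.g. "6:cpu,cpuacct:/user.slice" yields cpu and cpuacct.
		for _, c := range strings.Split(parts[1], ",") {
			found[c] = true
		}
	}
	for _, c := range want {
		if !found[c] {
			return false
		}
	}
	return true
}

// canEnforceLimits decides whether to pass --memory/--cpus to
// singularity: either the cgroups v2 check passed, or we are running
// as root and the v1 controllers are present (per the docs quoted above).
func canEnforceLimits(cgroups2OK bool) bool {
	return cgroups2OK || (os.Getuid() == 0 && cgroup1HasControllers("memory", "cpu"))
}

func main() {
	fmt.Println("can enforce limits:", canEnforceLimits(false))
}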

#4

Updated by Tom Clegg 18 days ago

  • Status changed from New to In Progress
#5

Updated by Peter Amstutz 18 days ago

Isn't cgroups v1 on its way out, though? Can Singularity use cgroups v2? Why is /sys/fs/cgroup/unified/cgroup.controllers empty?

#6

Updated by Tom Clegg 18 days ago

Yes, cgroups v1 is on its way out, and yes, Singularity can use cgroups v2.

I think I've found the answer to why /sys/fs/cgroup/unified/cgroup.controllers is empty, and it is indeed only an issue when cgroups v2 is used in "hybrid" mode, i.e., when the v1 hierarchies are also mounted.
  • From the 'Mounting' section of the kernel cgroup docs:
    • "A controller can be moved across hierarchies only after the controller is no longer referenced in its current hierarchy."
    • "During transition to v2, system management software might still automount the v1 cgroup filesystem and so hijack all controllers during boot"
  • Restated more pointedly in a Stack Overflow answer (the sketch after this list shows the same binding on a live node):
    • "cgroup controllers can only be mounted in one hierarchy (v1 or v2). If you have a controller mounted on a legacy v1 hierarchy, then it won't show up in the cgroup2 hierarchy"
#8

Updated by Lucas Di Pentima 17 days ago

LGTM, thanks.

#9

Updated by Tom Clegg 16 days ago

  • Status changed from In Progress to Resolved