Idea #11139


[Node manager] Expected MemTotal for each cloud node size

Added by Peter Amstutz about 7 years ago. Updated about 7 years ago.

Status: Resolved
Priority: Normal
Assigned To: Lucas Di Pentima
Category: -
Target version: 2017-03-15 sprint
Start date: 03/07/2017
Due date: -
Story points: 0.5

Description

There's a discrepancy between the RAM advertised for a cloud VM size, which node manager uses to decide what size node to boot for a job, and the amount of memory actually available on the booted node. A job that falls into this "donut hole" can never run: its request is larger than the memory actually available, but node manager won't boot a bigger node because, going by the advertised size, it believes the job is already satisfied. For example, a job requesting 3500 MiB fits within the 3584 MiB advertised for Standard_D1_v2, yet the booted node only reports about 3440 MiB, so the job never gets scheduled.

tetron@compute3.c97qk:/usr/local/share/arvados-compute-ping-controller.d$ awk '($1 == "MemTotal:"){print ($2 / 1024)}' </proc/meminfo
3440.54
df -m /tmp | perl -e '
> my $index = index(<>, " 1M-blocks ");
> substr(<>, 0, $index + 10) =~ / (\d+)$/;
> print "$1\n";
> '
51170
tetron@compute3.c97qk:/usr/local/share/arvados-compute-ping-controller.d$ sinfo -n compute3 --format "%c %m %d" 
CPUS MEMORY TMP_DISK
1 3440 51169
>>> szd["Standard_D1_v2"]
<NodeSize: id=Standard_D1_v2, name=Standard_D1_v2, ram=3584 disk=50 bandwidth=0 price=0 driver=Azure Virtual machines ...>
>>> 

For Standard_D1_v2 there is a ~144 MiB discrepancy between the advertised RAM size and the amount of RAM considered available by Linux.

CPUS MEMORY TMP_DISK
2 6968 102344
<NodeSize: id=Standard_D2_v2, name=Standard_D2_v2, ram=7168 disk=100 bandwidth=0 price=0 driver=Azure Virtual machines ...>

For Standard_D2_v2 it is 200 MiB.

Based on discussion: node manager should reduce each node size's RAM by 5% from the "sticker value" in the ServerCalculator (jobqueue.py).

The scale factor should be settable in the configuration file.
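
To illustrate the intent, here is a minimal sketch of the proposed discounting. The class and function names below are hypothetical, not the actual jobqueue.py API; only the 0.95 factor and the Azure sizes come from this ticket.

# Illustrative sketch only: ScaledNodeSize and cheapest_size_for are
# hypothetical names, not the real arvados-node-manager jobqueue.py API.

class ScaledNodeSize(object):
    """Wrap a cloud NodeSize, discounting its advertised RAM."""
    def __init__(self, size, node_mem_scaling=0.95):
        self.id = size.id
        self.name = size.name
        # e.g. 3584 * 0.95 = 3404 MiB for Standard_D1_v2, safely below
        # the ~3440 MiB that Linux actually reports on that size.
        self.ram = int(size.ram * node_mem_scaling)
        self.disk = size.disk
        self.price = size.price

def cheapest_size_for(job_constraints, sizes, node_mem_scaling=0.95):
    """Pick the cheapest size whose discounted RAM satisfies the job."""
    want_ram = job_constraints.get('min_ram_mb_per_node', 0)
    scaled = [ScaledNodeSize(s, node_mem_scaling) for s in sizes]
    usable = [s for s in scaled if s.ram >= want_ram]
    return min(usable, key=lambda s: s.price) if usable else None

With the 3500 MiB example above, the discounted Standard_D1_v2 (3404 MiB) no longer qualifies, so the calculator picks the next size up instead of wedging the job.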


Subtasks 1 (0 open, 1 closed)

Task #11200: Review 11139-nodemanager-mem-scale-factor (Resolved, Peter Amstutz, 03/07/2017)
#1

Updated by Peter Amstutz about 7 years ago

  • Description updated (diff)
#2

Updated by Peter Amstutz about 7 years ago

  • Description updated (diff)
#3

Updated by Peter Amstutz about 7 years ago

GCP

>>> [s for s in sz if s.name == "n1-standard-1"]
[<NodeSize: id=3001, name=n1-standard-1, ram=3840 disk=None bandwidth=0 price=None driver=Google Compute Engine ...>]
manage.qr2hi:/etc/arvados-node-manager[0/1]# sinfo -ncompute8 --format="%c %m %d" 
CPUS MEMORY TMP_DISK
1 3711 383806

129 MiB difference on GCP for a small node.

#4

Updated by Peter Amstutz about 7 years ago

  • Description updated (diff)
#5

Updated by Peter Amstutz about 7 years ago

This can be fixed entirely through configuration by overriding the "ram" field for each size. However, we need to determine the right numbers to use for each size, and there doesn't seem to be a good way to get that information short of booting a node of each size and recording the value of MemTotal.
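
For reference, the per-node probe itself is trivial; here is a standalone Python sketch equivalent to the awk one-liner in the description (a hypothetical helper, not part of node manager):

# Hypothetical standalone helper: report MemTotal in MiB on a booted
# node, equivalent to the awk one-liner in the description.
def memtotal_mib(meminfo_path='/proc/meminfo'):
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith('MemTotal:'):
                kib = int(line.split()[1])   # /proc/meminfo reports KiB
                return kib / 1024.0
    raise RuntimeError('MemTotal not found in %s' % meminfo_path)

if __name__ == '__main__':
    print('%.2f' % memtotal_mib())   # e.g. 3440.54 on a Standard_D1_v2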

#6

Updated by Peter Amstutz about 7 years ago

  • Tracker changed from Bug to Idea
  • Subject changed from Different node size calculations. to [Node manager] Script to determine actual MemTotal for each cloud node size
  • Description updated (diff)
#7

Updated by Peter Amstutz about 7 years ago

Based on discussion: reduce the predicted RAM available for node by 5%

#8

Updated by Peter Amstutz about 7 years ago

  • Description updated (diff)
  • Story points set to 0.5
#9

Updated by Peter Amstutz about 7 years ago

  • Subject changed from [Node manager] Script to determine actual MemTotal for each cloud node size to [Node manager] Expected MemTotal for each cloud node size
#10

Updated by Tom Morris about 7 years ago

  • Target version set to 2017-03-15 sprint
#11

Updated by Peter Amstutz about 7 years ago

  • Assigned To set to Peter Amstutz
#12

Updated by Lucas Di Pentima about 7 years ago

  • Assigned To changed from Peter Amstutz to Lucas Di Pentima
#13

Updated by Peter Amstutz about 7 years ago

  • Description updated (diff)
#14

Updated by Lucas Di Pentima about 7 years ago

  • Status changed from New to In Progress
#15

Updated by Lucas Di Pentima about 7 years ago

Updates on 11139-nodemanager-mem-scale-factor branch at b6423b5
Test run: https://ci.curoverse.com/job/developer-run-tests/183/

Added a node_mem_scaling config parameter that is applied to each node size's RAM. Default value: 0.95.
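
For illustration, a minimal sketch of how such a parameter could be read and applied; the file name and the [Cloud] section/option placement below are assumptions, so check the sample configuration files for the actual layout:

# Sketch only: the config file name and section layout are assumptions.
try:
    import configparser                  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2, node manager's era

DEFAULT_SCALING = 0.95

def read_mem_scaling(path):
    config = configparser.RawConfigParser()
    config.read(path)                    # silently skips missing files
    if (config.has_section('Cloud') and
            config.has_option('Cloud', 'node_mem_scaling')):
        return config.getfloat('Cloud', 'node_mem_scaling')
    return DEFAULT_SCALING

# Advertised RAM (MiB) for the sizes quoted earlier in this ticket.
advertised = {'Standard_D1_v2': 3584, 'Standard_D2_v2': 7168}
scaling = read_mem_scaling('arvados-node-manager.ini')
effective = {name: int(ram * scaling) for name, ram in advertised.items()}
print(effective)   # {'Standard_D1_v2': 3404, 'Standard_D2_v2': 6809}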

#16

Updated by Peter Amstutz about 7 years ago

Can you add a test that shows that changing the configuration parameter changes the server calculator?

Can you add a note about node_mem_scaling to each of the sample configuration files in arvados/services/nodemanager/doc?

Thanks!

#17

Updated by Lucas Di Pentima about 7 years ago

Updates & master branch merge at 101e3227b25c16874fa73660bfd7e338fbfe0da2
Tests: https://ci.curoverse.com/job/developer-run-tests/184/

I've simulated passing a custom node_mem_scaling value on the ServerCalculator call in test_jobqueue.py. If you meant to test the entire call chain from arvnodeman.launcher, I could add an import to it and call the build_server_calculator() function.
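
For comparison, here is a self-contained sketch of the kind of test being discussed, written against a toy size chooser that mimics the proposed behavior rather than the real ServerCalculator or test_jobqueue.py API:

# Toy test sketch; cheapest() below stands in for the real calculator.
import unittest
from collections import namedtuple

FakeSize = namedtuple('FakeSize', 'name ram price')

def cheapest(job_ram_mb, sizes, node_mem_scaling):
    # A size qualifies only if its discounted RAM covers the request.
    ok = [s for s in sizes if s.ram * node_mem_scaling >= job_ram_mb]
    return min(ok, key=lambda s: s.price) if ok else None

class NodeMemScalingTest(unittest.TestCase):
    SIZES = [FakeSize('Standard_D1_v2', 3584, 1.0),
             FakeSize('Standard_D2_v2', 7168, 2.0)]

    def test_scaling_changes_selected_size(self):
        # Without the discount, the advertised 3584 MiB looks sufficient...
        self.assertEqual('Standard_D1_v2',
                         cheapest(3500, self.SIZES, 1.0).name)
        # ...with the 0.95 factor (3584 * 0.95 = 3404.8) it is not,
        # so the next size up is chosen instead.
        self.assertEqual('Standard_D2_v2',
                         cheapest(3500, self.SIZES, 0.95).name)

if __name__ == '__main__':
    unittest.main()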

#18

Updated by Peter Amstutz about 7 years ago

Thanks, LGTM

#19

Updated by Lucas Di Pentima about 7 years ago

  • Status changed from In Progress to Resolved

Applied in changeset arvados|commit:4812b5639cfaf724540786fcd331aaa227635c77.
