h1. Hacking Node Manager

h2. Important dependencies

h3. libcloud

"Apache Libcloud":https://libcloud.readthedocs.org/en/latest/ gives us a consistent interface to manage compute nodes across different cloud providers.

h3. Pykka

The Node Manager uses "Pykka":http://www.pykka.org/en/latest/ to easily set up lots of small workers in a multithreaded environment.  You'll probably want to read that introduction before you get started.  The Node Manager makes heavy use of Pykka's proxies.
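
For example, here's a minimal, throwaway Pykka session (not Node Manager code; @Adder@ is a made-up actor) showing an actor, a proxy, and the future a proxy method returns:

<pre><code class="python"># Minimal Pykka illustration -- Adder is a made-up actor, not part of
# Node Manager.
import pykka

class Adder(pykka.ThreadingActor):
    def add_one(self, number):
        return number + 1

adder = Adder.start().proxy()   # start the actor and wrap it in a proxy
future = adder.add_one(1)       # proxy calls return a pykka.Future immediately
print(future.get())             # block for the actor's reply: prints 2
adder.actor_ref.stop()
</code></pre>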

h2. Overview - Subscriptions

Most of the actors in the Node Manager only need to communicate to others about one kind of event:

* ArvadosNodeListMonitorActor: updated information about Arvados Node objects
* ComputeNodeListMonitorActor: updated information about compute nodes running in the cloud
* JobQueueMonitorActor: updated information about the number and sizes of compute nodes that would best satisfy the job queue
* ComputeNodeSetupActor: compute node setup is finished
* ComputeNodeShutdownActor: compute node is successfully shut down
* ComputeNodeActor: compute node is eligible for shutdown

These communications happen through subscriptions.  Each actor has a @subscribe@ method that takes an arbitrary callable object, usually a proxy method.  Those callables are called with new information whenever there's a state change.

List monitor actors also have a @subscribe_to@ method that calls the callable on every update, with information about one specific object in the response (e.g., every update about an Arvados node with a specific UUID).
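
A minimal sketch of the pattern, with hypothetical names (the real actors under @arvnodeman@ carry much more state and error handling):

<pre><code class="python"># Hypothetical sketch of the subscription pattern -- not the actual
# arvnodeman classes.
import pykka

class ListMonitorActor(pykka.ThreadingActor):
    def __init__(self):
        super(ListMonitorActor, self).__init__()
        self._subscribers = []

    def subscribe(self, subscriber):
        # subscriber can be any callable; it's usually a proxy method.
        self._subscribers.append(subscriber)

    def deliver(self, response):
        # Fan new state out to every subscriber.  Calling a proxy method
        # here just enqueues a message in that actor's mailbox.
        for subscriber in self._subscribers:
            subscriber(response)

class DaemonActor(pykka.ThreadingActor):
    def update_cloud_nodes(self, node_list):
        pass  # react to the new cloud node list

monitor = ListMonitorActor.start().proxy()
daemon = DaemonActor.start().proxy()
monitor.subscribe(daemon.update_cloud_nodes)
</code></pre>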

Thanks to this pattern, it's rare for our code to directly use the Future objects that are returned from proxy methods.  Instead, the different actors send messages to each other about interesting state changes.  The 30,000-foot overview of the program is:

* Start the list monitor actors
* Start the NodeManagerDaemonActor.  It subscribes to those monitors.
* The daemon creates different compute node actors to manage different points of the node's lifecycle, and subscribes to their updates as well.
* When the daemon creates a ComputeNodeActor, it subscribes that new actor to updates from the list monitor about the underlying cloud and Arvados data.

See @launcher.py@, and the @update_cloud_nodes@ and @update_arvados_nodes@ methods in @daemon.py@.

h2. Node life cycle

# JobQueueMonitorActor observes that there are pending jobs in the queue, and publishes a "wishlist" of desired node sizes to @NodeManagerDaemonActor@
# @NodeManagerDaemonActor.update_server_wishlist()@ checks whether @nodes_wanted()@ is nonzero (meaning there is either a deficit or surplus of nodes)
# If @nodes_wanted() > 0@, it calls @NodeManagerDaemonActor.start_node()@
# @NodeManagerDaemonActor.start_node()@ creates a @ComputeNodeSetupActor@ and adds it to the "booting" list
# When @ComputeNodeSetupActor@ completes, it signals back to @NodeManagerDaemonActor.node_up()@
# @NodeManagerDaemonActor.node_up()@ removes the node from the "booting" list and puts it into the "booted" list, then starts @ComputeNodeMonitorActor@
# @CloudNodeListMonitorActor@ triggers @NodeManagerDaemonActor.update_cloud_nodes()@; when a node in the "booted" list shows up in @cloud_nodes@, it is removed from "booted"
# @NodeManagerDaemonActor.update_cloud_nodes()@ pairs the cloud node with an Arvados node record (sketched below) based on:
## whether @last_ping_at@ is after when the cloud node was booted
## whether the IP address of the cloud node matches the IP address in the Arvados node record
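
A rough sketch of that pairing test (hypothetical helper and attribute names; the real logic is in @daemon.py@):

<pre><code class="python"># Hypothetical sketch of the cloud/Arvados node pairing check; names
# are illustrative, not the actual daemon.py code.
def nodes_are_paired(arvados_node, cloud_node, boot_time):
    # Criterion 1: the node pinged Arvados after the cloud node booted.
    last_ping = arvados_node.get('last_ping_at')
    if last_ping is not None and last_ping > boot_time:
        return True
    # Criterion 2: the Arvados record's IP address matches the cloud node's.
    return arvados_node.get('ip_address') in (cloud_node.public_ips +
                                              cloud_node.private_ips)
</code></pre>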

h2. Test Strategy

The subscription pattern simplifies testing with mocks.  Each test starts at most one actor.  We send messages to that actor with mock data, and then check the results through mock subscriber or client objects.  As long as you can commit to particular message semantics, this makes it possible to write well-isolated, fast tests.  @testutil.py@ provides rich mocks for different kinds of objects, as well as a mixin class to help test actors.

The tests frequently block on the result of proxy methods—i.e., they call @proxy.method().get(self.TIMEOUT)@.  This helps ensure that we know as much as possible about the actor's state before we proceed.  It also has the benefit of keeping the tests speedy, by reducing contention for Python's global interpreter lock.  Sometimes, when the tests need to ensure that an actor has handled its own internal messages generated by an event, we send another message and block on that—conventionally either a stop message, or a noop attribute access.  This ties the tests more closely to the implementation than is ideal, but it was the only solution I could find under time pressure that ran reliably on Jenkins.
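
In outline, a test in this style looks something like this (hypothetical actor and method names, using the @ListMonitorActor@ sketched in the Overview section above):

<pre><code class="python"># Hypothetical test following the pattern above -- not from the real suite.
import unittest

import mock

class ListMonitorActorTestCase(unittest.TestCase):
    TIMEOUT = 5

    def test_subscribers_get_updates(self):
        subscriber = mock.Mock(name='subscriber')
        monitor = ListMonitorActor.start().proxy()
        # Block on each proxy call so we know the actor has handled it
        # before the test proceeds.
        monitor.subscribe(subscriber).get(self.TIMEOUT)
        monitor.deliver(['fake update']).get(self.TIMEOUT)
        subscriber.assert_called_with(['fake update'])
</code></pre>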

h3. Why you can't check internal message handling through the actor inbox

One strategy I tried is polling the actor's message inbox, with the plan to proceed only when it is empty.  Unfortunately, this doesn't work, because messages are removed from the inbox before handling begins.  This means that if there's one message in the inbox, and handling it will generate another message, the inbox will be empty from the time processing of the first message begins until the time the generated message is queued.  By itself, then, an empty inbox is not a reliable sign that an actor will not generate and handle any more internal messages.
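
For reference, the polling approach that doesn't work looked roughly like this (a sketch, assuming Pykka's @actor_inbox@ attribute on the actor ref, which exposes the underlying message queue):

<pre><code class="python"># The unreliable strategy described above -- don't do this.
import time

def wait_for_quiet_inbox(actor_ref):
    while not actor_ref.actor_inbox.empty():
        time.sleep(0.01)
    # Even when we get here, the actor may be in the middle of handling
    # a message that is about to queue another one.
</code></pre>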

h2. Driver wrappers

When we start a compute node, we need to seed it with information from the associated Arvados node object.  The mechanisms to pass that information will be different for each cloud provider.  To accommodate this, there are driver classes under @arvnodeman.computenode@ that handle the translation.  They also proxy public methods from the "real" libcloud driver, so except for the @create_node@ method, you can usually use libcloud's standard interfaces on our custom drivers.
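
The general shape, with hypothetical names (see @arvnodeman.computenode@ for the real base class):

<pre><code class="python"># Hypothetical sketch of a driver wrapper's shape -- the real classes
# in arvnodeman.computenode are more involved.
class CloudNodeDriver(object):
    def __init__(self, real_driver):
        self.real = real_driver

    def create_node(self, size, arvados_node):
        # Translate the Arvados node record into provider-specific boot
        # arguments (metadata, tags, user data, ...) before creating.
        kwargs = self.create_kwargs_from(arvados_node)
        return self.real.create_node(size=size, **kwargs)

    def create_kwargs_from(self, arvados_node):
        raise NotImplementedError  # each cloud provider differs

    def __getattr__(self, name):
        # Everything else passes through to the real libcloud driver, so
        # libcloud's standard interfaces keep working.
        return getattr(self.real, name)
</code></pre>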

h2. Manually testing driver wrappers

When the rubber hits the road, you want to be able to test the driver wrapper against the real cloud, to make sure that it makes all its API calls correctly to configure compute nodes as needed.  It's relatively straightforward to do that interactively, without running all of Node Manager.

First, you'll need this test harness.  Save it alongside your development copy of the @arvnodeman@ module, as @drivertest.py@.

<pre><code class="python"># Use with `python -i` to set up a shell to interact with Node Manager
# drivers.

import sys

from pprint import pprint

from arvnodeman import config

try:
    conf_filename = sys.argv[1]
except IndexError:
    conf_filename = 'test.cfg'

myconf = config.NodeManagerConfig()
with open(conf_filename) as f:
    myconf.readfp(f)

arvnode = {
    'uuid': 'zyxwv-7ekkf-brettbrettbrett',
    'info': {'ping_secret': 'fakesecret'},
    'hostname': 'fakename',
    'domain': 'zyxwv.arvadosapi.com',
    }

driver = myconf.new_cloud_client()
rawsizes = driver.list_sizes()
sizelist = myconf.node_sizes(rawsizes)
if sizelist:
    size = sizelist[0][0]
</code></pre>

Then:

# Write a Node Manager configuration file for the driver you want to test.  The cloud settings need to be as real as possible.  Other settings need to validate but can be non-functional (e.g., you can fill in a nonexistent API server hostname and credentials)—these tests won't exercise them.  DO BE CAREFUL to make sure your configuration won't interfere with production operations on a running cluster.  For example, it's good to change the set of tags that identify a compute node, so a production Node Manager won't see your test nodes, and vice versa.
# Run @python -i drivertest.py YOURCONFIG.ini@.  It downloads basic cloud information, so it normally takes a moment for your Python prompt to appear.

Once your test interpreter is running, you have a few objects you can play with, and a few methods you'll probably want to call:

* @arvnode@ is a fake dictionary representing an Arvados compute node API object.  It has all the fields a cloud driver should need to create a cloud node.
* @size@ is libcloud's size object for the smallest node size defined in your configuration.
* @driver@ is the Node Manager driver wrapper for the cloud specified in your configuration.
* To create a node: @cloud_node = driver.create_node(size, arvnode)@
* To update the node's metadata: @driver.sync_node(cloud_node, arvnode)@
* To destroy the cloud node when you're done: @driver.destroy_node(cloud_node)@
* Of course, you should feel free to call any driver class or instance method with the data available to you.

h2. Configuration

@doc/ec2.example.cfg@ has lots of comments describing what parameters are available and how they behave.  Bear in mind that settings in Cloud and Size subsections are specific to the provider named in the main Cloud section.
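
For orientation, the file is INI-style and roughly shaped like this (illustrative fragment only; the values and the parameters inside each section are placeholders, so consult the example file for the real, documented list):

<pre>
[Cloud]
provider = ec2

[Size m4.large]
# settings for one node size your cluster can boot
</pre>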

@doc/local.example.cfg@ lets you run a development Node Manager, backed by libcloud's dummy driver and your development Arvados API server.  Refer to the instructions at the top of that file.

h3. Arvados Token

Node Manager requires a scoped Arvados token.  As a user with an admin token, create it like this:

<pre>
  arv api_client_authorization create_system_auth \
    --scopes "[\"GET /arvados/v1/jobs/queue\",
               \"GET /arvados/v1/nodes\",
               \"PUT /arvados/v1/nodes\",
               \"PUT /arvados/v1/nodes/\",
               \"POST /arvados/v1/nodes\",
               \"POST /arvados/v1/nodes/\"]"
</pre>