Brett Smith, 10/03/2014 09:38 PM
Hacking Node Manager
Important dependencies
libcloud
Apache Libcloud gives us a consistent interface to manage compute nodes across different cloud providers.
Pykka
The Node Manager uses Pykka to easily set up lots of small workers in a multithreaded environment. You'll probably want to read that introduction before you get started. The Node Manager makes heavy use of Pykka's proxies.
Overview - Subscriptions
Most of the actors in the Node Manager only need to communicate to others about one kind of event:
- ArvadosNodeListMonitorActor: updated information about Arvados Node objects
- ComputeNodeListMonitorActor: updated information about compute nodes running in the cloud
- JobQueueMonitorActor: updated information about the number and sizes of compute nodes that would best satisfy the job queue
- ComputeNodeSetupActor: compute node setup is finished
- ComputeNodeShutdownActor: compute node is successfully shut down
- ComputeNodeActor: compute node is eligible for shutdown
These communications happen through subscriptions. Each actor has a subscribe method that takes an arbitrary callable object, usually a proxy method. Those callables are called with new information whenever there's a state change. List monitor actors also have a subscribe_to method that calls the callable on every update, with information about one specific object in the response (e.g., every update about an Arvados node with a specific UUID).
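The subscription pattern can be sketched in plain Python. The class and method bodies below are illustrative stand-ins, not the actual Node Manager monitor classes:

```python
class ListMonitorSketch:
    """Illustrative sketch of the subscription pattern; not the real
    Node Manager monitor classes."""

    def __init__(self):
        self._subscribers = []      # called with every full response
        self._key_subscribers = {}  # key (e.g., a node UUID) -> callables

    def subscribe(self, callback):
        # Any callable works; in Node Manager it is usually a Pykka
        # proxy method, so the call becomes a message to another actor.
        self._subscribers.append(callback)

    def subscribe_to(self, key, callback):
        # Like subscribe, but only for updates about one specific object.
        self._key_subscribers.setdefault(key, []).append(callback)

    def publish(self, response):
        # response maps object keys (e.g., node UUIDs) to latest state.
        for callback in self._subscribers:
            callback(response)
        for key, callbacks in self._key_subscribers.items():
            if key in response:
                for callback in callbacks:
                    callback(response[key])
```

A plain subscriber sees the whole response; a subscribe_to subscriber sees only the state of the one object it asked about.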
Thanks to this pattern, it's rare for our code to directly use the Future objects that are returned from proxy methods. Instead, the different actors send messages to each other about interesting state changes. The 30,000-foot overview of the program is:
- Start the list monitor actors
- Start the NodeManagerDaemonActor. It subscribes to those monitors.
- The daemon creates different compute node actors to manage different points of the node's lifecycle, and subscribes to their updates as well.
- When the daemon creates a ComputeNodeActor, it subscribes that new actor to updates from the list monitor about the underlying cloud and Arvados data.
See launcher.py, and the update_cloud_nodes and update_arvados_nodes methods in daemon.py.
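As a rough, self-contained sketch of that startup wiring (the names below are illustrative stand-ins, not the real launcher.py or daemon.py code):

```python
class MonitorStub:
    """Stands in for a list monitor actor; the real monitors poll the
    cloud or the Arvados API and publish each response to subscribers."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, response):
        for callback in self._subscribers:
            callback(response)


class DaemonSketch:
    """Caricature of the daemon actor: it just records the latest
    state it hears about from each monitor."""

    def __init__(self, cloud_monitor, arvados_monitor):
        self.cloud_nodes = None
        self.arvados_nodes = None
        # Subscribe to both monitors at startup, as the launcher does.
        cloud_monitor.subscribe(self.update_cloud_nodes)
        arvados_monitor.subscribe(self.update_arvados_nodes)

    def update_cloud_nodes(self, response):
        self.cloud_nodes = response

    def update_arvados_nodes(self, response):
        self.arvados_nodes = response
```

In the real program these updates arrive as Pykka proxy calls, so each one is a message handled on the daemon actor's own thread.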
Test Mocks
The subscription pattern also simplifies testing with mocks. Each test starts at most one actor. We send messages to that actor with mock data, and then check the results through a mock subscriber. As long as you can commit to particular message semantics, this makes it possible to write well-isolated, fast tests. testutil.py provides rich mocks for different kinds of objects, as well as a Mixin class to help test actors.
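For flavor, a test in this style might look like the following. The toy actor is illustrative; the real helpers live in testutil.py:

```python
from unittest import mock


class ToyMonitor:
    """Illustrative stand-in for the one actor under test."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def update(self, data):
        for callback in self._subscribers:
            callback(data)


def test_subscriber_receives_update():
    # The mock subscriber records what the actor sends it, so we can
    # check results without starting any other actors.
    subscriber = mock.Mock(name='subscriber')
    monitor = ToyMonitor()
    monitor.subscribe(subscriber)
    monitor.update({'uuid': 'fake-uuid'})
    subscriber.assert_called_with({'uuid': 'fake-uuid'})
```

Because the subscriber is a plain mock rather than a second actor, the test stays single-threaded and fast.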
Driver wrappers
When we start a compute node, we need to seed it with information from the associated Arvados node object. The mechanisms to pass that information will be different for each cloud provider. To accommodate this, there are driver classes under arvnodeman.computenode that handle the translation. They also proxy public methods from the "real" libcloud driver, so except for the create_node method, you can usually use libcloud's standard interfaces on our custom drivers.
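The proxying idea can be sketched with __getattr__ delegation. The names and the metadata shape below are hypothetical, not the actual arvnodeman.computenode API:

```python
class DriverWrapperSketch:
    """Illustrative wrapper: delegate to the real libcloud driver, but
    override create_node to seed Arvados node information."""

    def __init__(self, real_driver):
        self.real_driver = real_driver

    def __getattr__(self, name):
        # Anything we don't override falls through to the wrapped
        # libcloud driver, so callers can use its standard interfaces.
        return getattr(self.real_driver, name)

    def create_node(self, size, arvados_node):
        # A real cloud-specific subclass would translate the Arvados
        # node record into provider-specific creation arguments here;
        # ex_metadata is just one plausible shape for that.
        metadata = {'arvados_node_uuid': arvados_node['uuid']}
        return self.real_driver.create_node(size=size, ex_metadata=metadata)


class FakeLibcloudDriver:
    """Tiny fake standing in for a real libcloud driver."""

    def list_nodes(self):
        return []

    def create_node(self, size, **kwargs):
        return ('created', size, kwargs)
```

Calls like list_nodes() pass straight through to the wrapped driver, while create_node() injects the Arvados seed data before delegating.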
Configuration
doc/ec2.example.cfg has lots of comments describing what parameters are available and how they behave. Bear in mind that settings in the Cloud and Size sections are specific to the provider named in the Daemon section.
doc/local.example.cfg lets you run a development node manager, backed by libcloud's dummy driver and your development Arvados API server. Refer to the instructions at the top of that file.
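For orientation only, the config files are INI-style, with the section layout described above. This skeleton elides all option names; consult the example files for the real ones:

```ini
[Daemon]
# Names the cloud provider; the Cloud and Size settings below are
# interpreted by that provider's driver.

[Cloud]
# Provider-specific connection settings (credentials, endpoints, ...).

[Size]
# Provider-specific settings for a compute node size.
```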