Auto-discovery

See #18256

Goals

  • remove need for config file presence on every node
  • autodiscovery of arvados services

Status quo:

  • every node that runs a permanent arvados service needs (a subset of) the config.yml file present on the filesystem
  • for simplicity, we recommend that the entire config file be present on each such machine
  • the non-private parts of the config are exposed on controller via an API

Problems with the status quo:

  • it can be difficult to keep the config file in sync across nodes (it takes configuration management to ensure this)
  • it is not ideal from a security perspective that the entire config file (with all its secrets) is installed on each node, even when only a subset is required. This is particularly the case when the node is not 100% trusted, e.g. a compute node image with local keepstore (cf. #16347) when Docker is the runner.

Desired functionality:

  • arvados services can request configuration values from controller (by key); see the sketch after this list
  • for the secret parts of the config, some form of authentication is required
  • arvados services can register themselves with controller. E.g. when arvados-ws is started up, it would discover the controller (see below) and make itself known as an arvados-ws service and request the necessary configuration keys.
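
As a rough illustration of the first two items, a config request from a service could look like the Go sketch below. This is only a sketch: the "keys" query parameter and the token-based authentication for secret values are assumptions, not an existing API (today controller only exposes the non-secret config).

```
package autodiscovery

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// fetchConfigKeys asks controller for specific config keys, authenticating
// with a service token so that secret values could be returned.
// Hypothetical: the "keys" parameter and token auth for secrets don't exist today.
func fetchConfigKeys(controller, token string, keys []string) (map[string]interface{}, error) {
	u := url.URL{
		Scheme:   "https",
		Host:     controller,
		Path:     "/arvados/v1/config",
		RawQuery: url.Values{"keys": keys}.Encode(),
	}
	req, err := http.NewRequest("GET", u.String(), nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("config request failed: %s", resp.Status)
	}
	var cfg map[string]interface{}
	err = json.NewDecoder(resp.Body).Decode(&cfg)
	return cfg, err
}
```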

Discovery process:

We could adopt something similar to how Puppet et al. do discovery: a well-known DNS name that is polled until it is reachable, after which the necessary config is retrieved and cached locally. E.g. each service could try to reach "arvados" on port 443, though we'd probably have to do that with ARVADOS_API_HOST_INSECURE set, since we can't count on a valid cert for that well-known name. That seems sucky.

So how about this: the client does an HTTP "join" request to 'arvados' on port 80, which responds with the FQDN of controller (just a 301 to https)? That way we get transport-level security and automatic discovery. The payload is a JSON object with these fields: fqdn; local IP address (which one?); service name, e.g. arvados-ws; and optionally a pre-shared key (PSK). The client keeps repeating the join request every few seconds, polling, until it is accepted or rejected.
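
A minimal sketch of the client side of that flow, assuming a "/join" path, those JSON field names, and 200/403 as the accept/reject responses (all invented for illustration):

```
package autodiscovery

import (
	"bytes"
	"encoding/json"
	"net/http"
	"time"
)

type joinRequest struct {
	FQDN        string `json:"fqdn"`
	LocalIP     string `json:"local_ip"`      // which address to report is still an open question
	ServiceName string `json:"service_name"`  // e.g. "arvados-ws"
	PSK         string `json:"psk,omitempty"` // optional pre-shared key, e.g. from an env var
}

// pollJoin finds controller via the well-known name "arvados" on port 80,
// follows the 301 to controller's https address, and repeats the join request
// every few seconds until it is accepted or rejected.
func pollJoin(req joinRequest) bool {
	// Don't auto-follow redirects: we want to read the Location header ourselves.
	c := &http.Client{CheckRedirect: func(*http.Request, []*http.Request) error {
		return http.ErrUseLastResponse
	}}
	body, _ := json.Marshal(req)
	for {
		if resp, err := c.Get("http://arvados/join"); err == nil {
			target := resp.Header.Get("Location") // e.g. https://controller.example.com/join
			resp.Body.Close()
			if target != "" {
				if post, err := http.Post(target, "application/json", bytes.NewReader(body)); err == nil {
					post.Body.Close()
					switch post.StatusCode {
					case http.StatusOK:
						return true // accepted
					case http.StatusForbidden:
						return false // rejected
					}
				}
			}
		}
		time.Sleep(5 * time.Second)
	}
}
```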

Controller has an API to list/accept/remove "join" requests, again à la Puppet. An arvados-server subcommand could provide CLI access to that API.
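
The CLI side could look something like this (the subcommand name and arguments are purely hypothetical, nothing like this exists yet):

```
arvados-server join-request list
arvados-server join-request accept <request-id>
arvados-server join-request reject <request-id>
```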

Controller can issue a PSK that it accepts for automatic joining (handy in tests, etc.). A service could be given that PSK on startup (e.g. via an env var).

Controller could have a "discovery" mode where it automatically accepts all "join" requests (this could be very handy for automated testing).

When controller is instructed to "accept" a join request, it issues a service secret to the service, which writes it to a file on disk (in /etc/arvados/?). The service then requests the relevant config. The config should probably be cached on disk -- we don't want to make every service dependent on the availability of controller at all times. The service could occasionally try to refresh the config if the cached copy is too old, but fall back to the cached copy if it can't reach controller.
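
A rough sketch of that cache-and-fallback behavior, assuming a cache file under /etc/arvados/ and an hourly refresh interval (both invented for illustration):

```
package autodiscovery

import (
	"errors"
	"os"
	"time"
)

const (
	cachePath   = "/etc/arvados/config-cache.json" // assumed location
	maxCacheAge = time.Hour                        // assumed refresh interval
)

// loadConfig returns a fresh copy from controller when the cached copy is
// stale and controller is reachable; otherwise it falls back to the cached
// copy so the service does not depend on controller being up at all times.
func loadConfig(fetch func() ([]byte, error)) ([]byte, error) {
	if fi, err := os.Stat(cachePath); err == nil && time.Since(fi.ModTime()) < maxCacheAge {
		return os.ReadFile(cachePath)
	}
	fresh, err := fetch()
	if err == nil {
		// Best effort: update the cache for the next cold start.
		_ = os.WriteFile(cachePath, fresh, 0o600)
		return fresh, nil
	}
	// Controller unreachable: fall back to whatever we cached earlier.
	if cached, rerr := os.ReadFile(cachePath); rerr == nil {
		return cached, nil
	}
	return nil, errors.New("config unavailable: controller unreachable and no cached copy")
}
```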

Puppet discovery process (for background reference):

Puppet does this by giving the server its own CA, which generates the server's cert with

```
X509v3 Subject Alternative Name:
DNS:puppet, DNS:$fqdn, DNS:puppet.$domain
```

On first connect, the client generates a private key and a CSR, fetches the CA's public cert, and sends the CSR to the server. The client then polls until an administrator on the server approves the CSR; once approved, the signed cert is sent back to the client, which uses it to communicate with the master.

For more information see https://www.masterzen.fr/2010/11/14/puppet-ssl-explained/

Future work (not part of this design):

  • controller automatically updates the config and adds/removes services as they register/deregister with it
