Service containers¶
Concept: containers launched via the Crunch infrastructure that also provide a network port outside clients can connect to.
Arvados epic: https://dev.arvados.org/issues/17207
Use cases¶
- Applications providing an API
- a large amount of data needs to be loaded into RAM before it can be used, queried, or computed on
- e.g. large language models, databases, function-as-a-service
- Makes sense when the time spent on any given query is much smaller than the loading time
- User facing web applications
- e.g. Integrative Genomics Viewer (IGV), Jupyter notebooks
- Also includes web applications that interact with an API (first bullet)
- Cluster maintenance services
- Services that react to events on the cluster, such as kicking off a workflow when a collection appears in a certain project, or checking projects for metadata conformance (see the sketch after this list). These services currently run outside the cluster, but could benefit from Arvados features if they were also managed by the cluster.
- This doesn't strictly require the ability to expose web services, but would benefit from other tweaks to better accommodate long-lived containers.
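As an illustration, a maintenance service of this kind could be a small process that subscribes to the Arvados event log and reacts to new collections. Below is a minimal sketch using the Python SDK; the watched project UUID and the reaction itself are hypothetical placeholders:

import arvados
import arvados.events

# Hypothetical UUID of the project being watched.
WATCHED_PROJECT = "zzzzz-j7d0g-xxxxxxxxxxxxxxx"

api = arvados.api("v1")

def on_event(event):
    # Each event is a row from the logs table; react to collections
    # created in the watched project.
    attrs = event.get("properties", {}).get("new_attributes", {})
    if event.get("event_type") == "create" and attrs.get("owner_uuid") == WATCHED_PROJECT:
        print("new collection:", event["object_uuid"])
        # ...kick off a workflow here, e.g. submit a container request...

# Subscribe to log events about collection objects; this falls back
# to polling if the websocket service is unavailable.
ws = arvados.events.subscribe(
    api, [["object_uuid", "is_a", "arvados#collection"]], on_event)
ws.run_forever()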
Fundamental requirement¶
Crunch launches a container and makes it possible for an outside client to communicate with the container.
Discussion points¶
Who can communicate with the container¶
Exposing services to outside clients and enabling communication between containers on the inside have different requirements.
Outside: clients must be able to connect from outside the cluster. Because containers are on a private network, some kind of proxying or network address translation (NAT) is required.
Inside: assuming containers are on the same private network and can route to each other, they can communicate directly. They need to be able to discover how to contact other containers. (We might even want a way of declaring exactly which containers can connect to which other containers.)
HTTP only, or arbitrary TCP connections?¶
HTTP only: we can proxy HTTP requests using wildcard DNS and "Host:" headers; we already have machinery and operational experience for doing that. We can apply Arvados authentication to requests, e.g. setting a cookie with an Arvados token so the client can only communicate with containers it has read access to. We cannot host services that don't use HTTP.
Arbitrary TCP: we would need to apply NAT or connection tunneling to connections on an arbitrary external port associated with the container. We don't currently have machinery to do this. Authentication is left up to the service. We can host services that have their own protocols, such as PostgreSQL or ssh.
Container shell uses connection tunneling: it makes an HTTP connection and performs a connection upgrade to SSH. This requires special cooperation between arvados-client and ssh, which doesn't generalize.
For internal-only connections (between containers), it may be easier to orchestrate arbitrary TCP connections without tunneling. Authentication is still left up to the container, or requires adjusting firewall rules on the fly to control who can access the container.
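To illustrate the HTTP-only option, here is a minimal sketch of Host-header routing behind wildcard DNS, assuming a hypothetical lookup from the first DNS label (the container UUID) to the container's private address. This illustrates the approach, not the actual controller implementation:

import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical lookup from container UUID to the container's address
# on the private network; in reality this would come from the dispatcher.
BACKENDS = {
    "zzzzz-xvhdp-iiiiiiiiiiiiiii": "10.1.2.3:80",
}

class HostProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # "zzzzz-xvhdp-iiiiiiiiiiiiiii.svc.zzzzz.arvadosapi.com" -> UUID label
        uuid = (self.headers.get("Host") or "").split(".")[0]
        backend = BACKENDS.get(uuid)
        if backend is None:
            self.send_error(404, "no such container")
            return
        # Forward the request to the container and relay the response.
        with urllib.request.urlopen("http://%s%s" % (backend, self.path)) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type", resp.headers.get("Content-Type", "application/octet-stream"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

ThreadingHTTPServer(("", 8080), HostProxy).serve_forever()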
Redundancy with other platforms¶
Kubernetes orchestrates services, so this feature overlaps with Kubernetes, and we don't have the resources to compete with it. However, because Arvados is a data analytics platform where scheduling and running code is a core feature, a carefully scoped feature for hosting services could give us very significant new capability relative to the amount of work.
Long lived containers¶
We might want to limit certain kinds of logging, such as the stats from crunchstat, hoststat, and arv-mount, because a container running for weeks will accumulate a lot of logs.
Container naming¶
If you start a service, use it for a while, shut it down, then submit a new container request to bring it back up again, it will get a new UUID. This is a problem if the new container represents the same service and people have bookmarked the old URL, written it into scripts, etc.
It would be great to be able to assign a friendly hostname to a running container. Example: instead of https://zzzzz-xvhdp-iiiiiiiiiiiiiii.svc.zzzzz.arvadosapi.com/ you could go to https://ollama.svc.zzzzz.arvadosapi.com/
Initial proposal¶
1. container request zzzzz-xvhdp-iiiiiiiiiiiiiii submitted with
{ runtime_constraints: { expose_http_from: 80 } }
This means "expose the HTTP service running inside the container on port 80". It must be an unencrypted HTTP endpoint. (A sketch of submitting such a request appears after this proposal.)
This creates a corresponding container zzzzz-dz642-iiiiiiiiiiiiiii
2. For running containers with "expose_http_from", a user can visit a URL proxied by the controller:
https://zzzzz-xvhdp-iiiiiiiiiiiiiii.svc.zzzzz.arvadosapi.com/foo?baz&api_token=v2/foo/bar
This does a cookie-setting-redirect to:
https://zzzzz-xvhdp-iiiiiiiiiiiiiii.svc.zzzzz.arvadosapi.com/foo?baz
On each request, the proxy checks the API token to determine if the user has read access to the container request.
The proxy also adds X-Arvados-User-UUID to the request.
If the container is in a project shared with the anonymous user, no API token is required.
3. Controller forwards the request to the container and returns the response using the mechanism that has been developed for container shell and container logs.
Visiting the container request in Workbench gives an easy-to-click link to "https://zzzzz-xvhdp-iiiiiiiiiiiiiii.svc.zzzzz.arvadosapi.com/?api_token=v2/foo/bar"
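A sketch of step 1 using the Python SDK. "expose_http_from" is the runtime constraint proposed above and is not yet implemented; the image, command, mounts, and resource numbers are placeholders:

import arvados

api = arvados.api("v1")

# Proposed usage: ask Crunch to expose the HTTP service the container
# runs on port 80.
cr = api.container_requests().create(body={"container_request": {
    "name": "example web service",
    "container_image": "docker.io/library/mywebapp:latest",
    "command": ["serve", "--port", "80"],
    "mounts": {"/out": {"kind": "tmp", "capacity": 1 << 30}},
    "output_path": "/out",
    "runtime_constraints": {
        "vcpus": 1,
        "ram": 1 << 30,
        "expose_http_from": 80,   # the field proposed on this page
    },
    "state": "Committed",
}}).execute()

print("https://%s.svc.zzzzz.arvadosapi.com/" % cr["uuid"])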
Engineering meeting notes¶
We consider the notion of a service container (a long-lived container process) and a container that is reachable over HTTP to be distinct features.
Service container request¶
Service containers can only reuse running containers.
Need to double-check container cancellation behavior; we might want to be able to do a graceful shutdown.
This should be a new top-level database field on containers and container requests.
HTTP endpoints¶
Mulling over the idea of being able to connect to arbitrary ports but also having named, published endpoints.
The default name is the UUID followed by the port; the proxy will try to forward HTTP requests to the container at that port:
https://zzzzz-xvhdp-iiiiiiiiiiiiiii-1234.containers.zzzzz.arvadosapi.com/
Arbitrary ports are only available to the user that owns the container.
This should be a new top-level database field on containers and container requests.
Published endpoints have access control:
- private (owner only)
- public (anybody)
A future version could have more access levels in between:
- can_manage
- can_write
- can_read
Something like:
"published_endpoints": { "80": { "access": "public", "label": "Tom Norton's donut service" } }
Published endpoints get listed in Workbench. Non-published endpoints are considered private, but can still be connected to by the user if they are actually open on the container.
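Publishing an endpoint on an existing container request might then look like the following sketch ("published_endpoints" is the proposed field, not part of the current API):

import arvados

api = arvados.api("v1")

# Proposed field: publish port 80 of an existing container request
# so it is listed in Workbench and reachable by anybody.
api.container_requests().update(
    uuid="zzzzz-xvhdp-iiiiiiiiiiiiiii",
    body={"container_request": {
        "published_endpoints": {
            "80": {"access": "public", "label": "Tom Norton's donut service"},
        },
    }},
).execute()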
Hostnames are first come, first served, and owned by the user until the link is deleted (by the user or an admin).
link_class: hostname
owner_uuid: user or project
head_uuid: container request
name: hostname
properties: { port: 80 }
For all links, 'name' must be unique where 'link_class' is 'hostname'.
This makes it possible to access containers with a "friendly" name:
https://friendlyname.containers.zzzzz.arvadosapi.com
The API server should validate that the hostname in "name" is valid for DNS and doesn't contain periods, and reject it if not.
This scheme for links can be used to assign friendly names to collections in the future.
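Creating such a hostname link via the Python SDK might look like the following sketch (the "hostname" link class is the proposal above, not yet implemented):

import arvados

api = arvados.api("v1")

# Proposed "hostname" link: gives the container request a friendly,
# first-come-first-served name under *.containers.zzzzz.arvadosapi.com.
# "name" must be a valid DNS label with no periods.
api.links().create(body={"link": {
    "link_class": "hostname",
    "name": "friendlyname",
    "head_uuid": "zzzzz-xvhdp-iiiiiiiiiiiiiii",
    "properties": {"port": 80},
}}).execute()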