{{>TOC}}

h1. Jobs API (DRAFT)

Clients control JobRequests. The system controls Jobs, and assigns them to JobRequests. When the system has assigned a Job to a JobRequest, anyone with permission to read the JobRequest also has permission to read the Job.

A JobRequest describes job _constraints_ which can have different interpretations over time. For example, a JobRequest with a @{"kind":"git_tree","commit_range":"abc123..master",...}@ mount might be satisfiable by any of several different source trees, and this set of satisfying source trees can change when the repository's "master" branch is updated.

A Job is an unambiguously specified process. Git trees, data collections, docker images, etc. are specified using content addresses. A Job serves as a statement of exactly _what computation will be attempted_ and, later, a record of _what computation was done_.

h2. Use cases

h3. Preview

Tell me how you would satisfy job request X. Which pdh/commits would be used? Is the satisfying job already started? Finished?

h3. Submit a previewed existing job

I'm happy with the already-running/finished job you showed me in "preview". Give me access to that job, its logs, and [when it finishes] its output.

h3. Submit a previewed new job

I'm happy with the new job the "preview" response proposed to run. Run that job.

h3. Submit a new job (disable reuse)

I don't want to use an already-running/finished job. Run a new job that satisfies my job request.

h3. Submit a new duplicate job (disable reuse)

I'm happy with the already-running/finished job you showed me in "preview". Run a new job exactly like that one.

h3. Select a job and associate it with my JobRequest

I'm not happy with the job you chose, but I know of another job that satisfies my request. Assuming I'm right about that, attach my JobRequest to the existing job of my choice.

h3. Just do the right thing without a preview

Satisfy job request X one way or another, and tell me the resulting job's UUID.

h2. JobRequest/Job life cycle

Illustrating the job re-use and preview facility (a code sketch follows the list):

# Client CA creates a JobRequest JRA with priority=0.
# Server creates job JX and assigns JX to JRA, but does not try to run JX yet because max(priority)=0.
# Client CA presents JX to the user. "We haven't computed this result yet, so we'll have to run a new job. Is this OK?"
# Client CB creates a JobRequest JRB with priority=1.
# Server assigns JX to JRB and puts JX in the execution queue with priority=1.
# Client CA updates JRA with priority=2.
# Server updates JX with priority=2.
# Job JX starts.
# Client CA updates JRA with priority=0. (This is the "cancel" operation.)
# Server updates JX with priority=1. (JRB still wants this job to complete.)
# Job JX finishes.
# Clients CA and CB have permission to read JX (ever since JX was assigned to their respective JobRequests) as well as its progress indicators, output, and log.
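To make the walkthrough concrete, here is a minimal client-side sketch in Python. It assumes the draft API is exposed as a standard Arvados REST resource at @/arvados/v1/job_requests@ (a hypothetical endpoint: nothing here is implemented yet) and uses placeholder host and token values.

<pre>
# Hypothetical sketch of steps 1-7 above: preview at priority=0, inspect the
# assigned job, then raise priority to submit. The job_requests endpoint,
# host, and token are placeholders, not an implemented API.
import requests

API = "https://arvados.example.com/arvados/v1"
HEADERS = {"Authorization": "OAuth2 xxxxxxxxxxxxxxxxxxxxx"}

# Step 1: create a JobRequest with priority=0 (preview; nothing runs yet).
jr = requests.post(API + "/job_requests", headers=HEADERS, json={
    "job_request": {
        "command": ["echo", "hello"],
        "container_image": "arvados/jobs:latest",
        "output_path": "/out",
        "mounts": {"/out": {"kind": "collection", "writable": True}},
        "priority": 0,
    }}).json()

# Steps 2-3: the server assigns a new or existing job; show it to the user.
# (job_uuid can be null until the system finds or creates a suitable job.)
if jr.get("job_uuid"):
    job = requests.get(API + "/jobs/" + jr["job_uuid"], headers=HEADERS).json()
    print(job["state"], job.get("output"))

# Steps 6-7: the user accepts; raising priority above zero requests execution.
requests.put(API + "/job_requests/" + jr["uuid"], headers=HEADERS,
             json={"job_request": {"priority": 2}})
</pre>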
h2. "JobRequest" schema

|Attribute|Type|Description|Discussion|Examples|
|uuid, owner_uuid, modified_by_client_uuid, modified_by_user_uuid|string|Usual Arvados model attributes|||
|created_at, modified_at|datetime|Usual Arvados model attributes|||
|name|string|Unparsed|||
|description|text|Unparsed|||
|job_uuid|uuid|The job that satisfies this job request.|Can be null if a suitable job has not yet been found or queued. Assigned by the system: cannot be modified directly by clients. If null, it can be changed by the system at any time. If not null, it can be reset to null by a client _if priority is zero_.||
|mounts|hash|Objects to attach to the container's filesystem and stdin/stdout. Keys starting with a forward slash indicate objects mounted in the container's filesystem. Other keys are given special meanings here.|We use "stdin" instead of "/dev/stdin" because literally replacing /dev/stdin with a file would have a confusing effect on many unix programs. The stdin feature only affects the standard input of the first process started in the container; after that, the usual rules apply.|<pre>{
 "/input/foo":{
  "kind":"collection",
  "portable_data_hash":"d41d8cd98f00b204e9800998ecf8427e+0"
 },
 "stdin":{
  "kind":"collection_file",
  "uuid":"zzzzz-4zz18-yyyyyyyyyyyyyyy",
  "path":"/foo.txt"
 },
 "stdout":{
  "kind":"regular_file",
  "path":"/tmp/a.out"
 }
}</pre>|
|runtime_constraints|hash|Restrict the job's access to the outside world (apart from its explicitly stated inputs and output). Each key is the name of a capability, like "internet" or "API" or "clock". The corresponding value is @true@ (the capability must be available in the job's runtime environment), @false@ (it must not), a single value, or an array of two numbers indicating an inclusive range. If a key is omitted, availability of the corresponding capability is acceptable but not necessary.|This is a generalized version of "enforce purity restrictions": it is not a claim that the job will be pure. Rather, it helps us control and track runtime restrictions, which can be helpful when reasoning about whether a given job was pure. In the most basic implementation, no capabilities are defined, and the only acceptable value of this attribute is the empty hash. (TC)Should this structure be extensible like mounts?|<pre>{
 "ram":12000000000,
 "vcpus":[1,null]
}</pre>|
|container_image|string|Docker image repository and tag, docker image hash, collection UUID, or collection PDH.|(TC)Could this be just another mount point, with target "docker_image"?||
|environment|hash|Environment variables and values that should be set in the container environment (docker run --env). This augments and (when conflicts exist) overrides environment variables given in the image's Dockerfile.|||
|cwd|string|Initial working directory, given as an absolute path (in the container) or a path relative to the WORKDIR given in the image's Dockerfile. The default is @"."@.||<pre>"/tmp"</pre>|
|command|array of strings|Command to execute in the container. Default is the CMD given in the image's Dockerfile.|To use a UNIX pipeline, like "echo foo | tr f b", or to interpolate environment variables, make sure your container image has a shell, and use a command like @["sh","-c","echo $PATH | wc"]@.||
|output_path|string|Path to a directory or file inside the container that should be preserved as the job's output when it finishes.|This path _must_ be, or be inside, one of the mount targets. For best performance, point output_path to a writable collection mount.||
|priority|number|Higher number means spend more resources (e.g., go ahead of other queued jobs, bring up more nodes). Zero means a job should not be run on behalf of this request. (Clients are expected to submit JobRequests with zero priority in order to preview the job that will be used to satisfy them.)||@0@, @1000.5@, @-1@|
|expires_at|datetime|After this time, priority is considered to be zero. If the assigned job is running at that time, the job _may_ be cancelled to conserve resources.||@null@, @2015-07-01T00:00:01Z@|
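Putting the schema together, a complete JobRequest might look like the following, shown as a Python literal. The image name, paths, and values are illustrative only, not prescribed by this draft.

<pre>
# An illustrative JobRequest combining the attributes above. Submitted with
# priority=0 it only asks the server to find or propose a satisfying job.
job_request = {
    "name": "grep the input collection",
    "description": "Example request exercising most of the schema.",
    "mounts": {
        "/input": {
            "kind": "collection",
            "portable_data_hash": "d41d8cd98f00b204e9800998ecf8427e+0",
        },
        "/out": {"kind": "collection", "writable": True},
        "stdout": {"kind": "regular_file", "path": "/out/matches.txt"},
    },
    "runtime_constraints": {"ram": 12000000000, "vcpus": [1, None]},
    "container_image": "arvados/jobs:latest",
    "environment": {"PATTERN": "foo"},
    "cwd": "/input",
    "command": ["sh", "-c", "grep -r \"$PATTERN\" ."],
    "output_path": "/out",
    "priority": 0,
}
</pre>

Note that @output_path@ points at the writable collection mount, per the performance advice in the table above.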
h2. "Job" schema

|Attribute|Type|Description|Discussion|Examples|
|uuid, owner_uuid, created_at, modified_at, modified_by_client_uuid, modified_by_user_uuid|string|Usual Arvados model attributes|||
|state|string|||<pre>"Queued"
"Running"
"Cancelled"
"Failed"
"Complete"</pre>|
|started_at, finished_at, log||Same as current job|||
|environment|hash|Must be equal to a JobRequest's environment in order to satisfy the JobRequest.|(TC)We could offer a "resolve" process here like we do with mounts: e.g., hash values in the JobRequest environment could be resolved according to the given "kind". I propose we leave room for this feature but don't add it yet.||
|cwd, command, output_path|string|Must be equal to the corresponding values in a JobRequest in order to satisfy that JobRequest.|||
|mounts|hash|Must contain the same keys as the JobRequest being satisfied. Each value must be within the range of values described in the JobRequest _at the time the Job is assigned to the JobRequest_.|||
|runtime_constraints|hash|The types of access to the outside world (apart from its explicitly stated inputs and output) available to the job when it runs/ran.|Permission/access types will change over time and it may be hard/impossible to translate old types to new. Such cases may cause old Jobs to be ineligible for assignment to new JobRequests.||
|output|string|Portable data hash of the output collection.|||
|-pure-|-boolean-|-The job's output is thought to be dependent solely on its inputs, i.e., it is expected to produce identical output if repeated.-|We want a feature along these lines, but "pure" seems to be a conclusion we can come to after examining various facts -- rather than a property of an individual job execution event -- and it probably needs something more subtle than a boolean.||
|container_image|string|Portable data hash of a collection containing the docker image used to run the job.|(TC) *If* docker image hashes can be verified efficiently, we can use the native docker image hash here instead of a collection PDH.||
|progress|number|A number between 0.0 and 1.0 describing the fraction of work done.|If a job submits jobs of its own, it should update its own progress as the child jobs progress and finish.||
|priority|number|Priority assigned by the system, taking into account the priorities of all associated JobRequests.|||
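The "must be equal" and "within the range" rules above suggest a simple reuse test. Here is a sketch, assuming a per-kind @mount_in_range@ helper which is not specified in this draft:

<pre>
# Sketch of the job-reuse rule implied by the schema: exact-match attributes
# must be equal, and every resolved mount must fall within the range its
# JobRequest mount allows. mount_in_range is a hypothetical per-kind check,
# e.g. "is this resolved commit inside the requested commit_range?".
def job_satisfies_request(job, job_request, mount_in_range):
    for attr in ("environment", "cwd", "command", "output_path"):
        if job.get(attr) != job_request.get(attr):
            return False
    if set(job["mounts"]) != set(job_request["mounts"]):
        return False
    return all(mount_in_range(job["mounts"][k], job_request["mounts"][k])
               for k in job_request["mounts"])
</pre>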
h2. Mount types

The "mounts" hash is the primary mechanism for adding data to the container at runtime (beyond what is already in the container image). Each value of the "mounts" hash is itself a hash, whose "kind" key determines the handler used to attach data to the container.

|Mount type|@kind@|Expected keys|Description|Examples|Discussion|
|Arvados data collection|@collection@|@"portable_data_hash"@ _or_ @"uuid"@ _may_ be provided. If not provided, a new collection will be created. This is useful when @"writable":true@ and the job's @output_path@ is (or is a subdirectory of) this mount target. @"writable"@ may be provided with a @true@ or @false@ to indicate the path must (or must not) be writable. If not specified, the system can choose. @"path"@ may be provided, and defaults to @"/"@.|At job startup, the target path will have the same directory structure as the given path within the collection. Even if the files/directories are writable in the container, modifications will _not_ be saved back to the original collections when the job ends.|<pre>{
 "kind":"collection",
 "uuid":"...",
 "path":"/foo.txt"
}
{
 "kind":"collection",
 "uuid":"..."
}</pre>||
|Git tree|@git_tree@|One of {@"git-url"@, @"repository_name"@, @"uuid"@} must be provided. One of {@"commit"@, @"commit_range"@} must be provided. @"path"@ may be provided; the default path is @"/"@.|At job startup, the target path will have the source tree indicated by the given revision. The @.git@ metadata directory _will not_ be available: typically the system will use @git-archive@ rather than @git-checkout@ to prepare the target directory. If a value is given for @"commit_range"@, it will be resolved to a set of commits (as described in the "ranges" section of gitrevisions(7)) and the job request will be satisfiable by any commit in that set. If a value is given for @"commit"@, it will be resolved to a single commit, and the tree resulting from that commit will be used. @"path"@ can be used to select a subdirectory or a single file from the tree indicated by the selected commit. Note that multiple commits can resolve to the same tree: for example, the file/directory given in @"path"@ might not have changed between commits A and B.|<pre>{
 "kind":"git_tree",
 "uuid":"zzzzz-s0uqq-xxxxxxxxxxxxxxx",
 "commit":"master"
}
{
 "kind":"git_tree",
 "uuid":"zzzzz-s0uqq-xxxxxxxxxxxxxxx",
 "commit_range":"bugfix^..master",
 "path":"/crunch_scripts/grep"
}</pre>|The resolved mount (found in the Job record) will have only the "kind" key and a "blob" or "tree" key indicating the 40-character hash of the git tree/blob used.|
|Temporary directory|@tmp@|@"capacity"@: capacity (in bytes) of the storage device.|At job startup, the target path will be empty. When the job finishes, the content will be discarded. This will be backed by a memory-based filesystem where possible.|<pre>{
 "kind":"tmp",
 "capacity":100000000000
}</pre>||
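For the @git_tree@ kind, "resolved to a set of commits" can be computed with git itself. A sketch, assuming a local clone and the range semantics of gitrevisions(7):

<pre>
# Sketch: resolve a git_tree mount's "commit_range" to the set of commits that
# would satisfy the job request. Assumes repo_dir is a local clone.
import subprocess

def satisfying_commits(repo_dir, commit_range):
    out = subprocess.check_output(["git", "rev-list", commit_range],
                                  cwd=repo_dir)
    return set(out.decode().split())

# Example: the system may pick any commit in
# satisfying_commits(repo, "bugfix^..master") and record the corresponding
# tree hash in the Job's resolved mount.
</pre>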
* When @state="Preview"@, all attributes can be updated. * When @state="Request"@, only @priority@ and @state@ can be updated. * When @state="Done"@, no attributes can be updated. @state@ cannot be null. The following state transitions are the only ones permitted. * Preview → Request * Preview → Done * Request → Done h3. arvados.v1.jobs.create and .update These methods are not callable except by system processes. h3. arvados.v1.jobs.progress This method permits specific types of updates while a job is running: update progress, record success/failure. Q: [How] can a client submitting job B indicate it shouldn't run unless/until job A succeeds? h2. Debugging Q: Need any infrastructure debug-logging controls in this API? Q: Need any job debug-logging controls in this API? Or just use environment vars? h2. Scheduling and running jobs Q: When/how should we implement If two users submit identical pure jobs and ask to reuse existing jobs, whose token does the job get to use? * Should pure jobs be run as a hooks pseudo-user that is given read access to the relevant objects for futures/promises: e.g., "run job Y when the duration of the job? (This would make it safer to share jobs X0, X1, -- see #5823) Q: If two users submit identical pure jobs with different priority, which priority is used? * Choices include "whichever is greater" and X2 have finished"? "sum". Q: If two users submit identical pure jobs and one cancels -- or one user submits two identical jobs and cancels one -- does the work stop, or continue? What do the job records look like after this?
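On the priority question above: the life-cycle walkthrough earlier (JRA at 2, JRB at 1, job runs at 2; cancelling JRA leaves 1) behaves like the "whichever is greater" choice. A sketch of that policy, for discussion only:

<pre>
# "Whichever is greater": a job's priority is the max across its JobRequests.
# This matches the life-cycle walkthrough; "sum" remains the other option.
def job_priority(request_priorities):
    # A job with no active (nonzero-priority) requests should not run.
    return max(request_priorities, default=0)

assert job_priority([2, 1]) == 2
assert job_priority([0, 1]) == 1  # JRA cancelled (priority 0); JRB still wants it
assert job_priority([]) == 0
</pre>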