Feature #21074
Updated by Peter Amstutz 23 days ago
1. Add "collection_uuid" to workflows table
2. Update API revision
3. When collection_uuid is set, workflows controller rejects updates
4. When a collection with @type: workflow@ is updated, search for workflows with the corresponding collection_uuid and synchronize name/description/definition/owner_uuid
5. The group contents API adds support for @include=collection_uuid@
6. The workflow query filter adds support for joining on the collection in order to query on properties; e.g. it should be possible to do @[["collection.properties.category", "=", "WGS"]]@ so clients can query/filter workflow records by collection properties.
7. Deleting the collection should delete the linked workflow record
We need to define exactly how to assemble the definition. The idea is to put the input and output sections in properties on the collection; assembling the wrapper is then pretty straightforward (it looks like we need hints/requirements as well). This is the most relevant code in arvados-cwl-runner:
<pre>
wrapper = {
    "class": "Workflow",
    "id": "#main",
    "inputs": newinputs,
    "outputs": [],
    "steps": [step]
}

for i in main["inputs"]:
    step["in"].append({
        "id": "#main/step/%s" % shortname(i["id"]),
        "source": "#main/%s" % shortname(i["id"])
    })

for i in main["outputs"]:
    step["out"].append({"id": "#main/step/%s" % shortname(i["id"])})
    wrapper["outputs"].append({"outputSource": "#main/step/%s" % shortname(i["id"]),
                               "type": i["type"],
                               "id": "#main/%s" % shortname(i["id"])})

wrapper["requirements"] = [{"class": "SubworkflowFeatureRequirement"}]
if main.get("requirements"):
    wrapper["requirements"].extend(main["requirements"])
if hints:
    wrapper["hints"] = hints

# Schema definitions (this lets you define things like record
# types) require special handling.
for i, r in enumerate(wrapper["requirements"]):
    if r["class"] == "SchemaDefRequirement":
        wrapper["requirements"][i] = fix_schemadef(r, main["id"], tool.doc_loader.expand_url,
                                                   merged_map, jobmapper, col.portable_data_hash())

doc = {"cwlVersion": "v1.2", "$graph": [wrapper]}
if git_info:
    for g in git_info:
        doc[g] = git_info[g]
</pre>
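If the input/output sections (plus hints/requirements) were stored as collection properties, the server could synthesize the wrapper without parsing the full CWL. A minimal sketch of that idea, assuming hypothetical property keys (@cwl_inputs@, @cwl_outputs@, @cwl_hints@, @cwl_requirements@, @cwl_entrypoint@ are illustrative, not a settled schema):

```python
def synthesize_definition(properties, portable_data_hash):
    """Build a single-step wrapper workflow that runs the real workflow
    from Keep, using metadata stored in collection properties.
    Property key names here are hypothetical."""
    inputs = properties.get("cwl_inputs", [])
    outputs = properties.get("cwl_outputs", [])
    entry = properties.get("cwl_entrypoint", "workflow.json")

    def shortname(ident):
        # Keep only the fragment after the last '/' or '#'
        return ident.split("/")[-1].split("#")[-1]

    step = {
        "id": "#main/step",
        # Run the real workflow directly from Keep
        "run": "keep:%s/%s" % (portable_data_hash, entry),
        "in": [{"id": "#main/step/%s" % shortname(i["id"]),
                "source": "#main/%s" % shortname(i["id"])}
               for i in inputs],
        "out": [{"id": "#main/step/%s" % shortname(o["id"])}
                for o in outputs],
    }
    wrapper = {
        "class": "Workflow",
        "id": "#main",
        "inputs": inputs,
        "outputs": [{"id": "#main/%s" % shortname(o["id"]),
                     "type": o["type"],
                     "outputSource": "#main/step/%s" % shortname(o["id"])}
                    for o in outputs],
        "steps": [step],
        "requirements": ([{"class": "SubworkflowFeatureRequirement"}]
                         + properties.get("cwl_requirements", [])),
    }
    if properties.get("cwl_hints"):
        wrapper["hints"] = properties["cwl_hints"]
    return {"cwlVersion": "v1.2", "$graph": [wrapper]}
```

This mirrors the a-c-r code above, but takes everything from the properties dict and the collection's portable data hash instead of from the parsed tool.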
h2. Old discussion
Idea: the "workflow" table is an odd duck. It stores a single data string in the "definition" field, but doesn't support properties, versioning, trashing, etc. We want these things for workflows but we don't want to duplicate all the logic. It would be better if we could just store workflows in collections.
However, eliminating the "workflows" API endpoint would be disruptive, as Workbench and arvados-cwl-runner both rely on it. (We can synchronize workbench updates but people frequently use older versions of arvados-cwl-runner with newer API servers).
Starting from Arvados 2.6.0, @--create-workflow@ works by creating a collection (of @type: workflow@) with all the workflow files, and then only puts a minimal wrapper workflow into the @definition@ field of the workflow record. The wrapper consists of a single step workflow which runs the real workflow from keep (using a @keep:@ reference).
Workbench needs the following:
* The entry point (currently, it writes @definition@ to a generic @workflow.json@ file and runs that)
* The schema for inputs / outputs (currently extracted from @definition@)
* Metadata such as git commit information (currently extracted from @definition@)
* The actual workflow definition (currently extracted from @definition@ by looking for a single step which with a @keep:@ reference)
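One way to make all of this available without Workbench parsing CWL is to copy it into collection properties at registration time. A hypothetical layout (the @arv:@-prefixed key names are illustrative only; nothing here is a decided schema):

```python
# Sketch of properties a "type: workflow" collection could carry so
# that Workbench never has to parse the CWL itself. Key names other
# than "type" are hypothetical.
workflow_collection_properties = {
    "type": "workflow",                 # marks the collection as a workflow
    "arv:entrypoint": "workflow.json",  # which file in the collection to run
    "arv:cwl_inputs": [                 # input schema, drives the launch form
        {"id": "#main/reference", "type": "File"},
    ],
    "arv:cwl_outputs": [                # output schema
        {"id": "#main/report", "type": "File"},
    ],
    "arv:git_commit": "0123abc",        # provenance metadata
}
```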
It seems pretty straightforward that we should create container requests to run CWL directly from the @type: workflow@ collection, because that's already 90% of how it works now.
In other words, we probably have enough in the Collection record already to identify and launch workflows without any additional support from a-c-r (but requiring a bit of extra elbow grease in Workbench).
I think there are two main questions to answer:
# Should Workbench be expected to interact deeply with the underlying CWL, or should we copy all the information we expect Workbench to need into properties? (at least one TypeScript library for interacting with CWL does exist)
# What do we do with the legacy workflows endpoint? We have at least one user who launches workflows by workflow UUID and who would be interrupted if the workflow endpoint just went away.
Also: what do we do with @template_uuid@?
h3. Points that came up in engineering discussion 2025-03-12
* How much can we disrupt user processes around workflows? Does @create-workflow@ / @update-workflow@ with an old a-c-r version need to work indefinitely? What about launching workflows using @arvwf:@?
* Does a workflow created by a new a-c-r need to be runnable with @arvwf:@ by an old one?
* Should @template_uuid@ be updated automatically by migration?
* Is it better to make the workflows API virtual, or phase it out?
** Possible virtual API: workflow record has its columns scrubbed and it is just a pointer to a collection. Create/update modifies the underlying collection, "get" fetches collection record and returns fields as-is, while synthesizing a "definition" field based on the collection properties (CWL inputs/outputs).
** Alternately: maybe the "workflows" table can be maintained as-is while using collections for workflows is built out, and then users are encouraged to migrate away from using "workflows" table identifiers (by printing warnings and stuff) so it can be phased out over several versions?
h3. Another migration idea (follow up to eng discussion)
Add a @collection_uuid@ field to the workflow, which is the collection with the workflow definition and all the metadata.
If @collection_uuid@ is set, then the workflow is linked to a collection. Once set, @collection_uuid@ cannot be changed. Subsequently, the workflow @name@, @description@, and @definition@ are synchronized with the collection.
Old versions of arvados-cwl-runner that do not set @collection_uuid@ will see no change to how workflow records work.
New versions of arvados-cwl-runner will create the collection with the workflow files (which is what they do already) and then create the workflow with @collection_uuid@ set. To update the workflow, they only need to update the collection associated with it.
h4. If @collection_uuid@ is empty
The behavior of the workflow API is exactly the same.
h4. If @collection_uuid@ is not empty
The @name@ and @description@ fields are synchronized with the collection record; updating the collection record updates the workflow record. This means updating a collection needs to check for a linked workflow and update the workflow record in the same transaction.
The @definition@ should be synthesized from metadata stored on the collection (inputs/outputs/requirements, which are all things that Workbench already needs to have on hand to launch workflows). When the collection is updated, the synchronization method constructs a new value for @definition@ which is set on the workflow record.
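The synchronization step could look roughly like this. A sketch only: the records are plain dicts standing in for model rows, and @build_definition@ is a placeholder for whatever method synthesizes the wrapper from collection metadata:

```python
def build_definition(collection):
    # Placeholder: a real implementation would synthesize the CWL
    # wrapper from the collection's properties and portable_data_hash.
    return "synthesized from %s" % collection["portable_data_hash"]

def sync_workflow_from_collection(collection, workflow):
    """Propagate collection fields to the linked workflow record.
    Must run in the same transaction as the collection update.
    collection/workflow are plain dicts standing in for model rows."""
    assert workflow["collection_uuid"] == collection["uuid"]
    workflow["name"] = collection["name"]
    workflow["description"] = collection["description"]
    workflow["owner_uuid"] = collection["owner_uuid"]
    # Rebuild the definition from metadata stored on the collection.
    workflow["definition"] = build_definition(collection)
    return workflow
```

Because updates only flow collection → workflow, there is no conflict-resolution problem; the workflow record is effectively a materialized view.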
It's probably simpler if this only goes in one direction, e.g. if @collection_uuid@ is non-empty then the @workflow@ record can no longer be updated through the API directly, but only by updating the backing collection which indirectly updates the workflow record.
The group contents API adds support for @include=collection_uuid@ so that clients can fetch both the workflow record and associated collection record in the same API request.
The workflow query filter adds support for joining on the collection in order to query on properties; e.g. it should be possible to do @[["collection.properties.category", "=", "WGS"]]@ so clients can query/filter workflow records by collection properties.
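Server-side, such a filter would join workflows to collections and compare against the collection's properties. The matching semantics might look like this sketch (the attribute prefix and supported operators are assumptions, not settled behavior):

```python
def matches_collection_property_filter(collection_properties, filter_triple):
    """Evaluate a proposed ["collection.properties.<key>", op, value]
    filter against the linked collection's properties dict.
    Only "=" and "!=" are sketched here."""
    attr, operator, operand = filter_triple
    prefix = "collection.properties."
    if not attr.startswith(prefix):
        raise ValueError("not a collection properties filter: %r" % attr)
    key = attr[len(prefix):]
    actual = collection_properties.get(key)
    if operator == "=":
        return actual == operand
    if operator == "!=":
        return actual != operand
    raise ValueError("unsupported operator: %r" % operator)
```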
@owner_uuid@ is required to match between the workflow and the collection. Changing @owner_uuid@ on the linked collection changes it for the workflow as well.
If the linked collection record is not accessible (e.g. it is trashed, deleted, or forbidden) the workflow should not be visible either. Deleting the collection record should delete the linked workflow record.
If the workflow collection included something like @arv:depends@ (#22565) then copying/moving could helpfully copy/move dependencies along with it, but that's not directly in scope.
Finally, for @template_uuid@ container requests, we continue to link to the workflow record by uuid, but add a new property @arv:workflow_pdh@ so we know precisely which version of the code was run.
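For example, a container request launched from a workflow might carry both references. The @arv:workflow_pdh@ property name comes from this proposal; the surrounding values are illustrative placeholders:

```python
# Sketch: the container request still references the workflow record,
# while the new property pins the exact collection version that ran.
container_request_properties = {
    "template_uuid": "zzzzz-7fd4e-0123456789abcde",               # workflow record uuid (placeholder)
    "arv:workflow_pdh": "d41d8cd98f00b204e9800998ecf8427e+0",     # exact workflow collection version
}
```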
h4. Things I like about this solution:
Old clients get the same behavior.
New clients can change their behavior incrementally.
We can phase in support in Workbench: all code will continue to query for workflow records, but can be incrementally migrated over to using group contents (with @include=collection_uuid@). Extracting properties from the linked collection record, listing past versions, and so forth can be implemented for workflow records that use @collection_uuid@.