
h1. Storing and Organizing Data 

 Rough demo outline 

 # Automatic ingest from a POSIX directory to Keep 
 #* Ingestor's access to staging area (could be remote NFS or sshfs mount) is arranged ahead of time 
 #* 3rd-party's access to staging area is arranged ahead of time 
 #* Ingestor runs in a screen session. Command line parameters provide project (group/folder) ID and a tag that indicates "this is for *me* to ingest". 
 #* Someone ("3rd-party") uploads some files to the staging area via SFTP or whatever 
#* 3rd-party makes an API call to the "ingest-notify app". This might be a short bash script culminating in a curl command. In the API call, the 3rd-party provides a label (e.g., a sample ID), a list of files and checksums, and an arbitrary "properties" hash containing whatever the 3rd-party wants.
 #* Ingest-notify app generates a "data in staging area is ready to ingest" event via API server. 
#* Ingestor waits for a "data in staging area is ready to ingest" notification via API server.
 #* Ingestor reads the data from the staging area and writes it into Keep (creates one collection per API call made by 3rd-party). 
#* Ingestor (or arv-put on behalf of the ingestor?) makes API calls while working, to indicate progress (bytes done/todo): @arvados.v1.logs.create(object_uuid=uuid_of_upload_object)@ -- see the sketch after this item.
 #* In Workbench the imported Datasets appear as Collections in the designated project 
 #* After data has been copied into Keep, ingestor deletes the files from the staging area (if @--delete-after@ flag given). 
 ... 
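A minimal sketch of the progress-reporting call mentioned in the item above, using the Arvados Python SDK. The @upload-progress@ event type and the @bytes_done@/@bytes_total@ property names are illustrative assumptions, not a settled design:

<pre><code class="python">
import arvados

# Assumes ARVADOS_API_HOST and ARVADOS_API_TOKEN are set in the environment.
api = arvados.api('v1')

def report_progress(upload_uuid, bytes_done, bytes_total):
    # One log entry per progress tick; Workbench can render the latest
    # entry for each unfinished upload as a progress bar.
    api.logs().create(body={'log': {
        'object_uuid': upload_uuid,        # the uuid_of_upload_object from the outline
        'event_type': 'upload-progress',   # illustrative event name
        'properties': {'bytes_done': bytes_done, 'bytes_total': bytes_total},
    }}).execute()
</code></pre>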
 # My data gets into the right project as specified by the uploader (API call) 
#* How is the staging-area ↔ project mapping specified, and how/where is it encoded/stored? (One possibility is sketched below.)
 ... 
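One possible shape for the 3rd-party's notify call, sketched in Python. The endpoint URL, the field names, and the idea of naming the destination project directly in the payload are all assumptions to be confirmed:

<pre><code class="python">
import requests

# Hypothetical ingest-notify endpoint; a short bash script ending in a
# curl command would work equally well, as noted above.
NOTIFY_URL = 'https://ingest-notify.example.com/ready'

payload = {
    'label': 'SAMPLE-001',                           # e.g., a sample ID
    'project_uuid': 'zzzzz-j7d0g-000000000000000',   # destination project (assumption)
    'files': [
        {'path': 'reads/SAMPLE-001_R1.fastq.gz', 'md5': 'd41d8cd98f00b204e9800998ecf8427e'},
        {'path': 'reads/SAMPLE-001_R2.fastq.gz', 'md5': 'd41d8cd98f00b204e9800998ecf8427e'},
    ],
    'properties': {'flowcell': 'FC123', 'lane': 3},  # whatever the 3rd-party wants
}

resp = requests.post(NOTIFY_URL, json=payload, timeout=30)
resp.raise_for_status()
</code></pre>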
 # Subscribe to notifications (by email and/or Workbench dashboard): when files start/finish uploading; when files are shared with customer; when files are downloaded by third party 
 #* For now, use existing Logs table + automatic logging of create/update/delete operations + "progress" event from arv-put (see above) 
 #* "Show project" page shows recent activity: one progress bar for each unfinished upload, one entry for each start/finish event. 
#* Dashboard page shows recent activity from all of my projects (see the query sketch after this item).
 ... 
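A sketch of the underlying query for the "Show project" and dashboard views, assuming the Logs API exposes @object_owner_uuid@ as a filterable column:

<pre><code class="python">
import arvados

api = arvados.api('v1')

def recent_activity(project_uuid, limit=20):
    # Most recent create/update/delete and progress events for objects
    # owned by this project, newest first.
    return api.logs().list(
        filters=[['object_owner_uuid', '=', project_uuid]],
        order='created_at desc',
        limit=limit,
    ).execute()['items']
</code></pre>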
# Move/copy collections between projects (Project RX1234, or Customer X’s files), tag them in the destination project with the appropriate string (e.g., sample ID) -- defaulting to the existing tag used in the source project (e.g., provided at time of upload). See the sketch after this item.
 #* UI for presenting Groups as Projects/Folders: create, view, rename, share, delete 
 #* UI for copying/moving objects between folders 
 #* How to avoid confusion about "is this one object in two places, or are there two objects?" Note GDocs has a bit of both, "My Drive" / "Shared with me" vs. regular folders 
 ... 
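A sketch of one way "move and tag" could be encoded, assuming an object lives in a project by having that project (group) as its @owner_uuid@, and tags are links with @link_class@ "tag" -- both assumptions about the final design:

<pre><code class="python">
import arvados

api = arvados.api('v1')

def move_and_tag(collection_uuid, dest_project_uuid, tag_name):
    # "Move": reparent the collection under the destination project (group).
    api.collections().update(
        uuid=collection_uuid,
        body={'collection': {'owner_uuid': dest_project_uuid}},
    ).execute()
    # Tag it in the destination project, e.g., with the sample ID provided
    # at upload time.
    api.links().create(body={'link': {
        'link_class': 'tag',
        'name': tag_name,
        'head_uuid': collection_uuid,
    }}).execute()
</code></pre>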
# Share project with other users/groups (a permission-link sketch follows this item)
 ... 
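A sketch of sharing through the existing permission-link mechanism: granting a user or group read access to a project is a single link record. The @can_read@ / @can_write@ / @can_manage@ names follow current Arvados usage, but treat this as a sketch rather than the final UI story:

<pre><code class="python">
import arvados

api = arvados.api('v1')

def share_project(project_uuid, grantee_uuid, level='can_read'):
    # grantee_uuid may be a user or a group; level is one of
    # can_read / can_write / can_manage.
    return api.links().create(body={'link': {
        'link_class': 'permission',
        'name': level,
        'tail_uuid': grantee_uuid,   # who gets access
        'head_uuid': project_uuid,   # what they get access to
    }}).execute()
</code></pre>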
 # “Anyone with this secret link can view/download” mode. Enable, disable, change magic link. Use cases: browser + “wget -r”. 
#* Perhaps the secret in the secret link is an ApiClientAuthorization token, belonging to the person creating the link, scoped to a single project/collection (see the sketch after this item)
 #* How do we implement "Anonymous user, not logged in"? 
 ... 
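A sketch of that scoped-token idea: mint an ApiClientAuthorization whose scopes only allow GET on one collection, and put its token in the secret link. The scope strings follow the "METHOD /path" pattern Arvados uses, but the exact scope set needed for Workbench pages and Keep downloads (and the anonymous-user question above) is still open:

<pre><code class="python">
import arvados

api = arvados.api('v1')

def make_secret_link_token(collection_uuid):
    # The token belongs to the person creating the link and can only
    # read this one collection via the API.
    auth = api.api_client_authorizations().create(body={
        'api_client_authorization': {
            'scopes': [
                'GET /arvados/v1/collections/' + collection_uuid,
                'GET /arvados/v1/collections/' + collection_uuid + '/',
            ],
        },
    }).execute()
    return auth['api_token']
</code></pre>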
 # See log/overview of who has accessed your shared data (incl. “anonymous user” if using secret-link-to-share); when shared/unshared; when each upload started/finished -- for a single project, and across all projects 
 ... 
 # Pilot alternate Workbench group/dashboard view 
 ... 


 h2. Retrospective notes 

 * Went well - still some merge-race at the end 
 * Lots of branches going in 
 * Not a lot of merge conflicts 
 * Big spec change (rejecting "ingestor" story in favor of future "remote arv-put") 
 * Some in-sprint deployment dependency stuff (crunch+docker, websockets) 
* Please tag commits with story numbers. Currently we use "refs #1234" for merges; this works well because Redmine understands it. (Use "refs #1234" for individual commits too?)
 * Consider extracting a task into a story if it grows into its own thing (e.g., token handling as part of collection sharing) 
* In sprint review, include 2 more agenda items: a summary of things not done, and a high-level overview of the next sprint.