Idea #3414

[Keep] Keep Server accepts list of blocks to pull from other servers

Added by Misha Zatsman over 10 years ago. Updated about 10 years ago.

Status: Resolved
Priority: Normal
Assigned To: Tim Pierce
Category: Keep
Target version:
Start date: 08/21/2014
Due date:
Story points: 2.0

Description

The Keep server should accept a JSON request body at PUT /pull.

This request body will contain an ordered list of block locators for blocks to pull from other keep servers.

For each block, it will contain an ordered list of keep server addresses (host & port) to pull the block from.

When the keep server has resources available, it should request these blocks in order, trying the servers in the order given, and store the block returned in its own storage.

Each pull list received overwrites all previous pull lists.
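
For illustration, the request body might look something like this (the field names and address format are a sketch only; the authoritative format is in the design doc linked below):

[
  {
    "locator": "e4d909c290d0fb1ca068ffaddf22cbd0",
    "servers": [
      "http://keep1.example.org:25107",
      "http://keep2.example.org:25107"
    ]
  },
  {
    "locator": "acbd18db4cc2f85cedef654fccc4a4d8",
    "servers": [
      "http://keep3.example.org:25107"
    ]
  }
]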

This is written out at the bottom of:
https://arvados.org/projects/orvos-private/wiki/Keep_Design_Doc#Keep-Server

Prioritization:
  1. Accept a list of blocks-to-pull.
  2. Log the new list on stderr.
  3. Replace the current list in memory (if any) with the new list.
  4. Process the list by pulling data from remote servers.
Implementation notes:
  • Currently keepstore does not check whether the client's API token itself is valid. Therefore, when a keepstore retrieves blocks from other keepstores, it can use a made-up API token (like a string of fifty "x") in the Authorization header of its request (and, of course, use the same token to generate a permission hint). (See note #8 below for more detail.)
  • Suggest: one goroutine is dedicated to owning the current pull list. Other goroutines use its channel to perform operations like "replace list with this one", "get list", "remove data block X from list". Possibly useful example: https://gobyexample.com/stateful-goroutines
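
A minimal sketch of that stateful-goroutine idea (all names and types here are illustrative, not existing keepstore code):

type pullItem struct {
    Locator string   // block hash to fetch
    Servers []string // keep server addresses to try, in order
}

type listOp struct {
    replace []pullItem      // non-nil: replace the current list
    get     chan []pullItem // non-nil: reply with a copy of the current list
    remove  string          // non-empty: drop all entries for this locator
}

// runPullList owns the current pull list; all other goroutines manipulate
// it by sending operations on the ops channel.
func runPullList(ops <-chan listOp) {
    var current []pullItem
    for op := range ops {
        switch {
        case op.replace != nil:
            current = op.replace
        case op.get != nil:
            op.get <- append([]pullItem(nil), current...)
        case op.remove != "":
            var kept []pullItem
            for _, item := range current {
                if item.Locator != op.remove {
                    kept = append(kept, item)
                }
            }
            current = kept
        }
    }
}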

Subtasks 2 (0 open, 2 closed)

Task #3697: Review 3414-keep-pull-handler (Resolved, Tom Clegg, 08/21/2014)
Task #3650: Implement PullHandler (Resolved, Tim Pierce, 08/21/2014)

Related issues

Blocks Arvados - Idea #3408: [Keep] Implement Production Data Manager (Resolved, Peter Amstutz, 07/29/2014)
#1

Updated by Tim Pierce over 10 years ago

  • Category set to Keep
  • Story points set to 2.0
#2

Updated by Ward Vandewege over 10 years ago

  • Target version set to 2014-08-27 Sprint
#3

Updated by Ward Vandewege over 10 years ago

  • Subject changed from "Keep Server accepts list of blocks to pull from other servers" to "[Keep] Keep Server accepts list of blocks to pull from other servers"
#4

Updated by Tim Pierce over 10 years ago

  • Assigned To set to Tim Pierce
#5

Updated by Tim Pierce over 10 years ago

In order for one keepstore to pull blocks from another keepstore, it will need to sign the locator. How can it do that? I don't think we resolved this question (at least, I don't see an answer in my notes).

One possibility is to have Data Manager sign the block locators on the pull list with its own token, and have keepstore honor any permission hint that was signed with the data_manager_token. (The signature TTL would have to be long enough to take into account that Keep may not fetch blocks on the pull list right away.)

Other ideas? (A special secret GET command? An additional secret token? Not sure any of these give us something more we don't already get with the above approach.)

#6

Updated by Tim Pierce over 10 years ago

Another issue to clarify: if keepstore tries every server on the list for a particular block, and every one of the servers fails, what should it do? There is no one to report the error to at that point.

I think the best way to proceed here is for keepstore to log the failure and skip that block, on the assumption that one of two things has happened:

  1. All other servers have deleted the unreplicated blocks by the time keepstore gets around to the pull list. In this case there is nothing to replicate.
  2. If not, then presumably we will just try again anyway the next time Data Manager delivers a new pull list. There is no advantage in keepstore trying to do its own recovery.
#7

Updated by Tom Clegg about 10 years ago

  • Description updated (diff)
#8

Updated by Tom Clegg about 10 years ago

Tim Pierce wrote:

In order for one keepstore to pull blocks from another keepstore, it will need to sign the locator. How can it do that? I don't think we resolved this question (at least, I don't see an answer in my notes).

Currently keepstore does not perform any validation on API tokens (just permission signatures). So this should work:

  • Generate a random string to use as an API token (either at startup or at time of use -- doesn't matter)
  • When pulling a block from a remote keepstore:
    • Generate a signature using {random-API-token, block-hash, now-plus-five-minutes [1], the-usual-blob-signing-key}
    • Include the random API token in the Authorization header of the request.

This should pass validation at the remote keepstore.

[1] This puts a limit on how much clock skew is acceptable before breaking everything. Clock skew is expected to be much less than 1 second in production.

There seems to be no advantage to using the Data Manager token rather than a random token. (Better to avoid using/sending that real secret anywhere it isn't needed.)
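
A rough sketch of the pulling side, for illustration. The permission-hint layout and the exact bytes fed into the HMAC below are assumptions; the real format is whatever keepstore's existing signing code uses.

import (
    "crypto/hmac"
    "crypto/sha1"
    "fmt"
    "io/ioutil"
    "net/http"
    "strings"
    "time"
)

// signLocator is hypothetical; the concatenation fed to the HMAC is an
// assumption, not the documented keepstore signing format.
func signLocator(blobHash, apiToken, signingKey string, expiry time.Time) string {
    exp := fmt.Sprintf("%x", expiry.Unix())
    mac := hmac.New(sha1.New, []byte(signingKey))
    fmt.Fprintf(mac, "%s@%s@%s", blobHash, apiToken, exp)
    return fmt.Sprintf("%s+A%x@%s", blobHash, mac.Sum(nil), exp)
}

// pullBlock fetches one block from a remote keepstore using a made-up
// API token, as described above.
func pullBlock(remote, blobHash, signingKey string) ([]byte, error) {
    token := strings.Repeat("x", 50)
    signed := signLocator(blobHash, token, signingKey, time.Now().Add(5*time.Minute))
    req, err := http.NewRequest("GET", remote+"/"+signed, nil)
    if err != nil {
        return nil, err
    }
    // The "OAuth2" scheme here is also an assumption about what keepstore expects.
    req.Header.Set("Authorization", "OAuth2 "+token)
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("GET %s: %s", signed, resp.Status)
    }
    return ioutil.ReadAll(resp.Body)
}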

#9

Updated by Tom Clegg about 10 years ago

  • Description updated (diff)
#10

Updated by Tom Clegg about 10 years ago

Tim Pierce wrote:

  1. All other servers have deleted the unreplicated blocks by the time keepstore gets around to the pull list. In this case there is nothing to replicate.
  2. If not, then presumably we will just try again anyway the next time Data Manager delivers a new pull list. There is no advantage in keepstore trying to do its own recovery.

This sounds about right. The most important piece of information is not whether keepstore thinks it failed or succeeded to pull the blocks at time X, but rather whether it has the blocks at time Y when Data Manager retrieves a new index from keepstore. I think a reasonable algorithm would be:

  • Walk the pull list. Attempt each {block,server} entry one time.
  • When a pull succeeds, delete all pull list entries for that block.
  • After each attempt, check whether the pull list has been refreshed/replaced: if so, start from the top.
  • If the list is empty, block until a new list arrives.
  • Repeat ad infinitum.
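
A sketch of that loop in Go (getList, listChanged, removeBlock, pullAndStore and the newList channel are all hypothetical helpers, not existing keepstore code):

import "log"

// pullWorker walks the current pull list, trying each {block, server}
// entry once and starting over whenever a new list arrives.
func pullWorker(newList <-chan struct{}) {
    for {
        list := getList()
        if len(list) == 0 {
            <-newList // block until Data Manager delivers a new list
            continue
        }
    List:
        for _, entry := range list {
            for _, server := range entry.Servers {
                err := pullAndStore(server, entry.Locator)
                if err == nil {
                    removeBlock(entry.Locator) // drop every entry for this block
                } else {
                    log.Printf("pull %s from %s failed: %v", entry.Locator, server, err)
                }
                if listChanged() {
                    break List // list was refreshed/replaced: start from the top
                }
                if err == nil {
                    break // done with this block, move on to the next entry
                }
            }
        }
    }
}
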
#11

Updated by Tom Clegg about 10 years ago

Priorities/requests:

  • In handlers:
    • Log when a "pull" request is received (at the top of PullHandler).
    • If the request is valid, authorized, etc., log the content of the new pull list.
  • In replicator:
    • rename Pull() to ReplaceList()
    • rename Dump() to GetList()
    • rename Replicator/replica.go to PullList/pull_list.go, and update comment "fulfills replication pull requests" → "maintains a list of pull requests".

At some point (not right now) I think we can rename the PullList to something more like todo_list, with operations like GetList, ReplaceList, and RemoveItem. (We need the same pattern for processing our "delete" list, and it would be nicer to reuse code rather than copy-paste-search-replace.)

#12

Updated by Tom Clegg about 10 years ago

  • Status changed from New to In Progress
#13

Updated by Tim Pierce about 10 years ago

New revision at 20fec88

Added logging for both successful and unsuccessful pull requests.

Renamed replicator.Replicator to pull_list.Manager, with interface:

  • pull_list.NewManager()
  • pull_list.SetList(newlist)
  • pull_list.GetList()
  • pull_list.Close()

Does this work for you? I don't want to split hairs on naming a package that may be renamed in a matter of weeks anyway, but I really expect confusion if both the request itself and the code package that manages it are referred to as a "pull list".
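
For illustration, handler-side use of that interface might look roughly like this (a sketch only; the pull_list.Request type, the JSON decoding, and the handler signature are assumptions, not the actual branch code):

import (
    "encoding/json"
    "log"
    "net/http"
)

func PullHandler(resp http.ResponseWriter, req *http.Request, mgr *pull_list.Manager) {
    var plist []pull_list.Request
    if err := json.NewDecoder(req.Body).Decode(&plist); err != nil {
        http.Error(resp, "bad request: "+err.Error(), http.StatusBadRequest)
        return
    }
    log.Printf("%s %s: received %v", req.Method, req.URL, plist)
    mgr.SetList(plist) // replaces any previously received list
}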

#14

Updated by Tim Pierce about 10 years ago

Tom Clegg wrote:

At some point (not right now) I think we can rename the PullList to something more like todo_list, with operations like GetList, ReplaceList, and RemoveItem. (We need the same pattern for processing our "delete" list, and it would be nicer to reuse code rather than copy-paste-search-replace.)

Per our discussion this afternoon: as far as I can tell, most implementations of a "generic" pattern do use interface{} as a generic object type after all. e.g. http://corte.si/posts/code/go/golang-practicaly-beats-purity/

#16

Updated by Tim Pierce about 10 years ago

Tom Clegg wrote:

Handy example: http://golang.org/src/pkg/container/list/list.go

That's a great example. In fact, it would probably make sense for us to use a container.List for the pull list and trash handler, since they're only going to treat the data as ordered lists anyway.
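
For what it's worth, walking a pull list kept in a container/list.List would look roughly like this (pullRequest and processed are hypothetical; list elements are interface{} values, so a type assertion is needed on the way out):

import "container/list"

// drain walks the list, removing entries as they are processed.
func drain(pending *list.List) {
    for e := pending.Front(); e != nil; {
        next := e.Next() // grab the next element before a possible Remove
        if processed(e.Value.(pullRequest)) {
            pending.Remove(e)
        }
        e = next
    }
}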

#17

Updated by Tom Clegg about 10 years ago

Tim Pierce wrote:

New revision at 20fec88

Added logging for both successful and unsuccessful pull requests.

Great. I like that we actually get to see whether the request succeeded, unlike the GET and PUT logs!

We don't seem to test whether anything gets logged (or do I just not know where to look), which is probably OK for now. So instead I'll ask an embarrassing question: does %s here really output the whole data structure? (That seems so convenient I hesitate to believe it!)

  • log.Printf("%s %s: received %s\n", req.Method, req.URL, plist)

Renamed replicator.Replicator to pull_list.Manager, with interface:

  • pull_list.NewManager()
  • pull_list.SetList(newlist)
  • pull_list.GetList()
  • pull_list.Close()

Does this work for you? I don't want to split hairs on naming a package that may be renamed in a matter of weeks anyway, but I really expect confusion if both the request itself and the code package that manages it are referred to as a "pull list".

True, there are now three things called "pull list": the payload sent by Data Manager with a PUT /pull request; the interface to the thread-safe data structure; and the singleton instance of that data structure. This will reduce to two when we generalize the thread-safe data structure. Meanwhile, "singleton has same name as single-purpose data structure" seems livable.

To me the term "manager" looks confusing, and the misleading comment in keepstore.go suggests it's not just me:
  • // The pull list manager is responsible for pulling blocks from other
    // Keep servers to ensure replication. When Keep receives a new "pull
    // list" from Data Manager, the pull manager is responsible for fetching
    // blocks on the list.
    var pullmgr *pull_list.Manager
    

What we're actually creating here is a list storage unit with thread-safe access. It is not, in fact, responsible for pulling blocks from other Keep servers -- all it does is store a list.

Rather than flail on the package/module names, let's just change the comment so at least it doesn't come right out and lie to us. Something like:
  • // The pull list manager is a singleton pull list (a list of blocks
    // that the current keepstore process should be pulling from remote
    // keepstore servers in order to increase data replication) with
    // atomic update methods that are safe to use from multiple
    // goroutines.
    

BTW, is there a reason pull_list.go gets its own directory and has to be imported explicitly, while (for example) volume.go just sits next to keepstore.go and gets used automatically? Just wondering...

Thanks!

#18

Updated by Tim Pierce about 10 years ago

Tom Clegg wrote:

Tim Pierce wrote:

New revision at 20fec88

Added logging for both successful and unsuccessful pull requests.

Great. I like that we actually get to see whether the request succeeded, unlike the GET and PUT logs!

We don't seem to test whether anything gets logged (or do I just not know where to look), which is probably OK for now. So instead I'll ask an embarrassing question: does %s here really output the whole data structure? (That seems so convenient I hesitate to believe it!)

fmt.Printf("%s", foo) will call the foo.String() method if it exists. So in the case of PullHandler fmt.Printf("%s %s", req.Method, req.URL) just produces PUT /pull.

To me the term "manager" looks confusing, and the misleading comment in keepstore.go suggests it's not just me:
  • [...]

What we're actually creating here is a list storage unit with thread-safe access. It is not, in fact, responsible for pulling blocks from other Keep servers -- all it does is store a list.

Rather than flail on the package/module names, let's just change the comment so at least it doesn't come right out and lie to us. Something like:
  • [...]

Updated the comment at 902e9eb. I think the main problem here is the comment getting a little ahead of the code -- the pull_list.Manager will be responsible for fetching blocks from remote Keeps, the implementation just isn't there yet. But agreed that the comments really should reflect the current state of the code rather than "where we hope this will be by the next sprint".

BTW, is there a reason pull_list.go gets its own directory and has to be imported explicitly, while (for example) volume.go just sits next to keepstore.go and gets used automatically? Just wondering...

There actually is! (At least some of the things I do have a reason!) pull_list is a package with its own namespace; volume is part of "package main" and is not. I'm not happy about the volume of code that's strewn around package "main" in keepstore and want to get cleaner boundaries between components as we go forward, so I intend to move more code (like volume.go) into their own packages as we go on.

#19

Updated by Tim Pierce about 10 years ago

  • Status changed from In Progress to Resolved
  • % Done changed from 33 to 100

Applied in changeset arvados|commit:3204b60e04bf8613858bfc4a753d7114f6ca90a3.

#20

Updated by Tom Clegg about 10 years ago

Tim Pierce wrote:

fmt.Printf("%s", foo) will call the foo.String() method if it exists. So in the case of PullHandler fmt.Printf("%s %s", req.Method, req.URL) just produces PUT /pull.

(per discussion) %s → %v for the "plist" printf, then LGTM.

Updated the comment at 902e9eb. I think the main problem here is the comment getting a little ahead of the code -- the pull_list.Manager will be responsible for fetching blocks from remote Keeps, the implementation just isn't there yet. But agreed that the comments really should reflect the current state of the code rather than "where we hope this will be by the next sprint".

Ah. I was (and still am) thinking it will be simpler and more flexible to use a separate goroutine for "working on the queue" -- then we can separate the "threadsafe shared block list" logic from the "how to pull a block" and "how to trash a block" logic.

There actually is! (At least some of the things I do have a reason!) pull_list is a package with its own namespace; volume is part of "package main" and is not. I'm not happy about the volume of code that's strewn around package "main" in keepstore and want to get cleaner boundaries between components as we go forward, so I intend to move more code (like volume.go) into their own packages as we go on.

Ah, I see. The (few) Go trees I've looked at seem to have a bunch of *.go files, each exporting a small number of names, which seems nice and pragmatic to me. If you export only one interface, can that also keep the namespace nice and clean, even if everything's in "main"?

Thanks!

#21

Updated by Tim Pierce about 10 years ago

Tom Clegg wrote:

Tim Pierce wrote:

fmt.Printf("%s", foo) will call the foo.String() method if it exists. So in the case of PullHandler fmt.Printf("%s %s", req.Method, req.URL) just produces PUT /pull.

(per discussion) %s → %v for the "plist" printf, then LGTM.

Updated the comment at 902e9eb. I think the main problem here is the comment getting a little ahead of the code -- the pull_list.Manager will be responsible for fetching blocks from remote Keeps, the implementation just isn't there yet. But agreed that the comments really should reflect the current state of the code rather than "where we hope this will be by the next sprint".

Ah. I was (and still am) thinking it will be simpler and more flexible to use a separate goroutine for "working on the queue" -- then we can separate the "threadsafe shared block list" logic from the "how to pull a block" and "how to trash a block" logic.

Let's not get too far into the weeds here. I think we can do this cleanly with a simpler interface. Suppose the goroutine that processes the list looks something like this:

// listen consumes items from the work list's input channel, one at a
// time, until the channel is closed.
func (w *WorkList) listen() {
    for item := range w.InputChannel {
        w.Process(item)
    }
}

// started once, e.g. when the WorkList is created:
go w.listen()

... then all of the logic for "what do I do with an item on the list" is in the worklist.Process() method, which can be defined as "delete this block" for the trash collector, or "pull this block" for the pull list manager, or whatever. It doesn't seem to me that worklist.Process() runs concurrently with worklist.listen(), so I don't see the value in having it run in its own goroutine.

Anyway: we'll hash this out more as we get into implementation.

There actually is! (At least some of the things I do have a reason!) pull_list is a package with its own namespace; volume is part of "package main" and is not. I'm not happy about the volume of code that's strewn around package "main" in keepstore and want to get cleaner boundaries between components as we go forward, so I intend to move more code (like volume.go) into their own packages as we go on.

Ah, I see. The (few) Go trees I've looked at seem to have a bunch of *.go files, each exporting a small number of names, which seems nice and pragmatic to me. If you export only one interface, can that also keep the namespace nice and clean, even if everything's in "main"?

Names aren't scoped to their source files; they're scoped to packages. Right now, all of the keepstore source files are scoped to "package main", so all of our functions and global variables are spread out across a single flat namespace. Defining types with methods and interfaces does help to organize the namespace a lot.

If this is confusing, I think we can keep the worklist stuff in package main for now the same way we do with the "volume" code. I do think we're going to want to break out the source into packages sooner or later, but that's probably not a decision we need to make right now.
