Bug #10028

closed

Workbench search is very slow (>60sec)

Added by Tom Morris about 8 years ago. Updated almost 8 years ago.

Status:
Resolved
Priority:
Normal
Assigned To:
Radhika Chippada
Category:
-
Target version:
-
Story points:
0.5

Files

links_query_explained.txt (18.2 KB), Radhika Chippada, 10/25/2016 10:00 PM
links_query_explain analyze.txt (6.03 KB), Radhika Chippada, 10/26/2016 03:05 PM
full_text_search_indexes.sql (12.1 KB), Refactored indexes, Javier Bértoli, 12/12/2016 10:26 PM

Subtasks 2 (0 open, 2 closed)

Task #10447: Review branch 10028-api-contents-perf (Resolved, Radhika Chippada, 11/02/2016)
Task #10768: Review branch 10028-functional-full-text-search-indexes (Resolved, Radhika Chippada, 12/16/2016)
Actions #1

Updated by Tom Morris about 8 years ago

  • Target version set to Arvados Future Sprints
  • Story points set to 0.5

Hypothesis: changing the LIKE pattern from _____-type-_______________ to uuidprefix-type-_______________ will greatly improve query performance.

If that hypothesis turns out to be false, we'll consider this story complete anyway and try a different strategy in another story.
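
A minimal sketch of the two query shapes behind this hypothesis, using the collections type prefix 4zz18 and the site prefix zzzzz as examples (the patterns are taken from the API logs quoted later in this ticket):

  -- Current pattern: the leading '_' wildcards leave no literal prefix, so the
  -- planner cannot anchor the match to an index and tends to scan the whole table.
  SELECT uuid FROM collections WHERE uuid LIKE '_____-4zz18-_______________';

  -- Proposed pattern: a fixed site prefix gives the planner a literal prefix
  -- ('zzzzz-4zz18-') that a suitable index can be range-scanned on.
  SELECT uuid FROM collections WHERE uuid LIKE 'zzzzz-4zz18-_______________';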

Actions #2

Updated by Tom Morris about 8 years ago

  • Target version changed from Arvados Future Sprints to 2016-09-28 sprint
Actions #3

Updated by Radhika Chippada about 8 years ago

  • Assigned To set to Radhika Chippada
Actions #4

Updated by Radhika Chippada about 8 years ago

  • Status changed from New to In Progress
Actions #5

Updated by Radhika Chippada about 8 years ago

  • When I typed "*-4zz18-*" or "*-4zz18-" (collections) into the workbench search box, the API logs showed that all tables were searched for "*-4zz18-*:*" or "*-4zz18-:*", and no results were returned
    • In this case, I think we should replace *-4zz18 with UuidPrefix-4zz18 so that the search succeeds
    • In addition to that replacement, should we search only the collections table, or a subset of tables (see below)?
  • When I typed "UuidPrefix-4zz18", I got results. However, ALL tables were searched on the server
  • For any search string, ALL tables are searched. Do we want to limit the search to a select subset of tables, such as just the groups, users, collections, workflows, container_requests, pipeline_templates and pipeline_instances tables?
Actions #6

Updated by Tom Morris about 8 years ago

The proposed fix isn't for free text search. It's for "find me all jobs/users/whatever" on the local node. So, for users, it should be searching for 'qr1hi-tpzed-___' rather than '___-tpzed-___'.

Actions #7

Updated by Tom Clegg about 8 years ago

Something like this

--- a/services/api/app/models/arvados_model.rb
+++ b/services/api/app/models/arvados_model.rb
@@ -530,7 +530,7 @@ class ArvadosModel < ActiveRecord::Base
   end

   def self.uuid_like_pattern
-    "_____-#{uuid_prefix}-_______________" 
+    "#{Rails.configuration.uuid_prefix}-#{uuid_prefix}-_______________" 
   end

   def self.uuid_regex
Actions #8

Updated by Tom Morris about 8 years ago

#10079 is an example of the kind of thing I'm hoping this fix will improve. The pipeline instance referenced there takes over 3 1/2 minutes to display in workbench.

https://cloud.curoverse.com/pipeline_instances/qr1hi-d1hrv-hv8asomcxqggcff

Actions #9

Updated by Radhika Chippada about 8 years ago

  • With previous code, when I accessed a project, there were requests like this in the server log
   (1.5ms)  SELECT COUNT(DISTINCT "collections"."id") FROM "collections" WHERE "collections"."owner_uuid" = 'zzzzz-tpzed-q0pq7bl5wi0k683' AND (expires_at IS NULL or expires_at > statement_timestamp()) AND ((collections.uuid like '_____-7a9it-_______________' OR collections.uuid like '_____-j58dm-_______________' OR collections.uuid like '_____-q1cn2-_______________'))
  Human Load (1.2ms)  SELECT "humans".* FROM "humans" WHERE "humans"."owner_uuid" = 'zzzzz-tpzed-q0pq7bl5wi0k683' AND ((humans.uuid like '_____-7a9it-_______________' OR humans.uuid like '_____-j58dm-_______________' OR humans.uuid like '_____-q1cn2-_______________')) ORDER BY humans.modified_at desc, humans.uuid LIMIT 100 OFFSET 0
   (0.8ms)  SELECT COUNT(DISTINCT "humans"."id") FROM "humans" WHERE "humans"."owner_uuid" = 'zzzzz-tpzed-q0pq7bl5wi0k683' AND ((humans.uuid like '_____-7a9it-_______________' OR humans.uuid like '_____-j58dm-_______________' OR humans.uuid like '_____-q1cn2-_______________'))
  Specimen Load (0.8ms)  SELECT "specimens".* FROM "specimens" WHERE "specimens"."owner_uuid" = 'zzzzz-tpzed-q0pq7bl5wi0k683' AND ((specimens.uuid like '_____-7a9it-_______________' OR specimens.uuid like '_____-j58dm-_______________' OR specimens.uuid like '_____-q1cn2-_______________')) ORDER BY specimens.modified_at desc, specimens.uuid LIMIT 100 OFFSET 0
   (0.7ms)  SELECT COUNT(DISTINCT "specimens"."id") FROM "specimens" WHERE "specimens"."owner_uuid" = 'zzzzz-tpzed-q0pq7bl5wi0k683' AND ((specimens.uuid like '_____-7a9it-_______________' OR specimens.uuid like '_____-j58dm-_______________' OR specimens.uuid like '_____-q1cn2-_______________'))
  • When I accessed /pipeline_instances, /jobs, /container_requests, /containers, /groups, /users, /nodes, /humans etc
  ApiClientAuthorization Load (0.9ms)  SELECT "api_client_authorizations".* FROM "api_client_authorizations" WHERE (api_token IN ('f') AND
 (expires_at IS NULL OR expires_at > CURRENT_TIMESTAMP))
  Link Load (0.8ms)  SELECT links."head_uuid" FROM "links" WHERE ((links.tail_uuid = 'zzzzz-tpzed-q0pq7bl5wi0k683') AND (links.link_class = 'star') AND (links.head_uuid like '_____-j7d0g-_______________')) LIMIT 1000 OFFSET 0
   (0.8ms)  SELECT COUNT(DISTINCT "links"."id") FROM "links" WHERE ((links.tail_uuid = 'zzzzz-tpzed-q0pq7bl5wi0k683') AND (links.link_class = 'star') AND (links.head_uuid like '_____-j7d0g-_______________'))

All of these patterns were replaced with 'zzzzz-xxxxx-' style patterns (site prefix plus type prefix) by the new code at ac41324.

  • Running all API tests took (several runs):
    • With old code: 343, 345, 337 seconds
    • With new code: 335, 344, 342 seconds
Actions #10

Updated by Tom Clegg about 8 years ago

We need to try some different postgres queries and index strategies, until we find an efficient way to implement this type filter.

Even inefficient filters are fast when the database is small, so the test times don't really tell us anything.

Postgres uses different strategies when the database is small, so "explain analyze" on the test database doesn't tell us much either.

Suggest using a dev database:
  • create 10K rows in the links table of a dev/test db
  • make sure "uuid like '_____-4zz18-_______________'" is slow (if not, add more rows and repeat)
  • try adding indexes and doing different queries with "explain analyze" until you find a variant that's fast (which probably means one that uses an index scan instead of a seq scan)

I did some testing on 4xphq and found that adding the site prefix didn't make a difference. The postgres docs hint that we need a special kind of index to support substring matches efficiently "when the database does not use the standard "C" locale": https://www.postgresql.org/docs/9.1/static/indexes-opclass.html
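
A minimal sketch of the dev-database workflow suggested above (the column list and dummy values are illustrative; the real links schema may require additional NOT NULL columns):

  -- Generate dummy rows to get some volume (generate_series is standard PostgreSQL)
  INSERT INTO links (uuid, head_uuid, tail_uuid, link_class)
  SELECT 'zzzzz-o0j2j-' || lpad(i::text, 15, '0'),
         'zzzzz-j7d0g-' || lpad(i::text, 15, '0'),
         'zzzzz-tpzed-' || lpad(i::text, 15, '0'),
         'star'
    FROM generate_series(1, 10000) AS i;

  -- Baseline: confirm the unanchored pattern is slow
  EXPLAIN ANALYZE SELECT * FROM links WHERE head_uuid LIKE '_____-j7d0g-_______________';

  -- Try an index with an operator class suited to LIKE prefix matching, then compare
  CREATE INDEX links_varchar_index_on_head_uuid ON links (head_uuid varchar_pattern_ops);
  EXPLAIN ANALYZE SELECT * FROM links WHERE head_uuid LIKE 'zzzzz-j7d0g-_______________';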

Actions #11

Updated by Tom Morris about 8 years ago

I've added a link to #9832 which contains some of the slow queries that Nico found. One from the jobs table in particular seems relevant to this exercise:

33 seconds searching for jobs containing a string(?)
32941 32814.1 POST index /arvados/v1/jobs {"api_token":"6743sv69zrkwfm3nd19wnow1t88y586ax35nzqwq2qd4huwb0t","reader_tokens":"[\"5vqmz9mik2ou2k9objb8pnyce8t97p6vocyaouqo3qalvpmjs5\"]","_method":"GET","order":"[\"created_at desc\"]","filters":"[[\"any\",\"@@\",\"cutadapt:*\"],[\"uuid\",\"is_a\",[\"arvados#job\"]]]","limit":"20","offset":"0"} null

The idea, as I understand it, is to combine the cluster_id-type-____ fix with an appropriate indexing strategy that allows it to be used effectively. Not sure whether the links table or the jobs table is best to focus on, but taking links since Tom suggested it, that would mean adjusting the indexes on uuid, owner_uuid, tail_uuid and head_uuid so that they specify an operator class.

So, our current:

CREATE INDEX index_links_on_head_uuid ON links USING btree (head_uuid);
CREATE INDEX index_links_on_owner_uuid ON links USING btree (owner_uuid); 
CREATE INDEX index_links_on_tail_uuid ON links USING btree (tail_uuid);
CREATE UNIQUE INDEX index_links_on_uuid ON links USING btree (uuid);

becomes:

CREATE INDEX index_links_on_head_uuid ON links (head_uuid varchar_pattern_ops);
CREATE INDEX index_links_on_owner_uuid ON links (owner_uuid varchar_pattern_ops);
CREATE INDEX index_links_on_tail_uuid ON links (tail_uuid varchar_pattern_ops);
CREATE UNIQUE INDEX index_links_on_uuid ON links (uuid varchar_pattern_ops);

B-tree is the default index type, so it doesn't need to be specified. The last one (primary key UUID) might be unnecessary depending on the query patterns.
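
One caveat worth noting (a sketch against the proposed links index above): a btree index with varchar_pattern_ops only helps when the pattern has a fixed literal prefix, which is exactly why combining it with the site-prefix change matters:

  -- Can use the index: the fixed prefix becomes a range condition
  -- (roughly head_uuid >= 'zzzzz-j7d0g-' AND head_uuid < 'zzzzz-j7d0g.')
  EXPLAIN SELECT * FROM links WHERE head_uuid LIKE 'zzzzz-j7d0g-%';

  -- Cannot use the index: the leading '_' wildcards leave no literal prefix,
  -- so this still falls back to a sequential scan
  EXPLAIN SELECT * FROM links WHERE head_uuid LIKE '_____-j7d0g-%';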

Actions #12

Updated by Radhika Chippada about 8 years ago

  • Target version changed from 2016-09-28 sprint to 2016-10-12 sprint
Actions #13

Updated by Tom Morris about 8 years ago

  • Assigned To deleted (Radhika Chippada)
  • Target version changed from 2016-10-12 sprint to 2016-10-26 sprint
Actions #14

Updated by Tom Morris about 8 years ago

  • Assigned To set to Radhika Chippada
Actions #15

Updated by Radhika Chippada about 8 years ago

The performance test suite for qr1hi had been failing for a couple of months in jenkins.

I just pointed my development environment at qr1hi, ran the performance test, and got this from the server log. Needless to say, this is bad.

{"method":"POST","path":"/arvados/v1/groups/contents","format":"*/*","controller":"arvados/v1/groups","action":"contents","status":200,
"duration":112871.57,"view":0.46,"db":111970.88,"params":{"api_token":"xxx",
"reader_tokens":"[false]","current_request_id":"1476495345-031116462","_method":"GET","limit":"200","offset":"0","filters":"[[\"any\",\"@@\",\"hash:*\"]]",
"_profile":"true"},"@timestamp":"2016-10-15T01:57:07Z","@version":"1","message":"[200] POST /arvados/v1/groups/contents (arvados/v1/groups#contents)"}

It took almost 2 minutes when I tried the search manually as well. Steps: click the search icon in the dashboard; in the search dialog, enter "hash" in the search box and wait until the results show up.

Actions #16

Updated by Radhika Chippada about 8 years ago

Researching further into the long latency in the API's groups_controller -> contents method, I found the following issue.

The contents method executes the query for each of the classes listed below. It appends the results from each query until it reaches the desired number of responses (@limit) and then returns that many results to the client. The issue is that the server keeps executing the query for every class even after it has fetched the desired number of results from the very first class: in order to determine items_available, it runs the count query for each class even when no more results are needed (that is, when the required 200 objects were already found in Group itself). I verified this by searching for "zzzzz", my site prefix, and looking at the server logs. This is extremely expensive, and especially bad in the case of full text search.

    klasses = [Group,
     Job, PipelineInstance, PipelineTemplate, ContainerRequest, Workflow,
     Collection,
     Human, Specimen, Trait]

Suggestions:

  • Instead of the number of items_available, I think we should just send a "more items are available" boolean flag to the client (see the sketch after this list)
    • If need be, we can fetch one more row than needed (201 instead of 200) and return only 200, to ensure we give the client the right information
  • There also seems to be another issue with this. Let's say the Group and Job classes are done and PipelineInstance needs to be checked next. It appears that the second page load (the second page request from the client) would start over and fetch the unnecessary first two pages' worth of data again. It seems like we need to remember the "cursor" position within "items available" more efficiently, to avoid searching the classes that are already exhausted.
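
A minimal sketch of the limit+1 idea at the SQL level (illustrative only; the real query is assembled by the API server, and the 201st-row check would live in the Rails code):

  -- Ask for one more row than the page size; if 201 rows come back, set a
  -- "more items are available" flag and return only the first 200 to the client.
  SELECT groups.*
    FROM groups
   WHERE groups.group_class = 'project'
   ORDER BY groups.modified_at DESC, groups.uuid
   LIMIT 201 OFFSET 0;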

Below are the statements executed for full text search (taken from my dev env). Here I searched for my site prefix to ensure the required 200 objects are found in Group itself.

  ^[[1m^[[36m (15.0ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "groups"."id") FROM "groups" WHERE "groups"."group_class" = 'project' AND ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(group_class,'')) @@ to_tsquery('zzzzz:*')))^[[0m
  ^[[1m^[[35mJob Load (1.0ms)^[[0m  SELECT "jobs".* FROM "jobs" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(submit_id,'') || ' ' || coalesce(script,'') || ' ' || coalesce(script_version,'') || ' ' || coalesce(script_parameters,'') || ' ' || coalesce(cancelled_by_client_uuid,'') || ' ' || coalesce(cancelled_by_user_uuid,'') || ' ' || coalesce(output,'') || ' ' || coalesce(is_locked_by_uuid,'') || ' ' || coalesce(log,'') || ' ' || coalesce(tasks_summary,'') || ' ' || coalesce(runtime_constraints,'') || ' ' || coalesce(repository,'') || ' ' || coalesce(supplied_script_version,'') || ' ' || coalesce(docker_image_locator,'') || ' ' || coalesce(description,'') || ' ' || coalesce(state,'') || ' ' || coalesce(arvados_sdk_version,'') || ' ' || coalesce(components,'') || ' ' || coalesce(script_parameters_digest,'')) @@ to_tsquery('zzzzz:*'))) ORDER BY jobs.modified_at desc, jobs.uuid LIMIT 0 OFFSET 0
  ^[[1m^[[36m (0.8ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "jobs"."id") FROM "jobs" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(submit_id,'') || ' ' || coalesce(script,'') || ' ' || coalesce(script_version,'') || ' ' || coalesce(script_parameters,'') || ' ' || coalesce(cancelled_by_client_uuid,'') || ' ' || coalesce(cancelled_by_user_uuid,'') || ' ' || coalesce(output,'') || ' ' || coalesce(is_locked_by_uuid,'') || ' ' || coalesce(log,'') || ' ' || coalesce(tasks_summary,'') || ' ' || coalesce(runtime_constraints,'') || ' ' || coalesce(repository,'') || ' ' || coalesce(supplied_script_version,'') || ' ' || coalesce(docker_image_locator,'') || ' ' || coalesce(description,'') || ' ' || coalesce(state,'') || ' ' || coalesce(arvados_sdk_version,'') || ' ' || coalesce(components,'') || ' ' || coalesce(script_parameters_digest,'')) @@ to_tsquery('zzzzz:*')))^[[0m
  ^[[1m^[[35mPipelineInstance Load (0.7ms)^[[0m  SELECT "pipeline_instances".* FROM "pipeline_instances" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('zzzzz:*'))) ORDER BY pipeline_instances.modified_at desc, pipeline_instances.uuid LIMIT 0 OFFSET 0
  ^[[1m^[[36m (2.5ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('zzzzz:*')))^[[0m
  ^[[1m^[[35mPipelineTemplate Load (1.3ms)^[[0m  SELECT "pipeline_templates".* FROM "pipeline_templates" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(description,'')) @@ to_tsquery('zzzzz:*'))) ORDER BY pipeline_templates.modified_at desc, pipeline_templates.uuid LIMIT 0 OFFSET 0
  ^[[1m^[[36m (2.8ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "pipeline_templates"."id") FROM "pipeline_templates" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(description,'')) @@ to_tsquery('zzzzz:*')))^[[0m
  ^[[1m^[[35mContainerRequest Load (1.0ms)^[[0m  SELECT "container_requests".* FROM "container_requests" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(requesting_container_uuid,'') || ' ' || coalesce(container_uuid,'') || ' ' || coalesce(mounts,'') || ' ' || coalesce(runtime_constraints,'') || ' ' || coalesce(container_image,'') || ' ' || coalesce(environment,'') || ' ' || coalesce(cwd,'') || ' ' || coalesce(command,'') || ' ' || coalesce(output_path,'') || ' ' || coalesce(filters,'')) @@ to_tsquery('zzzzz:*'))) ORDER BY container_requests.modified_at desc, container_requests.uuid LIMIT 0 OFFSET 0
  ^[[1m^[[36m (2.8ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "container_requests"."id") FROM "container_requests" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(requesting_container_uuid,'') || ' ' || coalesce(container_uuid,'') || ' ' || coalesce(mounts,'') || ' ' || coalesce(runtime_constraints,'') || ' ' || coalesce(container_image,'') || ' ' || coalesce(environment,'') || ' ' || coalesce(cwd,'') || ' ' || coalesce(command,'') || ' ' || coalesce(output_path,'') || ' ' || coalesce(filters,'')) @@ to_tsquery('zzzzz:*')))^[[0m
  ^[[1m^[[35mWorkflow Load (0.8ms)^[[0m  SELECT "workflows".* FROM "workflows" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(definition,'')) @@ to_tsquery('zzzzz:*'))) ORDER BY workflows.modified_at desc, workflows.uuid LIMIT 0 OFFSET 0
  ^[[1m^[[36m (1.2ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "workflows"."id") FROM "workflows" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(definition,'')) @@ to_tsquery('zzzzz:*')))^[[0m
  ^[[1m^[[35mCollection Load (1.9ms)^[[0m  SELECT collections."uuid", collections."owner_uuid", collections."created_at", collections."modified_by_client_uuid", collections."modified_by_user_uuid", collections."modified_at", collections."name", collections."description", collections."properties", collections."portable_data_hash", collections."replication_desired", collections."replication_confirmed", collections."replication_confirmed_at", collections."expires_at" FROM "collections" WHERE (expires_at IS NULL or expires_at > statement_timestamp()) AND ((to_tsvector('english', ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(portable_data_hash,'') || ' ' || coalesce(uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(file_names,'')) @@ to_tsquery('zzzzz:*'))) ORDER BY collections.modified_at desc, collections.uuid LIMIT 0 OFFSET 0
  ^[[1m^[[36m (1.4ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "collections"."id") FROM "collections" WHERE (expires_at IS NULL or expires_at > statement_timestamp()) AND ((to_tsvector('english', ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(portable_data_hash,'') || ' ' || coalesce(uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(file_names,'')) @@ to_tsquery('zzzzz:*')))^[[0m
  ^[[1m^[[35mHuman Load (1.1ms)^[[0m  SELECT "humans".* FROM "humans" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(properties,'')) @@ to_tsquery('zzzzz:*'))) ORDER BY humans.modified_at desc, humans.uuid LIMIT 0 OFFSET 0
  ^[[1m^[[36m (0.8ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "humans"."id") FROM "humans" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(properties,'')) @@ to_tsquery('zzzzz:*')))^[[0m
  ^[[1m^[[35mSpecimen Load (0.7ms)^[[0m  SELECT "specimens".* FROM "specimens" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(material,'') || ' ' || coalesce(properties,'')) @@ to_tsquery('zzzzz:*'))) ORDER BY specimens.modified_at desc, specimens.uuid LIMIT 0 OFFSET 0
  ^[[1m^[[36m (0.6ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "specimens"."id") FROM "specimens" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(material,'') || ' ' || coalesce(properties,'')) @@ to_tsquery('zzzzz:*')))^[[0m
  ^[[1m^[[35mTrait Load (0.7ms)^[[0m  SELECT "traits".* FROM "traits" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(properties,'')) @@ to_tsquery('zzzzz:*'))) ORDER BY traits.modified_at desc, traits.uuid LIMIT 0 OFFSET 0
  ^[[1m^[[36m (1.0ms)^[[0m  ^[[1mSELECT COUNT(DISTINCT "traits"."id") FROM "traits" WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(properties,'')) @@ to_tsquery('zzzzz:*')))^[[0m

Here are a couple of groups#contents requests from prod. As we can see, they are expensive, especially the full text searches:

{"method":"POST","path":"/arvados/v1/groups/contents","format":"*/*","controller":"arvados/v1/groups","action":"contents","status":200,"duration":724.45,"view":0.5,"db":283.74,"params":{"api_token":"xxx","reader_tokens":"[false]","current_request_id":"1476495411-967384969","_method":"GET","limit":"200","offset":"0","filters":"[]","_profile":"true"},"@timestamp":"2016-10-15T01:56:20Z","@version":"1","message":"[200] POST /arvados/v1/groups/contents (arvados/v1/groups#contents)"}
{"method":"POST","path":"/arvados/v1/groups/contents","format":"*/*","controller":"arvados/v1/groups","action":"contents","status":200,"duration":112871.57,"view":0.46,"db":111970.88,"params":{"api_token":"xxx","reader_tokens":"[false]","current_request_id":"1476495345-031116462","_method":"GET","limit":"200","offset":"0","filters":"[[\"any\",\"@@\",\"hash:*\"]]","_profile":"true"},"@timestamp":"2016-10-15T01:57:07Z","@version":"1","message":"[200] POST /arvados/v1/groups/contents (arvados/v1/groups#contents)"}

Actions #17

Updated by Radhika Chippada about 8 years ago

  • File links_query_explained.txt added

I performed query analysis with and without the following index on the links table. As can be seen in the attached "links_query_explained", there was some added benefit from the index up to 50,000 links in the table (I saw no savings when the table had more than 50,000 links). Based on this, it seems like adding the varchar pattern indices on the links table can give us some performance improvement.

CREATE INDEX index_links_on_head_uuid ON links (head_uuid varchar_pattern_ops)

Actions #18

Updated by Radhika Chippada about 8 years ago

Branch: 10028-varchar-pattern-indexes

Based on the observation in note 17, it appears that adding varchar pattern indices on the links table might give us some performance improvement.

Added these indices on head_uuid, tail_uuid and owner_uuid.

Actions #19

Updated by Tom Morris about 8 years ago

Thanks for the additional investigation.

re: varchar_ops on the index - were you able to quantify the benefit -- even if only roughly?

re: items_available count - almost certainly an overcommitment in terms of our API design. A "more available" flag like you suggest would be a much easier contract to satisfy.

Actions #20

Updated by Radhika Chippada about 8 years ago

Tom Morris said:

re: varchar_ops on the index - were you able to quantify the benefit -- even if only roughly

Yes, please see links_query_explained. Up to 50,000 links in my table, the new indices seem to give some 10-15% improvement. After that both are equal, which could be due to my small laptop, dummy data, etc. It seems worth a try.

re: items_available count - almost certainly an overcommitment in terms of our API design. A "more available" flag like you suggest would be a much easier contract to satisfy.

Totally agree. We can rewrite workbench all we want and gain nothing towards curing the illness - not even the symptoms, unless we make our API server scalable.

Actions #21

Updated by Lucas Di Pentima about 8 years ago

I don't know if the cost unit scale is linear, but it seems to me that 8.40 is not a lot bigger than 8.24, so I'm not sure the improvement is worth the index.
OTOH, I'm new to PostgreSQL (I have some experience with MySQL), so I checked the PostgreSQL documentation regarding matching operators:

https://www.postgresql.org/docs/9.1/static/functions-matching.html

Should the tests described in the links_query_explained.txt file be repeated using '%' instead of '*'? I couldn't find any mention of using an asterisk with LIKE.
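
For reference, a small sketch of the distinction: LIKE only understands '%' (any sequence of characters) and '_' (any single character); the '*' wildcard belongs to the to_tsquery('hash:*') full text syntax used elsewhere in this ticket, not to LIKE:

  -- LIKE wildcards: '%' matches any string, '_' matches exactly one character
  SELECT * FROM links WHERE head_uuid LIKE 'zzzzz-j7d0g-%';

  -- A '*' inside a LIKE pattern is just a literal asterisk, so this would only
  -- match uuids that actually contain a '*' character
  SELECT * FROM links WHERE head_uuid LIKE 'zzzzz-j7d0g-*';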

Actions #22

Updated by Radhika Chippada about 8 years ago

  • File deleted (links_query_explained.txt)
Actions #23

Updated by Radhika Chippada about 8 years ago

  • File links_query_explained.txt added

Please see this latest "links_query_explained" with "like" queries. With the varchar index and site_prefix-uuid_prefix, the queries are much faster than those without the index or without the site_prefix.

Actions #24

Updated by Tom Morris about 8 years ago

Radhika Chippada wrote:

Please see this latest "links_query_explained" with "like" queries. With the varchar index and site_prefix-uuid_prefix, the queries are much faster than those without the index or without the site_prefix.

Where "much faster" = 5X faster or more. That's a very significant improvement and I'm very glad that we were finally able to prove this out.

I'm a little suspicious that the numbers are identical for 3 of the 4 cases indicating that they're probably query planner estimates, not actual elapsed times for the queries, but the results are encouraging enough that I don't really care.
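
For what it's worth, a quick way to settle the estimate-vs-actual question on the same test table: plain EXPLAIN only prints the planner's cost estimates, while EXPLAIN ANALYZE actually executes the query and reports measured times.

  -- Planner estimates only: the "cost=..." numbers, no execution
  EXPLAIN SELECT * FROM links WHERE head_uuid LIKE 'zzzzz-j7d0g-%';

  -- Runs the query: adds "actual time=..." per plan node plus a total runtime
  EXPLAIN ANALYZE SELECT * FROM links WHERE head_uuid LIKE 'zzzzz-j7d0g-%';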

Everyone likes to look at this stuff differently, but here's how my mind works:

Rows      Old Index      Old Index        New Index      New Index        Improvement
          _-j7d0g-%      zzzzz-j7d0g-%    _-j7d0g-%      zzzzz-j7d0g-%
1000      33.51          33.51            33.51          23.84            1.41
2000      51.06          51.06            51.06          29.2             1.75
5000      129.51         129.51           129.51         37.31            3.47
10000     248.01         248.01           248.01         43.87            5.65
20000     484.68         484.68           484.68         47.42            10.22
30000     717.81         717.81           726.01         49.05            14.63
50000     1203.01        1203.01          1203.01        54.04            22.26
160000    3811.89        3811.89          3828.01        113.44           33.6
250000    5985.29        5985.29          5985.29        46.52            128.66

My focus is on the bottom right corner of the table (and I think 10K rows is the lower end of the interesting range).

I'd love to see this get rolled out to the other relevant indexes ASAP. I've created #10354 to cover that piece of the work.

Actions #25

Updated by Radhika Chippada about 8 years ago

Actions #26

Updated by Radhika Chippada about 8 years ago

  • File deleted (links_query_explained.txt)
Actions #27

Updated by Radhika Chippada about 8 years ago

I added a newer version of the attachment with up to 250,000 rows, which should help show the impact even better. The last entry, for 250,000 rows, is better than the one for 160,000. I guess with 250,000 rows some postgres "large" database behavior kicked in, like Tom was saying earlier. In any case, I agree this seems very encouraging and we should deploy it and see how it fares on qr1hi.

Actions #28

Updated by Tom Clegg about 8 years ago

Columns to consider for new indexes:
  • logs.object_uuid and logs.owner_uuid (websocket queries)
  • (any others?)

Also, re note-16, if there's an "is_a" filter in play in the groups#contents "klass" loop, we can either leave that filter out or skip the klass, depending on whether klass is one of the requested types. We don't need a "like" query to accomplish "get all workflows that have type=workflow" or "get all workflows that have type=job"...!

Actions #29

Updated by Tom Morris about 8 years ago

Thanks for the additional data points! The 160K row is an anomaly for some reason, perhaps due to a bucket split causing the target key to be in two separate buckets or some such. Other than that, the query times seem to cluster around a constant 45-55 msec, which is roughly what I expect.

Radhika - any feedback on query plan estimation vs actual query time question? I'm still suspicious that these are plan times, not actual times. They should track if the query planner has good statistics, but they don't always.

Actions #30

Updated by Lucas Di Pentima about 8 years ago

Tom Morris wrote:

Radhika - any feedback on query plan estimation vs actual query time question? I'm still suspicious that these are plan times, not actual times. They should track if the query planner has good statistics, but they don't always.

From what I remember from Postgresql's docs, the cost unit is not necessarily time, I think it's a function of disk page reads and some CPU usage multiplier.

Actions #31

Updated by Radhika Chippada about 8 years ago

  • File links_query_explain analyze.txt added

I am not seeing any measurable improvement in my local db with "explain analyze." Please see "links_query_explain_analyze" for details.

Actions #32

Updated by Radhika Chippada about 8 years ago

Tom Clegg said:

Columns to consider for new indexes: logs.object_uuid and logs.owner_uuid (websocket queries)

Added one more migration script for logs table at cde9231

Actions #33

Updated by Radhika Chippada about 8 years ago

  • File deleted (links_query_explain analyze.txt)
Actions #35

Updated by Tom Morris about 8 years ago

It's never simple, is it? :)

The key difference is that it's gone back to doing sequential scans instead of using the index. The question is why. Here are the two 100K row, prefix search cases. The first is EXPLAIN ANALYZE and the second is EXPLAIN.

arvados_dev=> explain analyze select * from links where head_uuid like 'zzzzz-j7d0g-%';
                                                   QUERY PLAN                                                   
----------------------------------------------------------------------------------------------------------------
 Seq Scan on links  (cost=0.00..2388.40 rows=100002 width=230) (actual time=0.014..317.493 rows=100014 loops=1)
   Filter: ((head_uuid)::text ~~ 'zzzzz-j7d0g-%'::text)
 Total runtime: 592.811 ms
(3 rows)

versus

arvados_dev=> explain select * from links where head_uuid like 'zzzzz-j7d0g-%';
                                                      QUERY PLAN                                                      
----------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on links  (cost=4.60..113.44 rows=30 width=225)
   Filter: ((head_uuid)::text ~~ 'zzzzz-j7d0g-%'::text)
   ->  Bitmap Index Scan on links_varchar_index_on_head_uuid  (cost=0.00..4.59 rows=30 width=0)
         Index Cond: (((head_uuid)::text ~>=~ 'zzzzz-j7d0g-'::text) AND ((head_uuid)::text ~<~ 'zzzzz-j7d0g.'::text))
(4 rows)

The first thing that jumps out at me is that they're returning different numbers of rows. Do we know why? Has the test data changed?

If there's nothing obvious, like accidentally using the wrong database/data/whatever, the next thing I'd check is to make sure that we've got up-to-date statistics from ANALYZE:
https://wiki.postgresql.org/wiki/Introduction_to_VACUUM,_ANALYZE,_EXPLAIN,_and_COUNT
https://www.postgresql.org/docs/current/static/populate.html#POPULATE-ANALYZE

If that doesn't help, I think the next approach would be to play with some of the query planner hints to see if we can figure out what we need to do to "encourage" it to use the indexes:
https://www.postgresql.org/docs/current/static/runtime-config-query.html
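
A minimal sketch of those two checks, assuming the same dev links table (enable_seqscan is a session-local planner setting, useful for diagnosis rather than as a production fix):

  -- Refresh planner statistics for the links table
  ANALYZE links;

  -- Re-run the plan; if the planner still prefers a seq scan, temporarily
  -- disable it to see what the index-based plan would actually cost
  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT * FROM links WHERE head_uuid LIKE 'zzzzz-j7d0g-%';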

I feel we're close...

Actions #36

Updated by Radhika Chippada about 8 years ago

  • Target version changed from 2016-10-26 sprint to 2016-11-09 sprint
Actions #37

Updated by Tom Clegg about 8 years ago

I tried some queries on test servers and found
  • counting lots of rows in a huge table is slow, regardless of how great the index is: in order to count accurately, postgresql has to visit every counted row
  • given this, postgresql might be outsmarting us when it does a seq scan in our test trials. It knows most of the rows match the query, which means it will have to visit them anyway when it does a count -- so there is not much benefit in consulting the index first.
Typically, the huge table that causes trouble is "logs". For example:
  • arv-mount: subscribe to all events
           self.events = arvados.events.subscribe(self._api_client,
                                     [["event_type", "in", ["create", "update", "delete"]]],
                                     self.on_event)
    
  • Python SDK:
    items = self.api.logs().list(limit=1, order="id desc", filters=f).execute()['items']
  • API server:
         list[:items_available] = @objects.
            except(:limit).except(:offset).
            count(:id, distinct: true)
    
  • Postgresql: Slowly count all rows of the giant logs table with event_type ∈ {"create", "update", "delete"}.

Nobody here actually cares about items_available.

All PollClient needs is max(id) from the logs table, and only for the purpose of passing it to the next logs().list() call.

Perhaps it could pass a filter like [["created_at",">=",subscription_time]] (instead of doing this expensive API call to find the last known ID and then passing [["logs.id",">",str(self.id)]]) ...?
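
A sketch of the difference at the SQL level (assuming id is the indexed primary key of logs): the filtered count has to visit every matching row, while max(id) can be answered from the index alone.

  -- What the items_available count effectively does today: visit every matching row
  SELECT COUNT(DISTINCT logs.id)
    FROM logs
   WHERE logs.event_type IN ('create', 'update', 'delete');

  -- What PollClient actually needs: just the newest id
  SELECT MAX(id) FROM logs;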

Actions #38

Updated by Radhika Chippada about 8 years ago

Tom said in https://dev.arvados.org/issues/10028#note-28

Also, re note-16, if there's an "is_a" filter in play in the groups#contents "klass" loop, we can either leave that filter out or skip the klass, depending on whether klass is one of the requested types. We don't need a "like" query to accomplish "get all workflows that have type=workflow" or "get all workflows that have type=job"...!

Added branch 10028-api-contents-perf with this update to skip the klass.

Actions #39

Updated by Lucas Di Pentima about 8 years ago

I think that instead of calling capitalize on the first char of every wanted_klass, it might be better to downcase both wanted_klasses and every klass.to_s at line 90, because I think PipelineInstance, PipelineTemplate and ContainerRequest would not match.

Other than that, looks good to me.

Actions #40

Updated by Radhika Chippada about 8 years ago

Lucas said:

I think that instead of calling capitalize on the first char of every wanted_klass, it might be better to downcase both wanted_klasses and every klass.to_s at line 90, because I think PipelineInstance, PipelineTemplate and ContainerRequest would not match.

Since the acceptable filters are only "arvados#pipelineInstance" etc., and not "arvados#pipelineinstance", this is not a problem, hence merging. Doing it at line 90 would, of course, violate the performance suggestion "don't do something repeatedly in a loop if you can do it once outside the loop." Thanks, Lucas.

Actions #41

Updated by Nico César almost 8 years ago

this is the log

{"method":"GET","path":"/search/choose","format":"json","controller":"search","action":"choose","status":200,"duration":115835.18,"view":5.36,"db":0.0,"params":{"current_request_id":"1478620007-269842909","partial":"true","filters":"[[\"any\",\"@@\",\"hash:*\"]]"},"@timestamp":"2016-11-08T15:48:42.920Z","@version":"1","message":"[200] GET /search/choose (search#choose)"}

this is the query:

SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')))

this is the analysis:


 Aggregate  (cost=3328.29..3328.30 rows=1 width=4) (actual time=76854.540..76854.541 rows=1 loops=1)
   ->  Seq Scan on pipeline_instances  (cost=0.00..3292.41 rows=14352 width=4) (actual time=1.243..76824.000 rows=13934 loops=1)
         Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         Rows Removed by Filter: 830
 Total runtime: 76854.599 ms
(5 rows)

The same query without the star (note the missing *) takes only 14 ms:

 Aggregate  (cost=2829.75..2829.76 rows=1 width=4) (actual time=14.644..14.645 rows=1 loops=1)
   ->  Bitmap Heap Scan on pipeline_instances  (cost=206.95..2817.21 rows=5018 width=4) (actual time=2.291..9.155 rows=5019 loops=1)
         Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash'::text))
         ->  Bitmap Index Scan on pipeline_instances_full_text_search_idx  (cost=0.00..205.69 rows=5018 width=0) (actual time=1.947..1.947 rows=5408 loops=1)
               Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash'::text))
 Total runtime: 14.714 ms
(6 rows)

Actions #42

Updated by Radhika Chippada almost 8 years ago

Regarding Tom Morris' question about sequential scan versus bitmap index scan: I found that I can disable the sequential scan using "SET enable_seqscan = off;". Trying it, I get the same times with either scan. (The links table has the varchar_pattern_ops index.)

arvados_dev=> explain analyze select * from links where head_uuid like 'zzzzz-j7d0g-%';
                                                   QUERY PLAN                                                   
----------------------------------------------------------------------------------------------------------------
 Seq Scan on links  (cost=0.00..2388.40 rows=100002 width=230) (actual time=0.009..228.541 rows=100014 loops=1)
   Filter: ((head_uuid)::text ~~ 'zzzzz-j7d0g-%'::text)
 Total runtime: 423.306 ms
(3 rows)

arvados_dev=> SET enable_seqscan = off;
SET

arvados_dev=> explain analyze select * from links where head_uuid like 'zzzzz-j7d0g-%';
                                                                       QUERY PLAN                                                                       
-----------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on links  (cost=3465.29..5853.32 rows=100002 width=230) (actual time=10.220..235.048 rows=100014 loops=1)
   Filter: ((head_uuid)::text ~~ 'zzzzz-j7d0g-%'::text)
   ->  Bitmap Index Scan on links_varchar_index_on_head_uuid  (cost=0.00..3440.29 rows=100002 width=0) (actual time=10.049..10.049 rows=100014 loops=1)
         Index Cond: (((head_uuid)::text ~>=~ 'zzzzz-j7d0g-'::text) AND ((head_uuid)::text ~<~ 'zzzzz-j7d0g.'::text))
 Total runtime: 440.064 ms
(5 rows)
Actions #43

Updated by Radhika Chippada almost 8 years ago

Based on note 42, I think there is no point in investing more time on varchar_pattern_ops indexes. Two ideas I can think of to improve full-text searching:

1. As noted in note 16, find alternative ways to provide items_available to the client rather than going through all the project klasses. Once the required "limit" number of rows have been fetched, return "items_available = 1" if there are more klasses to be searched or if there are additional rows in the current working klass.

2. Explore a search solution such as Lucene.

Actions #44

Updated by Radhika Chippada almost 8 years ago

  • Target version changed from 2016-11-09 sprint to 2016-11-23 sprint
Actions #45

Updated by Radhika Chippada almost 8 years ago

  • Target version deleted (2016-11-23 sprint)
Actions #46

Updated by Radhika Chippada almost 8 years ago

It appears that "SET enable_seqscan=off" does make significant improvement for full text search by applying the index (this is from my local db):

  • Without “SET enable_seqscan = off”
    arvados_dev=> explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
    
     Aggregate  (cost=10.70..10.71 rows=1 width=4) (actual time=2.736..2.739 rows=1 loops=1)
       ->  Seq Scan on pipeline_instances  (cost=0.00..10.70 rows=1 width=4) (actual time=0.333..2.641 rows=11 loops=1)
             Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
     Total runtime: 2.795 ms
    (4 rows)
    
  • With “SET enable_seqscan = off”
    arvados_dev=> SET enable_seqscan = off;
    SET
    arvados_dev=> explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
    
     Aggregate  (cost=28.13..28.14 rows=1 width=4) (actual time=0.130..0.132 rows=1 loops=1)
       ->  Bitmap Heap Scan on pipeline_instances  (cost=24.06..28.13 rows=1 width=4) (actual time=0.065..0.089 rows=11 loops=1)
             Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
             ->  Bitmap Index Scan on pipeline_instances_full_text_search_idx  (cost=0.00..24.06 rows=1 width=0) (actual time=0.057..0.057 rows=14 loops=1)
                   Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
     Total runtime: 0.179 ms
    (6 rows)
    
Actions #48

Updated by Radhika Chippada almost 8 years ago

http://stackoverflow.com/questions/28173940/correct-postgres-full-text-search-indexes

We need to figure out how to configure this on our databases.
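
For context, a sketch of the kind of index that StackOverflow answer is about: an expression GIN index like the existing pipeline_instances_full_text_search_idx seen in the query plans above (the column list below is abbreviated for illustration). The indexed expression has to match the expression used in the query for the planner to be able to use it.

  CREATE INDEX pipeline_instances_full_text_search_idx
      ON pipeline_instances
   USING gin (to_tsvector('english',
              ' ' || coalesce(uuid,'') || ' ' || coalesce(name,'')
                  || ' ' || coalesce(description,'')));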

Actions #49

Updated by Tom Morris almost 8 years ago

Encouraging results! In addition to what Ward said about figuring out how to convince the query planner to do the right thing without the workaround, we also need to make sure that we test things in both orders so that we rule out any cache effects where one query is avoiding doing I/O because the buffer cache is pre-populated with the data that it needs. It looks like both Ward & Radhika did their tests with the slow case first, so we should try with the fast case first to make sure it's not getting a free benefit.

The query planner needs accurate stats to be able to work effectively, so one thing we should make sure we do is run ANALYZE to update the stats. Ward's example shows that the estimated cost for the "fast" case is higher, so the query planner is doing what it thinks is the right thing by picking the lower cost plan. The problem is that the estimated cost is very different from the actual cost.

Actions #50

Updated by Ward Vandewege almost 8 years ago

Tom Morris wrote:

Encouraging results! In addition to what Ward said about figuring out how to convince the query planner to do the right thing without the workaround, we also need to make sure that we test things in both orders so that we rule out any cache effects where one query is avoiding doing I/O because the buffer cache is pre-populated with the data that it needs. It looks like both Ward & Radhika did their tests with the slow case first, so we should try with the fast case first to make sure it's not getting a free benefit.

I did it in both orders this morning and just verified it again: the benefit is real; this is not caused by caching. This is a very clear case of the query planner getting it wrong, imo.

The query planner needs accurate stats to be able to work effectively, so one thing we should make sure we do is run ANALYZE to update the stats. Ward's example shows that the estimated cost for the "fast" case is higher, so the query planner is doing what it thinks is the right thing by picking the lower cost plan. The problem is that the estimated cost is very different from the actual cost.

ANALYZE doesn't make any difference, tested on su92l:

arvados_production=# analyze pipeline_instances ;
ANALYZE

arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
                                                                                                                                                                                                                 QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                             
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=654.17..654.18 rows=1 width=4) (actual time=17828.665..17828.665 rows=1 loops=1)
   ->  Seq Scan on pipeline_instances  (cost=0.00..646.54 rows=3050 width=4) (actual time=0.354..17825.380 rows=3045 loops=1)
         Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         Rows Removed by Filter: 77
 Total runtime: 17828.704 ms
(5 rows)

arvados_production=# set enable_seqscan = off;
SET
arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
                                                                                                                                                                                                                             QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                                  
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=852.82..852.83 rows=1 width=4) (actual time=4.721..4.721 rows=1 loops=1)
   ->  Bitmap Heap Scan on pipeline_instances  (cost=203.70..845.20 rows=3050 width=4) (actual time=1.059..2.959 rows=3045 loops=1)
         Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         ->  Bitmap Index Scan on pipeline_instances_full_text_search_idx  (cost=0.00..202.93 rows=3050 width=0) (actual time=1.018..1.018 rows=3354 loops=1)
               Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Total runtime: 4.792 ms
(6 rows)
Actions #51

Updated by Tom Morris almost 8 years ago

Sounds like increasing SEQ_PAGE_COST may be one way to get the planner to weight the strategies differently.

http://www.postgresonline.com/journal/archives/149-Forcing-the-planners-hand-with-set-enable_seqscan-off-WTF.html

Note that although this example increases it from 1 to 2, the setting is actually a floating-point value, so fractional values work too.
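For example (a sketch only; the right value would have to be tuned per cluster), the setting accepts fractional values at the session level:

set seq_page_cost = 1.5;   -- any float is accepted, not just whole numbers
-- then re-run the EXPLAIN ANALYZE query above and compare the chosen plan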

Actions #52

Updated by Radhika Chippada almost 8 years ago

Since “enable_seqscan” can only be set per query, and since forcing index usage appears to be promising, we need to figure out how to get there. A few ideas I found:

1. According to http://www.postgresonline.com/journal/archives/149-Forcing-the-planners-hand-with-set-enable_seqscan-off-WTF.html , they updated their code to set enable_seqscan to off.
I think we should try this as a last resort (after trying 2 below and any other options we come across)

set enable_seqscan = off;
SELECT item_name, geom
    FROM items …

2. According to https://www.postgresql.org/docs/9.2/static/runtime-config-query.html , reducing the value of random_page_cost relative to seq_page_cost will make the system prefer index scans.
I tried it on my db and saw the index scan being used. Can we try the following steps on 4xphq and then on qr1hi to see if this does the trick?

  • Run the following and note the values. The defaults are 1 and 4 per the documentation, but it doesn’t hurt to verify.
arvados_dev=> show seq_page_cost;
 seq_page_cost 
---------------
 1
(1 row)

arvados_dev=> show random_page_cost;
 random_page_cost 
------------------
 4
(1 row)
  • Set seq_page_cost to a value higher than random_page_cost (I used 5 where the random_page_cost is 4)
arvados_dev=> set seq_page_cost = 5;
SET
  • Run the following query and verify that index scan is used
explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
  • Did you see the index scan being used? If so, I think we can make this change in the postgres configuration (a sketch of how the setting could be persisted follows at the end of this note).
  • On my db:
    arvados_dev=> set seq_page_cost = 1;
    SET
    arvados_dev=> explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
     Aggregate  (cost=10.70..10.71 rows=1 width=4) (actual time=3.212..3.215 rows=1 loops=1)
       ->  Seq Scan on pipeline_instances  (cost=0.00..10.70 rows=1 width=4) (actual time=0.351..3.116 rows=11 loops=1)
             Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
     Total runtime: 3.277 ms
    (4 rows)
    
    arvados_dev=> set seq_page_cost = 5;
    SET
    arvados_dev=> explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
     Aggregate  (cost=28.13..28.14 rows=1 width=4) (actual time=0.181..0.183 rows=1 loops=1)
       ->  Bitmap Heap Scan on pipeline_instances  (cost=24.06..28.13 rows=1 width=4) (actual time=0.096..0.130 rows=11 loops=1)
             Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
             ->  Bitmap Index Scan on pipeline_instances_full_text_search_idx  (cost=0.00..24.06 rows=1 width=0) (actual time=0.084..0.084 rows=14 loops=1)
                   Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
     Total runtime: 0.245 ms
    (6 rows)
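
If that works, one possible way to make the setting persistent (a sketch only, not tested on these clusters; the value 5 is just the one from the experiment above, not a recommendation):

-- per-database, picked up by new connections (works on 9.1 and later)
ALTER DATABASE arvados_production SET seq_page_cost = 5;

-- or cluster-wide, on PostgreSQL 9.4 and later only
ALTER SYSTEM SET seq_page_cost = 5;
SELECT pg_reload_conf();

-- or set seq_page_cost = 5 in postgresql.conf and reload postgres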
    
    
Actions #53

Updated by Ward Vandewege almost 8 years ago

On 9tee4:


arvados_production=# analyze verbose pipeline_instances;
INFO:  analyzing "public.pipeline_instances" 
INFO:  "pipeline_instances": scanned 70 of 70 pages, containing 737 live rows and 87 dead rows; 737 rows in sample, 737 estimated total rows
ANALYZE
arvados_production=# show seq_page_cost ;
 seq_page_cost 
---------------
 1
(1 row)

arvados_production=# show random_page_cost ;
 random_page_cost 
------------------
 4
(1 row)

arvados_production=# show enable_seqscan ;
 enable_seqscan 
----------------
 on
(1 row)

arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));

                               QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                             
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=123.39..123.40 rows=1 width=4) (actual time=453.510..453.511 rows=1 loops=1)
   ->  Seq Scan on pipeline_instances  (cost=0.00..121.59 rows=721 width=4) (actual time=7.092..452.504 rows=718 loops=1)
         Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Total runtime: 453.565 ms
(4 rows)

arvados_production=# vacuum analyze pipeline_instances;
VACUUM
arvados_production=# ANALYZE verbose pipeline_instances;
INFO:  analyzing "public.pipeline_instances" 
INFO:  "pipeline_instances": scanned 70 of 70 pages, containing 737 live rows and 0 dead rows; 737 rows in sample, 737 estimated total rows
ANALYZE
arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));

                               QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                             
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=123.39..123.40 rows=1 width=4) (actual time=451.361..451.362 rows=1 loops=1)
   ->  Seq Scan on pipeline_instances  (cost=0.00..121.59 rows=721 width=4) (actual time=7.165..450.407 rows=718 loops=1)
         Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Total runtime: 451.404 ms
(4 rows)

arvados_production=# set default_statistics_target =10000;
SET
arvados_production=# show default_statistics_target ;
 default_statistics_target 
---------------------------
 10000
(1 row)

arvados_production=# ANALYZE verbose pipeline_instances;
INFO:  analyzing "public.pipeline_instances" 
INFO:  "pipeline_instances": scanned 70 of 70 pages, containing 737 live rows and 0 dead rows; 737 rows in sample, 737 estimated total rows
ANALYZE
arvados_production=# 
arvados_production=# 
arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));

                                                                                                                                                                                                                                                                                                                                                                                                                             QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                             
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=123.39..123.40 rows=1 width=4) (actual time=453.652..453.653 rows=1 loops=1)
   ->  Seq Scan on pipeline_instances  (cost=0.00..121.59 rows=721 width=4) (actual time=7.229..452.703 rows=718 loops=1)
         Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Total runtime: 453.697 ms
(4 rows)

arvados_production=# vacuum analyze pipeline_instances;
VACUUM
arvados_production=# ANALYZE verbose pipeline_instances;
INFO:  analyzing "public.pipeline_instances" 
INFO:  "pipeline_instances": scanned 70 of 70 pages, containing 737 live rows and 0 dead rows; 737 rows in sample, 737 estimated total rows
ANALYZE
arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));

                               QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                             
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=123.39..123.40 rows=1 width=4) (actual time=457.745..457.746 rows=1 loops=1)
   ->  Seq Scan on pipeline_instances  (cost=0.00..121.59 rows=721 width=4) (actual time=7.206..456.752 rows=718 loops=1)
         Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Total runtime: 457.792 ms
(4 rows)

So that doesn't help. This is postgres 9.1 on 9tee4.

But on 4xphq, with postgres 9.4:

postgres@4xphq:~$ psql arvados_production
psql (9.4.9)
Type "help" for help.

arvados_production=# show default_statistics_target ;
 default_statistics_target 
---------------------------
 100
(1 row)

arvados_production=# set default_statistics_target =10000;
SET
arvados_production=# show default_statistics_target ;
 default_statistics_target 
---------------------------
 10000
(1 row)

arvados_production=# analyze pipeline_instances;
ANALYZE
arvados_production=# analyze verbose pipeline_instances;
INFO:  analyzing "public.pipeline_instances" 
INFO:  "pipeline_instances": scanned 953 of 953 pages, containing 7465 live rows and 1374 dead rows; 7465 rows in sample, 7465 estimated total rows
ANALYZE
arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
arvados_production=# vacuum analyze pipeline_instances;
VACUUM
arvados_production=# analyze verbose pipeline_instances;
INFO:  analyzing "public.pipeline_instances" 
INFO:  "pipeline_instances": scanned 953 of 953 pages, containing 7465 live rows and 0 dead rows; 7465 rows in sample, 7465 estimated total rows
ANALYZE
arvados_production=# show default_statistics_target ;
 default_statistics_target 
---------------------------
 10000
(1 row)

arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
arvados_production=# \q
postgres@4xphq:~$ psql arvados_production
psql (9.4.9)
Type "help" for help.

arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
arvados_production=# \q
postgres@4xphq:~$ psql arvados_production
psql (9.4.9)
Type "help" for help.

arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
arvados_production=# analyze verbose pipeline_instances;
INFO:  analyzing "public.pipeline_instances" 
INFO:  "pipeline_instances": scanned 953 of 953 pages, containing 7465 live rows and 0 dead rows; 7465 rows in sample, 7465 estimated total rows

ANALYZE
arvados_production=# 
arvados_production=# 
arvados_production=# 
arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));

                                                                                                     QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                                  
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1437.43..1437.44 rows=1 width=4) (actual time=2.932..2.932 rows=1 loops=1)
   ->  Bitmap Heap Scan on pipeline_instances  (cost=75.74..1423.33 rows=5637 width=4) (actual time=0.920..2.019 rows=2749 loops=1)
         Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         Heap Blocks: exact=386
         ->  Bitmap Index Scan on pipeline_instances_full_text_search_idx  (cost=0.00..74.34 rows=5637 width=0) (actual time=0.873..0.873 rows=2749 loops=1)
               Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Planning time: 1.251 ms
 Execution time: 3.013 ms
(8 rows)

(END)

On c97qk, with postgres 9.3:


postgres@c97qk:~$ psql arvados_production;
psql (9.3.11)
Type "help" for help.

arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));

                                                                                                QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                             
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=2000.88..2000.89 rows=1 width=4) (actual time=4485.929..4485.930 rows=1 loops=1)
   ->  Seq Scan on pipeline_instances  (cost=0.00..1993.59 rows=2917 width=4) (actual time=2.674..4483.521 rows=2867 loops=1)
         Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         Rows Removed by Filter: 9279
 Total runtime: 4486.008 ms
(5 rows)

arvados_production=# set enable_seqscan =off;
SET

arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));

                                                                                                                                                                                                                                                                                                                                                                                                                                  QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                                  
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=2157.15..2157.16 rows=1 width=4) (actual time=5.572..5.572 rows=1 loops=1)
   ->  Bitmap Heap Scan on pipeline_instances  (cost=570.67..2149.86 rows=2917 width=4) (actual time=1.933..3.946 rows=2867 loops=1)
         Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         ->  Bitmap Index Scan on pipeline_instances_full_text_search_idx  (cost=0.00..569.94 rows=2917 width=0) (actual time=1.873..1.873 rows=3201 loops=1)
               Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Total runtime: 5.631 ms
(6 rows)

arvados_production=# set enable_seqscan =on;
SET
arvados_production=# show enable_seqscan ;
 enable_seqscan 
----------------
 on
(1 row)

arvados_production=# show seq_page_cost ;
 seq_page_cost 
---------------
 1
(1 row)

arvados_production=# show random_page_cost ;
 random_page_cost 
------------------
 4
(1 row)
arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));

                                                                                                     QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                                  
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1669.15..1669.16 rows=1 width=4) (actual time=4.380..4.380 rows=1 loops=1)
   ->  Bitmap Heap Scan on pipeline_instances  (cost=82.67..1661.86 rows=2917 width=4) (actual time=0.877..2.745 rows=2868 loops=1)
         Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         ->  Bitmap Index Scan on pipeline_instances_full_text_search_idx  (cost=0.00..81.94 rows=2917 width=0) (actual time=0.822..0.822 rows=3214 loops=1)
               Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Total runtime: 4.431 ms
(6 rows)

arvados_production=# analyze verbose pipeline_instances ;
INFO:  analyzing "public.pipeline_instances" 
INFO:  "pipeline_instances": scanned 1375 of 1375 pages, containing 12151 live rows and 677 dead rows; 12151 rows in sample, 12151 estimated total rows
ANALYZE

arvados_production=# explain analyze SELECT COUNT(DISTINCT "pipeline_instances"."id") FROM "pipeline_instances"  WHERE ((to_tsvector('english', ' ' || coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')) @@ to_tsquery('hash:*')));
                                                                                                     QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                                  
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1832.30..1832.31 rows=1 width=4) (actual time=4.525..4.525 rows=1 loops=1)
   ->  Bitmap Heap Scan on pipeline_instances  (cost=167.08..1822.29 rows=4003 width=4) (actual time=0.959..2.876 rows=2868 loops=1)
         Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         ->  Bitmap Index Scan on pipeline_instances_full_text_search_idx  (cost=0.00..166.08 rows=4003 width=0) (actual time=0.882..0.882 rows=3252 loops=1)
               Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Total runtime: 4.576 ms
(6 rows)

Actions #54

Updated by Radhika Chippada almost 8 years ago

Can you please collect this data for su92l and qr1hi as well? (On su92l, full text search takes about 30 seconds, and on qr1hi it takes about 2 minutes.)

postgres version, and whether or not an index scan is used with any combination of these parameters
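
For reference, a sketch of the psql session that would collect this (the query referred to in the comments is the same pipeline_instances full text search query used in the notes above):

SELECT version();
SHOW seq_page_cost;
SHOW random_page_cost;
SHOW enable_seqscan;
SET seq_page_cost = 5;
-- then run EXPLAIN ANALYZE on the pipeline_instances full text search query,
-- once before and once after the SET, and note which plan is chosen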

Thanks.

Actions #55

Updated by Javier Bértoli almost 8 years ago

Running tests on database qr1hi_arvados_production on host tb05z:

The ORIGINAL QUERY Radhika passed me:

EXPLAIN ANALYZE SELECT COUNT(DISTINCT "pipeline_instances"."id") 
  FROM "pipeline_instances"  
  WHERE (
    (to_tsvector('english', 
     ' ' || 
     coalesce(uuid,'')                    || ' ' ||
     coalesce(owner_uuid,'')              || ' ' ||
     coalesce(modified_by_client_uuid,'') || ' ' || 
     coalesce(modified_by_user_uuid,'')   || ' ' ||
     coalesce(pipeline_template_uuid,'')  || ' ' ||
     coalesce(name,'')                    || ' ' ||
     coalesce(components,'')              || ' ' ||
     coalesce(properties,'')              || ' ' ||
     coalesce(state,'')                   || ' ' ||
     coalesce(components_summary,'')      || ' ' ||
     coalesce(description,'')) @@ to_tsquery('hash:*')
    )
  );

returns

Aggregate  (cost=3257.15..3257.16 rows=1 width=4) (actual time=64546.727..64546.728 rows=1 loops=1)
   ->  Seq Scan on pipeline_instances  (cost=0.00..3220.55 rows=14640 width=4) (actual time=1.091..64518.068 rows=14131 loops=1)
         Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((' '::text || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         Rows Removed by Filter: 1034
 Planning time: 0.388 ms
 Execution time: 64546.794 ms

And actually running the query, it returns

 count 
-------
 14131
(1 row)

I added the following index:

CREATE INDEX javier_pipeline_instances_full_text_search_idx 
  ON pipeline_instances 
  USING gin (to_tsvector( 'english'::regconfig,
             COALESCE(uuid,                    ''::character varying)::text || ' '::text ||
             COALESCE(owner_uuid,              ''::character varying)::text || ' '::text ||
             COALESCE(modified_by_client_uuid, ''::character varying)::text || ' '::text ||
             COALESCE(modified_by_user_uuid,   ''::character varying)::text || ' '::text ||
             COALESCE(pipeline_template_uuid,  ''::character varying)::text || ' '::text ||
             COALESCE(name,                    ''::character varying)::text || ' '::text ||
             COALESCE(components,              ''::character varying)::text || ' '::text ||
             COALESCE(properties,              ''::character varying)::text || ' '::text ||
             COALESCE(state,                   ''::character varying)::text || ' '::text ||
             COALESCE(components_summary,      ''::character varying)::text || ' '::text ||
             COALESCE(description,             ''::character varying)::text)
            )

And modified the query, removing the leading whitespace:

SELECT COUNT(DISTINCT "pipeline_instances"."id") 
  FROM "pipeline_instances"  
  WHERE (
          (to_tsvector('english', 
           coalesce(uuid,'')                    || ' ' ||
           coalesce(owner_uuid,'')              || ' ' ||
           coalesce(modified_by_client_uuid,'') || ' ' || 
           coalesce(modified_by_user_uuid,'')   || ' ' ||
           coalesce(pipeline_template_uuid,'')  || ' ' ||
           coalesce(name,'')                    || ' ' ||
           coalesce(components,'')              || ' ' ||
           coalesce(properties,'')              || ' ' ||
           coalesce(state,'')                   || ' ' ||
           coalesce(components_summary,'')      || ' ' ||
           coalesce(description,'')) @@ to_tsquery('hash:*')
  )       
);      

it returns (almost immediately)

Aggregate  (cost=874.61..874.62 rows=1 width=4) (actual time=38.157..38.158 rows=1 loops=1)
   ->  Bitmap Heap Scan on pipeline_instances  (cost=26.41..873.85 rows=303 width=4) (actual time=5.380..22.347 rows=14131 loops=1)
         Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((COALESCE(uuid, ''::character varying))::text || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
         Heap Blocks: exact=2157
         ->  Bitmap Index Scan on javier_pipeline_instances_full_text_search_idx  (cost=0.00..26.33 rows=303 width=0) (actual time=4.989..4.989 rows=14131 loops=1)
               Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((COALESCE(uuid, ''::character varying))::text || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text)) @@ to_tsquery('hash:*'::text))
 Planning time: 0.243 ms
 Execution time: 38.237 ms

Notice it now USES THE NEW INDEX, with no changes to the planner cost settings or any other tuning. When actually running the query, the number of results is the same.
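
The underlying reason (a minimal sketch with a hypothetical table, not one of the Arvados tables): the planner only considers an expression index when the expression in the query matches the indexed expression, so a leading ' ' || that is present on one side but not the other makes the index unusable:

CREATE TABLE t (name varchar);
CREATE INDEX t_fts_idx ON t
  USING gin (to_tsvector('english', coalesce(name, '')));

-- expression matches the index definition, so the planner can choose t_fts_idx
-- (given enough rows to make it worthwhile)
EXPLAIN SELECT * FROM t
  WHERE to_tsvector('english', coalesce(name, '')) @@ to_tsquery('hash:*');

-- the extra leading ' ' || makes this a different expression, so t_fts_idx is
-- never a candidate and the planner has to fall back to a sequential scan
EXPLAIN SELECT * FROM t
  WHERE to_tsvector('english', ' ' || coalesce(name, '')) @@ to_tsquery('hash:*');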

Actions #56

Updated by Javier Bértoli almost 8 years ago

Running tests on database qr1hi_arvados_production on host tb05z.

Refactored all the FULL TEXT SEARCH indexes, removing the leading ' '::text from all of them (file with the indexes attached), and saved them as refactored_*. Also, some indexes didn't match the WHERE clause used in the full text search, which also makes the planner decide not to use those indexes.

Now, testing in psql with \timing:

  • Old query:
SELECT "jobs".* 
  FROM "jobs" 
  WHERE (to_tsvector('english', ' ' ||  
         coalesce(uuid,'')                     || ' ' ||
         coalesce(owner_uuid,'')               || ' ' ||
         coalesce(modified_by_client_uuid,'')  || ' ' ||
         coalesce(modified_by_user_uuid,'')    || ' ' ||
         coalesce(submit_id,'')                || ' ' ||
         coalesce(script,'')                   || ' ' ||
         coalesce(script_version,'')           || ' ' ||
         coalesce(script_parameters,'')        || ' ' ||
         coalesce(cancelled_by_client_uuid,'') || ' ' || 
         coalesce(cancelled_by_user_uuid,'')   || ' ' || 
         coalesce(output,'')                   || ' ' || 
         coalesce(is_locked_by_uuid,'')        || ' ' || 
         coalesce(log,'')                      || ' ' || 
         coalesce(tasks_summary,'')            || ' ' || 
         coalesce(runtime_constraints,'')      || ' ' || 
         coalesce(repository,'')               || ' ' || 
         coalesce(supplied_script_version,'')  || ' ' || 
         coalesce(docker_image_locator,'')     || ' ' || 
         coalesce(description,'')              || ' ' || 
         coalesce(state,'')                    || ' ' || 
         coalesce(arvados_sdk_version,'')      || ' ' || 
         coalesce(components,'')               || ' ' || 
         coalesce(script_parameters_digest,'')
        ) @@ to_tsquery('HWI:*')) ORDER BY jobs.modified_at desc, jobs.uuid LIMIT 200 OFFSET 0;

Time: 16395.353 ms

  • Using the new query and index:

SELECT "jobs".* 
  FROM "jobs" 
  WHERE (to_tsvector('english',         
         coalesce(uuid,'')                     || ' ' ||
         coalesce(owner_uuid,'')               || ' ' ||
         coalesce(modified_by_client_uuid,'')  || ' ' ||
         coalesce(modified_by_user_uuid,'')    || ' ' ||
         coalesce(submit_id,'')                || ' ' ||
         coalesce(script,'')                   || ' ' ||
         coalesce(script_version,'')           || ' ' ||
         coalesce(script_parameters,'')        || ' ' ||
         coalesce(cancelled_by_client_uuid,'') || ' ' || 
         coalesce(cancelled_by_user_uuid,'')   || ' ' || 
         coalesce(output,'')                   || ' ' || 
         coalesce(is_locked_by_uuid,'')        || ' ' || 
         coalesce(log,'')                      || ' ' || 
         coalesce(tasks_summary,'')            || ' ' || 
         coalesce(runtime_constraints,'')      || ' ' || 
         coalesce(repository,'')               || ' ' || 
         coalesce(supplied_script_version,'')  || ' ' || 
         coalesce(docker_image_locator,'')     || ' ' || 
         coalesce(description,'')              || ' ' || 
         coalesce(state,'')                    || ' ' || 
         coalesce(arvados_sdk_version,'')      || ' ' || 
         coalesce(components,'')               || ' ' || 
         coalesce(script_parameters_digest,'')
        ) @@ to_tsquery('HWI:*')) ORDER BY jobs.modified_at desc, jobs.uuid LIMIT 200 OFFSET 0;

Time: 4.197 ms
Actions #57

Updated by Radhika Chippada almost 8 years ago

The leading space in the full-text search indexes was added in #6098#note-8. We now see that it is no longer desirable. We agreed that we will remove it and see how performance changes.

Actions #58

Updated by Radhika Chippada almost 8 years ago

In branch 10028-functional-full-text-search-indexes at d13cc16

  • Added a migration script to recreate the full-text search indexes without the leading space (the indexes are now being chosen for all searches in my local db)
  • Also corrected jobs_full_text_search_idx so it no longer includes the "script_parameters_digest" column; the container_requests index was also missing many columns and has been fixed. The recreated indexes are:
CREATE INDEX collections_full_text_search_idx ON collections USING gin(to_tsvector('english', coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(portable_data_hash,'') || ' ' || coalesce(uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(file_names,'')));

CREATE INDEX container_requests_full_text_search_idx ON container_requests USING gin(to_tsvector('english', coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(requesting_container_uuid,'') || ' ' || coalesce(container_uuid,'') || ' ' || coalesce(mounts,'') || ' ' || coalesce(runtime_constraints,'') || ' ' || coalesce(container_image,'') || ' ' || coalesce(environment,'') || ' ' || coalesce(cwd,'') || ' ' || coalesce(command,'') || ' ' || coalesce(output_path,'') || ' ' || coalesce(filters,'') || ' ' || coalesce(scheduling_parameters,'') || ' ' || coalesce(output_uuid,'') || ' ' || coalesce(log_uuid,'')));

CREATE INDEX groups_full_text_search_idx ON groups USING gin(to_tsvector('english', coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(group_class,'')));

CREATE INDEX jobs_full_text_search_idx ON jobs USING gin(to_tsvector('english', coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(submit_id,'') || ' ' || coalesce(script,'') || ' ' || coalesce(script_version,'') || ' ' || coalesce(script_parameters,'') || ' ' || coalesce(cancelled_by_client_uuid,'') || ' ' || coalesce(cancelled_by_user_uuid,'') || ' ' || coalesce(output,'') || ' ' || coalesce(is_locked_by_uuid,'') || ' ' || coalesce(log,'') || ' ' || coalesce(tasks_summary,'') || ' ' || coalesce(runtime_constraints,'') || ' ' || coalesce(repository,'') || ' ' || coalesce(supplied_script_version,'') || ' ' || coalesce(docker_image_locator,'') || ' ' || coalesce(description,'') || ' ' || coalesce(state,'') || ' ' || coalesce(arvados_sdk_version,'') || ' ' || coalesce(components,'')));

CREATE INDEX pipeline_instances_full_text_search_idx ON pipeline_instances USING gin(to_tsvector('english', coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(pipeline_template_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(properties,'') || ' ' || coalesce(state,'') || ' ' || coalesce(components_summary,'') || ' ' || coalesce(description,'')));

CREATE INDEX pipeline_templates_full_text_search_idx ON pipeline_templates USING gin(to_tsvector('english', coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(components,'') || ' ' || coalesce(description,'')));

CREATE INDEX workflows_full_text_search_idx ON workflows USING gin(to_tsvector('english', coalesce(uuid,'') || ' ' || coalesce(owner_uuid,'') || ' ' || coalesce(modified_by_client_uuid,'') || ' ' || coalesce(modified_by_user_uuid,'') || ' ' || coalesce(name,'') || ' ' || coalesce(description,'') || ' ' || coalesce(definition,'')));
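
As a sanity check (a hypothetical verification, not part of the migration itself), EXPLAIN should now show the recreated index being used whenever the query expression matches the index definition, e.g. for collections with a selective placeholder term:

EXPLAIN ANALYZE
SELECT COUNT(*)
  FROM collections
  WHERE to_tsvector('english',
         coalesce(owner_uuid,'')               || ' ' ||
         coalesce(modified_by_client_uuid,'')  || ' ' ||
         coalesce(modified_by_user_uuid,'')    || ' ' ||
         coalesce(portable_data_hash,'')       || ' ' ||
         coalesce(uuid,'')                     || ' ' ||
         coalesce(name,'')                     || ' ' ||
         coalesce(description,'')              || ' ' ||
         coalesce(properties,'')               || ' ' ||
         coalesce(file_names,'')
        ) @@ to_tsquery('english', 'sometoken:*');
-- For a sufficiently selective term, the plan should include
-- "Bitmap Index Scan on collections_full_text_search_idx".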

Actions #59

Updated by Javier Bértoli almost 8 years ago

LGTM. Ready to merge

Actions #60

Updated by Nico César almost 8 years ago

  • Status changed from In Progress to Resolved
Actions #61

Updated by Radhika Chippada almost 8 years ago

  • Status changed from Resolved to In Progress

The full-text search was super fast on qr1hi on the morning of the 24th (after the deploy). It took about 5 seconds after the search dialog appeared (I searched by typing "hash" in the desktop search box). This morning it took the "usual" 2 minutes. So the issue may not be entirely about the indexes.

Upon further research I came across https://eng.uber.com/mysql-migration/, which led me to the following:

https://www.postgresql.org/docs/9.1/static/sql-vacuum.html

“VACUUM reclaims storage occupied by dead tuples. In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done. Therefore it's necessary to do VACUUM periodically, especially on frequently-updated tables.” [...] “We recommend that active production databases be vacuumed frequently (at least nightly), in order to remove dead rows. After adding or deleting a large number of rows, it might be a good idea to issue a VACUUM ANALYZE command for the affected table. This will update the system catalogs with the results of all recent changes, and allow the PostgreSQL query planner to make better choices in planning queries.”

So I am wondering: do we vacuum regularly, and if not, should we add that as well?
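
For reference, a manual pass on one table would look something like this (a sketch only; whether and where to schedule it is the open question above):

-- Reclaim dead tuples and refresh planner statistics for a frequently-updated table:
VACUUM ANALYZE pipeline_instances;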

Actions #62

Updated by Javier Bértoli almost 8 years ago

Postgres has performed vacuum operations automatically since version ~9.0. In fact, autovacuum is enabled on qr1hi:

postgres 24384  0.0  0.4 267448 140200 ?       Ss   Jul07   5:31 postgres: autovacuum launcher process                                                                                      
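
Beyond the launcher process, per-table autovacuum activity can also be checked directly (a quick sketch):

-- When did autovacuum/autoanalyze last run on each table?
SELECT relname, last_autovacuum, last_autoanalyze
  FROM pg_stat_user_tables
 ORDER BY last_autovacuum DESC NULLS LAST;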

That said, AFAIK vacuuming does not affect index searches, only operations on the tables themselves.

My bet is that, again, you hit another search that does not have a proper index (perhaps a different row order?). If you can provide me with the SQL query, I can try to find out what's going on.

Actions #63

Updated by Radhika Chippada almost 8 years ago

Javier: it was the same "test" on the 24th and this morning.

To research it further: log in to qr1hi at https://workbench.qr1hi.arvadosapi.com; you will land on the Workbench dashboard. Type "hash" in the search box on the dashboard and click the search icon. This opens the search dialog and eventually shows the results. While this is happening, you can monitor the database logs to see which queries were executed. This exercises only the full-text search path; no other queries are expected (look for hash:* in the SQL queries on the server; a sketch of one way to catch the query in flight follows). Hope this helps. Thanks.
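
One way to catch the query while it is running (a sketch, assuming a PostgreSQL version >= 9.2 where pg_stat_activity exposes a query column):

SELECT pid, now() - query_start AS runtime, query
  FROM pg_stat_activity
 WHERE query LIKE '%hash:*%';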

Actions #64

Updated by Javier Bértoli almost 8 years ago

Well, my bet was wrong. After more tests and research I found that:

1. The production database server has the parameter effective_cache_size set to 128MB, while the suggested conservative value is roughly half of the available RAM. After discussing it with Nico César, we raised it to 8GB.
2. I've been struggling to understand WHY some queries run fast and others don't, and after checking with @lucas, we found that full-text search is not that good at prefix searches (like hash:*). There is some explanation of this issue, and a suggestion to use trigram functions, at https://www.postgresql.org/docs/current/static/pgtrgm.html (a sketch of that approach follows at the end of this comment).
3. The planner is doing the right thing. The only problem is that the right thing is not the thing we want:

In the pipeline_instances table (total records: 15292), the query:

SELECT COUNT(DISTINCT "pipeline_instances"."id")
  FROM "pipeline_instances" 
  WHERE (
          (to_tsvector('english',
           coalesce(uuid,'')                    || ' ' ||
           coalesce(owner_uuid,'')              || ' ' ||
           coalesce(modified_by_client_uuid,'') || ' ' ||
           coalesce(modified_by_user_uuid,'')   || ' ' ||
           coalesce(pipeline_template_uuid,'')  || ' ' ||
           coalesce(name,'')                    || ' ' ||
           coalesce(components,'')              || ' ' ||
           coalesce(properties,'')              || ' ' ||
           coalesce(state,'')                   || ' ' ||
           coalesce(components_summary,'')      || ' ' ||
           coalesce(description,'')) @@ to_tsquery('english','hash:*')
  )
);

takes 75 seconds and the search returns

 count 
-------
 14235
(1 row)

Time: 75310.816 ms

Having to return almost the entire table (notice that the filter removes only ~1k rows) is the reason the planner considers a sequential scan to be the better option:

 Aggregate  (cost=3487.76..3487.77 rows=1 width=4) (actual time=75068.882..75068.883 rows=1 loops=1)
   ->  Seq Scan on pipeline_instances  (cost=0.00..3450.91 rows=14738 width=4)
         (actual time=1.234..75040.915 rows=14235 loops=1)
         Filter: (to_tsvector('english'::regconfig, (((((((((((((((((((((
             COALESCE(uuid, ''::character varying))::text || ' '::text) ||
             (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) ||
             (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) ||
             (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) ||
             (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) ||
             (COALESCE(name, ''::character varying))::text) || ' '::text) ||
             COALESCE(components, ''::text)) || ' '::text) ||
             COALESCE(properties, ''::text)) || ' '::text) ||
             (COALESCE(state, ''::character varying))::text) || ' '::text) ||
             COALESCE(components_summary, ''::text)) || ' '::text) ||
            (COALESCE(description, ''::character varying))::text)) @@ '''hash'':*'::tsquery)
         Rows Removed by Filter: 1057
 Total runtime: 75068.940 ms

On the other hand, when you search for a less common string, or use a full-word search instead of a prefix search, the planner does what we expect:

SELECT COUNT(DISTINCT "pipeline_instances"."id")
  FROM "pipeline_instances" 
  WHERE (
          (to_tsvector('english',
           coalesce(uuid,'')                    || ' ' ||
           coalesce(owner_uuid,'')              || ' ' ||
           coalesce(modified_by_client_uuid,'') || ' ' ||
           coalesce(modified_by_user_uuid,'')   || ' ' ||
           coalesce(pipeline_template_uuid,'')  || ' ' ||
           coalesce(name,'')                    || ' ' ||
           coalesce(components,'')              || ' ' ||
           coalesce(properties,'')              || ' ' ||
           coalesce(state,'')                   || ' ' ||
           coalesce(components_summary,'')      || ' ' ||
           coalesce(description,'')) @@ to_tsquery('english','lucas:*')
  )
);

returns

 count 
-------
     0
(1 row)

Time: 2.005 ms

And even using the hash term as a full word:

SELECT COUNT(DISTINCT "pipeline_instances"."id")
  FROM "pipeline_instances" 
  WHERE (
          (to_tsvector('english',
           coalesce(uuid,'')                    || ' ' ||
           coalesce(owner_uuid,'')              || ' ' ||
           coalesce(modified_by_client_uuid,'') || ' ' ||
           coalesce(modified_by_user_uuid,'')   || ' ' ||
           coalesce(pipeline_template_uuid,'')  || ' ' ||
           coalesce(name,'')                    || ' ' ||
           coalesce(components,'')              || ' ' ||
           coalesce(properties,'')              || ' ' ||
           coalesce(state,'')                   || ' ' ||
           coalesce(components_summary,'')      || ' ' ||
           coalesce(description,'')) @@ to_tsquery('english','hash')
  )
);
 count 
-------
  5313
(1 row)

Time: 8.085 ms

And the query plan:

 Aggregate  (cost=3292.78..3292.79 rows=1 width=4) (actual time=16.131..16.132 rows=1 loops=1)
   ->  Bitmap Heap Scan on pipeline_instances
         (cost=477.22..3279.50 rows=5312 width=4) (actual time=2.840..10.477 rows=5313 loops=1)
         Recheck Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((
            COALESCE(uuid, ''::character varying))::text || ' '::text) ||
            (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) ||
            (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) ||
            (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) ||
            (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) ||
            (COALESCE(name, ''::character varying))::text) || ' '::text) ||
            COALESCE(components, ''::text)) || ' '::text) ||
            COALESCE(properties, ''::text)) || ' '::text) ||
            (COALESCE(state, ''::character varying))::text) || ' '::text) ||
            COALESCE(components_summary, ''::text)) || ' '::text) ||
            (COALESCE(description, ''::character varying))::text)) @@ '''hash'''::tsquery)
         ->  Bitmap Index Scan on pipeline_instances_full_text_search_idx
                (cost=0.00..475.89 rows=5312 width=0) (actual time=2.475..2.475 rows=6433 loops=1)
               Index Cond: (to_tsvector('english'::regconfig, (((((((((((((((((((((
            COALESCE(uuid, ''::character varying))::text || ' '::text) ||
            (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) ||
            (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) ||
            (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) ||
            (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) ||
            (COALESCE(name, ''::character varying))::text) || ' '::text) ||
            COALESCE(components, ''::text)) || ' '::text) ||
            COALESCE(properties, ''::text)) || ' '::text) ||
            (COALESCE(state, ''::character varying))::text) || ' '::text) ||
            COALESCE(components_summary, ''::text)) || ' '::text) ||
            (COALESCE(description, ''::character varying))::text)) @@ '''hash'''::tsquery)
 Total runtime: 16.196 ms

I'm not completely sure that trigram searches will help us much, and for the same reason I think we may be using full-text search wrong: we're using the 'english' text search configuration for searches that are mostly UUIDs and strings that are NOT English words reducible to lexemes.
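
A small illustration of that point (hypothetical strings; the 'simple' configuration is one alternative that skips stemming):

-- English words benefit from stemming; UUID-like tokens are not English words,
-- so the 'english' configuration buys us nothing for them:
SELECT to_tsvector('english', 'hashing reads');
SELECT to_tsvector('english', 'qr1hi-d1hrv-hv8asomcxqggcff');
SELECT to_tsvector('simple', 'qr1hi-d1hrv-hv8asomcxqggcff');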

It's possible that creating an index concatenating all the fields and then using LIKE '%whatever%' would work better for our case, but the index should be carefully planned, as the maximum index row size is 8k.
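
For the record, a rough sketch of the trigram approach mentioned above (assuming the pg_trgm extension can be installed; a single column is shown for illustration, not the full concatenation):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- A trigram GIN index supports substring matching, including LIKE '%...%':
CREATE INDEX pipeline_instances_name_trgm_idx
    ON pipeline_instances USING gin (name gin_trgm_ops);

-- Queries like this can then use the index instead of a sequential scan:
SELECT COUNT(*) FROM pipeline_instances WHERE name ILIKE '%hash%';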

Actions #65

Updated by Tom Morris almost 8 years ago

  • Status changed from In Progress to Resolved

We've met the completion criteria specified in Note #1, so I'm marking this as resolved.
