Re: Searching in varchar column having 100M records - Mailing list pgsql-performance

From: Andreas Kretschmer
Subject: Re: Searching in varchar column having 100M records
Msg-id: 5322aa5e-9913-5471-7254-c5fff6c09146@a-kretschmer.de
In response to: Re: Searching in varchar column having 100M records (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-performance

On 17.07.19 at 14:48, Tomas Vondra wrote:
> Either that, or try creating a covering index, so that the query can
> do an index-only scan. That might reduce the amount of IO against the
> table, and in the index the data should be located close to each other
> (same page or pages close to each other).
>
> So try something like
>
>    CREATE INDEX ios_idx ON table (field, user_id);
>
> and make sure the table is vacuumed often enough (so that the visibility
> map is up to date). 
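
To verify that the planner really switches to an index-only scan, it's
worth checking EXPLAIN after vacuuming. A minimal sketch; the table
name t and the filter value are just placeholders for the example
above:

    -- keep the visibility map current so the scan can skip heap fetches
    VACUUM ANALYZE t;

    -- the plan should show "Index Only Scan using ios_idx"
    -- with "Heap Fetches: 0" (or close to it)
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT user_id FROM t WHERE field = 'some-value';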

Yeah, and please don't use varchar(64) for the user_id field; use UUID
instead, to save space on disk and get faster comparisons.
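
If the stored values are already valid UUID strings, the column can be
converted in place. A minimal sketch, with t and user_id as placeholder
names as above:

    -- uuid is a fixed 16 bytes on disk and compares as plain bytes
    -- (no collation overhead), versus up to 64 characters plus header
    -- for varchar(64); dependent indexes are rebuilt automatically
    ALTER TABLE t
        ALTER COLUMN user_id TYPE uuid
        USING user_id::uuid;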


Regards, Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com



