Updating a large table - Mailing list pgsql-performance

From Timokhin Maxim
Subject Updating a large table
Msg-id 145941513961163@web46o.yandex.ru
List pgsql-performance
Hello! We have a large table, 11 GB (about 37 million records), and we need to alter it: add a new column with a default value of FALSE. 'NOT NULL' is also required.
 
So, first I did:

ALTER TABLE clusters ADD COLUMN "is_paid" BOOLEAN;
ALTER TABLE clusters ALTER COLUMN "is_paid" SET DEFAULT FALSE;
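
The plan is to add the NOT NULL constraint at the very end, once every row has been backfilled; as far as I understand, on 9.6 SET NOT NULL scans the table to validate it, but does not rewrite it:

ALTER TABLE clusters ALTER COLUMN "is_paid" SET NOT NULL;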

after that:

UPDATE clusters SET is_paid = DEFAULT where ctime <= now() - interval '720h' AND is_paid != FALSE;

Everything went OK. Then I tried to run it again for an interval of 1 year, and I got what nobody wants to see: there was no available space left on the disk. The WAL files had eaten everything.
 
The master server couldn't send some of the WAL files to its replicas: there wasn't enough bandwidth, so the WAL kept accumulating.

Well, I'm searching for a better way to update the table.
Solutions I have found:
1. Split my UPDATE into chunks (see the sketch after this list).
2. Rebuild the table through a new temporary table; but that's not convenient for me, because there are a lot of foreign keys and
indexes.
3. HOT updates. This is the most interesting case for me.
Speaking of HOT updates:
https://www.dbrnd.com/2016/03/postgresql-the-awesome-table-fillfactor-to-speedup-update-and-select-statement/
The article says it might be useful for tables that change often, and moreover that it would be the best way to increase the
speed of UPDATEs.
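
For option 1, here is roughly what I have in mind. It's only a sketch, and it assumes the table has a primary key column id to batch on (committing after each batch and pausing between batches so the replicas can catch up):

-- one batch; repeat in separate transactions until it updates 0 rows
UPDATE clusters
SET is_paid = DEFAULT
WHERE id IN (
    SELECT id
    FROM clusters
    WHERE ctime <= now() - interval '1 year'
      AND is_paid IS DISTINCT FROM FALSE  -- unlike != FALSE, this also matches NULLs
    LIMIT 10000
);

As I understand it, this doesn't reduce the total amount of WAL, but pacing the batches should give the replicas a chance to consume it before it piles up on the master.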
 
So, my questions are: will HOT updates work for all tuples? This article says no:
https://www.dbrnd.com/2016/03/postgresql-alter-table-to-change-fillfactor-value/
but I could not find a confirmation of that in the official PostgreSQL documentation.

Why do I need to run VACUUM after updating?
How far should I reduce the fillfactor?
What will happen to the WAL files in this case?
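
For reference, this is what I'm considering (90 is only a guess at a starting value; as I understand it, a lowered fillfactor applies only to newly written pages, so the existing pages would have to be rewritten, e.g. by VACUUM FULL or pg_repack, before they gain free space for HOT updates):

ALTER TABLE clusters SET (fillfactor = 90);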
Thank you!



PostgreSQL 9.6

-- 
Timokhin 'maf' Maxim

