[PATCH] Better Performance for PostgreSQL with large INSERTs - Mailing list pgsql-hackers

From: Philipp Marek
Subject: [PATCH] Better Performance for PostgreSQL with large INSERTs
Msg-id: 770a7d6600de5e1c99d93afba0427c5b@marek.priv.at
List: pgsql-hackers
Sometimes, storing documents (e.g. PDFs) in a database
is much easier than using separate storage (like S3, NFS, etc.),
because of issues like backup integrity, availability,
service dependencies, access rights, encryption of data, and so on.


With this patch:

```diff
diff --git i/src/backend/libpq/pqcomm.c w/src/backend/libpq/pqcomm.c
index e517146..936b073 100644
--- i/src/backend/libpq/pqcomm.c
+++ w/src/backend/libpq/pqcomm.c
@@ -117,7 +117,8 @@ static List *sock_paths = NIL;
   */

  #define PQ_SEND_BUFFER_SIZE 8192
-#define PQ_RECV_BUFFER_SIZE 8192
+#define PQ_RECV_BUFFER_SIZE 2097152
+

  static char *PqSendBuffer;
  static int     PqSendBufferSize;       /* Size send buffer */
```

i.e. changing the network receive buffer size from 8 kB to 2 MB,
we got 7% better INSERT performance when storing BLOBs.
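
For context, here is a minimal sketch of the kind of client workload this
targets: pushing a multi-megabyte bytea value through a plain INSERT via
libpq. The connection string and the `docs (body bytea)` table are made up
for illustration; the patch itself needs no client-side changes.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection string and table name are made up for this sketch. */
    PGconn     *conn = PQconnectdb("dbname=blobtest");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* One 5 MB dummy payload standing in for a stored document. */
    size_t      len = 5 * 1024 * 1024;
    char       *blob = malloc(len);

    memset(blob, 'x', len);

    const char *values[1] = {blob};
    int         lengths[1] = {(int) len};
    int         formats[1] = {1};   /* send the parameter in binary format */

    /*
     * Each such INSERT makes the backend read a multi-megabyte protocol
     * message; with an 8 kB receive buffer that means hundreds of recv()
     * calls per statement.
     */
    PGresult   *res = PQexecParams(conn,
                                   "INSERT INTO docs (body) VALUES ($1)",
                                   1, NULL, values, lengths, formats, 0);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));

    PQclear(res);
    free(blob);
    PQfinish(conn);
    return 0;
}
```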


The 2 MB value is just what we tried; 128 kB or 256 kB work just as well.
The main point is to reduce the number of syscalls for receiving data
to about half of what it is with the 8 kB buffer.
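
As a back-of-the-envelope illustration (my own numbers, not part of the
measurement above): the lower bound on recv() calls needed to pull one
N-byte message off the socket is roughly ceil(N / PQ_RECV_BUFFER_SIZE),
so the buffer size directly scales the read-side syscall count.

```c
#include <stdio.h>

/* Rough lower bound on recv() calls needed to read one message of
 * msg_size bytes when the backend refills a buf_size receive buffer. */
static long recv_calls(long msg_size, long buf_size)
{
    return (msg_size + buf_size - 1) / buf_size;
}

int main(void)
{
    long msg = 5L * 1024 * 1024;    /* one 5 MB INSERT message */

    printf("8 kB buffer:   %ld reads\n", recv_calls(msg, 8192));     /* 640 */
    printf("128 kB buffer: %ld reads\n", recv_calls(msg, 131072));   /* 40  */
    printf("2 MB buffer:   %ld reads\n", recv_calls(msg, 2097152));  /* 3   */
    return 0;
}
```

In practice each recv() only returns whatever the kernel has buffered at
that moment, so the observed reduction is smaller than this ideal ratio,
but 8 kB still forces far more reads than necessary for multi-megabyte rows.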


Thank you for your consideration!


Regards,

Phil


