Re: pgsql: Modify tqueue infrastructure to support transient record types. - Mailing list pgsql-committers

From: Amit Kapila
Subject: Re: pgsql: Modify tqueue infrastructure to support transient record types.
Date: November 9, 2015, 13:19:02
Msg-id: CAA4eK1+ZQxNnU_RSTtc6edYuV9=aJJWXsFYZ_bs-vsJGDSHm+w@mail.gmail.com
In response to: pgsql: Modify tqueue infrastructure to support transient record types. (Robert Haas <rhaas@postgresql.org>)
Responses: Re: pgsql: Modify tqueue infrastructure to support transient record types.
List: pgsql-committers

On Sat, Nov 7, 2015 at 3:29 AM, Robert Haas <rhaas@postgresql.org> wrote:
>
> Modify tqueue infrastructure to support transient record types.
>

+static HeapTuple
+gather_readnext(GatherState *gatherstate)
+{
..
+    if (readerdone)
+    {
+        DestroyTupleQueueReader(reader);
+        --gatherstate->nreaders;
+        if (gatherstate->nreaders == 0)
+        {
+            ExecShutdownGather(gatherstate);
+            return NULL;
+        }
..
}
I think it's not good to call ExecShutdownGather after the readers are done, because it destroys the parallel context as well, and that context is still needed in cases where we must access the DSM after the readers are done, such as for a rescan or for scanning the remaining data from the local node. Here we should just shut down the workers, which is what we did prior to this commit. The attached patch fixes this problem.
With Regards,
Amit Kapila
EnterpriseDB: http://www.enterprisedb.com

Attachment: fix_gather_shutdown_workers_v1.patch