Thread: BUG #15250: ERROR: could not find pathkey item to sort
The following bug has been logged on the website:

Bug reference:      15250
Logged by:          Sara Pranke
Email address:      sara.pranke@gmail.com
PostgreSQL version: 9.4.17
Operating system:   Debian 8.0
Description:

I have complex queries on large tables that use memory for sorting. We upgraded PostgreSQL from 9.1 to 9.4, and in this version we get the error: 'could not find pathkey item to sort'. I increased work_mem to 1GB, which improved performance and avoided the error, but as the day goes on it is as if the memory is not released, and other queries with sorts fail with the same error. I searched for this error and the last report I could find was for version 9.3.5. At some point (around the end of the day) the memory is freed and it becomes possible to execute the same queries that failed before. We have replication, and the configuration is as follows:

shared_buffers = 4084037kB
work_mem = 1000MB
checkpoint_segments = 96
checkpoint_completion_target = 0.9
wal_buffers = 16MB
listen_addresses = '*'
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'cd .'
max_wal_senders = 4
hot_standby = on
wal_keep_segments = 32
effective_cache_size = 10GB
maintenance_work_mem = 1GB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
max_connections = 500
superuser_reserved_connections = 5
dynamic_shared_memory_type = posix
On 2018/06/21 6:01, PG Bug reporting form wrote:
> The following bug has been logged on the website:
>
> Bug reference:      15250
> Logged by:          Sara Pranke
> Email address:      sara.pranke@gmail.com
> PostgreSQL version: 9.4.17
> Operating system:   Debian 8.0
> Description:
>
> I have complex queries on large tables that use memory for sorting. We
> upgraded PostgreSQL from 9.1 to 9.4, and in this version we get the error:
> 'could not find pathkey item to sort'

It's hard to help without seeing the query that caused this error. Could you share the query?

Thanks,
Amit
PG Bug reporting form <noreply@postgresql.org> writes:
> I have complex queries on large tables that use memory for sorting. We
> upgraded PostgreSQL from 9.1 to 9.4, and in this version we get the error:
> 'could not find pathkey item to sort'

That's a fairly common visible symptom for erroneous construction of a plan inside the planner. It has nothing directly to do with memory consumption, although it's possible that changing work_mem would encourage the planner to choose a different plan shape that avoids the bug.

Another likely explanation for the problem appearing and disappearing is that it could be sensitive to the current statistics for the table(s) the query is on, in which case a background auto-analyze might be enough to flip you from the plan shape with the problem to the plan shape without.

Anyway, we'd certainly be interested to fix the problem if you can provide a self-contained, reproducible test case. I'd suggest trying to generate some dummy data that the query fails on.

			regards, tom lane
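[Editor's note: one way to check the auto-analyze theory above is to compare the time the error appears or disappears against the analyze timestamps in pg_stat_user_tables. A minimal sketch; the table name is a placeholder for the tables in the failing query:

```sql
-- When did autovacuum last analyze the tables involved in the failing
-- query?  A fresh last_autoanalyze near the time the error starts or
-- stops would support the statistics-driven explanation.
-- ('orders' is a placeholder table name.)
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('orders');
```
]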
Hi, Tom!
I will build a local lab to try to reproduce this error, because today it occurs only on the client's machine.

At the moment my only suspect is the memory consumed by the Sort node, because when I removed the clauses I saw consuming memory in EXPLAIN ANALYZE, the query worked.

I have one more guess: if PostgreSQL can't spill to disk after consuming, or estimating it needs, more than work_mem allows, maybe that causes the error. I don't know, just a guess.

Once I reproduce the error in the lab I will share more information here.
Thanks,
Sara
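[Editor's note: regarding the guess above, a sort that exceeds work_mem spills to temporary files and continues normally; it does not raise an error, and EXPLAIN (ANALYZE) shows which path was taken. A minimal sketch; the table and column names are placeholders:

```sql
-- The Sort node in EXPLAIN ANALYZE reports how the sort was performed:
--   "Sort Method: quicksort  Memory: ...kB"     -- fit within work_mem
--   "Sort Method: external merge  Disk: ...kB"  -- spilled to temp files
-- A spill is normal behavior, not an error condition.
-- ('big_table' and 'some_column' are placeholders.)
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table ORDER BY some_column;
```
]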
On 22 June 2018 at 03:05, Sara Pranke <sara.pranke@gmail.com> wrote:
> I will build a local lab to try to reproduce this error, because today it
> occurs only on the client's machine.
> At the moment my only suspect is the memory consumed by the Sort node,
> because when I removed the clauses I saw consuming memory in EXPLAIN
> ANALYZE, the query worked.
>
> I have one more guess: if PostgreSQL can't spill to disk after consuming,
> or estimating it needs, more than work_mem allows, maybe that causes the
> error. I don't know, just a guess.

This error came entirely from the planner, so whatever might have happened during execution has no effect. The query just happened never to get that far.

You should be able to recreate it using the same schema and query that failed. The actual data might not be too important, although it likely has driven the shape of the plan. You may be able to coax the planner into generating the same plan on empty tables by disabling various enable_* GUCs. For example, if the failing plan has a Merge Join, then you could:

SET enable_hashjoin = 0;
SET enable_nestloop = 0;

Although, if the plan is very complex then you may not have luck, since it may be caused by a combination of different join types, which might be difficult to control without the data.

If you can recreate it without data, then it would be great to see the entire schema plus the failing query. If further simplification of the query is possible, e.g. removing joins, columns or indexes that don't need to be there, then that may be helpful, but probably not necessary.

-- 
David Rowley                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
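[Editor's note: the advice above can be sketched as a single reproduction attempt. Since the error is raised during planning, plain EXPLAIN is enough and the query never has to execute. The tables and columns are placeholders standing in for the real schema:

```sql
BEGIN;
-- Disable the alternative join methods so the planner prefers a
-- Merge Join even on empty tables.  SET LOCAL confines the change
-- to this transaction.
SET LOCAL enable_hashjoin = 0;
SET LOCAL enable_nestloop = 0;
-- Plain EXPLAIN suffices: 'could not find pathkey item to sort'
-- is raised while the plan is being built.
EXPLAIN
SELECT a.id, b.val
FROM a
JOIN b ON a.id = b.id
ORDER BY a.id;
ROLLBACK;
```
]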