Re: [HACKERS] Parallel Bitmap scans a bit broken - Mailing list pgsql-hackers
From: Emre Hasegeli
Subject: Re: [HACKERS] Parallel Bitmap scans a bit broken
Msg-id: CAE2gYzw013Dn4VRYjDXPFCq6-XhfhseXo2J4vgAr63K=0Ox_DQ@mail.gmail.com
In response to: Re: [HACKERS] Parallel Bitmap scans a bit broken (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: [HACKERS] Parallel Bitmap scans a bit broken; Re: [HACKERS] Parallel Bitmap scans a bit broken
List: pgsql-hackers
> I don't know if this is the only problem

> I'll be in this general area today, so will mention if I stumble over
> anything that looks broken.

I was testing the same patch with a large dataset and got a different segfault:

> hasegeli=# explain select * from only mp_notification_20170225 where server_id = 7;
>                                                  QUERY PLAN
> ----------------------------------------------------------------------------------------------------------
>  Gather  (cost=26682.94..476995.88 rows=1 width=215)
>    Workers Planned: 2
>    ->  Parallel Bitmap Heap Scan on mp_notification_20170225  (cost=25682.94..475995.78 rows=1 width=215)
>          Recheck Cond: (server_id = 7)
>          ->  Bitmap Index Scan on mp_notification_block_idx  (cost=0.00..25682.94 rows=4557665 width=0)
>                Index Cond: (server_id = 7)
> (6 rows)
>
> hasegeli=# select * from only mp_notification_20170225 where server_id = 7;
> server closed the connection unexpectedly
>         This probably means the server terminated abnormally
>         before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
> * thread #1: tid = 0x5045a8f, 0x000000010ae44558 postgres`brin_deform_tuple(brdesc=0x00007fea3c86a3a8, tuple=0x00007fea3c891040) + 40 at brin_tuple.c:414, queue = 'com.apple.main-thread', stop reason = signal SIGUSR1
>   * frame #0: 0x000000010ae44558 postgres`brin_deform_tuple(brdesc=0x00007fea3c86a3a8, tuple=0x00007fea3c891040) + 40 at brin_tuple.c:414 [opt]
>     frame #1: 0x000000010ae4000c postgres`bringetbitmap(scan=0x00007fea3c875c20, tbm=<unavailable>) + 428 at brin.c:398 [opt]
>     frame #2: 0x000000010ae9b451 postgres`index_getbitmap(scan=0x00007fea3c875c20, bitmap=<unavailable>) + 65 at indexam.c:726 [opt]
>     frame #3: 0x000000010b0035a9 postgres`MultiExecBitmapIndexScan(node=<unavailable>) + 233 at nodeBitmapIndexscan.c:91 [opt]
>     frame #4: 0x000000010b002840 postgres`BitmapHeapNext(node=<unavailable>) + 400 at nodeBitmapHeapscan.c:143 [opt]
>     frame #5: 0x000000010afef5d0 postgres`ExecProcNode(node=0x00007fea3c873948) + 224 at execProcnode.c:459 [opt]
>     frame #6: 0x000000010b004cc9 postgres`ExecGather [inlined] gather_getnext(gatherstate=<unavailable>) + 520 at nodeGather.c:276 [opt]
>     frame #7: 0x000000010b004ac1 postgres`ExecGather(node=<unavailable>) + 497 at nodeGather.c:212 [opt]
>     frame #8: 0x000000010afef6b2 postgres`ExecProcNode(node=0x00007fea3c872f58) + 450 at execProcnode.c:541 [opt]
>     frame #9: 0x000000010afeaf90 postgres`standard_ExecutorRun [inlined] ExecutePlan(estate=<unavailable>, planstate=<unavailable>, use_parallel_mode=<unavailable>, operation=<unavailable>, numberTuples=0, direction=<unavailable>, dest=<unavailable>) + 29 at execMain.c:1616 [opt]
>     frame #10: 0x000000010afeaf73 postgres`standard_ExecutorRun(queryDesc=<unavailable>, direction=<unavailable>, count=0) + 291 at execMain.c:348 [opt]
>     frame #11: 0x000000010af8b108 postgres`ExplainOnePlan(plannedstmt=0x00007fea3c871040, into=0x0000000000000000, es=0x00007fea3c805360, queryString=<unavailable>, params=<unavailable>, planduration=<unavailable>) + 328 at explain.c:533 [opt]
>     frame #12: 0x000000010af8ab98 postgres`ExplainOneQuery(query=0x00007fea3c805890, cursorOptions=<unavailable>, into=0x0000000000000000, es=0x00007fea3c805360, queryString=<unavailable>, params=0x0000000000000000) + 280 at explain.c:369 [opt]
>     frame #13: 0x000000010af8a773 postgres`ExplainQuery(pstate=<unavailable>, stmt=0x00007fea3d005450, queryString="explain analyze select * from only mp_notification_20170225 where server_id > 6;", params=0x0000000000000000, dest=0x00007fea3c8052c8) + 819 at explain.c:254 [opt]
>     frame #14: 0x000000010b13b660 postgres`standard_ProcessUtility(pstmt=0x00007fea3d005fa8, queryString="explain analyze select * from only mp_notification_20170225 where server_id > 6;", context=PROCESS_UTILITY_TOPLEVEL, params=0x0000000000000000, dest=0x00007fea3c8052c8, completionTag=<unavailable>) + 1104 at utility.c:675 [opt]
>     frame #15: 0x000000010b13ad2a postgres`PortalRunUtility(portal=0x00007fea3c837640, pstmt=0x00007fea3d005fa8, isTopLevel='\x01', setHoldSnapshot=<unavailable>, dest=0x00007fea3c8052c8, completionTag=<unavailable>) + 90 at pquery.c:1165 [opt]
>     frame #16: 0x000000010b139f56 postgres`FillPortalStore(portal=0x00007fea3c837640, isTopLevel='\x01') + 182 at pquery.c:1025 [opt]
>     frame #17: 0x000000010b139c22 postgres`PortalRun(portal=0x00007fea3c837640, count=<unavailable>, isTopLevel='\x01', dest=<unavailable>, altdest=<unavailable>, completionTag=<unavailable>) + 402 at pquery.c:757 [opt]
>     frame #18: 0x000000010b13789b postgres`PostgresMain + 44 at postgres.c:1101 [opt]
>     frame #19: 0x000000010b13786f postgres`PostgresMain(argc=<unavailable>, argv=<unavailable>, dbname=<unavailable>, username=<unavailable>) + 8927 at postgres.c:4066 [opt]
>     frame #20: 0x000000010b0ba113 postgres`PostmasterMain [inlined] BackendRun + 7587 at postmaster.c:4317 [opt]
>     frame #21: 0x000000010b0ba0e8 postgres`PostmasterMain [inlined] BackendStartup at postmaster.c:3989 [opt]
>     frame #22: 0x000000010b0ba0e8 postgres`PostmasterMain at postmaster.c:1729 [opt]
>     frame #23: 0x000000010b0ba0e8 postgres`PostmasterMain(argc=<unavailable>, argv=<unavailable>) + 7544 at postmaster.c:1337 [opt]
>     frame #24: 0x000000010b0332af postgres`main(argc=<unavailable>, argv=<unavailable>) + 1567 at main.c:228 [opt]
>     frame #25: 0x00007fffb4e28255 libdyld.dylib`start + 1

I can try to provide a test case, if this isn't enough to spot the problem.
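In the meantime, something along these lines should get close to the failing plan shape. This is only a sketch: the real schema of mp_notification_20170225 is not shown in this thread, and the column types, row counts, data distribution, and parallel settings below are all my assumptions; the BRIN index guess comes from the bringetbitmap/brin_deform_tuple frames in the backtrace.

    -- Hypothetical schema; only server_id is known from the thread.
    CREATE TABLE mp_notification_20170225 (
        server_id integer,
        payload   text
    );

    -- Enough rows that the planner considers a parallel plan.
    INSERT INTO mp_notification_20170225 (server_id, payload)
    SELECT i % 10, repeat('x', 200)
    FROM generate_series(1, 5000000) AS i;

    -- The backtrace goes through bringetbitmap, so a BRIN index.
    CREATE INDEX mp_notification_block_idx
        ON mp_notification_20170225 USING brin (server_id);

    ANALYZE mp_notification_20170225;

    -- Push the planner toward a Parallel Bitmap Heap Scan.
    SET max_parallel_workers_per_gather = 2;
    SET enable_seqscan = off;

    SELECT * FROM ONLY mp_notification_20170225 WHERE server_id = 7;

With the patch applied, the SELECT at the end should run the same Gather -> Parallel Bitmap Heap Scan -> Bitmap Index Scan plan as in the report, if the planner cooperates.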