Possible incorrect row estimation for Gather paths

From
Anthonin Bonnefoy
Date:
Hi,

While experimenting with an explain option to display all plan candidates (very rough prototype here [1]), I've noticed some discrepancies in the generated plans.

EXPLAIN (ALL_CANDIDATES) SELECT * FROM pgbench_accounts order by aid;
 Plan 1
->  Gather Merge  (cost=11108.32..22505.38 rows=100000 width=97)
     Workers Planned: 1
     ->  Sort  (cost=10108.31..10255.37 rows=58824 width=97)
           Sort Key: aid
           ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..2228.24 rows=58824 width=97)
 Plan 2
->  Gather Merge  (cost=11108.32..17873.08 rows=58824 width=97)
     Workers Planned: 1
     ->  Sort  (cost=10108.31..10255.37 rows=58824 width=97)
           Sort Key: aid
           ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..2228.24 rows=58824 width=97)

The two plans are similar except that the second Gather Merge has a lower row estimate of 58K.

The first plan is created by generate_useful_gather_paths with override_rows=false, then create_gather_merge_path, which uses rel->rows as the row count (so the 100K rows of pgbench_accounts).
#0: create_gather_merge_path(...) at pathnode.c:1885:30
#1: generate_useful_gather_paths(... override_rows=false) at allpaths.c:3286:11
#2: apply_scanjoin_target_to_paths(...) at planner.c:7744:3
#3: grouping_planner(...) at planner.c:1638:3

The second plan is created through create_ordered_paths, then create_gather_merge_path, and the number of rows is estimated as (worker_rows * parallel_workers). Since we only have 1 parallel worker, this yields 58K rows.
#0: create_gather_merge_path(...) at pathnode.c:1885:30
#1: create_ordered_paths(...) at planner.c:5287:5
#2: grouping_planner(...) at planner.c:1717:17

The 58K row estimate looks possibly incorrect. A worker's row count is estimated as total_rows / parallel_divisor. The parallel_divisor includes the possible leader participation and is 1.7 for 1 worker, hence the 58K rows (100K / 1.7 = 58K).
However, the Gather Merge estimate only does 58K * 1 worker, dropping the leader participation component.
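
To make the arithmetic concrete, here is a small standalone re-creation of the divisor logic (modeled on get_parallel_divisor() in costsize.c and assuming parallel_leader_participation is on; this is only an illustration of the numbers, not the planner code itself):

#include <math.h>
#include <stdio.h>

/*
 * Illustrative re-creation of the planner's parallel divisor: the worker
 * count plus a leader contribution of 1 - 0.3 * workers while that
 * contribution is still positive.
 */
static double
parallel_divisor(int workers)
{
    double divisor = workers;
    double leader_contribution = 1.0 - 0.3 * workers;

    if (leader_contribution > 0)
        divisor += leader_contribution;
    return divisor;
}

int
main(void)
{
    double rel_rows = 100000;                      /* pgbench_accounts */
    int    workers = 1;
    double divisor = parallel_divisor(workers);    /* 1.7 */
    double worker_rows = rint(rel_rows / divisor); /* 58824 */

    /* what the ordered gather merge path estimates today: 58824 */
    printf("worker_rows * workers = %.0f\n", worker_rows * workers);
    /* re-expanding by the divisor recovers ~100K (100001, off by one
     * because worker_rows was already rounded) */
    printf("worker_rows * divisor = %.0f\n", rint(worker_rows * divisor));
    return 0;
}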

I have a tentative patch split into two changes:
1: A related refactoring that removes an unnecessary and confusing assignment of rows in create_gather_merge_path. The value is never used and is immediately overwritten in cost_gather_merge.
2: A change to the gather path row estimation to use worker_rows * parallel_divisor for a more accurate estimate (a small sketch of the arithmetic follows below). Additionally, when creating a gather merge path in create_ordered_paths, we try to use the source rel's rows when available.
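
As a rough sketch of what change 2 computes (the gather_rows_estimate() name is taken from the patch, but this standalone version is purely illustrative; the real helper operates on Path nodes inside the planner):

#include <math.h>
#include <stdio.h>

/*
 * Illustrative stand-in for the proposed estimate: re-expand the
 * per-worker row count by the same parallel divisor that produced it,
 * rather than by the raw worker count.  rint() stands in for the
 * planner's clamp_row_est().
 */
static double
gather_rows_estimate_sketch(double worker_rows, double parallel_divisor)
{
    return rint(worker_rows * parallel_divisor);
}

int
main(void)
{
    /* 1 planned worker => divisor 1.7: a 1-row partial path gathers to 2 */
    printf("%.0f\n", gather_rows_estimate_sketch(1, 1.7));
    return 0;
}

This is also where the join_hash plan change below comes from: the Gather on top of the 1-row parallel hash join is now estimated at rint(1 * 1.7) = 2 rows.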

The patch triggered a small change in the hash_join regression test. Pre patch, we had the following candidates.
Plan 4
->  Aggregate  (cost=511.01..511.02 rows=1 width=8)
     ->  Gather  (cost=167.02..511.00 rows=2 width=0)
           Workers Planned: 1
           ->  Parallel Hash Join  (cost=167.02..510.80 rows=1 width=0)
                 Hash Cond: (r.id = s.id)
                 ->  Parallel Seq Scan on simple r  (cost=0.00..299.65 rows=11765 width=4)
                 ->  Parallel Hash  (cost=167.01..167.01 rows=1 width=4)
                       ->  Parallel Seq Scan on extremely_skewed s  (cost=0.00..167.01 rows=1 width=4)
Plan 5
->  Finalize Aggregate  (cost=510.92..510.93 rows=1 width=8)
     ->  Gather  (cost=510.80..510.91 rows=1 width=8)
           Workers Planned: 1
           ->  Partial Aggregate  (cost=510.80..510.81 rows=1 width=8)
                 ->  Parallel Hash Join  (cost=167.02..510.80 rows=1 width=0)
                       Hash Cond: (r.id = s.id)
                       ->  Parallel Seq Scan on simple r  (cost=0.00..299.65 rows=11765 width=4)
                       ->  Parallel Hash  (cost=167.01..167.01 rows=1 width=4)
                             ->  Parallel Seq Scan on extremely_skewed s  (cost=0.00..167.01 rows=1 width=4)

Post patch, the plan candidates became:
Plan 4
->  Finalize Aggregate  (cost=511.02..511.03 rows=1 width=8)
     ->  Gather  (cost=510.80..511.01 rows=2 width=8)
           Workers Planned: 1
           ->  Partial Aggregate  (cost=510.80..510.81 rows=1 width=8)
                 ->  Parallel Hash Join  (cost=167.02..510.80 rows=1 width=0)
                       Hash Cond: (r.id = s.id)
                       ->  Parallel Seq Scan on simple r  (cost=0.00..299.65 rows=11765 width=4)
                       ->  Parallel Hash  (cost=167.01..167.01 rows=1 width=4)
                             ->  Parallel Seq Scan on extremely_skewed s  (cost=0.00..167.01 rows=1 width=4)
Plan 5
->  Aggregate  (cost=511.01..511.02 rows=1 width=8)
     ->  Gather  (cost=167.02..511.00 rows=2 width=0)
           Workers Planned: 1
           ->  Parallel Hash Join  (cost=167.02..510.80 rows=1 width=0)
                 Hash Cond: (r.id = s.id)
                 ->  Parallel Seq Scan on simple r  (cost=0.00..299.65 rows=11765 width=4)
                 ->  Parallel Hash  (cost=167.01..167.01 rows=1 width=4)
                       ->  Parallel Seq Scan on extremely_skewed s  (cost=0.00..167.01 rows=1 width=4)

The Finalize Aggregate plan's cost increased by 0.1 post patch because the estimated row count of the Gather node went from 1 to 2 (rint(1 * 1.7) = 2), adding one extra row's worth of parallel_tuple_cost. This was enough to make the Aggregate plan cheaper. Since the test is there to check the parallel hash join, updating it with the new cheapest plan looked fine.

Regards,
Anthonin

[1]: https://github.com/bonnefoa/postgres/tree/plan-candidates
Attachment

Re: Possible incorrect row estimation for Gather paths

From
Rafia Sabih
Date:
Hello Anthonin,

I spent some time on this problem and your proposed solution.
As I understand it, this is a correction of the row count for when the
number of parallel workers is less than 4. Once the number of workers
is 4 or more, the parallel_divisor comes out the same as the number of
parallel workers, so the row count in those cases would be the same
with or without the proposed patch. So I think it is good to fix this
case for smaller numbers of workers.
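
For illustration, re-creating the leader-participation formula standalone (assuming parallel_leader_participation is on; not the actual planner code) shows where the cutoff at 4 workers comes from:

#include <stdio.h>

int
main(void)
{
    /*
     * The leader contribution, 1 - 0.3 * workers, stops being positive
     * at 4 workers, so from there on the divisor equals the worker
     * count.
     */
    for (int workers = 1; workers <= 5; workers++)
    {
        double divisor = workers;
        double leader_contribution = 1.0 - 0.3 * workers;

        if (leader_contribution > 0)
            divisor += leader_contribution;
        printf("%d workers -> divisor %.1f\n", workers, divisor);
    }
    return 0;
}

This prints 1.7, 2.4, 3.1, 4.0 and 5.0.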

But I don't quite understand the purpose of this:
+ total_groups = input_rel->rows;
+
+ /*
+  * If the number of rows is unknown, fallback to gather rows
+  * estimation
+  */
+ if (total_groups == 0)
+     total_groups = gather_rows_estimate(input_path);

Why not just use total_groups = gather_rows_estimate(input_path)? What
is the importance of having total_groups = input_rel->rows?

With respect to the change introduced by the patch in the regression
test, I wonder if we should test it on tables of a larger scale and
check for performance issues.
Imagine the case where Parallel Seq Scan on extremely_skewed s
(cost=0.00..167.01 rows=1 width=4) returns 1000 rows instead of 1 ...
I wonder which plan would perform better then, or whether there would
be a totally different plan.

However, my hunch is that there should not be serious problems,
because before this patch the number of estimated rows was incorrect
anyway.

I don't see a problem in merging the two patches.


--
Regards,
Rafia Sabih



Re: Possible incorrect row estimation for Gather paths

From
Anthonin Bonnefoy
Date:
Hi Rafia,

Thanks for reviewing.

On Wed, Jul 10, 2024 at 4:54 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:
> But I don't quite understood the purpose of this,
> + total_groups = input_rel->rows;
> +
> + /*
> + * If the number of rows is unknown, fallback to gather rows
> + * estimation
> + */
> + if (total_groups == 0)
> + total_groups = gather_rows_estimate(input_path);
>
> why not just use total_groups = gather_rows_estimate(input_path), what
> is the importance of having total_groups = input_rel->rows?

The initial goal was to use the source tuples when available and avoid
possible rounding errors, though I realise that the difference would be
minimal. For example, 200K tuples and 2 workers would yield
int(int(200000 / 2.4) * 2.4) = 199999. That is probably not worth the
additional complexity, so I've updated the patch to just use
gather_rows_estimate.

> With respect to the change introduced by the patch in the regression
> test, I wonder if we should test it on the tables of a larger scale
> and check for performance issues.
> Imagine the case when Parallel Seq Scan on extremely_skewed s
> (cost=0.00..167.01 rows=1 width=4) returns 1000 rows instead of 1 ...
> I wonder which plan would perform better then or will there be a
> totally different plan.
For the extremely_skewed test, having the parallel seq scan return
more rows won't impact the Gather since the Hash Join will reduce the
number of rows to 1. I've found an example where the plan change is
visible with the default settings:

CREATE TABLE simple (id SERIAL PRIMARY KEY, other bigint);
insert into simple select generate_series(1,500000), ceil(random()*100);
analyze simple;
EXPLAIN SELECT * FROM simple where other < 10 order by id;

Unpatched:
 Gather Merge  (cost=9377.85..12498.60 rows=27137 width=12)
   Workers Planned: 1
   ->  Sort  (cost=8377.84..8445.68 rows=27137 width=12)
         Sort Key: id
         ->  Parallel Seq Scan on simple  (cost=0.00..6379.47 rows=27137 width=12)
               Filter: (other < 10)

Patched:
 Sort  (cost=12381.73..12492.77 rows=44417 width=12)
   Sort Key: id
   ->  Seq Scan on simple  (cost=0.00..8953.00 rows=44417 width=12)
         Filter: (other < 10)

Looking at the candidates, the previous Gather Merge now has an
estimated row count of 44418. The difference of 1 compared to the other
Gather Merge plan is due to rounding (26128 * 1.7 = 44417.6).

 Plan 3
   ->  Gather Merge  (cost=9296.40..14358.75 rows=44418 width=12)
         Workers Planned: 1
         ->  Sort  (cost=8296.39..8361.71 rows=26128 width=12)
               Sort Key: id
               ->  Parallel Seq Scan on simple  (cost=0.00..6379.47 rows=26128 width=12)
                     Filter: (other < 10)
 Plan 4
   ->  Gather Merge  (cost=9296.40..14358.63 rows=44417 width=12)
         Workers Planned: 1
         ->  Sort  (cost=8296.39..8361.71 rows=26128 width=12)
               Sort Key: id
               ->  Parallel Seq Scan on simple  (cost=0.00..6379.47 rows=26128 width=12)
                     Filter: (other < 10)
 Plan 5
   ->  Sort  (cost=12381.73..12492.77 rows=44417 width=12)
         Sort Key: id
         ->  Seq Scan on simple  (cost=0.00..8953.00 rows=44417 width=12)
               Filter: (other < 10)

The Sort plan is slightly slower than the Gather Merge plan, 100ms on
average against 83ms, but the Gather Merge comes at the additional cost
of creating and using a parallel worker. The unpatched row estimation
makes the parallel plan look cheaper, and running a parallel query for
a 17ms improvement doesn't seem like a good trade.

I've also realised from the comments in optimizer.h that
nodes/pathnodes.h should not be included there and fixed it.

Regards,
Anthonin

Attachment

Re: Possible incorrect row estimation for Gather paths

From
Richard Guo
Date:
I can reproduce this problem with the query below.

explain (costs on) select * from tenk1 order by twenty;
                                   QUERY PLAN
---------------------------------------------------------------------------------
 Gather Merge  (cost=772.11..830.93 rows=5882 width=244)
   Workers Planned: 1
   ->  Sort  (cost=772.10..786.80 rows=5882 width=244)
         Sort Key: twenty
         ->  Parallel Seq Scan on tenk1  (cost=0.00..403.82 rows=5882 width=244)
(5 rows)

On Tue, Jul 16, 2024 at 3:56 PM Anthonin Bonnefoy
<anthonin.bonnefoy@datadoghq.com> wrote:
> The initial goal was to use the source tuples if available and avoid
> possible rounding errors. Though I realise that the difference would
> be minimal. For example, 200K tuples and 3 workers would yield
> int(int(200000 / 2.4) * 2.4)=199999. That is probably not worth the
> additional complexity, I've updated the patch to just use
> gather_rows_estimate.

I wonder if the changes in create_ordered_paths should also be reduced
to 'total_groups = gather_rows_estimate(path);'.

> I've also realised from the comments in optimizer.h that
> nodes/pathnodes.h should not be included there and fixed it.

I think perhaps it's better to declare gather_rows_estimate() in
cost.h rather than optimizer.h.
(BTW, I wonder if compute_gather_rows() would be a better name?)

I noticed another issue in generate_useful_gather_paths() -- *rowsp
would have a random value if override_rows is true and we use
incremental sort for gather merge.  I think we should fix this too.

Thanks
Richard



Re: Possible incorrect row estimation for Gather paths

From
Anthonin Bonnefoy
Date:
On Wed, Jul 17, 2024 at 3:59 AM Richard Guo <guofenglinux@gmail.com> wrote:
>
> I can reproduce this problem with the query below.
>
> explain (costs on) select * from tenk1 order by twenty;
>                                    QUERY PLAN
> ---------------------------------------------------------------------------------
>  Gather Merge  (cost=772.11..830.93 rows=5882 width=244)
>    Workers Planned: 1
>    ->  Sort  (cost=772.10..786.80 rows=5882 width=244)
>          Sort Key: twenty
>          ->  Parallel Seq Scan on tenk1  (cost=0.00..403.82 rows=5882 width=244)
> (5 rows)
I was looking for a test to add to the regress checks that wouldn't
rely on explain costs, since those are disabled. However, I've realised
I could do something similar to what's done in stats_ext with the
check_estimated_rows function. I've added a get_estimated_rows test
function that extracts the estimated rows from the top node and used it
to check the gather nodes' estimates. get_estimated_rows uses a plain
explain, whereas check_estimated_rows relies on explain analyze.

> I wonder if the changes in create_ordered_paths should also be reduced
> to 'total_groups = gather_rows_estimate(path);'.

It should already be the case with the patch v2. It does create
rounding errors that are visible in the tests but they shouldn't
exceed +1 or -1.

> I think perhaps it's better to declare gather_rows_estimate() in
> cost.h rather than optimizer.h.
> (BTW, I wonder if compute_gather_rows() would be a better name?)

Good point, I've moved the declaration and renamed it.

> I noticed another issue in generate_useful_gather_paths() -- *rowsp
> would have a random value if override_rows is true and we use
> incremental sort for gather merge.  I think we should fix this too.

That seems to be the case. I've tried to find a query that could
trigger this code path, without success. All the grouping and distinct
paths I've tried were fully sorted and didn't trigger an incremental
sort. I will need a bit more time to check this.

In the meantime, I've updated the patches with the review comments.

Regards,
Anthonin

Attachment

Re: Possible incorrect row estimation for Gather paths

From
Richard Guo
Date:
On Mon, Jul 22, 2024 at 4:47 PM Anthonin Bonnefoy
<anthonin.bonnefoy@datadoghq.com> wrote:
> On Wed, Jul 17, 2024 at 3:59 AM Richard Guo <guofenglinux@gmail.com> wrote:
> > I can reproduce this problem with the query below.
> >
> > explain (costs on) select * from tenk1 order by twenty;
> >                                    QUERY PLAN
> > ---------------------------------------------------------------------------------
> >  Gather Merge  (cost=772.11..830.93 rows=5882 width=244)
> >    Workers Planned: 1
> >    ->  Sort  (cost=772.10..786.80 rows=5882 width=244)
> >          Sort Key: twenty
> >          ->  Parallel Seq Scan on tenk1  (cost=0.00..403.82 rows=5882 width=244)
> > (5 rows)
> I was looking for a test to add in the regress checks that wouldn't
> rely on explain cost since it is disabled. However, I've realised I
> could do something similar to what's done in stats_ext with the
> check_estimated_rows function. I've added the get_estimated_rows test
> function that extracts the estimated rows from the top node and uses
> it to check the gather nodes' estimates. get_estimated_rows uses a
> simple explain compared to check_estimated_rows which relies on an
> explain analyze.

Hmm, I'm hesitant about adding the tests that verify the number of
estimated rows in this patch.  The table 'tenk1' isn't created with
autovacuum_enabled = off, so we may have unstable results from
auto-analyze happening.  I think the plan change in join_hash.out is
sufficient to verify the changes in this patch.

> > I wonder if the changes in create_ordered_paths should also be reduced
> > to 'total_groups = gather_rows_estimate(path);'.
>
> It should already be the case with the patch v2. It does create
> rounding errors that are visible in the tests but they shouldn't
> exceed +1 or -1.
>
> > I think perhaps it's better to declare gather_rows_estimate() in
> > cost.h rather than optimizer.h.
> > (BTW, I wonder if compute_gather_rows() would be a better name?)
>
> Good point, I've moved the declaration and renamed it.
>
> > I noticed another issue in generate_useful_gather_paths() -- *rowsp
> > would have a random value if override_rows is true and we use
> > incremental sort for gather merge.  I think we should fix this too.
>
> That seems to be the case. I've tried to find a query that could
> trigger this codepath without success. All grouping and distinct paths
> I've tried where fully sorted and didn't trigger an incremental sort.
> I will need a bit more time to check this.
>
> In the meantime, I've updated the patches with the review comments.

Otherwise I think the v3 patch looks good to me.

Attached is an updated version of this patch with cosmetic changes and
comment updates.  It also squishes the two patches together into one.
I'm planning to push it soon, barring any objections or comments.

Thanks
Richard

Attachment

Re: Possible incorrect row estimation for Gather paths

From
Richard Guo
Date:
On Mon, Jul 22, 2024 at 5:55 PM Richard Guo <guofenglinux@gmail.com> wrote:
> Otherwise I think the v3 patch looks good to me.
>
> Attached is an updated version of this patch with cosmetic changes and
> comment updates.  It also squishes the two patches together into one.
> I'm planning to push it soon, barring any objections or comments.

Pushed.

Thanks
Richard