POC: Parallel processing of indexes in autovacuum

From
Maxim Orlov
Date:
Hi!

The VACUUM command can be executed with the PARALLEL option. As the documentation states, it will perform the index vacuum and index cleanup phases of VACUUM in parallel using the specified number of background workers. But such an interesting feature is not used by autovacuum. After a quick look at the source code, it became clear to me that when the parallel option was added, the corresponding option for autovacuum wasn't implemented, although there are no clear obstacles to this.
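
For reference, a manual parallel vacuum is invoked like this (a sketch; the table name is illustrative):

```sql
-- Perform the index vacuum and index cleanup phases with up to 4 parallel
-- workers, drawn from the max_parallel_maintenance_workers pool.
VACUUM (PARALLEL 4, VERBOSE) big_table;
```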

Actually, one of our customers ran into a problem with autovacuum on a table with many indexes and relatively long transactions. Of course, long transactions are the ultimate evil, and the problem can be worked around by running VACUUM from a cron task, but I think we can do better.

Anyhow, what about adding a parallel option for autovacuum? Here is a POC patch for the proposed functionality. For the sake of simplicity, several GUCs have been added. It would be good to think through the parallel launch condition without them.

As always, any thoughts and opinions are very welcome!

--
Best regards,
Maxim Orlov.
Attachment

Re: POC: Parallel processing of indexes in autovacuum

From
wenhui qiu
Date:
Hi Maxim Orlov,
     Thank you for working on this. I like your idea, but I have a suggestion: autovacuum_max_workers no longer requires a restart to change, so I think these GUCs could behave like
autovacuum_max_workers:
+#max_parallel_index_autovac_workers = 0 # this feature disabled by default
+ # (change requires restart)
+#autovac_idx_parallel_min_rows = 0
+ # (change requires restart)
+#autovac_idx_parallel_min_indexes = 2
+ # (change requires restart)

Thanks 

On Wed, Apr 16, 2025 at 7:05 PM Maxim Orlov <orlovmg@gmail.com> wrote:
Hi!

The VACUUM command can be executed with the PARALLEL option. As the documentation states, it will perform the index vacuum and index cleanup phases of VACUUM in parallel using the specified number of background workers. But such an interesting feature is not used by autovacuum. After a quick look at the source code, it became clear to me that when the parallel option was added, the corresponding option for autovacuum wasn't implemented, although there are no clear obstacles to this.

Actually, one of our customers ran into a problem with autovacuum on a table with many indexes and relatively long transactions. Of course, long transactions are the ultimate evil, and the problem can be worked around by running VACUUM from a cron task, but I think we can do better.

Anyhow, what about adding a parallel option for autovacuum? Here is a POC patch for the proposed functionality. For the sake of simplicity, several GUCs have been added. It would be good to think through the parallel launch condition without them.

As always, any thoughts and opinions are very welcome!

--
Best regards,
Maxim Orlov.

Re: POC: Parallel processing of indexes in autovacuum

From
Masahiko Sawada
Date:
Hi,

On Wed, Apr 16, 2025 at 4:05 AM Maxim Orlov <orlovmg@gmail.com> wrote:
>
> Hi!
>
> The VACUUM command can be executed with the PARALLEL option. As the documentation states, it will perform the index
> vacuum and index cleanup phases of VACUUM in parallel using the specified number of background workers. But such an
> interesting feature is not used by autovacuum. After a quick look at the source code, it became clear to me that when
> the parallel option was added, the corresponding option for autovacuum wasn't implemented, although there are no
> clear obstacles to this.
>
> Actually, one of our customers ran into a problem with autovacuum on a table with many indexes and relatively long
> transactions. Of course, long transactions are the ultimate evil, and the problem can be worked around by running
> VACUUM from a cron task, but I think we can do better.
>
> Anyhow, what about adding a parallel option for autovacuum? Here is a POC patch for the proposed functionality. For
> the sake of simplicity, several GUCs have been added. It would be good to think through the parallel launch condition
> without them.
>
> As always, any thoughts and opinions are very welcome!

As I understand it, we initially disabled parallel vacuum for
autovacuum because their objectives are somewhat contradictory.
Parallel vacuum aims to accelerate the process by utilizing additional
resources, while autovacuum is designed to perform cleaning operations
with minimal impact on foreground transaction processing (e.g.,
through vacuum delay).

Nevertheless, I see your point about the potential benefits of using
parallel vacuum within autovacuum in specific scenarios. The crucial
consideration is determining appropriate criteria for triggering
parallel vacuum in autovacuum. Given that we currently support only
parallel index processing, suitable candidates might be autovacuum
operations on large tables that have a substantial number of
sufficiently large indexes and a high volume of garbage tuples.

Once we have parallel heap vacuum, as discussed in thread[1], it would
also likely be beneficial to incorporate it into autovacuum during
aggressive vacuum or failsafe mode.

Although the actual number of parallel workers ultimately depends on
the number of eligible indexes, it might be beneficial to introduce a
storage parameter, say parallel_vacuum_workers, that allows control
over the number of parallel vacuum workers on a per-table basis.
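
If such a storage parameter were added, per-table control might look like this (a sketch; parallel_vacuum_workers is the proposed reloption, which does not exist yet):

```sql
-- Hypothetical reloption from the proposal above: allow autovacuum to use
-- up to 4 parallel vacuum workers for this table only.
ALTER TABLE big_table SET (parallel_vacuum_workers = 4);
```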

Regarding implementation: I notice the WIP patch implements its own
parallel vacuum mechanism for autovacuum. Have you considered simply
setting at_params.nworkers to a value greater than zero?

Regards,

[1] https://www.postgresql.org/message-id/CAD21AoAEfCNv-GgaDheDJ%2Bs-p_Lv1H24AiJeNoPGCmZNSwL1YA%40mail.gmail.com

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com



Re: POC: Parallel processing of indexes in autovacuum

From
Sami Imseih
Date:
Thanks for raising this idea!

I am generally -1 on the idea of autovacuum performing parallel
index vacuum, because I always felt that the parallel option should
be employed in a targeted manner for a specific table. If you have a bunch
of large tables, some more important than others, a/v may end
up using parallel resources on the least important tables, and you
will have to adjust a/v settings per table, etc., to get the right table
to be parallel index vacuumed by a/v.

Also, with the TIDStore improvements for index cleanup, and the practical
elimination of multi-pass index vacuums, I see this being even less
convincing as something to add to a/v.

Now, if I am going to allocate extra workers to run vacuum in parallel, why
not just provide more autovacuum workers instead so I can get more tables
vacuumed within a span of time?

> Once we have parallel heap vacuum, as discussed in thread[1], it would
> also likely be beneficial to incorporate it into autovacuum during
> aggressive vacuum or failsafe mode.

IIRC, index cleanup is disabled by failsafe.


--
Sami Imseih
Amazon Web Services (AWS)



Re: POC: Parallel processing of indexes in autovacuum

From
Daniil Davydov
Date:
On Thu, May 1, 2025 at 8:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> As I understand it, we initially disabled parallel vacuum for
> autovacuum because their objectives are somewhat contradictory.
> Parallel vacuum aims to accelerate the process by utilizing additional
> resources, while autovacuum is designed to perform cleaning operations
> with minimal impact on foreground transaction processing (e.g.,
> through vacuum delay).
>
Yep, we also decided that we must not create more a/v workers for
index processing.
In the current implementation, the leader process sends a signal to the
a/v launcher, and the launcher tries to launch all requested workers.
But the number of workers never exceeds `autovacuum_max_workers`.
Thus, we will never have more a/v workers than in the standard case
(without this feature).

> Nevertheless, I see your point about the potential benefits of using
> parallel vacuum within autovacuum in specific scenarios. The crucial
> consideration is determining appropriate criteria for triggering
> parallel vacuum in autovacuum. Given that we currently support only
> parallel index processing, suitable candidates might be autovacuum
> operations on large tables that have a substantial number of
> sufficiently large indexes and a high volume of garbage tuples.
>
> Although the actual number of parallel workers ultimately depends on
> the number of eligible indexes, it might be beneficial to introduce a
> storage parameter, say parallel_vacuum_workers, that allows control
> over the number of parallel vacuum workers on a per-table basis.
>
For now, we have three GUC variables for this purpose:
max_parallel_index_autovac_workers, autovac_idx_parallel_min_rows,
autovac_idx_parallel_min_indexes.
That is, everything is as you said. But we are still conducting
research on this issue. I would like to get rid of some of these
parameters.

> Regarding implementation: I notice the WIP patch implements its own
> parallel vacuum mechanism for autovacuum. Have you considered simply
> setting at_params.nworkers to a value greater than zero?
>
About `at_params.nworkers = N` - that's exactly what we're doing (you
can see it in the `vacuum_rel` function). But we cannot fully reuse the
code of VACUUM PARALLEL, because it creates its own processes via the
dynamic bgworkers machinery.
As I said above - we don't want to consume additional resources. Also
we don't want to complicate communication between processes (the idea
is that a/v workers can only send signals to the a/v launcher).
As a result, we created our own implementation of parallel index
processing control - see changes in vacuumparallel.c and autovacuum.c.

--
Best regards,
Daniil Davydov



Re: POC: Parallel processing of indexes in autovacuum

From
Daniil Davydov
Date:
On Fri, May 2, 2025 at 11:58 PM Sami Imseih <samimseih@gmail.com> wrote:
>
> I am generally -1 on the idea of autovacuum performing parallel
> index vacuum, because I always felt that the parallel option should
> be employed in a targeted manner for a specific table. if you have a bunch
> of large tables, some more important than others, a/c may end
> up using parallel resources on the least important tables and you
> will have to adjust a/v settings per table, etc to get the right table
> to be parallel index vacuumed by a/v.

Hm, this is a good point. I think I should clarify one point - in
practice, there is a common situation when users have one huge table
among all databases (with 80+ indexes created on it). But, of course,
in general there may be a few such tables.
But we can still adjust the autovac_idx_parallel_min_rows parameter.
If a table has a lot of dead tuples => it is actively used => the table
is important (?).
Also, if the user can really determine the "importance" of each of the
tables - we can provide an appropriate table option. Tables with this
option set will be processed in parallel in priority order. What do
you think about such an idea?

>
> Also, with the TIDStore improvements for index cleanup, and the practical
> elimination of multi-pass index vacuums, I see this being even less
> convincing as something to add to a/v.

If I understood correctly, then we are talking about the fact that
TIDStore can store so many tuples that in fact a second pass is never
needed.
But the number of passes does not affect the presented optimization in
any way. We must think about a large number of indexes that must be
processed. Even within a single pass we can have a 40% increase in
speed.

>
> Now, If I am going to allocate extra workers to run vacuum in parallel, why
> not just provide more autovacuum workers instead so I can get more tables
> vacuumed within a span of time?

For now, only one process can clean up indexes, so I don't see how
increasing the number of a/v workers will help in the situation that I
mentioned above.
Also, we don't consume additional resources during autovacuum in this
patch - total number of a/v workers always <= autovacuum_max_workers.

BTW, see the v2 patch attached to this message (bug fixes) :-)

--
Best regards,
Daniil Davydov

Attachment

Re: POC: Parallel processing of indexes in autovacuum

From
Sami Imseih
Date:
> On Fri, May 2, 2025 at 11:58 PM Sami Imseih <samimseih@gmail.com> wrote:
> >
> > I am generally -1 on the idea of autovacuum performing parallel
> > index vacuum, because I always felt that the parallel option should
> > be employed in a targeted manner for a specific table. if you have a bunch
> > of large tables, some more important than others, a/v may end
> > up using parallel resources on the least important tables and you
> > will have to adjust a/v settings per table, etc to get the right table
> > to be parallel index vacuumed by a/v.
>
> Hm, this is a good point. I think I should clarify one moment - in
> practice, there is a common situation when users have one huge table
> among all databases (with 80+ indexes created on it). But, of course,
> in general there may be few such tables.
> But we can still adjust the autovac_idx_parallel_min_rows parameter.
> If a table has a lot of dead tuples => it is actively used => table is
> important (?).
> Also, if the user can really determine the "importance" of each of the
> tables - we can provide an appropriate table option. Tables with this
> option set will be processed in parallel in priority order. What do
> you think about such an idea?

I think in most cases, the user will want to determine the priority of
a table getting parallel vacuum cycles rather than having autovacuum
determine the priority. I also see users wanting to stagger vacuums of
large tables with many indexes over some time period, and give the
tables the full number of parallel workers they can afford at these
specific periods of time. A/V currently does not really allow for this
type of scheduling, and if we add some kind of GUC to prioritize tables,
I think users will constantly have to be modifying this priority.

I am basing my comments on the scenarios I have seen in the field, and
others may have a different opinion.

> > Also, with the TIDStore improvements for index cleanup, and the practical
> > elimination of multi-pass index vacuums, I see this being even less
> > convincing as something to add to a/v.
>
> If I understood correctly, then we are talking about the fact that
> TIDStore can store so many tuples that in fact a second pass is never
> needed.
> But the number of passes does not affect the presented optimization in
> any way. We must think about a large number of indexes that must be
> processed. Even within a single pass we can have a 40% increase in
> speed.

I am not discounting that a single table vacuum with many indexes may
perform better with a parallel index scan; I am merely saying that
the TIDStore optimization now makes index vacuums better, and perhaps
there is less of an incentive to use parallel.

> > Now, If I am going to allocate extra workers to run vacuum in parallel, why
> > not just provide more autovacuum workers instead so I can get more tables
> > vacuumed within a span of time?
>
> For now, only one process can clean up indexes, so I don't see how
> increasing the number of a/v workers will help in the situation that I
> mentioned above.
> Also, we don't consume additional resources during autovacuum in this
> patch - total number of a/v workers always <= autovacuum_max_workers.

Increasing a/v workers will not help speed up a specific table, what I
am suggesting is that instead of speeding up one table, let's just allow
other tables to not be starved of a/v cycles due to lack of a/v workers.

--
Sami



Re: POC: Parallel processing of indexes in autovacuum

From
Masahiko Sawada
Date:
On Fri, May 2, 2025 at 9:58 AM Sami Imseih <samimseih@gmail.com> wrote:
>
> > Once we have parallel heap vacuum, as discussed in thread[1], it would
> > also likely be beneficial to incorporate it into autovacuum during
> > aggressive vacuum or failsafe mode.
>
> IIRC, index cleanup is disabled by failsafe.

Yes. My idea is to use parallel *heap* vacuum in autovacuum during
failsafe mode. I think it would make sense as users want to complete
freezing tables as soon as possible in this situation.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com



Re: POC: Parallel processing of indexes in autovacuum

From
Masahiko Sawada
Date:
On Fri, May 2, 2025 at 11:13 AM Daniil Davydov <3danissimo@gmail.com> wrote:
>
> On Thu, May 1, 2025 at 8:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > As I understand it, we initially disabled parallel vacuum for
> > autovacuum because their objectives are somewhat contradictory.
> > Parallel vacuum aims to accelerate the process by utilizing additional
> > resources, while autovacuum is designed to perform cleaning operations
> > with minimal impact on foreground transaction processing (e.g.,
> > through vacuum delay).
> >
> Yep, we also decided that we must not create more a/v workers for
> index processing.
> In current implementation, the leader process sends a signal to the
> a/v launcher, and the launcher tries to launch all requested workers.
> But the number of workers never exceeds `autovacuum_max_workers`.
> Thus, we will never have more a/v workers than in the standard case
> (without this feature).

I have concerns about this design. When autovacuuming on a single
table consumes all available autovacuum_max_workers slots with
parallel vacuum workers, the system becomes incapable of processing
other tables. This means that when determining the appropriate
autovacuum_max_workers value, users must consider not only the number
of tables to be processed concurrently but also the potential number
of parallel workers that might be launched. I think it would make more
sense to maintain the existing autovacuum_max_workers parameter while
introducing a new parameter that would either control the maximum
number of parallel vacuum workers per autovacuum worker or set a
system-wide cap on the total number of parallel vacuum workers.

>
> > Regarding implementation: I notice the WIP patch implements its own
> > parallel vacuum mechanism for autovacuum. Have you considered simply
> > setting at_params.nworkers to a value greater than zero?
> >
> About `at_params.nworkers = N` - that's exactly what we're doing (you
> can see it in the `vacuum_rel` function). But we cannot fully reuse
> code of VACUUM PARALLEL, because it creates its own processes via
> dynamic bgworkers machinery.
> As I said above - we don't want to consume additional resources. Also
> we don't want to complicate communication between processes (the idea
> is that a/v workers can only send signals to the a/v launcher).

Could you elaborate on the reasons why you don't want to use
background workers and avoid complicated communication between
processes? I'm not sure whether these concerns provide sufficient
justification for implementing its own parallel index processing.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com



Re: POC: Parallel processing of indexes in autovacuum

From
Sami Imseih
Date:
> I think it would make more
> sense to maintain the existing autovacuum_max_workers parameter while
> introducing a new parameter that would either control the maximum
> number of parallel vacuum workers per autovacuum worker or set a
> system-wide cap on the total number of parallel vacuum workers.

+1, and would it make sense for parallel workers to come from
max_parallel_maintenance_workers? This is capped by
max_parallel_workers and max_worker_processes, so increasing
the defaults for all 3 will be needed as well.
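
The existing capping chain could be raised like this (a sketch; the values are purely illustrative):

```sql
-- max_parallel_maintenance_workers is capped by max_parallel_workers,
-- which is in turn capped by max_worker_processes.
ALTER SYSTEM SET max_worker_processes = 16;             -- change requires restart
ALTER SYSTEM SET max_parallel_workers = 12;
ALTER SYSTEM SET max_parallel_maintenance_workers = 6;
SELECT pg_reload_conf();
```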


--
Sami



Re: POC: Parallel processing of indexes in autovacuum

From
Daniil Davydov
Date:
On Sat, May 3, 2025 at 3:17 AM Sami Imseih <samimseih@gmail.com> wrote:
>
> I think in most cases, the user will want to determine the priority of
> a table getting parallel vacuum cycles rather than having the autovacuum
> determine the priority. I also see users wanting to stagger
> vacuums of large tables with many indexes through some time period,
> and give the
> tables the full amount of parallel workers they can afford at these
> specific periods
> of time. A/V currently does not really allow for this type of
> scheduling, and if we
> give some kind of GUC to prioritize tables, I think users will constantly have
> to be modifying this priority.

If the user wants to determine priority himself, we will need to
introduce some parameter anyway (a GUC or table option) that will give
us a hint for how we should schedule a/v work.
Do you think we should design a more comprehensive behavior for such a
parameter (so that the user doesn't have to change it often)? I will be
glad to know your thoughts.

> > If I understood correctly, then we are talking about the fact that
> > TIDStore can store so many tuples that in fact a second pass is never
> > needed.
> > But the number of passes does not affect the presented optimization in
> > any way. We must think about a large number of indexes that must be
> > processed. Even within a single pass we can have a 40% increase in
> > speed.
>
> I am not discounting that a single table vacuum with many indexes will
> maybe perform better with parallel index scan, I am merely saying that
> the TIDStore optimization now makes index vacuums better and perhaps
> there is less of an incentive to use parallel.

I still insist that this does not affect the parallel index vacuum,
because we don't get an advantage in repeated passes. We get the same
speed increase whether we have this optimization or not.
Although it's even possible that the opposite is true - the situation
will be better with the new TIDStore, but I can't say for sure.

> > > Now, If I am going to allocate extra workers to run vacuum in parallel, why
> > > not just provide more autovacuum workers instead so I can get more tables
> > > vacuumed within a span of time?
> >
> > For now, only one process can clean up indexes, so I don't see how
> > increasing the number of a/v workers will help in the situation that I
> > mentioned above.
> > Also, we don't consume additional resources during autovacuum in this
> > patch - total number of a/v workers always <= autovacuum_max_workers.
>
> Increasing a/v workers will not help speed up a specific table, what I
> am suggesting is that instead of speeding up one table, let's just allow
> other tables to not be starved of a/v cycles due to lack of a/v workers.

OK, I got it. But what if vacuuming a single table takes (for
example) 60% of the total time? This is still a possible situation, and
quickly vacuuming all the other tables will not help us.

--
Best regards,
Daniil Davydov



Re: POC: Parallel processing of indexes in autovacuum

From
Daniil Davydov
Date:
On Sat, May 3, 2025 at 5:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> > In current implementation, the leader process sends a signal to the
> > a/v launcher, and the launcher tries to launch all requested workers.
> > But the number of workers never exceeds `autovacuum_max_workers`.
> > Thus, we will never have more a/v workers than in the standard case
> > (without this feature).
>
> I have concerns about this design. When autovacuuming on a single
> table consumes all available autovacuum_max_workers slots with
> parallel vacuum workers, the system becomes incapable of processing
> other tables. This means that when determining the appropriate
> autovacuum_max_workers value, users must consider not only the number
> of tables to be processed concurrently but also the potential number
> of parallel workers that might be launched. I think it would make more
> sense to maintain the existing autovacuum_max_workers parameter while
> introducing a new parameter that would either control the maximum
> number of parallel vacuum workers per autovacuum worker or set a
> system-wide cap on the total number of parallel vacuum workers.
>

For now we have max_parallel_index_autovac_workers - this GUC limits
the number of parallel a/v workers that can process a single table. I
agree that the scenario you provided is problematic.
The proposal to limit the total number of supporting a/v workers seems
attractive to me (I'll implement it as an experiment).

It seems to me that this question is becoming a key one. First we need
to determine the role of the user in the whole scheduling mechanism.
Should we allow users to determine priority? Will this priority affect
only a single vacuuming cycle, or will it be more 'global'?
I guess I don't have enough expertise to determine this alone. I will
be glad to receive any suggestions.

> > About `at_params.nworkers = N` - that's exactly what we're doing (you
> > can see it in the `vacuum_rel` function). But we cannot fully reuse
> > code of VACUUM PARALLEL, because it creates its own processes via
> > dynamic bgworkers machinery.
> > As I said above - we don't want to consume additional resources. Also
> > we don't want to complicate communication between processes (the idea
> > is that a/v workers can only send signals to the a/v launcher).
>
> Could you elaborate on the reasons why you don't want to use
> background workers and avoid complicated communication between
> processes? I'm not sure whether these concerns provide sufficient
> justification for implementing its own parallel index processing.
>

Here are my thoughts on this. An a/v worker has a very simple role - it
is born after the launcher's request and must do exactly one 'task' -
vacuum a table or participate in a parallel index vacuum.
We also have a dedicated 'launcher' role, meaning the whole design
implies that only the launcher is able to launch processes.
If we allow a/v workers to use bgworkers, then:
1) The a/v worker will go far beyond its responsibility.
2) Its functionality will overlap with the functionality of the launcher.
3) Resource consumption can jump dramatically, which is unexpected for
the user. Autovacuum will also become dependent on other resources
(the bgworkers pool). The current design does not imply this.

I wanted to create a patch that would fit into the existing mechanism
without drastic innovations. But if you think that the above is not so
important, then we can reuse the VACUUM PARALLEL code, and it would
simplify the final implementation :)

--
Best regards,
Daniil Davydov



Re: POC: Parallel processing of indexes in autovacuum

From
Daniil Davydov
Date:
On Sat, May 3, 2025 at 5:59 AM Sami Imseih <samimseih@gmail.com> wrote:
>
> > I think it would more make
> > sense to maintain the existing autovacuum_max_workers parameter while
> > introducing a new parameter that would either control the maximum
> > number of parallel vacuum workers per autovacuum worker or set a
> > system-wide cap on the total number of parallel vacuum workers.
>
> +1, and would it make sense for parallel workers to come from
> max_parallel_maintenance_workers? This is capped by
> max_parallel_workers and max_worker_processes, so increasing
> the defaults for all 3 will be needed as well.

I may be wrong, but the `max_parallel_maintenance_workers` parameter
is only used for commands that are explicitly run by the user. We
already have `autovacuum_max_workers`, and I think the code will be
more consistent if we adapt this particular parameter (perhaps with
the addition of a new one, as I wrote in the previous message).

--
Best regards,
Daniil Davydov



Re: POC: Parallel processing of indexes in autovacuum

From
Masahiko Sawada
Date:
On Sat, May 3, 2025 at 1:10 AM Daniil Davydov <3danissimo@gmail.com> wrote:
>
> On Sat, May 3, 2025 at 5:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > > In current implementation, the leader process sends a signal to the
> > > a/v launcher, and the launcher tries to launch all requested workers.
> > > But the number of workers never exceeds `autovacuum_max_workers`.
> > > Thus, we will never have more a/v workers than in the standard case
> > > (without this feature).
> >
> > I have concerns about this design. When autovacuuming on a single
> > table consumes all available autovacuum_max_workers slots with
> > parallel vacuum workers, the system becomes incapable of processing
> > other tables. This means that when determining the appropriate
> > autovacuum_max_workers value, users must consider not only the number
> > of tables to be processed concurrently but also the potential number
> > of parallel workers that might be launched. I think it would make more
> > sense to maintain the existing autovacuum_max_workers parameter while
> > introducing a new parameter that would either control the maximum
> > number of parallel vacuum workers per autovacuum worker or set a
> > system-wide cap on the total number of parallel vacuum workers.
> >
>
> For now we have max_parallel_index_autovac_workers - this GUC limits
> the number of parallel a/v workers that can process a single table. I
> agree that the scenario you provided is problematic.
> The proposal to limit the total number of supportive a/v workers seems
> attractive to me (I'll implement it as an experiment).
>
> It seems to me that this question is becoming a key one. First we need
> to determine the role of the user in the whole scheduling mechanism.
> Should we allow users to determine priority? Will this priority affect
> only within a single vacuuming cycle, or it will be more 'global'?
> I guess I don't have enough expertise to determine this alone. I will
> be glad to receive any suggestions.

What I roughly imagined is that we don't need to change the entire
autovacuum scheduling; rather, autovacuum workers would decide
whether or not to use parallel vacuum during their vacuum operation
based on GUC parameters (having a global effect) or storage parameters
(having an effect on the particular table). The criteria for triggering
parallel vacuum in autovacuum might need to be somewhat pessimistic so
that we don't unnecessarily use parallel vacuum on many tables.

>
> > > About `at_params.nworkers = N` - that's exactly what we're doing (you
> > > can see it in the `vacuum_rel` function). But we cannot fully reuse
> > > code of VACUUM PARALLEL, because it creates its own processes via
> > > dynamic bgworkers machinery.
> > > As I said above - we don't want to consume additional resources. Also
> > > we don't want to complicate communication between processes (the idea
> > > is that a/v workers can only send signals to the a/v launcher).
> >
> > Could you elaborate on the reasons why you don't want to use
> > background workers and avoid complicated communication between
> > processes? I'm not sure whether these concerns provide sufficient
> > justification for implementing its own parallel index processing.
> >
>
> Here are my thoughts on this. A/v worker has a very simple role - it
> is born after the launcher's request and must do exactly one 'task' -
> vacuum table or participate in parallel index vacuum.
> We also have a dedicated 'launcher' role, meaning the whole design
> implies that only the launcher is able to launch processes.
>
> If we allow a/v worker to use bgworkers, then :
> 1) A/v worker will go far beyond his responsibility.
> 2) Its functionality will overlap with the functionality of the launcher.

While I agree that the launcher process is responsible for launching
autovacuum worker processes, I'm not sure it should be responsible for
launching everything related to autovacuum. It's quite possible that we
will have parallel heap vacuum and processing of a particular index with
parallel workers in the future. The code could get more complex if we
have the autovacuum launcher process launch such parallel workers too.
I believe it's more straightforward to divide the responsibility in such
a way that the autovacuum launcher is responsible for launching
autovacuum workers, and autovacuum workers are responsible for
vacuuming tables, no matter how they do that.

> 3) Resource consumption can jump dramatically, which is unexpected for
> the user.

What extra resources could be used if we use background workers
instead of autovacuum workers?

> Autovacuum will also be dependent on other resources
> (bgworkers pool). The current design does not imply this.

I see your point, but I think it doesn't necessarily need to be
reflected at the infrastructure layer. For example, we can internally
allocate extra background worker slots for parallel vacuum workers
based on max_parallel_index_autovac_workers, in addition to
max_worker_processes. Anyway, we might need something to check or
validate the max_worker_processes value to make sure that every
autovacuum worker can use the specified number of parallel workers
for parallel vacuum.

> I wanted to create a patch that would fit into the existing mechanism
> without drastic innovations. But if you think that the above is not so
> important, then we can reuse VACUUM PARALLEL code and it would
> simplify the final implementation)

I'd suggest using the existing infrastructure if we can achieve the
goal with it. If we find out there are some technical difficulties to
implement it without new infrastructure, we can revisit this approach.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com



Re: POC: Parallel processing of indexes in autovacuum

From
Sami Imseih
Date:

On Sat, May 3, 2025 at 1:10 AM Daniil Davydov <3danissimo@gmail.com> wrote:
>
> On Sat, May 3, 2025 at 5:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > > In current implementation, the leader process sends a signal to the
> > > a/v launcher, and the launcher tries to launch all requested workers.
> > > But the number of workers never exceeds `autovacuum_max_workers`.
> > > Thus, we will never have more a/v workers than in the standard case
> > > (without this feature).
> >
> > I have concerns about this design. When autovacuuming on a single
> > table consumes all available autovacuum_max_workers slots with
> > parallel vacuum workers, the system becomes incapable of processing
> > other tables. This means that when determining the appropriate
> > autovacuum_max_workers value, users must consider not only the number
> > of tables to be processed concurrently but also the potential number
> > of parallel workers that might be launched. I think it would more make
> > sense to maintain the existing autovacuum_max_workers parameter while
> > introducing a new parameter that would either control the maximum
> > number of parallel vacuum workers per autovacuum worker or set a
> > system-wide cap on the total number of parallel vacuum workers.
> >
>
> For now we have max_parallel_index_autovac_workers - this GUC limits
> the number of parallel a/v workers that can process a single table. I
> agree that the scenario you provided is problematic.
> The proposal to limit the total number of supportive a/v workers seems
> attractive to me (I'll implement it as an experiment).
>
> It seems to me that this question is becoming a key one. First we need
> to determine the role of the user in the whole scheduling mechanism.
> Should we allow users to determine priority? Will this priority affect
> only within a single vacuuming cycle, or it will be more 'global'?
> I guess I don't have enough expertise to determine this alone. I will
> be glad to receive any suggestions.

> What I roughly imagined is that we don't need to change the entire
> autovacuum scheduling, but would like autovacuum workers to decides
> whether or not to use parallel vacuum during its vacuum operation
> based on GUC parameters (having a global effect) or storage parameters
> (having an effect on the particular table). The criteria of triggering
> parallel vacuum in autovacuum might need to be somewhat pessimistic so
> that we don't unnecessarily use parallel vacuum on many tables.

Perhaps we should only provide a reloption, so that only tables the
user has marked via the reloption can be autovacuumed in parallel?

This gives a targeted approach. Of course, if several of these
allowed tables are to be autovacuumed at the same time, some may not
get all the workers, but that's no different from manually vacuuming
those tables in parallel at the same time.

What do you think?

Sami 

Re: POC: Parallel processing of indexes in autovacuum

From
Daniil Davydov
Date:
On Tue, May 6, 2025 at 6:57 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> What I roughly imagined is that we don't need to change the entire
> autovacuum scheduling, but would like autovacuum workers to decides
> whether or not to use parallel vacuum during its vacuum operation
> based on GUC parameters (having a global effect) or storage parameters
> (having an effect on the particular table). The criteria of triggering
> parallel vacuum in autovacuum might need to be somewhat pessimistic so
> that we don't unnecessarily use parallel vacuum on many tables.
>

+1, I think about it in the same way. I will expand on this topic in
more detail in response to Sami's letter [1], so as not to repeat
myself.

> > Here are my thoughts on this. A/v worker has a very simple role - it
> > is born after the launcher's request and must do exactly one 'task' -
> > vacuum table or participate in parallel index vacuum.
> > We also have a dedicated 'launcher' role, meaning the whole design
> > implies that only the launcher is able to launch processes.
> >
> > If we allow a/v worker to use bgworkers, then :
> > 1) A/v worker will go far beyond his responsibility.
> > 2) Its functionality will overlap with the functionality of the launcher.
>
> While I agree that the launcher process is responsible for launching
> autovacuum worker processes but I'm not sure it should be for
> launching everything related autovacuums. It's quite possible that we
> have parallel heap vacuum and processing the particular index with
> parallel workers in the future. The code could get more complex if we
> have the autovacuum launcher process launch such parallel workers too.
> I believe it's more straightforward to divide the responsibility like
> in a way that the autovacuum launcher is responsible for launching
> autovacuum workers and autovacuum workers are responsible for
> vacuuming tables no matter how to do that.

It sounds very tempting. At the very beginning I did exactly that (to
make sure that nothing would break in a parallel autovacuum). Only
later was it decided to abandon the use of bgworkers.
For now both approaches look fair to me. What do you think - will
others agree that we can give more responsibility to a/v workers?

> > 3) Resource consumption can jump dramatically, which is unexpected for
> > the user.
>
> What extra resources could be used if we use background workers
> instead of autovacuum workers?

I meant that more processes would be participating in autovacuum
than autovacuum_max_workers indicates. And if an a/v worker uses
additional bgworkers, other operations cannot get these resources.

> > Autovacuum will also be dependent on other resources
> > (bgworkers pool). The current design does not imply this.
>
> I see your point but I think it doesn't necessarily need to reflect it
> at the infrastructure layer. For example, we can internally allocate
> extra background worker slots for parallel vacuum workers based on
> max_parallel_index_autovac_workers in addition to
> max_worker_processes. Anyway we might need something to check or
> validate max_worker_processes value to make sure that every autovacuum
> worker can use the specified number of parallel workers for parallel
> vacuum.

I don't think that we can provide all the supportive workers for each
parallel index vacuuming request. But I got your point - always keep
several bgworkers that only a/v workers can use if needed, and make
the size of this additional pool (depending on max_worker_processes)
user-configurable.

> > I wanted to create a patch that would fit into the existing mechanism
> > without drastic innovations. But if you think that the above is not so
> > important, then we can reuse VACUUM PARALLEL code and it would
> > simplify the final implementation)
>
> I'd suggest using the existing infrastructure if we can achieve the
> goal with it. If we find out there are some technical difficulties to
> implement it without new infrastructure, we can revisit this approach.

OK, in the near future I'll implement it and send a new patch to this
thread. I'll be glad if you take a look at it)

[1] https://www.postgresql.org/message-id/CAA5RZ0vfBg%3Dc_0Sa1Tpxv8tueeBk8C5qTf9TrxKBbXUqPc99Ag%40mail.gmail.com

--
Best regards,
Daniil Davydov



Re: POC: Parallel processing of indexes in autovacuum

From
Masahiko Sawada
Date:
On Mon, May 5, 2025 at 5:21 PM Sami Imseih <samimseih@gmail.com> wrote:
>
>
>> On Sat, May 3, 2025 at 1:10 AM Daniil Davydov <3danissimo@gmail.com> wrote:
>> >
>> > On Sat, May 3, 2025 at 5:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>> > >
>> > > > In current implementation, the leader process sends a signal to the
>> > > > a/v launcher, and the launcher tries to launch all requested workers.
>> > > > But the number of workers never exceeds `autovacuum_max_workers`.
>> > > > Thus, we will never have more a/v workers than in the standard case
>> > > > (without this feature).
>> > >
>> > > I have concerns about this design. When autovacuuming on a single
>> > > table consumes all available autovacuum_max_workers slots with
>> > > parallel vacuum workers, the system becomes incapable of processing
>> > > other tables. This means that when determining the appropriate
>> > > autovacuum_max_workers value, users must consider not only the number
>> > > of tables to be processed concurrently but also the potential number
>> > > of parallel workers that might be launched. I think it would more make
>> > > sense to maintain the existing autovacuum_max_workers parameter while
>> > > introducing a new parameter that would either control the maximum
>> > > number of parallel vacuum workers per autovacuum worker or set a
>> > > system-wide cap on the total number of parallel vacuum workers.
>> > >
>> >
>> > For now we have max_parallel_index_autovac_workers - this GUC limits
>> > the number of parallel a/v workers that can process a single table. I
>> > agree that the scenario you provided is problematic.
>> > The proposal to limit the total number of supportive a/v workers seems
>> > attractive to me (I'll implement it as an experiment).
>> >
>> > It seems to me that this question is becoming a key one. First we need
>> > to determine the role of the user in the whole scheduling mechanism.
>> > Should we allow users to determine priority? Will this priority affect
>> > only within a single vacuuming cycle, or it will be more 'global'?
>> > I guess I don't have enough expertise to determine this alone. I will
>> > be glad to receive any suggestions.
>>
>> What I roughly imagined is that we don't need to change the entire
>> autovacuum scheduling, but would like autovacuum workers to decides
>> whether or not to use parallel vacuum during its vacuum operation
>> based on GUC parameters (having a global effect) or storage parameters
>> (having an effect on the particular table). The criteria of triggering
>> parallel vacuum in autovacuum might need to be somewhat pessimistic so
>> that we don't unnecessarily use parallel vacuum on many tables.
>
>
> Perhaps we should only provide a reloption, therefore only tables specified
> by the user via the reloption can be autovacuumed  in parallel?
>
> This gives a targeted approach. Of course if multiple of these allowed tables
> are to be autovacuumed at the same time, some may not get all the workers,
> But that’s not different from if you are to manually vacuum in parallel the tables
> at the same time.
>
> What do you think ?

+1. I think that's a good starting point. We can later introduce a new
GUC parameter that globally controls the maximum number of parallel
vacuum workers used in autovacuum, if necessary.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com



Re: POC: Parallel processing of indexes in autovacuum

From
Daniil Davydov
Date:
On Tue, May 6, 2025 at 7:21 AM Sami Imseih <samimseih@gmail.com> wrote:
>
> Perhaps we should only provide a reloption, therefore only tables specified
> by the user via the reloption can be autovacuumed  in parallel?

After your comments (earlier in this thread) I decided to do just
that. For now we have a reloption, so the user can decide which
tables are "important" for parallel index vacuuming.
We also set hardcoded lower bounds on the number of indexes and the
number of dead tuples. For example, there is no need to use a
parallel vacuum if the table has only one index.
The situation is more complicated with the number of dead tuples - we
need tests that would show the optimal minimum value. This issue is
still being worked out.

> This gives a targeted approach. Of course if multiple of these allowed tables
> are to be autovacuumed at the same time, some may not get all the workers,
> But that’s not different from if you are to manually vacuum in parallel the tables
> at the same time.

I fully agree. Recently the v2 patch was supplemented with a new
feature [1] - multiple tables in a cluster can now be processed in
parallel during autovacuum. And of course, not every a/v worker can
get enough supportive processes, but this is considered normal
behavior.
The maximum number of supportive workers is limited by a GUC variable.

[1] I guess that I'll send it within the v3 patch, which will also
contain the logic that was discussed in the email above - using
bgworkers instead of additional a/v workers. BTW, what do you think
about this idea?

--
Best regards,
Daniil Davydov



Re: POC: Parallel processing of indexes in autovacuum

From
Sami Imseih
Date:
> On Mon, May 5, 2025 at 5:21 PM Sami Imseih <samimseih@gmail.com> wrote:
> >
> >
> >> On Sat, May 3, 2025 at 1:10 AM Daniil Davydov <3danissimo@gmail.com> wrote:
> >> >
> >> > On Sat, May 3, 2025 at 5:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >> > >
> >> > > > In current implementation, the leader process sends a signal to the
> >> > > > a/v launcher, and the launcher tries to launch all requested workers.
> >> > > > But the number of workers never exceeds `autovacuum_max_workers`.
> >> > > > Thus, we will never have more a/v workers than in the standard case
> >> > > > (without this feature).
> >> > >
> >> > > I have concerns about this design. When autovacuuming on a single
> >> > > table consumes all available autovacuum_max_workers slots with
> >> > > parallel vacuum workers, the system becomes incapable of processing
> >> > > other tables. This means that when determining the appropriate
> >> > > autovacuum_max_workers value, users must consider not only the number
> >> > > of tables to be processed concurrently but also the potential number
> >> > > of parallel workers that might be launched. I think it would more make
> >> > > sense to maintain the existing autovacuum_max_workers parameter while
> >> > > introducing a new parameter that would either control the maximum
> >> > > number of parallel vacuum workers per autovacuum worker or set a
> >> > > system-wide cap on the total number of parallel vacuum workers.
> >> > >
> >> >
> >> > For now we have max_parallel_index_autovac_workers - this GUC limits
> >> > the number of parallel a/v workers that can process a single table. I
> >> > agree that the scenario you provided is problematic.
> >> > The proposal to limit the total number of supportive a/v workers seems
> >> > attractive to me (I'll implement it as an experiment).
> >> >
> >> > It seems to me that this question is becoming a key one. First we need
> >> > to determine the role of the user in the whole scheduling mechanism.
> >> > Should we allow users to determine priority? Will this priority affect
> >> > only within a single vacuuming cycle, or it will be more 'global'?
> >> > I guess I don't have enough expertise to determine this alone. I will
> >> > be glad to receive any suggestions.
> >>
> >> What I roughly imagined is that we don't need to change the entire
> >> autovacuum scheduling, but would like autovacuum workers to decides
> >> whether or not to use parallel vacuum during its vacuum operation
> >> based on GUC parameters (having a global effect) or storage parameters
> >> (having an effect on the particular table). The criteria of triggering
> >> parallel vacuum in autovacuum might need to be somewhat pessimistic so
> >> that we don't unnecessarily use parallel vacuum on many tables.
> >
> >
> > Perhaps we should only provide a reloption, therefore only tables specified
> > by the user via the reloption can be autovacuumed  in parallel?
> >
> > This gives a targeted approach. Of course if multiple of these allowed tables
> > are to be autovacuumed at the same time, some may not get all the workers,
> > But that’s not different from if you are to manually vacuum in parallel the tables
> > at the same time.
> >
> > What do you think ?
>
> +1. I think that's a good starting point. We can later introduce a new
> GUC parameter that globally controls the maximum number of parallel
> vacuum workers used in autovacuum, if necessary.

and I think this reloption should also apply to parallel heap vacuum
in non-failsafe scenarios. In the failsafe case, however, all tables
will be eligible for parallel vacuum. Anyhow, that discussion could
be taken to that thread, but I wanted to point it out.

--
Sami Imseih
Amazon Web Services (AWS)