Re: [HACKERS] More stats about skipped vacuums - Mailing list pgsql-hackers
| From | Kyotaro HORIGUCHI |
|---|---|
| Subject | Re: [HACKERS] More stats about skipped vacuums |
| Date | |
| Msg-id | 20171030.205750.246076862.horiguchi.kyotaro@lab.ntt.co.jp |
| In response to | Re: [HACKERS] More stats about skipped vacuums (Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>) |
| Responses | Re: [HACKERS] More stats about skipped vacuums |
| List | pgsql-hackers |
At Thu, 26 Oct 2017 15:06:30 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in
<20171026.150630.115694437.horiguchi.kyotaro@lab.ntt.co.jp>
> At Fri, 20 Oct 2017 19:15:16 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in
> <CAD21AoAkaw-u0feAVN_VrKZA5tvzp7jT=mQCQP-SvMegKXHHaw@mail.gmail.com>
> > > n_mod_since_analyze | 20000
> > > + vacuum_required | true
> > > + last_vacuum_oldest_xid | 8023
> > > + last_vacuum_left_to_truncate | 5123
> > > + last_vacuum_truncated | 387
> > > last_vacuum | 2017-10-10 17:21:54.380805+09
> > > last_autovacuum | 2017-10-10 17:21:54.380805+09
> > > + last_autovacuum_status | Killed by lock conflict
> > > ...
> > > autovacuum_count | 128
> > > + incomplete_autovacuum_count | 53
> > >
> > > # The last one might be needless..
> >
> > I'm not sure that the above information will help users or DBAs,
> > but personally I sometimes want to have the number of index scans of
> > the last autovacuum in the pg_stat_user_tables view. That value
> > indicates how efficiently vacuums performed and would be a signal to
> > increase the setting of autovacuum_work_mem for the user.
>
> Btree and all existing index AMs (except brin) seem to visit all
> the pages in every index scan, so it would be valuable. Instead,
> the number of visited index pages during a table scan might be
> usable. It is more relevant to performance than the number of
> scans; on the other hand, it is a bit difficult to read something
> useful out of that number at a glance. I'll show the number of
> scans in the first cut.
>
> > > Where the "Killed by lock conflict" is one of the following.
> > >
> > > - Completed
> > > - Truncation skipped
> > > - Partially truncated
> > > - Skipped
> > > - Killed by lock conflict
> > >
> > > This seems enough to find the cause of table bloat. The same
> > > discussion could apply to analyze, but that might be a
> > > separate issue.
> > >
> > > There may be a better way to indicate the vacuum soundness. Any
> > > opinions and suggestions are welcome.
> > >
> > > I'm going to make a patch to do the 'formal' one for the time
> > > being.
Done with small modifications. In the attached patch,
pg_stat_all_tables has the following new columns. Documentation
is not provided at this stage.
-----
  n_mod_since_analyze     | 0
+ vacuum_required         | not required
  last_vacuum             |
  last_autovacuum         | 2017-10-30 18:51:32.060551+09
  last_analyze            |
  last_autoanalyze        | 2017-10-30 18:48:33.414711+09
  vacuum_count            | 0
+ last_vacuum_truncated   | 0
+ last_vacuum_untruncated | 0
+ last_vacuum_index_scans | 0
+ last_vacuum_oldest_xmin | 2134
+ last_vacuum_status      | aggressive vacuum completed
+ autovacuum_fail_count   | 0
  autovacuum_count        | 5
  analyze_count           | 0
  autoanalyze_count       | 1
-----
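With the patch applied, the new columns can be read like any other pg_stat columns. For illustration only (this query is not part of the patch), something like the following would flag tables whose next autovacuum will be heavyweight or whose last vacuum did not complete normally:

```sql
-- Hypothetical monitoring query, assuming the patched pg_stat_all_tables.
SELECT relname,
       vacuum_required,
       last_vacuum_status,
       last_vacuum_index_scans,
       autovacuum_fail_count
  FROM pg_stat_all_tables
 WHERE vacuum_required IN ('aggressive', 'required')
    OR last_vacuum_status NOT LIKE '%completed';
```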
Where each column shows the following information.
+ vacuum_required | not required

VACUUM requirement status. Takes the following values.

- partial: A partial (or normal) vacuum will be performed by the next
  autovacuum. The word "partial" is taken from the comment for
  vacuum_set_xid_limits.
- aggressive: An aggressive scan will be performed by the next autovacuum.
- required: Some type of autovacuum will be performed. The type of scan is
  unknown because the view failed to take the required lock on the table.
  (AutoVacuumRequirement())
- not required: The next autovacuum won't scan this relation.
- not required (lock not acquired): Autovacuum is presumably disabled, and
  the distance to the freeze limit is unknown because the required lock is
  not available.
- close to freeze-limit xid: Shown while autovacuum is disabled. The table
  is in the manual-vacuum window for avoiding anti-wraparound autovacuum.
+ last_vacuum_truncated | 0

The number of truncated pages in the last completed (auto)vacuum.

+ last_vacuum_untruncated | 0

The number of pages the last completed (auto)vacuum tried to truncate but
could not for some reason.

+ last_vacuum_index_scans | 0

The number of index scans performed in the last completed (auto)vacuum.

+ last_vacuum_oldest_xmin | 2134

The oldest xmin used in the last completed (auto)vacuum.

+ last_vacuum_status | aggressive vacuum completed

The finish status of the last vacuum. Takes the following values.
(pg_stat_get_last_vacuum_status())

- completed: The last partial (auto)vacuum completed.
- vacuum full completed: The last VACUUM FULL completed.
- aggressive vacuum completed: The last aggressive (auto)vacuum completed.
- error while $progress: The last vacuum stopped with an error during
  $progress, where $progress is one of the vacuum progress phases.
- canceled while $progress: The last vacuum was canceled during $progress.
  This happens when a manual vacuum is canceled by the user, or when the
  vacuum is killed by another backend that wants to acquire the lock on the
  relation.
- skipped - lock unavailable: The last autovacuum on the relation was
  skipped because the required lock was not available.
- unvacuumable: A past autovacuum tried to vacuum the relation, but it is
  not vacuumable because of an ownership or accessibility problem. (Such
  relations are not shown in pg_stat_all_tables.)

+ autovacuum_fail_count | 0

The number of successive vacuum failures on the relation. Reset to zero by
a completed vacuum.
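As a usage sketch (again hypothetical, assuming the patched view), autovacuum_fail_count together with last_vacuum_status would let a DBA spot relations that autovacuum keeps failing to process, along with the reason recorded by the last attempt:

```sql
-- Hypothetical example: tables whose autovacuum has failed repeatedly,
-- most troubled first.
SELECT relname,
       autovacuum_fail_count,
       last_vacuum_status,
       last_vacuum_oldest_xmin
  FROM pg_stat_all_tables
 WHERE autovacuum_fail_count > 0
 ORDER BY autovacuum_fail_count DESC;
```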
======
In the patch, vacrelstats is pointed to by a static variable, and
cancel reporting is performed in the PG_CATCH() section in vacuum().
Every non-thrown failure, such as a lock-acquisition failure, is
reported by an explicit pgstat_report_vacuum() call with the
corresponding finish code.
Vacuum requirement status is calculated in AutoVacuumRequirement()
and returned as a string. An access share lock on the target
relation is required, but if the lock is not available the function
returns only the values it can compute. I decided to return an
incomplete (coarse-grained) result rather than wait, for the sake of
a perfect result, on a lock that isn't known to be released any time
soon.
regards,
--
Kyotaro Horiguchi
NTT Open Source Software Center
From 336748b61559bee66328a241394b365ebaacba6a Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
Date: Fri, 27 Oct 2017 17:36:12 +0900
Subject: [PATCH] Add several pieces of vacuum information to pg_stat_*_tables.
---
 src/backend/catalog/system_views.sql |   7 ++
 src/backend/commands/cluster.c       |   2 +-
 src/backend/commands/vacuum.c        | 105 ++++++++++++++++++++++--
 src/backend/commands/vacuumlazy.c    | 103 +++++++++++++++++++++---
 src/backend/postmaster/autovacuum.c  | 115 ++++++++++++++++++++++++++
 src/backend/postmaster/pgstat.c      |  80 +++++++++++++++---
 src/backend/utils/adt/pgstatfuncs.c  | 152 ++++++++++++++++++++++++++++++++++-
 src/include/catalog/pg_proc.h        |  14 ++++
 src/include/commands/vacuum.h        |   4 +-
 src/include/pgstat.h                 |  38 ++++++++-
 src/include/postmaster/autovacuum.h  |   1 +
 src/test/regress/expected/rules.out  |  21 +++++
 12 files changed, 606 insertions(+), 36 deletions(-)
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index dc40cde..452bf5d 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -523,11 +523,18 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
+            pg_stat_get_vacuum_necessity(C.oid) AS vacuum_required,
             pg_stat_get_last_vacuum_time(C.oid) as last_vacuum,
             pg_stat_get_last_autovacuum_time(C.oid) as last_autovacuum,
             pg_stat_get_last_analyze_time(C.oid) as last_analyze,
             pg_stat_get_last_autoanalyze_time(C.oid) as last_autoanalyze,
             pg_stat_get_vacuum_count(C.oid) AS vacuum_count,
+            pg_stat_get_last_vacuum_truncated(C.oid) AS last_vacuum_truncated,
+            pg_stat_get_last_vacuum_untruncated(C.oid) AS last_vacuum_untruncated,
+            pg_stat_get_last_vacuum_index_scans(C.oid) AS last_vacuum_index_scans,
+            pg_stat_get_last_vacuum_oldest_xmin(C.oid) AS last_vacuum_oldest_xmin,
+            pg_stat_get_last_vacuum_status(C.oid) AS last_vacuum_status,
+            pg_stat_get_autovacuum_fail_count(C.oid) AS autovacuum_fail_count,
             pg_stat_get_autovacuum_count(C.oid) AS autovacuum_count,
             pg_stat_get_analyze_count(C.oid) AS analyze_count,
             pg_stat_get_autoanalyze_count(C.oid) AS autoanalyze_count
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index 48f1e6e..403b76d 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -850,7 +850,7 @@ copy_heap_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex, bool verbose,
 	 */
 	vacuum_set_xid_limits(OldHeap, 0, 0, 0, 0,
 						  &OldestXmin, &FreezeXid, NULL, &MultiXactCutoff,
-						  NULL);
+						  NULL, NULL, NULL);
 
 	/*
 	 * FreezeXid will become the table's new relfrozenxid, and that mustn't go
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index cbd6e9b..a0c5a12 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -35,6 +35,7 @@
 #include "catalog/pg_inherits_fn.h"
 #include "catalog/pg_namespace.h"
 #include "commands/cluster.h"
+#include "commands/progress.h"
 #include "commands/vacuum.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
@@ -367,6 +368,9 @@ vacuum(int options, List *relations, VacuumParams *params,
 	}
 	PG_CATCH();
 	{
+		/* report the final status of this vacuum */
+		lazy_vacuum_cancel_handler();
+
 		in_vacuum = false;
 		VacuumCostActive = false;
 		PG_RE_THROW();
@@ -585,6 +589,10 @@ get_all_vacuum_rels(void)
  *   Xmax.
  * - mxactFullScanLimit is a value against which a table's relminmxid value is
  *   compared to produce a full-table vacuum, as with xidFullScanLimit.
+ * - aggressive is set if it is not NULL and set true if the table needs an
+ *   aggressive scan.
+ * - close_to_wrap_around_limit is set if it is not NULL and set true if the
+ *   table is in the anti-anti-wraparound window.
  *
  * xidFullScanLimit and mxactFullScanLimit can be passed as NULL if caller is
  * not interested.
@@ -599,9 +607,11 @@ vacuum_set_xid_limits(Relation rel,
 					  TransactionId *freezeLimit,
 					  TransactionId *xidFullScanLimit,
 					  MultiXactId *multiXactCutoff,
-					  MultiXactId *mxactFullScanLimit)
+					  MultiXactId *mxactFullScanLimit,
+					  bool *aggressive, bool *close_to_wrap_around_limit)
 {
 	int			freezemin;
+	int			freezemax;
 	int			mxid_freezemin;
 	int			effective_multixact_freeze_max_age;
 	TransactionId limit;
@@ -701,11 +711,13 @@ vacuum_set_xid_limits(Relation rel,
 	*multiXactCutoff = mxactLimit;
 
-	if (xidFullScanLimit != NULL)
+	if (xidFullScanLimit != NULL || aggressive != NULL)
 	{
 		int			freezetable;
+		bool		maybe_anti_wrapround = false;
 
-		Assert(mxactFullScanLimit != NULL);
+		/* these two outputs should be requested together */
+		Assert(xidFullScanLimit == NULL || mxactFullScanLimit != NULL);
 
 		/*
 		 * Determine the table freeze age to use: as specified by the caller,
@@ -717,7 +729,14 @@ vacuum_set_xid_limits(Relation rel,
 		freezetable = freeze_table_age;
 		if (freezetable < 0)
 			freezetable = vacuum_freeze_table_age;
-		freezetable = Min(freezetable, autovacuum_freeze_max_age * 0.95);
+
+		freezemax = autovacuum_freeze_max_age * 0.95;
+		if (freezemax < freezetable)
+		{
+			/* We may be in the anti-anti-wraparound window */
+			freezetable = freezemax;
+			maybe_anti_wrapround = true;
+		}
 		Assert(freezetable >= 0);
 
 		/*
@@ -728,7 +747,8 @@ vacuum_set_xid_limits(Relation rel,
 		if (!TransactionIdIsNormal(limit))
 			limit = FirstNormalTransactionId;
 
-		*xidFullScanLimit = limit;
+		if (xidFullScanLimit)
+			*xidFullScanLimit = limit;
 
 		/*
 		 * Similar to the above, determine the table freeze age to use for
@@ -741,10 +761,20 @@ vacuum_set_xid_limits(Relation rel,
 		freezetable = multixact_freeze_table_age;
 		if (freezetable < 0)
 			freezetable = vacuum_multixact_freeze_table_age;
-		freezetable = Min(freezetable,
-						  effective_multixact_freeze_max_age * 0.95);
+
+		freezemax = effective_multixact_freeze_max_age * 0.95;
+		if (freezemax < freezetable)
+		{
+			/* We may be in the anti-anti-wraparound window */
+			freezetable = freezemax;
+			maybe_anti_wrapround = true;
+		}
 		Assert(freezetable >= 0);
 
+		/* We may be in the anti-anti-wraparound window */
+		if (effective_multixact_freeze_max_age * 0.95 < freezetable)
+			maybe_anti_wrapround = true;
+
 		/*
 		 * Compute MultiXact limit causing a full-table vacuum, being careful
 		 * to generate a valid MultiXact value.
@@ -753,11 +783,38 @@ vacuum_set_xid_limits(Relation rel,
 		if (mxactLimit < FirstMultiXactId)
 			mxactLimit = FirstMultiXactId;
 
-		*mxactFullScanLimit = mxactLimit;
+		if (mxactFullScanLimit)
+			*mxactFullScanLimit = mxactLimit;
+
+		/*
+		 * We request an aggressive scan if the table's frozen Xid is now
+		 * older than or equal to the requested Xid full-table scan limit; or
+		 * if the table's minimum MultiXactId is older than or equal to the
+		 * requested mxid full-table scan limit.
+		 */
+		if (aggressive)
+		{
+			*aggressive =
+				TransactionIdPrecedesOrEquals(rel->rd_rel->relfrozenxid,
+											  limit);
+			*aggressive |=
+				MultiXactIdPrecedesOrEquals(rel->rd_rel->relminmxid,
+											mxactLimit);
+
+			/* set close_to_wrap_around_limit if requested */
+			if (close_to_wrap_around_limit)
+				*close_to_wrap_around_limit =
+					(*aggressive && maybe_anti_wrapround);
+		}
+		else
+		{
+			Assert(!close_to_wrap_around_limit);
+		}
 	}
 	else
 	{
 		Assert(mxactFullScanLimit == NULL);
+		Assert(aggressive == NULL);
 	}
 }
@@ -1410,6 +1467,9 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params)
 	if (!onerel)
 	{
+		pgstat_report_vacuum(relid, false,
+							 0, 0, 0, 0, 0, PGSTAT_VACUUM_SKIP_LOCK_FAILED,
+							 InvalidTransactionId, 0, 0);
 		PopActiveSnapshot();
 		CommitTransactionCommand();
 		return false;
@@ -1441,6 +1501,12 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params)
 				(errmsg("skipping \"%s\" --- only table or database owner can vacuum it",
 						RelationGetRelationName(onerel))));
 		relation_close(onerel, lmode);
+
+		pgstat_report_vacuum(RelationGetRelid(onerel),
+							 onerel->rd_rel->relisshared,
+							 0, 0, 0, 0, 0, PGSTAT_VACUUM_SKIP_NONTARGET,
+							 InvalidTransactionId, 0, 0);
+
 		PopActiveSnapshot();
 		CommitTransactionCommand();
 		return false;
@@ -1458,6 +1524,12 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params)
 				(errmsg("skipping \"%s\" --- cannot vacuum non-tables or special system tables",
 						RelationGetRelationName(onerel))));
 		relation_close(onerel, lmode);
+
+		pgstat_report_vacuum(RelationGetRelid(onerel),
+							 onerel->rd_rel->relisshared,
+							 0, 0, 0, 0, 0, PGSTAT_VACUUM_SKIP_NONTARGET,
+							 InvalidTransactionId, 0, 0);
+
 		PopActiveSnapshot();
 		CommitTransactionCommand();
 		return false;
@@ -1473,6 +1545,12 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params)
 	if (RELATION_IS_OTHER_TEMP(onerel))
 	{
 		relation_close(onerel, lmode);
+
+		pgstat_report_vacuum(RelationGetRelid(onerel),
+							 onerel->rd_rel->relisshared,
+							 0, 0, 0, 0, 0, PGSTAT_VACUUM_SKIP_NONTARGET,
+							 InvalidTransactionId, 0, 0);
+
 		PopActiveSnapshot();
 		CommitTransactionCommand();
 		return false;
@@ -1486,6 +1564,12 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params)
 	if (onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
 	{
 		relation_close(onerel, lmode);
+
+		pgstat_report_vacuum(RelationGetRelid(onerel),
+							 onerel->rd_rel->relisshared,
+							 0, 0, 0, 0, 0, PGSTAT_VACUUM_SKIP_NONTARGET,
+							 InvalidTransactionId, 0, 0);
+
 		PopActiveSnapshot();
 		CommitTransactionCommand();
 
 		/* It's OK to proceed with ANALYZE on this table */
@@ -1531,6 +1615,8 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params)
 	 */
 	if (options & VACOPT_FULL)
 	{
+		bool		isshared = onerel->rd_rel->relisshared;
+
 		/* close relation before vacuuming, but hold lock until commit */
 		relation_close(onerel, NoLock);
 		onerel = NULL;
@@ -1538,6 +1624,9 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params)
 		/* VACUUM FULL is now a variant of CLUSTER; see cluster.c */
 		cluster_rel(relid, InvalidOid, false,
 					(options & VACOPT_VERBOSE) != 0);
+		pgstat_report_vacuum(relid, isshared, 0, 0, 0, 0, 0,
+							 PGSTAT_VACUUM_FULL_FINISHED,
+							 InvalidTransactionId, 0, 0);
 	}
 	else
 		lazy_vacuum_rel(onerel, options, params, vac_strategy);
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 172d213..372d661 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -55,6 +55,7 @@
 #include "postmaster/autovacuum.h"
 #include "storage/bufmgr.h"
 #include "storage/freespace.h"
+#include "storage/ipc.h"
 #include "storage/lmgr.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
@@ -105,6 +106,8 @@
 typedef struct LVRelStats
 {
+	Oid			reloid;			/* oid of the target relation */
+	bool		shared;			/* is shared relation? */
 	/* hasindex = true means two-pass strategy; false means one-pass */
 	bool		hasindex;
 	/* Overall statistics about rel */
@@ -119,6 +122,7 @@ typedef struct LVRelStats
 	double		new_rel_tuples; /* new estimated total # of tuples */
 	double		new_dead_tuples;	/* new estimated total # of dead tuples */
 	BlockNumber pages_removed;
+	BlockNumber pages_not_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
 	/* List of TIDs of tuples we intend to delete */
@@ -138,6 +142,7 @@ static int	elevel = -1;
 static TransactionId OldestXmin;
 static TransactionId FreezeLimit;
 static MultiXactId MultiXactCutoff;
+static LVRelStats *current_lvstats;
 
 static BufferAccessStrategy vac_strategy;
@@ -216,6 +221,7 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	else
 		elevel = DEBUG2;
 
+	current_lvstats = NULL;
 	pgstat_progress_start_command(PROGRESS_COMMAND_VACUUM,
 								  RelationGetRelid(onerel));
@@ -227,29 +233,30 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						  params->multixact_freeze_min_age,
 						  params->multixact_freeze_table_age,
 						  &OldestXmin, &FreezeLimit, &xidFullScanLimit,
-						  &MultiXactCutoff, &mxactFullScanLimit);
+						  &MultiXactCutoff, &mxactFullScanLimit,
+						  &aggressive, NULL);
 
-	/*
-	 * We request an aggressive scan if the table's frozen Xid is now older
-	 * than or equal to the requested Xid full-table scan limit; or if the
-	 * table's minimum MultiXactId is older than or equal to the requested
-	 * mxid full-table scan limit; or if DISABLE_PAGE_SKIPPING was specified.
-	 */
-	aggressive = TransactionIdPrecedesOrEquals(onerel->rd_rel->relfrozenxid,
-											   xidFullScanLimit);
-	aggressive |= MultiXactIdPrecedesOrEquals(onerel->rd_rel->relminmxid,
-											  mxactFullScanLimit);
+	/* force aggressive scan if DISABLE_PAGE_SKIPPING was specified */
 	if (options & VACOPT_DISABLE_PAGE_SKIPPING)
 		aggressive = true;
 
 	vacrelstats = (LVRelStats *) palloc0(sizeof(LVRelStats));
 
+	vacrelstats->reloid = RelationGetRelid(onerel);
+	vacrelstats->shared = onerel->rd_rel->relisshared;
 	vacrelstats->old_rel_pages = onerel->rd_rel->relpages;
 	vacrelstats->old_rel_tuples = onerel->rd_rel->reltuples;
 	vacrelstats->num_index_scans = 0;
 	vacrelstats->pages_removed = 0;
+	vacrelstats->pages_not_removed = 0;
 	vacrelstats->lock_waiter_detected = false;
 
+	/*
+	 * Register current vacrelstats so that final status can be reported on
+	 * interrupts
+	 */
+	current_lvstats = vacrelstats;
+
 	/* Open all indexes of the relation */
 	vac_open_indexes(onerel, RowExclusiveLock, &nindexes, &Irel);
 	vacrelstats->hasindex = (nindexes > 0);
@@ -280,8 +287,15 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	 * Optionally truncate the relation.
 	 */
 	if (should_attempt_truncation(vacrelstats))
+	{
 		lazy_truncate_heap(onerel, vacrelstats);
 
+		/* just paranoia */
+		if (vacrelstats->rel_pages >= vacrelstats->nonempty_pages)
+			vacrelstats->pages_not_removed +=
+				vacrelstats->rel_pages - vacrelstats->nonempty_pages;
+	}
+
 	/* Report that we are now doing final cleanup */
 	pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,
 								 PROGRESS_VACUUM_PHASE_FINAL_CLEANUP);
@@ -339,10 +353,22 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	if (new_live_tuples < 0)
 		new_live_tuples = 0;	/* just in case */
 
-	pgstat_report_vacuum(RelationGetRelid(onerel),
+	/* vacuum successfully finished. nothing to do on exit */
+	current_lvstats = NULL;
+
+	pgstat_report_vacuum(vacrelstats->reloid,
 						 onerel->rd_rel->relisshared,
 						 new_live_tuples,
-						 vacrelstats->new_dead_tuples);
+						 vacrelstats->new_dead_tuples,
+						 vacrelstats->pages_removed,
+						 vacrelstats->pages_not_removed,
+						 vacrelstats->num_index_scans,
+						 OldestXmin,
+						 aggressive ?
+						 PGSTAT_VACUUM_AGGRESSIVE_FINISHED :
+						 PGSTAT_VACUUM_FINISHED,
+						 0, 0);
+
 	pgstat_progress_end_command();
 
 	/* and log the action if appropriate */
@@ -2205,3 +2231,54 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 	return all_visible;
 }
+
+/*
+ * lazy_vacuum_cancel_handler - report interrupted vacuum status
+ */
+void
+lazy_vacuum_cancel_handler(void)
+{
+ LVRelStats *stats = current_lvstats;
+ LocalPgBackendStatus *local_beentry;
+ PgBackendStatus *beentry;
+ int phase;
+ int err;
+
+ current_lvstats = NULL;
+
+ /* we have nothing to report */
+ if (!stats)
+ return;
+
+ /* get vacuum progress stored in backend status */
+ local_beentry = pgstat_fetch_stat_local_beentry(MyBackendId);
+ if (!local_beentry)
+ return;
+
+ beentry = &local_beentry->backendStatus;
+
+ Assert (beentry && beentry->st_progress_command == PROGRESS_COMMAND_VACUUM);
+
+ phase = beentry->st_progress_param[PROGRESS_VACUUM_PHASE];
+
+ /* we can reach here both on interrupt and error */
+ if (geterrcode() == ERRCODE_QUERY_CANCELED)
+ err = PGSTAT_VACUUM_CANCELED;
+ else
+ err = PGSTAT_VACUUM_ERROR;
+
+ /*
+ * vacuum has been canceled, report stats numbers without normalization
+ * here. (But currently they are not used.)
+ */
+ pgstat_report_vacuum(stats->reloid,
+ stats->shared,
+ stats->new_rel_tuples,
+ stats->new_dead_tuples,
+ stats->pages_removed,
+ stats->pages_not_removed,
+ stats->num_index_scans,
+ OldestXmin,
+ err,
+ phase, geterrcode());
+}
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index c04c0b5..6c32d0b 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -831,6 +831,121 @@ shutdown:
 }
 
 /*
+ * Returns status string of auto vacuum on the relation
+ */
+char *
+AutoVacuumRequirement(Oid reloid)
+{
+ Relation classRel;
+ Relation rel;
+ TupleDesc pg_class_desc;
+ HeapTuple tuple;
+ Form_pg_class classForm;
+ AutoVacOpts *relopts;
+ PgStat_StatTabEntry *tabentry;
+ PgStat_StatDBEntry *shared;
+ PgStat_StatDBEntry *dbentry;
+ int effective_multixact_freeze_max_age;
+ bool dovacuum;
+ bool doanalyze;
+ bool wraparound;
+ bool aggressive;
+ bool xid_calculated = false;
+ bool in_anti_wa_window = false;
+	char	   *ret = "not required";
+
+ /* Compute the multixact age for which freezing is urgent. */
+ effective_multixact_freeze_max_age = MultiXactMemberFreezeThreshold();
+
+ /* Fetch the pgclass entry for this relation */
+ tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(reloid));
+ if (!HeapTupleIsValid(tuple))
+ elog(ERROR, "cache lookup failed for relation %u", reloid);
+ classForm = (Form_pg_class) GETSTRUCT(tuple);
+
+ /* extract relopts for autovacuum */
+ classRel = heap_open(RelationRelationId, AccessShareLock);
+ pg_class_desc = RelationGetDescr(classRel);
+ relopts = extract_autovac_opts(tuple, pg_class_desc);
+ heap_close(classRel, AccessShareLock);
+
+ /* Fetch the pgstat shared entry and entry for this database */
+ shared = pgstat_fetch_stat_dbentry(InvalidOid);
+ dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);
+
+ /* Fetch the pgstat entry for this table */
+ tabentry = get_pgstat_tabentry_relid(reloid, classForm->relisshared,
+ shared, dbentry);
+
+ /*
+ * Check if the relation needs vacuum. This function is intended to
+	 * suggest aggressive vacuum for the last 5% window in
+	 * autovacuum_freeze_max_age, so the variable wraparound is ignored
+ * here. See vacuum_set_xid_limits for details.
+ */
+ relation_needs_vacanalyze(reloid, relopts, classForm, tabentry,
+ effective_multixact_freeze_max_age,
+ &dovacuum, &doanalyze, &wraparound);
+ ReleaseSysCache(tuple);
+
+ /* get further information if needed */
+ rel = NULL;
+
+ /* don't get stuck with lock */
+ if (ConditionalLockRelationOid(reloid, AccessShareLock))
+ rel = try_relation_open(reloid, NoLock);
+
+ if (rel)
+ {
+ TransactionId OldestXmin, FreezeLimit;
+ MultiXactId MultiXactCutoff;
+
+ vacuum_set_xid_limits(rel,
+ vacuum_freeze_min_age,
+ vacuum_freeze_table_age,
+ vacuum_multixact_freeze_min_age,
+ vacuum_multixact_freeze_table_age,
+ &OldestXmin, &FreezeLimit, NULL,
+ &MultiXactCutoff, NULL,
+ &aggressive, &in_anti_wa_window);
+
+ xid_calculated = true;
+ relation_close(rel, AccessShareLock);
+ }
+
+	/* choose the proper message according to the calculation above */
+	if (xid_calculated)
+	{
+		if (dovacuum)
+		{
+			/* we don't care about anti-wraparound if autovacuum is on */
+			if (aggressive)
+				ret = "aggressive";
+			else
+				ret = "partial";
+		}
+		else if (in_anti_wa_window)
+			ret = "close to freeze-limit xid";
+		/* otherwise just "not required" */
+	}
+	else
+	{
+		/*
+		 * Failed to compute xid limits, so show coarser-grained messages.
+		 * Just "required" is enough to distinguish from the finer-grained
+		 * messages in the autovacuum case, but we need additional words in
+		 * the case where autovacuum is turned off.
+		 */
+		if (dovacuum)
+			ret = "required";
+		else
+			ret = "not required (lock not acquired)";
+	}
+
+ return ret;
+}
+
 /*
  * Determine the time to sleep, based on the database list.
  *
  * The "canlaunch" parameter indicates whether we can start a worker right now,
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 3a0b49c..721b172 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1403,7 +1403,13 @@ pgstat_report_autovac(Oid dboid)
  */
 void
 pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples)
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter pages_removed,
+					 PgStat_Counter pages_not_removed,
+					 PgStat_Counter num_index_scans,
+					 TransactionId oldestxmin,
+					 PgStat_Counter status, PgStat_Counter last_phase,
+					 PgStat_Counter errcode)
 {
 	PgStat_MsgVacuum msg;
@@ -1417,6 +1423,13 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 	msg.m_vacuumtime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_pages_removed = pages_removed;
+	msg.m_pages_not_removed = pages_not_removed;
+	msg.m_num_index_scans = num_index_scans;
+	msg.m_oldest_xmin = oldestxmin;
+	msg.m_vacuum_status = status;
+	msg.m_vacuum_last_phase = last_phase;
+	msg.m_vacuum_errcode = errcode;
 	pgstat_send(&msg, sizeof(msg));
 }
@@ -4576,17 +4589,25 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 	if (!found)
 	{
 		result->numscans = 0;
+
 		result->tuples_returned = 0;
 		result->tuples_fetched = 0;
 		result->tuples_inserted = 0;
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
+		result->n_pages_removed = 0;
+		result->n_pages_not_removed = 0;
+		result->n_index_scans = 0;
+		result->oldest_xmin = InvalidTransactionId;
+
 		result->blocks_fetched = 0;
 		result->blocks_hit = 0;
+
 		result->vacuum_timestamp = 0;
 		result->vacuum_count = 0;
 		result->autovac_vacuum_timestamp = 0;
@@ -4595,6 +4616,11 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->analyze_count = 0;
 		result->autovac_analyze_timestamp = 0;
 		result->autovac_analyze_count = 0;
+
+		result->vacuum_status = 0;
+		result->vacuum_last_phase = 0;
+		result->vacuum_errcode = 0;
+		result->vacuum_failcount = 0;
 	}
 
 	return result;
@@ -5979,18 +6005,50 @@ pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len)
 	tabentry = pgstat_get_tab_entry(dbentry, msg->m_tableoid, true);
- tabentry->n_live_tuples = msg->m_live_tuples;
- tabentry->n_dead_tuples = msg->m_dead_tuples;
+ tabentry->vacuum_status = msg->m_vacuum_status;
+ tabentry->vacuum_last_phase = msg->m_vacuum_last_phase;
+ tabentry->vacuum_errcode = msg->m_vacuum_errcode;
- if (msg->m_autovacuum)
- {
- tabentry->autovac_vacuum_timestamp = msg->m_vacuumtime;
- tabentry->autovac_vacuum_count++;
- }
- else
+ /*
+	 * We store the numbers only when the vacuum has been completed. They
+	 * might be useful for finding how much the stopped vacuum processed,
+	 * but we choose not to show them rather than show bogus numbers.
+ */
+	switch ((StatVacuumStatus) msg->m_vacuum_status)
 	{
- tabentry->vacuum_timestamp = msg->m_vacuumtime;
- tabentry->vacuum_count++;
+ case PGSTAT_VACUUM_FINISHED:
+ case PGSTAT_VACUUM_FULL_FINISHED:
+ case PGSTAT_VACUUM_AGGRESSIVE_FINISHED:
+ tabentry->n_live_tuples = msg->m_live_tuples;
+ tabentry->n_dead_tuples = msg->m_dead_tuples;
+ tabentry->n_pages_removed = msg->m_pages_removed;
+ tabentry->n_pages_not_removed = msg->m_pages_not_removed;
+ tabentry->n_index_scans = msg->m_num_index_scans;
+ tabentry->oldest_xmin = msg->m_oldest_xmin;
+ tabentry->vacuum_failcount = 0;
+
+ if (msg->m_autovacuum)
+ {
+ tabentry->autovac_vacuum_timestamp = msg->m_vacuumtime;
+ tabentry->autovac_vacuum_count++;
+ }
+ else
+ {
+ tabentry->vacuum_timestamp = msg->m_vacuumtime;
+ tabentry->vacuum_count++;
+ }
+ break;
+
+ case PGSTAT_VACUUM_ERROR:
+ case PGSTAT_VACUUM_CANCELED:
+ case PGSTAT_VACUUM_SKIP_LOCK_FAILED:
+ tabentry->vacuum_failcount++;
+ break;
+
+ case PGSTAT_VACUUM_SKIP_NONTARGET:
+ default:
+ /* don't increment failure count for non-target tables */
+			break;
 	}
 }
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 8d9e7c1..bddc243 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -23,6 +23,7 @@
 #include "pgstat.h"
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/autovacuum.h"
 #include "storage/proc.h"
 #include "storage/procarray.h"
 #include "utils/acl.h"
@@ -194,6 +195,156 @@ pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS)
 	PG_RETURN_INT64(result);
 }
+Datum
+pg_stat_get_vacuum_necessity(PG_FUNCTION_ARGS)
+{
+ Oid relid = PG_GETARG_OID(0);
+
+ PG_RETURN_TEXT_P(cstring_to_text(AutoVacuumRequirement(relid)));
+}
+
+Datum
+pg_stat_get_last_vacuum_truncated(PG_FUNCTION_ARGS)
+{
+ Oid relid = PG_GETARG_OID(0);
+ int64 result;
+ PgStat_StatTabEntry *tabentry;
+
+ if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+ result = 0;
+ else
+ result = (int64) (tabentry->n_pages_removed);
+
+ PG_RETURN_INT64(result);
+}
+
+Datum
+pg_stat_get_last_vacuum_untruncated(PG_FUNCTION_ARGS)
+{
+ Oid relid = PG_GETARG_OID(0);
+ int64 result;
+ PgStat_StatTabEntry *tabentry;
+
+ if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+ result = 0;
+ else
+ result = (int64) (tabentry->n_pages_not_removed);
+
+ PG_RETURN_INT64(result);
+}
+
+Datum
+pg_stat_get_last_vacuum_index_scans(PG_FUNCTION_ARGS)
+{
+ Oid relid = PG_GETARG_OID(0);
+ int32 result;
+ PgStat_StatTabEntry *tabentry;
+
+ if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+ result = 0;
+ else
+ result = (int32) (tabentry->n_index_scans);
+
+ PG_RETURN_INT32(result);
+}
+
+Datum
+pg_stat_get_last_vacuum_oldest_xmin(PG_FUNCTION_ARGS)
+{
+ Oid relid = PG_GETARG_OID(0);
+ TransactionId result;
+ PgStat_StatTabEntry *tabentry;
+
+ if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+ result = InvalidTransactionId;
+ else
+		result = (TransactionId) (tabentry->oldest_xmin);
+
+ return TransactionIdGetDatum(result);
+}
+
+Datum
+pg_stat_get_last_vacuum_status(PG_FUNCTION_ARGS)
+{
+ Oid relid = PG_GETARG_OID(0);
+ char *result = "unknown";
+ PgStat_StatTabEntry *tabentry;
+
+ /*
+ * status string. this must be synced with the strings shown by the
+ * statistics view "pg_stat_progress_vacuum"
+ */
+ static char *phasestr[] =
+ {"initialization",
+ "scanning heap",
+ "vacuuming indexes",
+ "vacuuming heap",
+ "cleaning up indexes",
+	 "truncating heap",
+ "performing final cleanup"};
+
+ if ((tabentry = pgstat_fetch_stat_tabentry(relid)) != NULL)
+ {
+ int phase;
+ StatVacuumStatus status;
+
+ status = tabentry->vacuum_status;
+ switch (status)
+ {
+ case PGSTAT_VACUUM_FINISHED:
+ result = "completed";
+ break;
+ case PGSTAT_VACUUM_ERROR:
+ case PGSTAT_VACUUM_CANCELED:
+ phase = tabentry->vacuum_last_phase;
+ /* phasestr above has 7 elements */
+ if (phase >= 0 && phase < 7)
+ result = psprintf("%s while %s",
+ status == PGSTAT_VACUUM_CANCELED ?
+ "canceled" : "error",
+ phasestr[phase]);
+ else
+ result = psprintf("unknown vacuum phase: %d", phase);
+ break;
+ case PGSTAT_VACUUM_SKIP_LOCK_FAILED:
+ result = "skipped - lock unavailable";
+ break;
+
+ case PGSTAT_VACUUM_AGGRESSIVE_FINISHED:
+ result = "aggressive vacuum completed";
+ break;
+
+ case PGSTAT_VACUUM_FULL_FINISHED:
+ result = "vacuum full completed";
+ break;
+
+ case PGSTAT_VACUUM_SKIP_NONTARGET:
+ result = "unvacuumable";
+ break;
+
+ default:
+ result = "unknown status";
+ break;
+ }
+ }
+
+ PG_RETURN_TEXT_P(cstring_to_text(result));
+}
+
+Datum
+pg_stat_get_autovacuum_fail_count(PG_FUNCTION_ARGS)
+{
+ Oid relid = PG_GETARG_OID(0);
+ int32 result;
+ PgStat_StatTabEntry *tabentry;
+
+ if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+ result = 0;
+ else
+ result = (int32) (tabentry->vacuum_failcount);
+
+ PG_RETURN_INT32(result);
+}Datumpg_stat_get_blocks_fetched(PG_FUNCTION_ARGS)
@@ -210,7 +361,6 @@ pg_stat_get_blocks_fetched(PG_FUNCTION_ARGS) PG_RETURN_INT64(result);}
-Datumpg_stat_get_blocks_hit(PG_FUNCTION_ARGS){
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 93c031a..5a1c77d 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2887,6 +2887,20 @@ DATA(insert OID = 3317 ( pg_stat_get_wal_receiver PGNSP PGUID 12 1 0 0 0 f f f f
 DESCR("statistics: information about WAL receiver");
 DATA(insert OID = 6118 ( pg_stat_get_subscription PGNSP PGUID 12 1 0 0 0 f f f f f f s r 1 0 2249 "26" "{26,26,26,23,3220,1184,1184,3220,1184}" "{i,o,o,o,o,o,o,o,o}" "{subid,subid,relid,pid,received_lsn,last_msg_send_time,last_msg_receipt_time,latest_end_lsn,latest_end_time}" _null_ _null_ pg_stat_get_subscription _null_ _null_ _null_ ));
 DESCR("statistics: information about subscription");
+DATA(insert OID = 3419 ( pg_stat_get_vacuum_necessity PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 25 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_vacuum_necessity _null_ _null_ _null_ ));
+DESCR("statistics: true if the table needs vacuum");
+DATA(insert OID = 3420 ( pg_stat_get_last_vacuum_untruncated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_last_vacuum_untruncated _null_ _null_ _null_ ));
+DESCR("statistics: pages left untruncated by the last vacuum");
+DATA(insert OID = 3421 ( pg_stat_get_last_vacuum_truncated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_last_vacuum_truncated _null_ _null_ _null_ ));
+DESCR("statistics: pages truncated by the last vacuum");
+DATA(insert OID = 3422 ( pg_stat_get_last_vacuum_index_scans PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 23 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_last_vacuum_index_scans _null_ _null_ _null_ ));
+DESCR("statistics: number of index scans in the last vacuum");
+DATA(insert OID = 3423 ( pg_stat_get_last_vacuum_oldest_xmin PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 28 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_last_vacuum_oldest_xmin _null_ _null_ _null_ ));
+DESCR("statistics: oldest xmin used by the last vacuum");
+DATA(insert OID = 3424 ( pg_stat_get_last_vacuum_status PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 25 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_last_vacuum_status _null_ _null_ _null_ ));
+DESCR("statistics: ending status of the last vacuum");
+DATA(insert OID = 3425 ( pg_stat_get_autovacuum_fail_count PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 23 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_autovacuum_fail_count _null_ _null_ _null_ ));
+DESCR("statistics: number of consecutive failed vacuum attempts");
 DATA(insert OID = 2026 ( pg_backend_pid PGNSP PGUID 12 1 0 0 0 f f f f t f s r 0 0 23 "" _null_ _null_ _null_ _null_ _null_ pg_backend_pid _null_ _null_ _null_ ));
 DESCR("statistics: current backend PID");
 DATA(insert OID = 1937 ( pg_stat_get_backend_pid PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 23 "23" _null_ _null_ _null_ _null_ _null_ pg_stat_get_backend_pid _null_ _null_ _null_ ));
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 7a7b793..6091bab 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -182,13 +182,15 @@ extern void vacuum_set_xid_limits(Relation rel,
 TransactionId *freezeLimit,
 TransactionId *xidFullScanLimit,
 MultiXactId *multiXactCutoff,
- MultiXactId *mxactFullScanLimit);
+ MultiXactId *mxactFullScanLimit,
+ bool *aggressive, bool *in_wa_window);
 extern void vac_update_datfrozenxid(void);
 extern void vacuum_delay_point(void);
 
 /* in commands/vacuumlazy.c */
 extern void lazy_vacuum_rel(Relation onerel, int options,
 VacuumParams *params, BufferAccessStrategy bstrategy);
+extern void lazy_vacuum_cancel_handler(void);
 
 /* in commands/analyze.c */
 extern void analyze_rel(Oid relid, RangeVar *relation, int options,
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 089b7c3..bab8332 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -67,6 +67,20 @@ typedef enum StatMsgType
 PGSTAT_MTYPE_DEADLOCK
 } StatMsgType;
+/*
+ * The exit status stored in a vacuum report.
+ */
+typedef enum StatVacuumStatus
+{
+ PGSTAT_VACUUM_FINISHED,
+ PGSTAT_VACUUM_CANCELED,
+ PGSTAT_VACUUM_ERROR,
+ PGSTAT_VACUUM_SKIP_LOCK_FAILED,
+ PGSTAT_VACUUM_SKIP_NONTARGET,
+ PGSTAT_VACUUM_AGGRESSIVE_FINISHED,
+ PGSTAT_VACUUM_FULL_FINISHED
+} StatVacuumStatus;
+
 /* ----------
  * The data type used for counters.
  * ----------
@@ -369,6 +383,13 @@ typedef struct PgStat_MsgVacuum
 TimestampTz m_vacuumtime;
 PgStat_Counter m_live_tuples;
 PgStat_Counter m_dead_tuples;
+ PgStat_Counter m_pages_removed;
+ PgStat_Counter m_pages_not_removed;
+ PgStat_Counter m_num_index_scans;
+ TransactionId m_oldest_xmin;
+ PgStat_Counter m_vacuum_status;
+ PgStat_Counter m_vacuum_last_phase;
+ PgStat_Counter m_vacuum_errcode;
 } PgStat_MsgVacuum;
@@ -629,6 +650,10 @@ typedef struct PgStat_StatTabEntry
 PgStat_Counter n_live_tuples;
 PgStat_Counter n_dead_tuples;
 PgStat_Counter changes_since_analyze;
+ PgStat_Counter n_pages_removed;
+ PgStat_Counter n_pages_not_removed;
+ PgStat_Counter n_index_scans;
+ TransactionId oldest_xmin;
 PgStat_Counter blocks_fetched;
 PgStat_Counter blocks_hit;
@@ -641,6 +666,11 @@ typedef struct PgStat_StatTabEntry
 PgStat_Counter analyze_count;
 TimestampTz autovac_analyze_timestamp; /* autovacuum initiated */
 PgStat_Counter autovac_analyze_count;
+
+ PgStat_Counter vacuum_status;
+ PgStat_Counter vacuum_last_phase;
+ PgStat_Counter vacuum_errcode;
+ PgStat_Counter vacuum_failcount;
 } PgStat_StatTabEntry;
@@ -1165,7 +1195,13 @@ extern void pgstat_reset_single_counter(Oid objectid, PgStat_Single_Reset_Type t
 extern void pgstat_report_autovac(Oid dboid);
 extern void pgstat_report_vacuum(Oid tableoid, bool shared,
- PgStat_Counter livetuples, PgStat_Counter deadtuples);
+ PgStat_Counter livetuples, PgStat_Counter deadtuples,
+ PgStat_Counter pages_removed,
+ PgStat_Counter pages_not_removed,
+ PgStat_Counter num_index_scans,
+ TransactionId oldestxmin,
+ PgStat_Counter status, PgStat_Counter last_phase,
+ PgStat_Counter errcode);
 extern void pgstat_report_analyze(Relation rel,
 PgStat_Counter livetuples, PgStat_Counter deadtuples, bool resetcounter);
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 3469915..848a322 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -49,6 +49,7 @@ extern int Log_autovacuum_min_duration;
 extern bool AutoVacuumingActive(void);
 extern bool IsAutoVacuumLauncherProcess(void);
 extern bool IsAutoVacuumWorkerProcess(void);
+extern char *AutoVacuumRequirement(Oid reloid);
 
 #define IsAnyAutoVacuumProcess() \
 (IsAutoVacuumLauncherProcess() || IsAutoVacuumWorkerProcess())
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index f1c1b44..fb1ea49 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1759,11 +1759,18 @@ pg_stat_all_tables| SELECT c.oid AS relid,
 pg_stat_get_live_tuples(c.oid) AS n_live_tup,
 pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
 pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
+ pg_stat_get_vacuum_necessity(c.oid) AS vacuum_required,
 pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum,
 pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum,
 pg_stat_get_last_analyze_time(c.oid) AS last_analyze,
 pg_stat_get_last_autoanalyze_time(c.oid) AS last_autoanalyze,
 pg_stat_get_vacuum_count(c.oid) AS vacuum_count,
+ pg_stat_get_last_vacuum_truncated(c.oid) AS last_vacuum_truncated,
+ pg_stat_get_last_vacuum_untruncated(c.oid) AS last_vacuum_untruncated,
+ pg_stat_get_last_vacuum_index_scans(c.oid) AS last_vacuum_index_scans,
+ pg_stat_get_last_vacuum_oldest_xmin(c.oid) AS last_vacuum_oldest_xmin,
+ pg_stat_get_last_vacuum_status(c.oid) AS last_vacuum_status,
+ pg_stat_get_autovacuum_fail_count(c.oid) AS autovacuum_fail_count,
 pg_stat_get_autovacuum_count(c.oid) AS autovacuum_count,
 pg_stat_get_analyze_count(c.oid) AS analyze_count,
 pg_stat_get_autoanalyze_count(c.oid) AS autoanalyze_count
@@ -1906,11 +1913,18 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
 pg_stat_all_tables.n_live_tup,
 pg_stat_all_tables.n_dead_tup,
 pg_stat_all_tables.n_mod_since_analyze,
+ pg_stat_all_tables.vacuum_required,
 pg_stat_all_tables.last_vacuum,
 pg_stat_all_tables.last_autovacuum,
 pg_stat_all_tables.last_analyze,
 pg_stat_all_tables.last_autoanalyze,
 pg_stat_all_tables.vacuum_count,
+ pg_stat_all_tables.last_vacuum_truncated,
+ pg_stat_all_tables.last_vacuum_untruncated,
+ pg_stat_all_tables.last_vacuum_index_scans,
+ pg_stat_all_tables.last_vacuum_oldest_xmin,
+ pg_stat_all_tables.last_vacuum_status,
+ pg_stat_all_tables.autovacuum_fail_count,
 pg_stat_all_tables.autovacuum_count,
 pg_stat_all_tables.analyze_count,
 pg_stat_all_tables.autoanalyze_count
@@ -1949,11 +1963,18 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
 pg_stat_all_tables.n_live_tup,
 pg_stat_all_tables.n_dead_tup,
 pg_stat_all_tables.n_mod_since_analyze,
+ pg_stat_all_tables.vacuum_required,
 pg_stat_all_tables.last_vacuum,
 pg_stat_all_tables.last_autovacuum,
 pg_stat_all_tables.last_analyze,
 pg_stat_all_tables.last_autoanalyze,
 pg_stat_all_tables.vacuum_count,
+ pg_stat_all_tables.last_vacuum_truncated,
+ pg_stat_all_tables.last_vacuum_untruncated,
+ pg_stat_all_tables.last_vacuum_index_scans,
+ pg_stat_all_tables.last_vacuum_oldest_xmin,
+ pg_stat_all_tables.last_vacuum_status,
+ pg_stat_all_tables.autovacuum_fail_count,
 pg_stat_all_tables.autovacuum_count,
 pg_stat_all_tables.analyze_count,
 pg_stat_all_tables.autoanalyze_count
--
2.9.2
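
For illustration only (not part of the patch): assuming the patch is applied, the new pg_stat_user_tables columns wired up in the rules.out changes above could be examined with a query along these lines. The table name here is a hypothetical example.

```sql
-- Hypothetical usage sketch: inspect the vacuum-related columns this
-- patch adds to pg_stat_user_tables (table name is an example).
SELECT relname,
       vacuum_required,          -- text from pg_stat_get_vacuum_necessity()
       last_vacuum_truncated,    -- pages truncated by the last vacuum
       last_vacuum_untruncated,  -- pages that could not be truncated
       last_vacuum_index_scans,  -- index scans performed by the last vacuum
       last_vacuum_oldest_xmin,  -- oldest xmin the last vacuum used
       last_vacuum_status,       -- e.g. "completed", "canceled while scanning heap"
       autovacuum_fail_count     -- consecutive failed vacuum attempts
  FROM pg_stat_user_tables
 WHERE relname = 'pgbench_accounts';
```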
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers