Re: pg_stat_all_tables.vacuum_count corrupted after pg_stat_reset() - Mailing list pgsql-bugs
From: Tom Lane
Subject: Re: pg_stat_all_tables.vacuum_count corrupted after pg_stat_reset()
Msg-id: 24996.1299512283@sss.pgh.pa.us
In response to: pg_stat_all_tables.vacuum_count corrupted after pg_stat_reset() (Itagaki Takahiro <itagaki.takahiro@gmail.com>)
Responses: Re: pg_stat_all_tables.vacuum_count corrupted after pg_stat_reset()
List: pgsql-bugs
Itagaki Takahiro <itagaki.takahiro@gmail.com> writes:
> Some of pg_stat_all_tables.vacuum_count, autovacuum_count, analyze_count
> and autoanalyze_count will be filled with garbages after pg_stat_reset().
> Not all of the tables, but some of tables had corrupted counters.
> It looks we access freed memory. 9187201950435737471 is 7F7F7F7F7F7F7F7F
> filled by CLOBBER_FREED_MEMORY.
> with PG 9.1dev (dfe18f18d262df731378cbf38a4136f08ca119a5) on 64bit Linux
> build with --enable-cassert and --enable-debug.

Hmm ... I see some pretty bogus looking results even without having done
any pg_stat_reset. In the regression database:

regression=# SELECT * FROM pg_stat_all_tables WHERE vacuum_count > 10000;

 relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze | vacuum_count | autovacuum_count | analyze_count | autoanalyze_count
 28886 | public | rtest_t6      | 5  | 3     |   |   | 2     | 0 | 1 | 0 | 1     | 1 |                               | | |                               | 51389285531657      | 0                  | 744            | 449
 16959 | public | hash_f8_heap  | 5  | 50000 | 5 | 4 | 10000 | 2 | 0 | 0 | 10000 | 2 | 2011-03-06 21:00:40.360407-05 | | | 2011-03-06 21:01:13.400686-05 | 648518346341351425  | 0                  | 0              | 1
 16872 | public | stud_emp      | 15 | 45    |   |   | 6     | 0 | 3 | 0 | 3     | 3 | 2011-03-06 21:00:39.925669-05 | | |                               | 6052837939400933377 | 144115188075855872 | 0              | 0
 16953 | public | hash_txt_heap | 2  | 20000 | 5 | 4 | 10000 | 2 | 0 | 0 | 10000 | 2 | 2011-03-06 21:00:40.322777-05 | | | 2011-03-06 21:01:10.255983-05 | 16044073672507393   | 0                  | 0              | 1
 16860 | public | emp           | 11 | 33    |   |   | 3     | 0 | 0 | 0 | 3     | 0 | 2011-03-06 21:00:39.872212-05 | | |                               | 2199023255553       | 0                  | 0              | 0
 27091 | public | check2_tbl    | 1  | 2     |   |   | 2     | 0 | 0 | 0 | 2     | 0 | 2011-03-06 21:00:40.676605-05 | | |                               | 17592186044417      | 0                  | 17592186044416 | 17592186044416
(6 rows)

Even more interesting, although template1 looks OK, right after switching
into a freshly created database I see:

z1=# SELECT * FROM pg_stat_all_tables WHERE vacuum_count > 10000;

 relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze | vacuum_count | autovacuum_count | analyze_count | autoanalyze_count
 2601 | pg_catalog | pg_am         | 2 | 2   | 0  | 0  | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 1255 | pg_catalog | pg_proc       | 0 | 0   | 23 | 23 | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2617 | pg_catalog | pg_operator   | 0 | 0   | 6  | 6  | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 1249 | pg_catalog | pg_attribute  | 0 | 0   | 55 | 84 | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2619 | pg_catalog | pg_statistic  | 0 | 0   | 8  | 6  | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 1247 | pg_catalog | pg_type       | 0 | 0   | 13 | 13 | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2618 | pg_catalog | pg_rewrite    | 0 | 0   | 1  | 1  | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2602 | pg_catalog | pg_amop       | 0 | 0   | 8  | 10 | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2616 | pg_catalog | pg_opclass    | 0 | 0   | 3  | 3  | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2603 | pg_catalog | pg_amproc     | 0 | 0   | 5  | 5  | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2600 | pg_catalog | pg_aggregate  | 0 | 0   | 1  | 1  | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2610 | pg_catalog | pg_index      | 1 | 109 | 7  | 10 | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 1259 | pg_catalog | pg_class      | 5 | 916 | 13 | 13 | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2838 | pg_toast   | pg_toast_2618 | 0 | 0   | 1  | 3  | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
 2615 | pg_catalog | pg_namespace  | 1 | 6   | 2  | 1  | 0 | 0 | 0 | 0 | 0 | 0 | | | | | 9187201950435737471 | 9187201950435737471 | 9187201950435737471 | 9187201950435737471
(15 rows)

So yeah, something is pretty hosed here, but I don't think it's
pg_stat_reset's fault ...

			regards, tom lane
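[Editor's note, not part of the original thread: the report's claim that 9187201950435737471 is the 0x7F7F7F7F7F7F7F7F pattern written by CLOBBER_FREED_MEMORY can be checked directly. A minimal Python sketch, reading eight 0x7F bytes as a signed 64-bit integer the way the corrupted int64 counters would be read:]

```python
# Verify that the "garbage" counter value seen in the thread matches
# memory overwritten with 0x7F bytes, the fill pattern PostgreSQL's
# CLOBBER_FREED_MEMORY debug option writes into freed chunks.
import struct

# Interpret 8 bytes of 0x7F as a little-endian signed 64-bit integer,
# mimicking how a clobbered int64 counter would be read back.
clobbered = struct.unpack("<q", b"\x7f" * 8)[0]

print(hex(clobbered))  # 0x7f7f7f7f7f7f7f7f
print(clobbered)       # 9187201950435737471
```

The other smaller garbage values (e.g. 17592186044417 = 2**44 + 1) are consistent with counters that were only partially overwritten, which fits the freed-memory hypothesis rather than a simple reset bug.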