Re: pg_stat_statements: calls under-estimation propagation - Mailing list pgsql-hackers
From: Peter Geoghegan
Subject: Re: pg_stat_statements: calls under-estimation propagation
Msg-id: CAM3SWZQiMfsegWZzsuX5U_DsfHz=aFRVbVf=QOgr=geBv6xT+w@mail.gmail.com
In response to: Re: pg_stat_statements: calls under-estimation propagation (Sameer Thakur <samthakur74@gmail.com>)
List: pgsql-hackers
On Mon, Nov 18, 2013 at 1:54 AM, Sameer Thakur <samthakur74@gmail.com> wrote:
> Please find v10 of patch attached. This patch addresses following
> review comments

I've cleaned this up - revision attached - and marked it "ready for committer".

I decided that queryid should be of type oid, not bigint. This is arguably a slight abuse of notation, but since ultimately Oids are just abstract object identifiers (so say the docs), and since there is no other convenient, minimal way of representing unsigned 32-bit integers in the view that I'm aware of, I'm inclined to think that it's appropriate. In passing, I've made pg_stat_statements invalidate serialized entries if there is a change in major version. This seems desirable as a catch-all invalidator of entries.

I note that Tom has objected to exposing the queryid in the past, on numerous occasions. I'm more confident than ever that it's actually the right thing to do. I've had people I don't know walk up to me at conferences and ask why we don't already expose this at least twice now. There are very strong indications that many people want this, and given that I've documented the caveats, I think that we should trust those calling for this. At the very least, it allows people to GROUP BY queryid, when they don't want things broken out by userid.

We're self-evidently already effectively relying on the queryid to be as stable as it is documented to be in this patch. The hash function cannot really change in minor releases, because to do so would at the very least necessitate re-indexing hash indexes, and would of course invalidate internally managed pg_stat_statements entries, both of which are undesirable outcomes (and therefore, for these reasons and more, unlikely). Arguments for not documenting hash_any() do not apply here -- we're already suffering the full consequences of whatever queryid instability may exist.
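(A minimal Python sketch of the representation point above, not PostgreSQL source: the query hash is an unsigned 32-bit value, and a signed 32-bit column would display the upper half of that range as negative numbers, whereas oid keeps the unsigned reading. The constant below is illustrative, not a real queryid.)

```python
# Sketch: why an unsigned 32-bit hash is awkward in a signed int4 column.
# Values >= 2^31 wrap to negative when reinterpreted as signed; an oid
# (unsigned in Postgres) preserves the value as computed.

def as_signed_int32(u: int) -> int:
    """Interpret an unsigned 32-bit value the way a signed int4 would store it."""
    return u - (1 << 32) if u >= (1 << 31) else u

h = 0xDEADBEEF          # illustrative hash value: 3735928559 as unsigned
print(h)                # unsigned (oid-style) reading: 3735928559
print(as_signed_int32(h))  # signed int4 reading: -559038737
```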
Quite apart from all of that, I think we need to have a way of identifying particular entries for the purposes of supporting per-entry "settings". Recent discussion about min/max query time, or somehow characterizing the distribution of each entry's historic execution time (or whatever), has not considered one important question: what are you supposed to do when you find out that there is an outlier (whatever an outlier is)?

I won't go into the details, because there is little point, but I'm reasonably confident that it will be virtually impossible for pg_stat_statements itself to usefully classify particular query executions as outliers. (I'm not even sure that we could do it if we assumed a normal distribution, which would be bogus, and certainly made very noisy by caching effects and so on. Furthermore, who are we to say that an outlier is an execution time two sigmas to the right? Seems useless.) Outliers are typically caused by things like bad plans, or problematic constant values that appear in the most common values list (and are therefore just inherently far more expensive to query against), or lock contention. In all of those cases, with a min/max or something we probably won't even get to know what the problematic constant values were when response time suddenly suffers, because of course pg_stat_statements doesn't help with that. So have we gained much? Even with detective work, the trail might have gone cold by the time the outlier is examined. And monitoring is only one concern -- what about alerting?

The bigger point is that having this will facilitate being able to mark entries as "SLA queries" or something like that, where if their execution exceeds a time (specified by the DBA per entry), that is assumed to be very bad, and pg_stat_statements complains. That gets dumped to the logs (which ought to be a rare occurrence when the facility is used correctly).
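(A hypothetical sketch of the per-entry SLA idea, keyed by queryid, with a greppable keyword when the DBA-specified limit is exceeded. None of these names or values come from the patch; they are illustrative only.)

```python
# Hypothetical sketch: per-entry SLA limits keyed by queryid. When an
# execution exceeds its entry's limit, emit a greppable log line; quiet
# otherwise. Names and values are made up for illustration.
SLA_MS = {0x1234ABCD: 50.0}  # DBA-specified limit, per entry

def check_execution(queryid: int, elapsed_ms: float, log: list) -> None:
    """Append a greppable SLA_VIOLATION line when the limit is exceeded."""
    limit = SLA_MS.get(queryid)
    if limit is not None and elapsed_ms > limit:
        log.append(f"SLA_VIOLATION queryid={queryid:#x} "
                   f"elapsed_ms={elapsed_ms:.1f} limit_ms={limit:.1f}")

log = []
check_execution(0x1234ABCD, 72.4, log)  # over limit: logged
check_execution(0x1234ABCD, 12.0, log)  # under limit: silent
check_execution(0x0BADF00D, 999.0, log)  # no SLA set: silent
print(log)
```

A tool like tail_n_mail (or any grep-based alerting) could then match on the `SLA_VIOLATION` keyword, which is the "monitoring vs. alerting" distinction drawn above.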
Of course, the (typically particularly problematic) constant values *do* appear in the logs, and there is a greppable keyword, potentially for the benefit of a tool like tail_n_mail. You could think of this as being like a smart log_min_duration_statement. I think that the DBA needs to tell pg_stat_statements what to care about, and what bad looks like. If the DBA doesn't know where to start specifying such things, the 5 queries with the most calls can usefully have this set to (mean_execution_time * 1.5) or something. SLA queries can also be "pinned", perhaps (that is, given a "stay of execution" when eviction occurs).

--
Peter Geoghegan