Thread: What is a typical precision of gettimeofday()?

What is a typical precision of gettimeofday()?

From
Peter Eisentraut
Date:
Over in the thread discussing the addition of UUIDv7 support [0], there 
is some uncertainty about what timestamp precision one can expect from 
gettimeofday().

UUIDv7 uses milliseconds since the Unix epoch and can optionally use up 
to 12 additional bits of timestamp precision (see [1]), but it can also 
just use a counter instead of the extra precision.  The current patch 
uses the counter method "because of portability concerns" (source code 
comment).

I feel that we don't actually have any information about this 
portability concern.  Does anyone know what precision we can expect from 
gettimeofday()?  Can we expect the full microsecond precision usually?


[0]: 
https://www.postgresql.org/message-id/flat/CAAhFRxitJv=yoGnXUgeLB_O+M7J2BJAmb5jqAT9gZ3bij3uLDA@mail.gmail.com
[1]: 
https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis#section-6.2-5.6.1
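
For illustration, here is a minimal sketch (not the patch's code; names are made up) of how those 12 extra bits could be derived from gettimeofday() microseconds, per the draft's optional sub-millisecond method:

#include <stdint.h>
#include <sys/time.h>

/*
 * Hedged sketch, not from the patch: scale the microseconds within the
 * current millisecond (0..999) up to the optional 12-bit
 * sub-millisecond field that the draft allows in UUIDv7.
 */
static uint16_t
uuidv7_sub_ms_bits(void)
{
	struct timeval tv;
	uint32_t	us_in_ms;

	gettimeofday(&tv, NULL);
	us_in_ms = (uint32_t) (tv.tv_usec % 1000);
	/* floor(us_in_ms * 4096 / 1000); always fits in 12 bits */
	return (uint16_t) ((us_in_ms << 12) / 1000);
}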



Re: What is a typical precision of gettimeofday()?

From
Aleksander Alekseev
Date:
Hi,

cc: Andrey

> Over in the thread discussing the addition of UUIDv7 support [0], there
> is some uncertainty about what timestamp precision one can expect from
> gettimeofday().
>
> UUIDv7 uses milliseconds since the Unix epoch and can optionally use up to
> 12 additional bits of timestamp precision (see [1]), but it can also
> just use a counter instead of the extra precision.  The current patch
> uses the counter method "because of portability concerns" (source code
> comment).
>
> I feel that we don't actually have any information about this
> portability concern.  Does anyone know what precision we can expect from
> gettimeofday()?  Can we expect the full microsecond precision usually?

Specifically in the UUIDv7 application the goal is not so much to
generate time-precise UUIDs as to do our best to get *unique* UUIDs.
As I understand it, this is the actual reason why the patch needs
counters.

As the Linux man page puts it:

"""
The time returned by gettimeofday() is affected by discontinuous jumps
in the system  time  (e.g.,  if  the  system  administrator  manually
changes the system time).
"""

On top of that, the macOS man page says:

"""
The resolution of the system clock is hardware dependent, and the time
may be updated continuously or in ``ticks.''
"""

On Windows our gettimeofday() implementation is a wrapper for
GetSystemTimePreciseAsFileTime(). The corresponding MSDN page [1] is
somewhat laconic.

Considering the number of environments PostgreSQL can run in (OS +
hardware + virtualization technologies) and the fact that
hardware/software changes over time, I doubt that it's realistic to
expect any particular guarantees from gettimeofday() in the general case.

[1]: https://learn.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getsystemtimepreciseasfiletime


--
Best regards,
Aleksander Alekseev



Re: What is a typical precision of gettimeofday()?

From
Peter Eisentraut
Date:
On 19.03.24 10:38, Aleksander Alekseev wrote:
> Considering the number of environments PostgreSQL can run in (OS +
> hardware + virtualization technologies) and the fact that
> hardware/software changes over time, I doubt that it's realistic to
> expect any particular guarantees from gettimeofday() in the general case.

If we want to be robust without any guarantees from gettimeofday(), then 
arguably gettimeofday() is not the right underlying function to use for 
UUIDv7.  I'm not arguing that; I think we can assume some reasonable 
baseline for what gettimeofday() produces.  But it would be good to get 
some information about what that might be.

Btw., here is util-linux saying

     /* Assume that the gettimeofday() has microsecond granularity */

https://github.com/util-linux/util-linux/blob/master/libuuid/src/gen_uuid.c#L232
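
One way to sanity-check that assumption on a given box is to look for the smallest nonzero step between consecutive gettimeofday() calls; a quick sketch (assuming nothing beyond POSIX):

#include <stdio.h>
#include <sys/time.h>

int
main(void)
{
	struct timeval prev,
				cur;
	long		min_step = 1000000;

	gettimeofday(&prev, NULL);
	for (int i = 0; i < 1000000; i++)
	{
		long		step;

		gettimeofday(&cur, NULL);
		step = (cur.tv_sec - prev.tv_sec) * 1000000L +
			(cur.tv_usec - prev.tv_usec);
		/* keep the smallest nonzero delta we ever observe */
		if (step > 0 && step < min_step)
			min_step = step;
		prev = cur;
	}
	printf("smallest observed step: %ld us\n", min_step);
	return 0;
}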




Re: What is a typical precision of gettimeofday()?

From
Jelte Fennema-Nio
Date:
On Wed, 20 Mar 2024 at 07:35, Peter Eisentraut <peter@eisentraut.org> wrote:
> If we want to be robust without any guarantees from gettimeofday(), then
> arguably gettimeofday() is not the right underlying function to use for
> UUIDv7.

There's also clock_gettime(), which exposes its resolution via clock_getres().
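
For what it's worth, querying the advertised resolution is a one-liner; whether the clock actually delivers that is another matter. A minimal sketch:

#include <stdio.h>
#include <time.h>

int
main(void)
{
	struct timespec res;

	/* advertised (not necessarily delivered) resolution */
	if (clock_getres(CLOCK_REALTIME, &res) == 0)
		printf("CLOCK_REALTIME resolution: %ld.%09ld s\n",
			   (long) res.tv_sec, res.tv_nsec);
	return 0;
}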



Re: What is a typical precision of gettimeofday()?

From
"Andrey M. Borodin"
Date:

> On 19 Mar 2024, at 13:28, Peter Eisentraut <peter@eisentraut.org> wrote:
>
> I feel that we don't actually have any information about this portability concern.  Does anyone know what precision we can expect from gettimeofday()?  Can we expect the full microsecond precision usually?

At PGConf.dev Hannu Krosing drew attention to the pg_test_timing module. I’ve tried this module (slightly modified to measure nanoseconds) on some systems, and everywhere I found ~100ns resolution (95% of ticks fall into the 64ns and 128ns buckets).

I’ll add cc Hannu, and also pg_test_timing module authors Ants and Greg. Maybe they can add some context.


Best regards, Andrey Borodin.


Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
I plan to send a patch to pg_test_timing in a day or two.

the underlying time precision on modern Linux seems to be:

2 ns for some Intel CPUs
10 ns for Zen4
40 ns for ARM (Ampere)

---
Hannu



On Tue, Jun 18, 2024 at 7:48 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:


> > On 19 Mar 2024, at 13:28, Peter Eisentraut <peter@eisentraut.org> wrote:
> >
> > I feel that we don't actually have any information about this portability concern.  Does anyone know what precision we can expect from gettimeofday()?  Can we expect the full microsecond precision usually?
>
> At PGConf.dev Hannu Krosing drew attention to the pg_test_timing module. I’ve tried this module (slightly modified to measure nanoseconds) on some systems, and everywhere I found ~100ns resolution (95% of ticks fall into the 64ns and 128ns buckets).
>
> I’ll add cc Hannu, and also pg_test_timing module authors Ants and Greg. Maybe they can add some context.
>
>
> Best regards, Andrey Borodin.

Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
Here is the output of nanosecond-precision pg_test_timing on an M1 MacBook Air:

/work/postgres/src/bin/pg_test_timing % ./pg_test_timing
Testing timing overhead for 3 seconds.
Per loop time including overhead: 21.54 ns
Histogram of timing durations:
   <= ns   % of total  running %      count
       0      49.1655    49.1655   68481688
       1       0.0000    49.1655          0
       3       0.0000    49.1655          0
       7       0.0000    49.1655          0
      15       0.0000    49.1655          0
      31       0.0000    49.1655          0
      63      50.6890    99.8545   70603742
     127       0.1432    99.9976     199411
     255       0.0015    99.9991       2065
     511       0.0001    99.9992         98
    1023       0.0001    99.9993        140
    2047       0.0002    99.9995        284
    4095       0.0000    99.9995         50
    8191       0.0000    99.9996         65
   16383       0.0002    99.9997        240
   32767       0.0001    99.9998        128
   65535       0.0001    99.9999         97
  131071       0.0000    99.9999         58
  262143       0.0000   100.0000         44
  524287       0.0000   100.0000         22
 1048575       0.0000   100.0000          7
 2097151       0.0000   100.0000          2
First 128 exact nanoseconds:
       0      49.1655    49.1655   68481688
      41      16.8964    66.0619   23534708
      42      33.7926    99.8545   47069034
      83       0.0835    99.9380     116362
      84       0.0419    99.9799      58349
     125       0.0177    99.9976      24700

As you can see, the 40 ns internal tick somehow gets blurred into a
not-quite-40-ns timing step.

On Linux / ARM (Ampere), where __builtin_readcyclecounter() works (it
compiles but crashes on macOS M1; I have not yet tested on Linux M1),
the tick is exactly 40 ns, and I'd expect it to be the same on M1.
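
For reference, the raw read is just the Clang builtin; a minimal sketch (Clang-only, and per the above it may crash on Apple Silicon):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* two back-to-back reads of the CPU cycle / virtual counter */
	uint64_t	a = __builtin_readcyclecounter();
	uint64_t	b = __builtin_readcyclecounter();

	printf("counter delta: %llu ticks\n", (unsigned long long) (b - a));
	return 0;
}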


On Tue, Jun 18, 2024 at 5:08 PM Hannu Krosing <hannuk@google.com> wrote:
>
> I plan to send patch to pg_test_timing in a day or two
>
> the underlying time precision on modern linux seems to be
>
> 2 ns for some Intel CPUs
> 10 ns for Zen4
> 40 ns for ARM (Ampere)
>
> ---
> Hannu
>
>
>
>
>
>
>
>
> On Tue, Jun 18, 2024 at 7:48 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
>>
>>
>>
>> > On 19 Mar 2024, at 13:28, Peter Eisentraut <peter@eisentraut.org> wrote:
>> >
>> > I feel that we don't actually have any information about this portability concern.  Does anyone know what precision we can expect from gettimeofday()?  Can we expect the full microsecond precision usually?
>>
>> At PGConf.dev Hannu Krosing drew attention to the pg_test_timing module. I’ve tried this module (slightly modified to measure nanoseconds) on some systems, and everywhere I found ~100ns resolution (95% of ticks fall into the 64ns and 128ns buckets).
>>
>> I’ll add cc Hannu, and also pg_test_timing module authors Ants and Greg. Maybe they can add some context.
>>
>>
>> Best regards, Andrey Borodin.



Re: What is a typical precision of gettimeofday()?

From
Peter Eisentraut
Date:
On 18.06.24 07:47, Andrey M. Borodin wrote:
> 
> 
>> On 19 Mar 2024, at 13:28, Peter Eisentraut <peter@eisentraut.org> wrote:
>>
>> I feel that we don't actually have any information about this portability concern.  Does anyone know what precision we can expect from gettimeofday()?  Can we expect the full microsecond precision usually?
> 
> At PGConf.dev Hannu Krosing drew attention to the pg_test_timing module. I’ve tried this module (slightly modified to measure nanoseconds) on some systems, and everywhere I found ~100ns resolution (95% of ticks fall into the 64ns and 128ns buckets).

AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't 
really address the original question.




Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Peter Eisentraut <peter@eisentraut.org> writes:
> AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't 
> really address the original question.

It's not exactly hard to make it do so (see attached).

I tried this on several different machines, and my conclusion is that
gettimeofday() reports full microsecond precision on any platform
anybody is likely to be running PG on today.  Even my one surviving
pet dinosaur, NetBSD 10 on PowerPC Mac (mamba), shows results like
this:

$ ./pg_test_timing
Testing timing overhead for 3 seconds.
Per loop time including overhead: 901.41 ns
Histogram of timing durations:
  < us   % of total      count
     1     10.46074     348148
     2     89.51495    2979181
     4      0.00574        191
     8      0.00430        143
    16      0.00691        230
    32      0.00376        125
    64      0.00012          4
   128      0.00303        101
   256      0.00027          9
   512      0.00009          3
  1024      0.00009          3

I also modified pg_test_timing to measure nanoseconds not
microseconds (second patch attached), and got this:

$ ./pg_test_timing
Testing timing overhead for 3 seconds.
Per loop time including overhead: 805.50 ns
Histogram of timing durations:
  < ns   % of total      count
     1     19.84234     739008
     2      0.00000          0
     4      0.00000          0
     8      0.00000          0
    16      0.00000          0
    32      0.00000          0
    64      0.00000          0
   128      0.00000          0
   256      0.00000          0
   512      0.00000          0
  1024     80.14013    2984739
  2048      0.00078         29
  4096      0.00658        245
  8192      0.00290        108
 16384      0.00252         94
 32768      0.00250         93
 65536      0.00016          6
131072      0.00185         69
262144      0.00008          3
524288      0.00008          3
1048576      0.00008          3

confirming that when the result changes it generally does so by 1usec.

Applying just the second patch, I find that clock_gettime on this
old hardware seems to be limited to 1us resolution, but on my more
modern machines (mac M1, x86_64) it can tick at 40ns or less.
Even a raspberry pi 4 shows

$ ./pg_test_timing 
Testing timing overhead for 3 seconds.
Per loop time including overhead: 69.12 ns
Histogram of timing durations:
  < ns   % of total      count
     1      0.00000          0
     2      0.00000          0
     4      0.00000          0
     8      0.00000          0
    16      0.00000          0
    32      0.00000          0
    64     37.59583   16317040
   128     62.38568   27076131
   256      0.01674       7265
   512      0.00002          8
  1024      0.00000          0
  2048      0.00000          0
  4096      0.00153        662
  8192      0.00019         83
 16384      0.00001          3
 32768      0.00001          5

suggesting that the clock_gettime resolution is better than 64 ns.

So I concur with Hannu that it's time to adjust pg_test_timing to
resolve nanoseconds not microseconds.  I gather he's created a
patch that does more than mine below, so I'll wait for that.

            regards, tom lane

diff --git a/src/include/portability/instr_time.h b/src/include/portability/instr_time.h
index a6fc1922f2..5509d23d2f 100644
--- a/src/include/portability/instr_time.h
+++ b/src/include/portability/instr_time.h
@@ -84,7 +84,7 @@ typedef struct instr_time

 /* Use clock_gettime() */

-#include <time.h>
+#include <sys/time.h>

 /*
  * The best clockid to use according to the POSIX spec is CLOCK_MONOTONIC,
@@ -111,10 +111,10 @@ static inline instr_time
 pg_clock_gettime_ns(void)
 {
     instr_time    now;
-    struct timespec tmp;
+    struct timeval tmp;

-    clock_gettime(PG_INSTR_CLOCK, &tmp);
-    now.ticks = tmp.tv_sec * NS_PER_S + tmp.tv_nsec;
+    gettimeofday(&tmp, NULL);
+    now.ticks = tmp.tv_sec * NS_PER_S + tmp.tv_usec * 1000;

     return now;
 }
diff --git a/src/bin/pg_test_timing/pg_test_timing.c b/src/bin/pg_test_timing/pg_test_timing.c
index c29d6f8762..ea2b565b14 100644
--- a/src/bin/pg_test_timing/pg_test_timing.c
+++ b/src/bin/pg_test_timing/pg_test_timing.c
@@ -133,7 +133,7 @@ test_timing(unsigned int duration)
     total_time = duration > 0 ? duration * INT64CONST(1000000) : 0;

     INSTR_TIME_SET_CURRENT(start_time);
-    cur = INSTR_TIME_GET_MICROSEC(start_time);
+    cur = INSTR_TIME_GET_NANOSEC(start_time);

     while (time_elapsed < total_time)
     {
@@ -142,7 +142,7 @@ test_timing(unsigned int duration)

         prev = cur;
         INSTR_TIME_SET_CURRENT(temp);
-        cur = INSTR_TIME_GET_MICROSEC(temp);
+        cur = INSTR_TIME_GET_NANOSEC(temp);
         diff = cur - prev;

         /* Did time go backwards? */
@@ -183,7 +183,7 @@ output(uint64 loop_count)
 {
     int64        max_bit = 31,
                 i;
-    char       *header1 = _("< us");
+    char       *header1 = _("< ns");
     char       *header2 = /* xgettext:no-c-format */ _("% of total");
     char       *header3 = _("count");
     int            len1 = strlen(header1);

Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
(resending to list and other CC:s )

Hi Tom

This is my current patch, which also adds running % and optionally uses a
faster way to count leading zeros, though I did not see a change from
that.

It also bucketizes the first 128 ns to get a better overview of the exact
behaviour.

We may want to put reporting this behind a flag.

---
Hannu

On Wed, Jun 19, 2024 at 6:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Peter Eisentraut <peter@eisentraut.org> writes:
> > AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't
> > really address the original question.
>
> It's not exactly hard to make it do so (see attached).
>
> I tried this on several different machines, and my conclusion is that
> gettimeofday() reports full microsecond precision on any platform
> anybody is likely to be running PG on today.  Even my one surviving
> pet dinosaur, NetBSD 10 on PowerPC Mac (mamba), shows results like
> this:
>
> $ ./pg_test_timing
> Testing timing overhead for 3 seconds.
> Per loop time including overhead: 901.41 ns
> Histogram of timing durations:
>   < us   % of total      count
>      1     10.46074     348148
>      2     89.51495    2979181
>      4      0.00574        191
>      8      0.00430        143
>     16      0.00691        230
>     32      0.00376        125
>     64      0.00012          4
>    128      0.00303        101
>    256      0.00027          9
>    512      0.00009          3
>   1024      0.00009          3
>
> I also modified pg_test_timing to measure nanoseconds not
> microseconds (second patch attached), and got this:
>
> $ ./pg_test_timing
> Testing timing overhead for 3 seconds.
> Per loop time including overhead: 805.50 ns
> Histogram of timing durations:
>   < ns   % of total      count
>      1     19.84234     739008
>      2      0.00000          0
>      4      0.00000          0
>      8      0.00000          0
>     16      0.00000          0
>     32      0.00000          0
>     64      0.00000          0
>    128      0.00000          0
>    256      0.00000          0
>    512      0.00000          0
>   1024     80.14013    2984739
>   2048      0.00078         29
>   4096      0.00658        245
>   8192      0.00290        108
>  16384      0.00252         94
>  32768      0.00250         93
>  65536      0.00016          6
> 131072      0.00185         69
> 262144      0.00008          3
> 524288      0.00008          3
> 1048576      0.00008          3
>
> confirming that when the result changes it generally does so by 1usec.
>
> Applying just the second patch, I find that clock_gettime on this
> old hardware seems to be limited to 1us resolution, but on my more
> modern machines (mac M1, x86_64) it can tick at 40ns or less.
> Even a raspberry pi 4 shows
>
> $ ./pg_test_timing
> Testing timing overhead for 3 seconds.
> Per loop time including overhead: 69.12 ns
> Histogram of timing durations:
>   < ns   % of total      count
>      1      0.00000          0
>      2      0.00000          0
>      4      0.00000          0
>      8      0.00000          0
>     16      0.00000          0
>     32      0.00000          0
>     64     37.59583   16317040
>    128     62.38568   27076131
>    256      0.01674       7265
>    512      0.00002          8
>   1024      0.00000          0
>   2048      0.00000          0
>   4096      0.00153        662
>   8192      0.00019         83
>  16384      0.00001          3
>  32768      0.00001          5
>
> suggesting that the clock_gettime resolution is better than 64 ns.
>
> So I concur with Hannu that it's time to adjust pg_test_timing to
> resolve nanoseconds not microseconds.  I gather he's created a
> patch that does more than mine below, so I'll wait for that.
>
>                         regards, tom lane
>

Attachment

Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
I also have a variant that uses the low-level CPU cycle counter
directly (attached).

It currently only works on Clang, as it is done using
__builtin_readcyclecounter() in order to support both x64 and ARM.

This one is there to understand the overhead of the calculations when
going from the cycle counter to a POSIX time struct.

This works OK with Clang, but we should probably not integrate it
directly into the code as it has some interesting corner cases. For
example, Apple's clang compiles __builtin_readcyclecounter(), but it
crashes with an unknown-instruction fault when trying to run it.

Therefore I have not integrated it into the Makefile, so if you want to
use it, just copy it into src/bin/pg_test_timing and run:

cd src/bin/pg_test_timing
mv pg_test_timing.c pg_test_timing.c.backup
cp pg_test_cyclecounter.c pg_test_timing.c
make
mv pg_test_timing pg_test_cyclecounter
mv pg_test_timing.c.backup pg_test_timing.c

It gives output like

Testing timing overhead for 3 seconds.
Total 25000001 ticks in 1000000073 ns, 24999999.175000 ticks per ns
This CPU is running at 24999999 ticks / second, will run test for 74999997 ticks
loop_count 287130958Per loop time including overhead: 10.45 ns, min: 0
ticks (0.0 ns), same: 212854591
Total ticks in: 74999997, in: 3000000541 nr
Log2 histogram of timing durations:
 < ticks (    < ns)   % of total  running %    count
       1 (    40.0)      74.1315   74.1315  212854591
       2 (    80.0)      25.8655   99.9970   74267876
       4 (   160.0)       0.0000   99.9970          7
       8 (   320.0)       0.0000   99.9970          3
      16 (   640.0)       0.0000   99.9970          1
      32 (  1280.0)       0.0000   99.9971         27
      64 (  2560.0)       0.0012   99.9983       3439
     128 (  5120.0)       0.0016   99.9999       4683
     256 ( 10240.0)       0.0001  100.0000        265
     512 ( 20480.0)       0.0000  100.0000         37
    1024 ( 40960.0)       0.0000  100.0000         23
    2048 ( 81920.0)       0.0000  100.0000          6
First 64 ticks --
       0 (     0.0)      74.1315   74.1315  212854591
       1 (    40.0)      25.8655   99.9970   74267876
       2 (    80.0)       0.0000   99.9970          2
       3 (   120.0)       0.0000   99.9970          5
       4 (   160.0)       0.0000   99.9970          2
       6 (   240.0)       0.0000   99.9983          1
      13 (   520.0)       0.0000  100.0000          1
...
      59 (  2360.0)       0.0000  100.0000        140
      60 (  2400.0)       0.0001  100.0000        210
      61 (  2440.0)       0.0002  100.0000        497
      62 (  2480.0)       0.0002  100.0000        524
      63 (  2520.0)       0.0001  100.0000        391


If you run it on some interesting hardware, please share the results.

If we gather enough results I will put them together in a spreadsheet
and share it.

I also attach my lightning talk slides here.

---
Hannu

On Thu, Jun 20, 2024 at 12:41 PM Hannu Krosing <hannuk@google.com> wrote:
>
> (resending to list and other CC:s )
>
> Hi Tom
>
> This is my current patch which also adds running % and optionally uses
> faster way to count leading zeros, though I did not  see a change from
> that.
>
> It also bucketizes first 128 ns to get better overview of exact behaviour.
>
> We may want to put reporting this behind a flag
>
> ---
> Hannu
>
> On Wed, Jun 19, 2024 at 6:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >
> > Peter Eisentraut <peter@eisentraut.org> writes:
> > > AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't
> > > really address the original question.
> >
> > It's not exactly hard to make it do so (see attached).
> >
> > I tried this on several different machines, and my conclusion is that
> > gettimeofday() reports full microsecond precision on any platform
> > anybody is likely to be running PG on today.  Even my one surviving
> > pet dinosaur, NetBSD 10 on PowerPC Mac (mamba), shows results like
> > this:
> >
> > $ ./pg_test_timing
> > Testing timing overhead for 3 seconds.
> > Per loop time including overhead: 901.41 ns
> > Histogram of timing durations:
> >   < us   % of total      count
> >      1     10.46074     348148
> >      2     89.51495    2979181
> >      4      0.00574        191
> >      8      0.00430        143
> >     16      0.00691        230
> >     32      0.00376        125
> >     64      0.00012          4
> >    128      0.00303        101
> >    256      0.00027          9
> >    512      0.00009          3
> >   1024      0.00009          3
> >
> > I also modified pg_test_timing to measure nanoseconds not
> > microseconds (second patch attached), and got this:
> >
> > $ ./pg_test_timing
> > Testing timing overhead for 3 seconds.
> > Per loop time including overhead: 805.50 ns
> > Histogram of timing durations:
> >   < ns   % of total      count
> >      1     19.84234     739008
> >      2      0.00000          0
> >      4      0.00000          0
> >      8      0.00000          0
> >     16      0.00000          0
> >     32      0.00000          0
> >     64      0.00000          0
> >    128      0.00000          0
> >    256      0.00000          0
> >    512      0.00000          0
> >   1024     80.14013    2984739
> >   2048      0.00078         29
> >   4096      0.00658        245
> >   8192      0.00290        108
> >  16384      0.00252         94
> >  32768      0.00250         93
> >  65536      0.00016          6
> > 131072      0.00185         69
> > 262144      0.00008          3
> > 524288      0.00008          3
> > 1048576      0.00008          3
> >
> > confirming that when the result changes it generally does so by 1usec.
> >
> > Applying just the second patch, I find that clock_gettime on this
> > old hardware seems to be limited to 1us resolution, but on my more
> > modern machines (mac M1, x86_64) it can tick at 40ns or less.
> > Even a raspberry pi 4 shows
> >
> > $ ./pg_test_timing
> > Testing timing overhead for 3 seconds.
> > Per loop time including overhead: 69.12 ns
> > Histogram of timing durations:
> >   < ns   % of total      count
> >      1      0.00000          0
> >      2      0.00000          0
> >      4      0.00000          0
> >      8      0.00000          0
> >     16      0.00000          0
> >     32      0.00000          0
> >     64     37.59583   16317040
> >    128     62.38568   27076131
> >    256      0.01674       7265
> >    512      0.00002          8
> >   1024      0.00000          0
> >   2048      0.00000          0
> >   4096      0.00153        662
> >   8192      0.00019         83
> >  16384      0.00001          3
> >  32768      0.00001          5
> >
> > suggesting that the clock_gettime resolution is better than 64 ns.
> >
> > So I concur with Hannu that it's time to adjust pg_test_timing to
> > resolve nanoseconds not microseconds.  I gather he's created a
> > patch that does more than mine below, so I'll wait for that.
> >
> >                         regards, tom lane
> >

Attachment

Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
Another thing I changed in the reporting was to report <= ns instead of < ns.

This was inspired by not wanting to report "zero ns" as "< 1 ns", and the
easiest fix was to change them all to <=.

On Thu, Jun 20, 2024 at 12:41 PM Hannu Krosing <hannuk@google.com> wrote:
>
> (resending to list and other CC:s )
>
> Hi Tom
>
> This is my current patch which also adds running % and optionally uses
> faster way to count leading zeros, though I did not  see a change from
> that.
>
> It also bucketizes first 128 ns to get better overview of exact behaviour.
>
> We may want to put reporting this behind a flag
>
> ---
> Hannu
>
> On Wed, Jun 19, 2024 at 6:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >
> > Peter Eisentraut <peter@eisentraut.org> writes:
> > > AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't
> > > really address the original question.
> >
> > It's not exactly hard to make it do so (see attached).
> >
> > I tried this on several different machines, and my conclusion is that
> > gettimeofday() reports full microsecond precision on any platform
> > anybody is likely to be running PG on today.  Even my one surviving
> > pet dinosaur, NetBSD 10 on PowerPC Mac (mamba), shows results like
> > this:
> >
> > $ ./pg_test_timing
> > Testing timing overhead for 3 seconds.
> > Per loop time including overhead: 901.41 ns
> > Histogram of timing durations:
> >   < us   % of total      count
> >      1     10.46074     348148
> >      2     89.51495    2979181
> >      4      0.00574        191
> >      8      0.00430        143
> >     16      0.00691        230
> >     32      0.00376        125
> >     64      0.00012          4
> >    128      0.00303        101
> >    256      0.00027          9
> >    512      0.00009          3
> >   1024      0.00009          3
> >
> > I also modified pg_test_timing to measure nanoseconds not
> > microseconds (second patch attached), and got this:
> >
> > $ ./pg_test_timing
> > Testing timing overhead for 3 seconds.
> > Per loop time including overhead: 805.50 ns
> > Histogram of timing durations:
> >   < ns   % of total      count
> >      1     19.84234     739008
> >      2      0.00000          0
> >      4      0.00000          0
> >      8      0.00000          0
> >     16      0.00000          0
> >     32      0.00000          0
> >     64      0.00000          0
> >    128      0.00000          0
> >    256      0.00000          0
> >    512      0.00000          0
> >   1024     80.14013    2984739
> >   2048      0.00078         29
> >   4096      0.00658        245
> >   8192      0.00290        108
> >  16384      0.00252         94
> >  32768      0.00250         93
> >  65536      0.00016          6
> > 131072      0.00185         69
> > 262144      0.00008          3
> > 524288      0.00008          3
> > 1048576      0.00008          3
> >
> > confirming that when the result changes it generally does so by 1usec.
> >
> > Applying just the second patch, I find that clock_gettime on this
> > old hardware seems to be limited to 1us resolution, but on my more
> > modern machines (mac M1, x86_64) it can tick at 40ns or less.
> > Even a raspberry pi 4 shows
> >
> > $ ./pg_test_timing
> > Testing timing overhead for 3 seconds.
> > Per loop time including overhead: 69.12 ns
> > Histogram of timing durations:
> >   < ns   % of total      count
> >      1      0.00000          0
> >      2      0.00000          0
> >      4      0.00000          0
> >      8      0.00000          0
> >     16      0.00000          0
> >     32      0.00000          0
> >     64     37.59583   16317040
> >    128     62.38568   27076131
> >    256      0.01674       7265
> >    512      0.00002          8
> >   1024      0.00000          0
> >   2048      0.00000          0
> >   4096      0.00153        662
> >   8192      0.00019         83
> >  16384      0.00001          3
> >  32768      0.00001          5
> >
> > suggesting that the clock_gettime resolution is better than 64 ns.
> >
> > So I concur with Hannu that it's time to adjust pg_test_timing to
> > resolve nanoseconds not microseconds.  I gather he's created a
> > patch that does more than mine below, so I'll wait for that.
> >
> >                         regards, tom lane
> >



Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Hannu Krosing <hannuk@google.com> writes:
> This is my current patch, which also adds running % and optionally uses a
> faster way to count leading zeros, though I did not see a change from
> that.

I've not read the patch yet, but I did create a CF entry [1]
to get some CI cycles on this.  The cfbot complains [2] about

[19:24:31.951] pg_test_timing.c: In function ‘output’:
[19:24:31.951] pg_test_timing.c:229:11: error: format ‘%ld’ expects argument of type ‘long int’, but argument 3 has type ‘int64’ {aka ‘long long int’} [-Werror=format=]
[19:24:31.951]   229 |    printf("%*ld    %*.4f  %*.4f %*lld\n",
[19:24:31.951]       |           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[19:24:31.951]   230 |       Max(8, len1), i,
[19:24:31.951]       |                     ~
[19:24:31.951]       |                     |
[19:24:31.951]       |                     int64 {aka long long int}

which seems a bit confused, but anyway you cannot assume that int64 is
a match for "%ld", or "%lld" either.  What we generally do for this
elsewhere is to explicitly cast printf arguments to long long int.
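
That is, the portable idiom looks roughly like this:

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	int64_t		v = 42;			/* stand-in for PostgreSQL's int64 */

	/* int64 may be long or long long; cast so the format always matches */
	printf("%lld\n", (long long int) v);
	return 0;
}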

Also there's this on Windows:

[19:23:48.231] ../src/bin/pg_test_timing/pg_test_timing.c(162): warning C4067: unexpected tokens following preprocessor directive - expected a newline

            regards, tom lane

[1] https://commitfest.postgresql.org/48/5066/
[2] http://cfbot.cputube.org/highlights/all.html#5066



Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
I wrote:
> Hannu Krosing <hannuk@google.com> writes:
>> This is my current patch, which also adds running % and optionally uses a
>> faster way to count leading zeros, though I did not see a change from
>> that.

> I've not read the patch yet, but I did create a CF entry [1]
> to get some CI cycles on this.  The cfbot complains [2] about
> [ a couple of things ]

Here's a cleaned-up code patch addressing the cfbot complaints
and making the output logic a bit neater.

I think this is committable code-wise, but the documentation needs
work, if not indeed a complete rewrite.  The examples are now
horribly out of date, and it seems that the "Clock Hardware and Timing
Accuracy" section is quite obsolete as well, since it suggests that
the best available accuracy is ~100ns.

TBH I'm inclined to rip most of the OS-specific and hardware-specific
information out of there, as it's not something we're likely to
maintain well even if we got it right for current reality.

            regards, tom lane

diff --git a/src/bin/pg_test_timing/pg_test_timing.c b/src/bin/pg_test_timing/pg_test_timing.c
index ce7aad4b25..a6a271aef1 100644
--- a/src/bin/pg_test_timing/pg_test_timing.c
+++ b/src/bin/pg_test_timing/pg_test_timing.c
@@ -9,19 +9,24 @@
 #include <limits.h>

 #include "getopt_long.h"
+#include "port/pg_bitutils.h"
 #include "portability/instr_time.h"

 static const char *progname;

 static unsigned int test_duration = 3;

+/* record duration in powers of 2 nanoseconds */
+static long long int histogram[32];
+
+/* record counts of first 128 durations directly */
+#define NUM_DIRECT 128
+static long long int direct_histogram[NUM_DIRECT];
+
 static void handle_args(int argc, char *argv[]);
 static uint64 test_timing(unsigned int duration);
 static void output(uint64 loop_count);

-/* record duration in powers of 2 microseconds */
-static long long int histogram[32];
-
 int
 main(int argc, char *argv[])
 {
@@ -111,7 +116,6 @@ handle_args(int argc, char *argv[])
         exit(1);
     }

-
     printf(ngettext("Testing timing overhead for %u second.\n",
                     "Testing timing overhead for %u seconds.\n",
                     test_duration),
@@ -130,19 +134,19 @@ test_timing(unsigned int duration)
                 end_time,
                 temp;

-    total_time = duration > 0 ? duration * INT64CONST(1000000) : 0;
+    total_time = duration > 0 ? duration * INT64CONST(1000000000) : 0;

     INSTR_TIME_SET_CURRENT(start_time);
-    cur = INSTR_TIME_GET_MICROSEC(start_time);
+    cur = INSTR_TIME_GET_NANOSEC(start_time);

     while (time_elapsed < total_time)
     {
         int32        diff,
-                    bits = 0;
+                    bits;

         prev = cur;
         INSTR_TIME_SET_CURRENT(temp);
-        cur = INSTR_TIME_GET_MICROSEC(temp);
+        cur = INSTR_TIME_GET_NANOSEC(temp);
         diff = cur - prev;

         /* Did time go backwards? */
@@ -154,18 +158,21 @@ test_timing(unsigned int duration)
         }

         /* What is the highest bit in the time diff? */
-        while (diff)
-        {
-            diff >>= 1;
-            bits++;
-        }
+        if (diff > 0)
+            bits = pg_leftmost_one_pos32(diff) + 1;
+        else
+            bits = 0;

         /* Update appropriate duration bucket */
         histogram[bits]++;

+        /* Update direct histogram of time diffs */
+        if (diff < NUM_DIRECT)
+            direct_histogram[diff]++;
+
         loop_count++;
         INSTR_TIME_SUBTRACT(temp, start_time);
-        time_elapsed = INSTR_TIME_GET_MICROSEC(temp);
+        time_elapsed = INSTR_TIME_GET_NANOSEC(temp);
     }

     INSTR_TIME_SET_CURRENT(end_time);
@@ -181,28 +188,65 @@ test_timing(unsigned int duration)
 static void
 output(uint64 loop_count)
 {
-    int64        max_bit = 31,
+    int            max_bit = 31,
                 i;
-    char       *header1 = _("< us");
+    char       *header1 = _("<= ns");
+    char       *header1b = _("ns");
     char       *header2 = /* xgettext:no-c-format */ _("% of total");
-    char       *header3 = _("count");
+    char       *header3 = /* xgettext:no-c-format */ _("running %");
+    char       *header4 = _("count");
     int            len1 = strlen(header1);
     int            len2 = strlen(header2);
     int            len3 = strlen(header3);
+    int            len4 = strlen(header4);
+    double        rprct;

     /* find highest bit value */
     while (max_bit > 0 && histogram[max_bit] == 0)
         max_bit--;

+    /* set minimum column widths */
+    len1 = Max(8, len1);
+    len2 = Max(10, len2);
+    len3 = Max(10, len3);
+    len4 = Max(10, len4);
+
     printf(_("Histogram of timing durations:\n"));
-    printf("%*s   %*s %*s\n",
-           Max(6, len1), header1,
-           Max(10, len2), header2,
-           Max(10, len3), header3);
-
-    for (i = 0; i <= max_bit; i++)
-        printf("%*ld    %*.5f %*lld\n",
-               Max(6, len1), 1l << i,
-               Max(10, len2) - 1, (double) histogram[i] * 100 / loop_count,
-               Max(10, len3), histogram[i]);
+    printf("%*s   %*s %*s %*s\n",
+           len1, header1,
+           len2, header2,
+           len3, header3,
+           len4, header4);
+
+    for (i = 0, rprct = 0; i <= max_bit; i++)
+    {
+        double        prct = (double) histogram[i] * 100 / loop_count;
+
+        rprct += prct;
+        printf("%*ld   %*.4f %*.4f %*lld\n",
+               len1, (1L << i) - 1,
+               len2, prct,
+               len3, rprct,
+               len4, histogram[i]);
+    }
+
+    printf(_("\nTiming durations less than %d ns:\n"), NUM_DIRECT);
+    printf("%*s   %*s %*s %*s\n",
+           len1, header1b,
+           len2, header2,
+           len3, header3,
+           len4, header4);
+
+    for (i = 0, rprct = 0; i < NUM_DIRECT; i++)
+    {
+        double        prct = (double) direct_histogram[i] * 100 / loop_count;
+
+        rprct += prct;
+        if (direct_histogram[i])
+            printf("%*d   %*.4f %*.4f %*lld\n",
+                   len1, i,
+                   len2, prct,
+                   len3, rprct,
+                   len4, direct_histogram[i]);
+    }
 }

Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
BTW, getting back to the original point of the thread: I duplicated
Hannu's result showing that on Apple M1 the clock tick seems to be
about 40ns.  But look at what I got with the v2 patch on my main
workstation (full output attached):

$ ./pg_test_timing
...
Per loop time including overhead: 16.60 ns
...
Timing durations less than 128 ns:
      ns   % of total  running %      count
      15       3.2738     3.2738    5914914
      16      49.0772    52.3510   88668783
      17      36.4662    88.8172   65884173
      18       9.5639    98.3810   17279249
      19       1.5746    99.9556    2844873
      20       0.0416    99.9972      75125
      21       0.0004    99.9976        757
...

It sure looks like this is exact-to-the-nanosecond results,
since the modal values match the overall per-loop timing,
and there are no zero measurements.

This is a Dell tower from 2021, running RHEL8 on an Intel Xeon W-2245.
Not exactly top-of-the-line stuff.

            regards, tom lane

Testing timing overhead for 3 seconds.
Per loop time including overhead: 16.60 ns
Histogram of timing durations:
   <= ns   % of total  running %      count
       0       0.0000     0.0000          0
       1       0.0000     0.0000          0
       3       0.0000     0.0000          0
       7       0.0000     0.0000          0
      15       3.2738     3.2738    5914914
      31      96.7241    99.9980  174753569
      63       0.0001    99.9981        265
     127       0.0001    99.9982        110
     255       0.0000    99.9982         29
     511       0.0000    99.9982          0
    1023       0.0013    99.9995       2410
    2047       0.0003    99.9999        586
    4095       0.0000    99.9999         10
    8191       0.0000    99.9999          3
   16383       0.0001   100.0000        228
   32767       0.0000   100.0000          1

Timing durations less than 128 ns:
      ns   % of total  running %      count
      15       3.2738     3.2738    5914914
      16      49.0772    52.3510   88668783
      17      36.4662    88.8172   65884173
      18       9.5639    98.3810   17279249
      19       1.5746    99.9556    2844873
      20       0.0416    99.9972      75125
      21       0.0004    99.9976        757
      22       0.0001    99.9978        252
      23       0.0001    99.9978        101
      24       0.0000    99.9979         50
      25       0.0000    99.9979         32
      26       0.0000    99.9979         28
      27       0.0000    99.9979         27
      28       0.0000    99.9979         19
      29       0.0000    99.9979         21
      30       0.0000    99.9980         34
      31       0.0000    99.9980         45
      32       0.0000    99.9980         42
      33       0.0000    99.9980         30
      34       0.0000    99.9980         17
      35       0.0000    99.9980         14
      36       0.0000    99.9980         13
      37       0.0000    99.9981         13
      38       0.0000    99.9981         13
      39       0.0000    99.9981         12
      40       0.0000    99.9981          8
      41       0.0000    99.9981         13
      42       0.0000    99.9981         12
      43       0.0000    99.9981          9
      44       0.0000    99.9981          9
      45       0.0000    99.9981          5
      46       0.0000    99.9981          3
      47       0.0000    99.9981          3
      48       0.0000    99.9981          5
      49       0.0000    99.9981          5
      50       0.0000    99.9981          4
      51       0.0000    99.9981          4
      52       0.0000    99.9981          2
      53       0.0000    99.9981          2
      54       0.0000    99.9981          2
      55       0.0000    99.9981          4
      56       0.0000    99.9981          1
      57       0.0000    99.9981          2
      58       0.0000    99.9981          2
      59       0.0000    99.9981          4
      60       0.0000    99.9981          2
      61       0.0000    99.9981          6
      62       0.0000    99.9981          2
      63       0.0000    99.9981          2
      64       0.0000    99.9981          2
      65       0.0000    99.9981          2
      66       0.0000    99.9981          2
      67       0.0000    99.9981          2
      72       0.0000    99.9981          1
      73       0.0000    99.9981          1
      74       0.0000    99.9981          1
      77       0.0000    99.9981          1
      78       0.0000    99.9981          2
      87       0.0000    99.9981          1
      91       0.0000    99.9981          1
      94       0.0000    99.9981          4
      95       0.0000    99.9981          3
      96       0.0000    99.9982         16
      97       0.0000    99.9982          5
      98       0.0000    99.9982          7
      99       0.0000    99.9982         14
     100       0.0000    99.9982         10
     101       0.0000    99.9982         10
     102       0.0000    99.9982          4
     103       0.0000    99.9982          2
     104       0.0000    99.9982          2
     105       0.0000    99.9982          1
     106       0.0000    99.9982          3
     107       0.0000    99.9982          1
     108       0.0000    99.9982          4
     109       0.0000    99.9982          4
     112       0.0000    99.9982          1
     115       0.0000    99.9982          2
     127       0.0000    99.9982          1

Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
Hi Tom,

On various Intel CPUs I got either steps close to a single nanosecond,
or sometimes a little more on older ones.

One specific CPU moved in 2-tick increments while the ratio to ns
was 2.1/1, or 2100 ticks per microsecond.

On Zen4 AMD the step seems to be 10 ns, even though the tick-to-ns
ratio is 2.6/1, so reading ticks directly gives 26, 54, ...

Also, reading directly in ticks on M1 gave "loop time including
overhead: 2.13 ns" (attached code works on Clang, not sure about GCC)


I'll also take a look at the docs and try to propose something

Do we also need tests for this one?

----
Hannu



On Tue, Jul 2, 2024 at 7:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> BTW, getting back to the original point of the thread: I duplicated
> Hannu's result showing that on Apple M1 the clock tick seems to be
> about 40ns.  But look at what I got with the v2 patch on my main
> workstation (full output attached):
>
> $ ./pg_test_timing
> ...
> Per loop time including overhead: 16.60 ns
> ...
> Timing durations less than 128 ns:
>       ns   % of total  running %      count
>       15       3.2738     3.2738    5914914
>       16      49.0772    52.3510   88668783
>       17      36.4662    88.8172   65884173
>       18       9.5639    98.3810   17279249
>       19       1.5746    99.9556    2844873
>       20       0.0416    99.9972      75125
>       21       0.0004    99.9976        757
> ...
>
> It sure looks like this is exact-to-the-nanosecond results,
> since the modal values match the overall per-loop timing,
> and there are no zero measurements.
>
> This is a Dell tower from 2021, running RHEL8 on an Intel Xeon W-2245.
> Not exactly top-of-the-line stuff.
>
>                         regards, tom lane
>

Attachment

Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
Also, the step on M1 is slightly above 40 ns (41.7 ns), but exactly
40 ns on Ampere Altra.

## M1 on MacBook Air

Testing timing overhead for 3 seconds.
Total 24000177 ticks in 1000000056 ns, 24000175.655990 ticks per ns
This CPU is running at 24000175 ticks / second, will run test for 72000525 ticks
loop_count 1407639953Per loop time including overhead: 2.13 ns, min: 0
ticks (0.0 ns), same: 1335774969
Total ticks in: 72000525, in: 3000002260 nr
Log2(x+1) histogram of timing durations:
 <= ticks (   <= ns)   % of total  running %       count
        0 (    41.7)      94.8946    94.8946  1335774969
        2 (    83.3)       5.1051    99.9997    71861227
        6 (   166.7)       0.0001    99.9998         757
       14 (   333.3)       0.0000    99.9998           0
       30 (   666.7)       0.0002    99.9999        2193
       62 (  1333.3)       0.0000   100.0000         274
      126 (  2666.6)       0.0000   100.0000         446
      254 (  5333.3)       0.0000   100.0000          87
First 64 ticks --
        0 (     0.0)      94.8946    94.8946  1335774969
        1 (    41.7)       5.1032    99.9997    71834980
        2 (    83.3)       0.0019    99.9998       26247
        3 (   125.0)       0.0001    99.9998         757
       15 (   625.0)       0.0000   100.0000           1

## Ampere Altra

Testing timing overhead for 3 seconds.
Total 25000002 ticks in 1000000074 ns, 25000000.150000 ticks per ns
This CPU is running at 25000000 ticks / second, will run test for 75000000 ticks
loop_count 291630863Per loop time including overhead: 10.29 ns, min: 0
ticks (0.0 ns), same: 217288944
Total ticks in: 75000000, in: 3000000542 nr
Log2(x+1) histogram of timing durations:
 <= ticks (   <= ns)   % of total  running %       count
        0 (    40.0)      74.5082    74.5082   217288944
        2 (    80.0)      25.4886    99.9968    74332703
        6 (   160.0)       0.0000    99.9968           5
       14 (   320.0)       0.0000    99.9968           0
       30 (   640.0)       0.0000    99.9968          31
       62 (  1280.0)       0.0011    99.9979        3123
      126 (  2560.0)       0.0020    99.9999        5848
      254 (  5120.0)       0.0001   100.0000         149
      510 ( 10240.0)       0.0000   100.0000          38
     1022 ( 20480.0)       0.0000   100.0000          21
     2046 ( 40960.0)       0.0000   100.0000           1
First 64 ticks --
        0 (     0.0)      74.5082    74.5082   217288944
        1 (    40.0)      25.4886    99.9968    74332699
        2 (    80.0)       0.0000    99.9968           4
        3 (   120.0)       0.0000    99.9968           1
        4 (   160.0)       0.0000    99.9968           3

On Tue, Jul 2, 2024 at 7:31 PM Hannu Krosing <hannuk@google.com> wrote:
>
> Hi Tom,
>
> On various Intel CPUs I got either steps close to a single nanosecond,
> or sometimes a little more on older ones.
>
> One specific CPU moved in 2-tick increments while the ratio to ns
> was 2.1/1, or 2100 ticks per microsecond.
>
> On Zen4 AMD the step seems to be 10 ns, even though the tick-to-ns
> ratio is 2.6/1, so reading ticks directly gives 26, 54, ...
>
> Also, reading directly in ticks on M1 gave "loop time including
> overhead: 2.13 ns" (attached code works on Clang, not sure about GCC)
>
>
> I'll also take a look at the docs and try to propose something
>
> Do we also need tests for this one ?
>
> ----
> Hannu
>
>
>
> On Tue, Jul 2, 2024 at 7:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >
> > BTW, getting back to the original point of the thread: I duplicated
> > Hannu's result showing that on Apple M1 the clock tick seems to be
> > about 40ns.  But look at what I got with the v2 patch on my main
> > workstation (full output attached):
> >
> > $ ./pg_test_timing
> > ...
> > Per loop time including overhead: 16.60 ns
> > ...
> > Timing durations less than 128 ns:
> >       ns   % of total  running %      count
> >       15       3.2738     3.2738    5914914
> >       16      49.0772    52.3510   88668783
> >       17      36.4662    88.8172   65884173
> >       18       9.5639    98.3810   17279249
> >       19       1.5746    99.9556    2844873
> >       20       0.0416    99.9972      75125
> >       21       0.0004    99.9976        757
> > ...
> >
> > It sure looks like this is exact-to-the-nanosecond results,
> > since the modal values match the overall per-loop timing,
> > and there are no zero measurements.
> >
> > This is a Dell tower from 2021, running RHEL8 on an Intel Xeon W-2245.
> > Not exactly top-of-the-line stuff.
> >
> >                         regards, tom lane
> >



Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Hannu Krosing <hannuk@google.com> writes:
> Also, reading directly in ticks on M1 gave "loop time including
> overhead: 2.13 ns" (attached code works on Clang, not sure about GCC)

I don't think we should mess with that, given the portability
problems you mentioned upthread.

> I'll also take a look at the docs and try to propose something

OK.

> Do we also need tests for this one ?

Yeah, it was annoying me that we are eating the overhead of a TAP test
for pg_test_timing and yet it covers barely a third of the code [1].
We obviously can't expect any specific numbers out of a test, but I
was contemplating running "pg_test_timing -d 1" and just checking for
(a) zero exit code and (b) the expected header lines in the output.

            regards, tom lane

[1] https://coverage.postgresql.org/src/bin/pg_test_timing/pg_test_timing.c.gcov.html



Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
On Tue, Jul 2, 2024 at 7:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>

> > Do we also need tests for this one ?
>
> Yeah, it was annoying me that we are eating the overhead of a TAP test
> for pg_test_timing and yet it covers barely a third of the code [1].
> We obviously can't expect any specific numbers out of a test, but I
> was contemplating running "pg_test_timing -d 1" and just checking for
> (a) zero exit code and (b) the expected header lines in the output.

At least "does it run" tests should be there -

For example, with the current toolchain on macOS I was able to compile
__builtin_readcyclecounter(), but it crashed when the result was
executed.

The same code compiled *and ran* fine on the same laptop with Ubuntu 24.04.

We might also want to have some testing of the available speedups from
pg_bitutils.h being used, but that could be tricky to test in a
universal way.

--
Hannu



Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Hannu Krosing <hannuk@google.com> writes:
> At least "does it run" tests should be there -

> For example, with the current toolchain on macOS I was able to compile
> __builtin_readcyclecounter(), but it crashed when the result was
> executed.

> The same code compiled *and ran* fine on the same laptop with Ubuntu 24.04.

> We might also want to have some testing of the available speedups from
> pg_bitutils.h being used, but that could be tricky to test in a
> universal way.

Keep in mind that pg_test_timing is not just some random exercise in a
vacuum.  The point of it IMV is to provide data about the performance
one can expect from the instr_time.h infrastructure, which bears on
what kind of resolution EXPLAIN ANALYZE and other features have.  So
if we did want to depend on read_tsc() or __builtin_readcyclecounter()
or what-have-you, the way to go about it would be to change
instr_time.h to compile code that uses that.  I would consider that
to be a separate patch from what we're doing to pg_test_timing here.

            regards, tom lane



Re: What is a typical precision of gettimeofday()?

From
"Andrey M. Borodin"
Date:

> On 2 Jul 2024, at 22:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> It sure looks like this is exact-to-the-nanosecond results,
> since the modal values match the overall per-loop timing,
> and there are no zero measurements.

That’s a very interesting result, from the UUID POV!
If time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.
Though time is advancing in very small steps… The RFC assumes that we use microseconds; I’m not sure it’s ok to use 10 more bits for nanoseconds…

Thanks!


Best regards, Andrey Borodin.


Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
"Andrey M. Borodin" <x4mmm@yandex-team.ru> writes:
> That’s a very interesting result, from the UUID POV!
> If time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.
> Though time is advancing in very small steps… The RFC assumes that we use microseconds; I’m not sure it’s ok to use 10 more bits for nanoseconds…

Keep in mind also that instr_time.h does not pretend to provide
real time --- the clock origin is arbitrary.  But these results
do give me additional confidence that gettimeofday() should be
good to the microsecond on any remotely-modern platform.

            regards, tom lane



Re: What is a typical precision of gettimeofday()?

From
Aleksander Alekseev
Date:
Hi,

> That’s a very interesting result, from the UUID POV!
> If time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.
> Though time is advancing in very small steps… The RFC assumes that we use microseconds; I’m not sure it’s ok to use 10 more bits for nanoseconds…

A counter is mandatory since someone can, for instance, change the
system's time while the process is generating UUIDs. You can't
generally assume that the local time of the system is monotonic.
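
A minimal sketch of that counter fallback (illustrative names, not the patch's code): reuse the previous timestamp and bump a counter whenever the clock fails to advance:

#include <stdint.h>

static uint64_t last_ms;		/* last millisecond timestamp handed out */
static uint16_t seq;			/* counter within a stalled millisecond */

static void
uuidv7_next_ts(uint64_t now_ms, uint64_t *ts_out, uint16_t *seq_out)
{
	if (now_ms <= last_ms)
		seq++;					/* clock stalled or stepped backwards */
	else
	{
		last_ms = now_ms;
		seq = 0;
	}
	*ts_out = last_ms;
	*seq_out = seq;
}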

--
Best regards,
Aleksander Alekseev



Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
On Wed, Jul 3, 2024 at 10:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Keep in mind also that instr_time.h does not pretend to provide
> real time --- the clock origin is arbitrary.  But these results
> do give me additional confidence that gettimeofday() should be
> good to the microsecond on any remotely-modern platform.

The only platform I have found where the resolution is only a
microsecond is RISC-V (https://www.sifive.com/boards/hifive-unmatched).

Everywhere else it seems to be much more precise.

--
Hannu



Re: What is a typical precision of gettimeofday()?

From
"Andrey M. Borodin"
Date:

> On 3 Jul 2024, at 13:48, Aleksander Alekseev <aleksander@timescale.com> wrote:
>
> Hi,
>
>> That’s a very interesting result, from the UUID POV!
>> If time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.
>> Though time is advancing in very small steps… The RFC assumes that we use microseconds; I’m not sure it’s ok to use 10 more bits for nanoseconds…
>
> A counter is mandatory since someone can for instance change the
> system's time while the process is generating UUIDs. You can't
> generally assume that local time of the system is monotonic.

AFAIR, according to the RFC, when time jumps backwards we just use the
time microseconds as a counter, until time starts to advance again.


Best regards, Andrey Borodin.


Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
We currently do something similar with OIDs where we just keep
generating them and then testing for conflicts.

I don't think this is the best way to do it but it mostly works when
you can actually test for uniqueness, like for example in TOAST or
system tables.

Not sure this works even reasonably well for UUIDv7.

--
Hannu

On Wed, Jul 3, 2024 at 12:38 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
>
>
>
> > On 3 Jul 2024, at 13:48, Aleksander Alekseev <aleksander@timescale.com> wrote:
> >
> > Hi,
> >
> >> That’s a very interesting result, from the UUID POV!
> >> If time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.
> >> Though time is advancing in very small steps… The RFC assumes that we use microseconds; I’m not sure it’s ok to use 10 more bits for nanoseconds…
> >
> > A counter is mandatory since someone can for instance change the
> > system's time while the process is generating UUIDs. You can't
> > generally assume that local time of the system is monotonic.
>
> AFAIR, according to the RFC, when time jumps backwards we just use the time's microseconds as a counter, until time
> starts to advance again.
>
>
> Best regards, Andrey Borodin.



Re: What is a typical precision of gettimeofday()?

From
"Andrey M. Borodin"
Date:

> On 3 Jul 2024, at 16:29, Hannu Krosing <hannuk@google.com> wrote:
>
> We currently do something similar with OIDs where we just keep
> generating them and then testing for conflicts.
>
> I don't think this is the best way to do it but it mostly works when
> you can actually test for uniqueness, like for example in TOAST or
> system tables.
>
> Not sure this works even reasonably well for UUIDv7.

Uniqueness is ensured with the extra 60+ bits of randomness. The timestamp and counter/microseconds are there to promote
sortability (thus ensuring data locality).
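
For reference, this is roughly how those pieces fit together in a
UUIDv7 (per the RFC layout; the helper below is illustrative only,
not the patch's code):

#include <stdio.h>
#include <stdint.h>

/*
 * Illustrative only: pack a UUIDv7 from a 48-bit Unix-epoch millisecond
 * timestamp, 12 bits of sub-millisecond/counter data ("rand_a"), and
 * 62 random bits ("rand_b").  The caller supplies the randomness.
 */
static void
pack_uuidv7(uint8_t out[16], uint64_t unix_ms, uint16_t rand_a, uint64_t rand_b)
{
    /* 48-bit big-endian millisecond timestamp */
    for (int i = 0; i < 6; i++)
        out[i] = (uint8_t) (unix_ms >> (40 - 8 * i));

    /* version 7 in the high nibble, then 12 bits of rand_a/counter */
    out[6] = 0x70 | ((rand_a >> 8) & 0x0F);
    out[7] = rand_a & 0xFF;

    /* variant bits "10", then 62 random bits */
    out[8] = 0x80 | ((rand_b >> 56) & 0x3F);
    for (int i = 9; i < 16; i++)
        out[i] = (uint8_t) (rand_b >> (8 * (15 - i)));
}

int
main(void)
{
    uint8_t     u[16];

    /* fixed inputs just to show the layout; real code uses a clock and RNG */
    pack_uuidv7(u, 1720000000000ULL, 0x123,
                0x0123456789ABCDEFULL & ((1ULL << 62) - 1));
    for (int i = 0; i < 16; i++)
        printf("%02x%s", u[i], (i == 3 || i == 5 || i == 7 || i == 9) ? "-" : "");
    printf("\n");
    return 0;
}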


Best regards, Andrey Borodin.


Re: What is a typical precision of gettimeofday()?

From
Aleksander Alekseev
Date:
Hi,

> We currently do something similar with OIDs where we just keep
> generating them and then testing for conflicts.
>
> I don't think this is the best way to do it but it mostly works when
> you can actually test for uniqueness, like for example in TOAST or
> system tables.
>
> Not sure this works even reasonably well for UUIDv7.

UUIDv7 is not guaranteed to be unique. It just does its best to reduce
the number of possible conflicts. So I don't think we should worry
about it.


-- 
Best regards,
Aleksander Alekseev



Re: What is a typical precision of gettimeofday()?

From
"Andrey M. Borodin"
Date:

> On 2 Jul 2024, at 20:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Here's a cleaned-up code patch addressing the cfbot complaints
> and making the output logic a bit neater.
>
> I think this is committable code-wise, but the documentation needs
> work, if not indeed a complete rewrite.  The examples are now
> horribly out of date, and it seems that the "Clock Hardware and Timing
> Accuracy" section is quite obsolete as well, since it suggests that
> the best available accuracy is ~100ns.
>
> TBH I'm inclined to rip most of the OS-specific and hardware-specific
> information out of there, as it's not something we're likely to
> maintain well even if we got it right for current reality.

Hi Tom!

This thread has an associated CF entry which is marked as RwF [0]. But the change proved to be useful [1] in
understanding what we can expect from the time source.
It was requested many times before [2,3]. Reading through this thread, it seems to me that my questions about the
application of pg_test_timing somehow shifted focus away from this patch. However, I'd appreciate it if it were
applied. Nanoseconds seem important to me.
Let me know if I can help in any way. Thanks!


Best regards, Andrey Borodin.

[0] https://commitfest.postgresql.org/48/5066/
[1] https://www.postgresql.org/message-id/CAD21AoC4iAr7M_OgtHA0HZMezot68_0vwUCQjjXKk2iW89w0Jg@mail.gmail.com
[2] https://www.postgresql.org/message-id/CAMT0RQQJWNoki_vmckYb5J1j-BENBE0YtD6jJmVg--Hyvt7Wjg%40mail.gmail.com
[3]
https://www.postgresql.org/message-id/flat/198ef658-a5b7-9862-2017-faf85d59e3a8%40gmail.com#37d8292e93ec34407a41e7cbf56e5481


Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
"Andrey M. Borodin" <x4mmm@yandex-team.ru> writes:
> This thread has an associated CF entry which is marked as RwF [0]. But the change proved to be useful [1] in
> understanding what we can expect from the time source.
> It was requested many times before [2,3]. Reading through this thread, it seems to me that my questions about the
> application of pg_test_timing somehow shifted focus away from this patch. However, I'd appreciate it if it were
> applied. Nanoseconds seem important to me.
> Let me know if I can help in any way. Thanks!

Basically, I think the code is ready, but I was awaiting Hannu's
proposal on rewriting the documentation for pg_test_timing.
Do you want to have a go at that?

            regards, tom lane



Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
Hi Tom,

Did I understand correctly that you would prefer the documentation part to be much smaller than it is now, and all the current discussion of things that are not strictly about pg_test_timing to be kept out of its docs?

My current plan is to move the other discussions around timing from the docs to the PostgreSQL Wiki.

Would this be good?

---
Best Regards
Hannu

On Sat, Nov 2, 2024 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Andrey M. Borodin" <x4mmm@yandex-team.ru> writes:
> This thread has an associated CF entry which is marked as RwF [0]. But the change proved to be useful [1] in understanding what we can expect from the time source.
> It was requested many times before [2,3]. Reading through this thread, it seems to me that my questions about the application of pg_test_timing somehow shifted focus away from this patch. However, I'd appreciate it if it were applied. Nanoseconds seem important to me.
> Let me know if I can help in any way. Thanks!

Basically, I think the code is ready, but I was awaiting Hannu's
proposal on rewriting the documentation for pg_test_timing.
Do you want to have a go at that?

                        regards, tom lane

Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Hannu Krosing <hannuk@google.com> writes:
> Did I understand correctly that you would prefer the documentation part to
> be much smaller than it is now, and all the current discussion of things
> that are not strictly about pg_test_timing to be kept out of its docs?

Well, I would like for the docs not to readily get stale again.
I don't foresee us maintaining this page better in future than
we have so far.

> My current plan is to move the other discussions around timing from the
> docs to the PostgreSQL Wiki.

That could work.

            regards, tom lane



Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Hannu Krosing <hannuk@google.com> writes:
> Here is the latest patch with documentation only for the utility
> itself. Old general discussion moved to PostgreSQL Wiki with link to
> it in "See Also " section

Thanks for continuing to work on this!

> Also added a flag to select number of direct values to show

Hmm ... I agree with having a way to control the length of that output,
but I don't think that specifying a count is the most useful way to
do it.  Particularly with a default of only 10, it seems way too
likely to cut off important information.

What do you think of instead specifying the limit as the maximum
running-percentage to print, with a default of say 99.99%?  That
gives me results like

Observed timing durations up to 99.9900%:
      ns   % of total  running %      count
      15       4.5452     4.5452    8313178
      16      58.3785    62.9237  106773354
      17      33.6840    96.6078   61607584
      18       3.1151    99.7229    5697480
      19       0.2638    99.9867     482570
      20       0.0093    99.9960      17054

In the attached I also made it print the largest observed
duration, which seems like it might be useful information.

As previously threatened, I also added a test case to
improve the code coverage.

            regards, tom lane

diff --git a/doc/src/sgml/ref/pgtesttiming.sgml b/doc/src/sgml/ref/pgtesttiming.sgml
index a5eb3aa25e0..417433c68f4 100644
--- a/doc/src/sgml/ref/pgtesttiming.sgml
+++ b/doc/src/sgml/ref/pgtesttiming.sgml
@@ -32,9 +32,16 @@ PostgreSQL documentation
  <para>
   <application>pg_test_timing</application> is a tool to measure the timing overhead
   on your system and confirm that the system time never moves backwards.
+ </para>
+ <para>
   Systems that are slow to collect timing data can give less accurate
   <command>EXPLAIN ANALYZE</command> results.
  </para>
+ <para>
+  This tool is also helpful to determine if
+  the <varname>track_io_timing</varname> configuration parameter is likely
+  to produce useful results.
+ </para>
  </refsect1>

  <refsect1>
@@ -59,6 +66,21 @@ PostgreSQL documentation
       </listitem>
      </varlistentry>

+     <varlistentry>
+      <term><option>-c <replaceable class="parameter">cutoff</replaceable></option></term>
+      <term><option>--cutoff=<replaceable class="parameter">cutoff</replaceable></option></term>
+      <listitem>
+       <para>
+        Specifies the cutoff percentage for the list of exact observed
+        timing durations (that is, the changes in the system clock value
+        from one reading to the next).  The list will end once the running
+        percentage total reaches or exceeds this value, except that the
+        largest observed value will always be printed.  The default cutoff
+        is 99.99.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry>
       <term><option>-V</option></term>
       <term><option>--version</option></term>
@@ -92,205 +114,81 @@ PostgreSQL documentation
   <title>Interpreting Results</title>

   <para>
-   Good results will show most (>90%) individual timing calls take less than
-   one microsecond. Average per loop overhead will be even lower, below 100
-   nanoseconds. This example from an Intel i7-860 system using a TSC clock
-   source shows excellent performance:
-
-<screen><![CDATA[
-Testing timing overhead for 3 seconds.
-Per loop time including overhead: 35.96 ns
-Histogram of timing durations:
-  < us   % of total      count
-     1     96.40465   80435604
-     2      3.59518    2999652
-     4      0.00015        126
-     8      0.00002         13
-    16      0.00000          2
-]]></screen>
+   The first block of output has four columns, with rows showing a
+   shifted-by-one log2(ns) histogram of timing measurements (that is, the
+   differences between successive clock readings).  This is not the
+   classic log2(n+1) histogram as it counts zeros separately and then
+   switches to log2(ns) starting from value 1.
   </para>
-
   <para>
-   Note that different units are used for the per loop time than the
-   histogram. The loop can have resolution within a few nanoseconds (ns),
-   while the individual timing calls can only resolve down to one microsecond
-   (us).
+   The columns are:
+   <itemizedlist spacing="compact">
+    <listitem>
+     <simpara>nanosecond value that is >= the measurements in this
+     bucket</simpara>
+    </listitem>
+    <listitem>
+     <simpara>percentage of measurements in this bucket</simpara>
+    </listitem>
+    <listitem>
+     <simpara>running-sum percentage of measurements in this and previous
+     buckets</simpara>
+    </listitem>
+    <listitem>
+     <simpara>count of measurements in this bucket</simpara>
+    </listitem>
+   </itemizedlist>
   </para>
-
- </refsect2>
- <refsect2>
-  <title>Measuring Executor Timing Overhead</title>
-
   <para>
-   When the query executor is running a statement using
-   <command>EXPLAIN ANALYZE</command>, individual operations are timed as well
-   as showing a summary.  The overhead of your system can be checked by
-   counting rows with the <application>psql</application> program:
-
-<screen>
-CREATE TABLE t AS SELECT * FROM generate_series(1,100000);
-\timing
-SELECT COUNT(*) FROM t;
-EXPLAIN ANALYZE SELECT COUNT(*) FROM t;
-</screen>
+   The results below show that 99.99% of timing loops took between 8 and
+   31 nanoseconds, with the worst case somewhere between 32768 and 65535
+   nanoseconds.
   </para>
-
   <para>
-   The i7-860 system measured runs the count query in 9.8 ms while
-   the <command>EXPLAIN ANALYZE</command> version takes 16.6 ms, each
-   processing just over 100,000 rows.  That 6.8 ms difference means the timing
-   overhead per row is 68 ns, about twice what pg_test_timing estimated it
-   would be.  Even that relatively small amount of overhead is making the fully
-   timed count statement take almost 70% longer.  On more substantial queries,
-   the timing overhead would be less problematic.
+   The second block of output goes into more detail, showing the exact
+   timing differences observed.  For brevity this list is cut off when the
+   running-sum percentage exceeds the user-selectable cutoff value.
+   However, the largest observed difference is always shown.
   </para>

- </refsect2>
-
- <refsect2>
-  <title>Changing Time Sources</title>
   <para>
-   On some newer Linux systems, it's possible to change the clock source used
-   to collect timing data at any time.  A second example shows the slowdown
-   possible from switching to the slower acpi_pm time source, on the same
-   system used for the fast results above:
-
 <screen><![CDATA[
-# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
-tsc hpet acpi_pm
-# echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
-# pg_test_timing
-Per loop time including overhead: 722.92 ns
+Testing timing overhead for 3 seconds.
+Per loop time including overhead: 16.40 ns
 Histogram of timing durations:
-  < us   % of total      count
-     1     27.84870    1155682
-     2     72.05956    2990371
-     4      0.07810       3241
-     8      0.01357        563
-    16      0.00007          3
+   <= ns   % of total  running %      count
+       0       0.0000     0.0000          0
+       1       0.0000     0.0000          0
+       3       0.0000     0.0000          0
+       7       0.0000     0.0000          0
+      15       4.5452     4.5452    8313178
+      31      95.4527    99.9979  174581501
+      63       0.0001    99.9981        253
+     127       0.0001    99.9982        165
+     255       0.0000    99.9982         35
+     511       0.0000    99.9982          1
+    1023       0.0013    99.9994       2300
+    2047       0.0004    99.9998        690
+    4095       0.0000    99.9998          9
+    8191       0.0000    99.9998          8
+   16383       0.0002   100.0000        337
+   32767       0.0000   100.0000          2
+   65535       0.0000   100.0000          1
+
+Observed timing durations up to 99.9900%:
+      ns   % of total  running %      count
+      15       4.5452     4.5452    8313178
+      16      58.3785    62.9237  106773354
+      17      33.6840    96.6078   61607584
+      18       3.1151    99.7229    5697480
+      19       0.2638    99.9867     482570
+      20       0.0093    99.9960      17054
+...
+   38051       0.0000   100.0000          1
 ]]></screen>
   </para>

-  <para>
-   In this configuration, the sample <command>EXPLAIN ANALYZE</command> above
-   takes 115.9 ms.  That's 1061 ns of timing overhead, again a small multiple
-   of what's measured directly by this utility.  That much timing overhead
-   means the actual query itself is only taking a tiny fraction of the
-   accounted for time, most of it is being consumed in overhead instead.  In
-   this configuration, any <command>EXPLAIN ANALYZE</command> totals involving
-   many timed operations would be inflated significantly by timing overhead.
-  </para>
-
-  <para>
-   FreeBSD also allows changing the time source on the fly, and it logs
-   information about the timer selected during boot:
-
-<screen>
-# dmesg | grep "Timecounter"
-Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
-Timecounter "i8254" frequency 1193182 Hz quality 0
-Timecounters tick every 10.000 msec
-Timecounter "TSC" frequency 2531787134 Hz quality 800
-# sysctl kern.timecounter.hardware=TSC
-kern.timecounter.hardware: ACPI-fast -> TSC
-</screen>
-  </para>
-
-  <para>
-   Other systems may only allow setting the time source on boot.  On older
-   Linux systems the "clock" kernel setting is the only way to make this sort
-   of change.  And even on some more recent ones, the only option you'll see
-   for a clock source is "jiffies".  Jiffies are the older Linux software clock
-   implementation, which can have good resolution when it's backed by fast
-   enough timing hardware, as in this example:
-
-<screen><![CDATA[
-$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
-jiffies
-$ dmesg | grep time.c
-time.c: Using 3.579545 MHz WALL PM GTOD PIT/TSC timer.
-time.c: Detected 2400.153 MHz processor.
-$ pg_test_timing
-Testing timing overhead for 3 seconds.
-Per timing duration including loop overhead: 97.75 ns
-Histogram of timing durations:
-  < us   % of total      count
-     1     90.23734   27694571
-     2      9.75277    2993204
-     4      0.00981       3010
-     8      0.00007         22
-    16      0.00000          1
-    32      0.00000          1
-]]></screen></para>
-
  </refsect2>
-
- <refsect2>
-  <title>Clock Hardware and Timing Accuracy</title>
-
-  <para>
-   Collecting accurate timing information is normally done on computers using
-   hardware clocks with various levels of accuracy.  With some hardware the
-   operating systems can pass the system clock time almost directly to
-   programs.  A system clock can also be derived from a chip that simply
-   provides timing interrupts, periodic ticks at some known time interval.  In
-   either case, operating system kernels provide a clock source that hides
-   these details.  But the accuracy of that clock source and how quickly it can
-   return results varies based on the underlying hardware.
-  </para>
-
-  <para>
-   Inaccurate time keeping can result in system instability.  Test any change
-   to the clock source very carefully.  Operating system defaults are sometimes
-   made to favor reliability over best accuracy. And if you are using a virtual
-   machine, look into the recommended time sources compatible with it.  Virtual
-   hardware faces additional difficulties when emulating timers, and there are
-   often per operating system settings suggested by vendors.
-  </para>
-
-  <para>
-   The Time Stamp Counter (TSC) clock source is the most accurate one available
-   on current generation CPUs. It's the preferred way to track the system time
-   when it's supported by the operating system and the TSC clock is
-   reliable. There are several ways that TSC can fail to provide an accurate
-   timing source, making it unreliable. Older systems can have a TSC clock that
-   varies based on the CPU temperature, making it unusable for timing. Trying
-   to use TSC on some older multicore CPUs can give a reported time that's
-   inconsistent among multiple cores. This can result in the time going
-   backwards, a problem this program checks for.  And even the newest systems
-   can fail to provide accurate TSC timing with very aggressive power saving
-   configurations.
-  </para>
-
-  <para>
-   Newer operating systems may check for the known TSC problems and switch to a
-   slower, more stable clock source when they are seen.  If your system
-   supports TSC time but doesn't default to that, it may be disabled for a good
-   reason.  And some operating systems may not detect all the possible problems
-   correctly, or will allow using TSC even in situations where it's known to be
-   inaccurate.
-  </para>
-
-  <para>
-   The High Precision Event Timer (HPET) is the preferred timer on systems
-   where it's available and TSC is not accurate.  The timer chip itself is
-   programmable to allow up to 100 nanosecond resolution, but you may not see
-   that much accuracy in your system clock.
-  </para>
-
-  <para>
-   Advanced Configuration and Power Interface (ACPI) provides a Power
-   Management (PM) Timer, which Linux refers to as the acpi_pm.  The clock
-   derived from acpi_pm will at best provide 300 nanosecond resolution.
-  </para>
-
-  <para>
-   Timers used on older PC hardware include the 8254 Programmable Interval
-   Timer (PIT), the real-time clock (RTC), the Advanced Programmable Interrupt
-   Controller (APIC) timer, and the Cyclone timer.  These timers aim for
-   millisecond resolution.
-  </para>
-  </refsect2>
  </refsect1>

  <refsect1>
@@ -298,6 +196,8 @@ Histogram of timing durations:

   <simplelist type="inline">
    <member><xref linkend="sql-explain"/></member>
+   <member><ulink url="https://wiki.postgresql.org/wiki/Pg_test_timing">Wiki
+   discussion about timing</ulink></member>
   </simplelist>
  </refsect1>
 </refentry>
diff --git a/src/bin/pg_test_timing/pg_test_timing.c b/src/bin/pg_test_timing/pg_test_timing.c
index ce7aad4b25a..64d080335eb 100644
--- a/src/bin/pg_test_timing/pg_test_timing.c
+++ b/src/bin/pg_test_timing/pg_test_timing.c
@@ -9,19 +9,30 @@
 #include <limits.h>

 #include "getopt_long.h"
+#include "port/pg_bitutils.h"
 #include "portability/instr_time.h"

 static const char *progname;

 static unsigned int test_duration = 3;
+static double max_rprct = 99.99;
+
+/* record duration in powers of 2 nanoseconds */
+static long long int histogram[32];
+
+/* record counts of first 1024 durations directly */
+#define NUM_DIRECT 1024
+static long long int direct_histogram[NUM_DIRECT];
+
+/* separately record highest observed duration */
+static int32 largest_diff;
+static long long int largest_diff_count;
+

 static void handle_args(int argc, char *argv[]);
 static uint64 test_timing(unsigned int duration);
 static void output(uint64 loop_count);

-/* record duration in powers of 2 microseconds */
-static long long int histogram[32];
-
 int
 main(int argc, char *argv[])
 {
@@ -44,6 +55,7 @@ handle_args(int argc, char *argv[])
 {
     static struct option long_options[] = {
         {"duration", required_argument, NULL, 'd'},
+        {"cutoff", required_argument, NULL, 'c'},
         {NULL, 0, NULL, 0}
     };

@@ -56,7 +68,7 @@ handle_args(int argc, char *argv[])
     {
         if (strcmp(argv[1], "--help") == 0 || strcmp(argv[1], "-?") == 0)
         {
-            printf(_("Usage: %s [-d DURATION]\n"), progname);
+            printf(_("Usage: %s [-d DURATION] [-c CUTOFF]\n"), progname);
             exit(0);
         }
         if (strcmp(argv[1], "--version") == 0 || strcmp(argv[1], "-V") == 0)
@@ -66,7 +78,7 @@ handle_args(int argc, char *argv[])
         }
     }

-    while ((option = getopt_long(argc, argv, "d:",
+    while ((option = getopt_long(argc, argv, "d:c:",
                                  long_options, &optindex)) != -1)
     {
         switch (option)
@@ -93,6 +105,26 @@ handle_args(int argc, char *argv[])
                 }
                 break;

+            case 'c':
+                errno = 0;
+                max_rprct = strtod(optarg, &endptr);
+
+                if (endptr == optarg || *endptr != '\0' || errno != 0)
+                {
+                    fprintf(stderr, _("%s: invalid argument for option %s\n"),
+                            progname, "--cutoff");
+                    fprintf(stderr, _("Try \"%s --help\" for more information.\n"), progname);
+                    exit(1);
+                }
+
+                if (max_rprct < 0 || max_rprct > 100)
+                {
+                    fprintf(stderr, _("%s: %s must be in range %u..%u\n"),
+                            progname, "--cutoff", 0, 100);
+                    exit(1);
+                }
+                break;
+
             default:
                 fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
                         progname);
@@ -111,7 +143,6 @@ handle_args(int argc, char *argv[])
         exit(1);
     }

-
     printf(ngettext("Testing timing overhead for %u second.\n",
                     "Testing timing overhead for %u seconds.\n",
                     test_duration),
@@ -130,19 +161,19 @@ test_timing(unsigned int duration)
                 end_time,
                 temp;

-    total_time = duration > 0 ? duration * INT64CONST(1000000) : 0;
+    total_time = duration > 0 ? duration * INT64CONST(1000000000) : 0;

     INSTR_TIME_SET_CURRENT(start_time);
-    cur = INSTR_TIME_GET_MICROSEC(start_time);
+    cur = INSTR_TIME_GET_NANOSEC(start_time);

     while (time_elapsed < total_time)
     {
         int32        diff,
-                    bits = 0;
+                    bits;

         prev = cur;
         INSTR_TIME_SET_CURRENT(temp);
-        cur = INSTR_TIME_GET_MICROSEC(temp);
+        cur = INSTR_TIME_GET_NANOSEC(temp);
         diff = cur - prev;

         /* Did time go backwards? */
@@ -154,18 +185,30 @@ test_timing(unsigned int duration)
         }

         /* What is the highest bit in the time diff? */
-        while (diff)
-        {
-            diff >>= 1;
-            bits++;
-        }
+        if (diff > 0)
+            bits = pg_leftmost_one_pos32(diff) + 1;
+        else
+            bits = 0;

         /* Update appropriate duration bucket */
         histogram[bits]++;

+        /* Update direct histogram of time diffs */
+        if (diff < NUM_DIRECT)
+            direct_histogram[diff]++;
+
+        /* Also track the largest observed duration, even if >= NUM_DIRECT */
+        if (diff > largest_diff)
+        {
+            largest_diff = diff;
+            largest_diff_count = 1;
+        }
+        else if (diff == largest_diff)
+            largest_diff_count++;
+
         loop_count++;
         INSTR_TIME_SUBTRACT(temp, start_time);
-        time_elapsed = INSTR_TIME_GET_MICROSEC(temp);
+        time_elapsed = INSTR_TIME_GET_NANOSEC(temp);
     }

     INSTR_TIME_SET_CURRENT(end_time);
@@ -181,28 +224,95 @@ test_timing(unsigned int duration)
 static void
 output(uint64 loop_count)
 {
-    int64        max_bit = 31,
-                i;
-    char       *header1 = _("< us");
-    char       *header2 = /* xgettext:no-c-format */ _("% of total");
-    char       *header3 = _("count");
+    int            max_bit = 31;
+    const char *header1 = _("<= ns");
+    const char *header1b = _("ns");
+    const char *header2 = /* xgettext:no-c-format */ _("% of total");
+    const char *header3 = /* xgettext:no-c-format */ _("running %");
+    const char *header4 = _("count");
     int            len1 = strlen(header1);
     int            len2 = strlen(header2);
     int            len3 = strlen(header3);
+    int            len4 = strlen(header4);
+    double        rprct;
+    bool        stopped = false;

     /* find highest bit value */
     while (max_bit > 0 && histogram[max_bit] == 0)
         max_bit--;

+    /* set minimum column widths */
+    len1 = Max(8, len1);
+    len2 = Max(10, len2);
+    len3 = Max(10, len3);
+    len4 = Max(10, len4);
+
     printf(_("Histogram of timing durations:\n"));
-    printf("%*s   %*s %*s\n",
-           Max(6, len1), header1,
-           Max(10, len2), header2,
-           Max(10, len3), header3);
-
-    for (i = 0; i <= max_bit; i++)
-        printf("%*ld    %*.5f %*lld\n",
-               Max(6, len1), 1l << i,
-               Max(10, len2) - 1, (double) histogram[i] * 100 / loop_count,
-               Max(10, len3), histogram[i]);
+    printf("%*s   %*s %*s %*s\n",
+           len1, header1,
+           len2, header2,
+           len3, header3,
+           len4, header4);
+
+    rprct = 0;
+    for (int i = 0; i <= max_bit; i++)
+    {
+        double        prct = (double) histogram[i] * 100 / loop_count;
+
+        rprct += prct;
+        printf("%*ld   %*.4f %*.4f %*lld\n",
+               len1, (1L << i) - 1,
+               len2, prct,
+               len3, rprct,
+               len4, histogram[i]);
+    }
+
+    printf(_("\nObserved timing durations up to %.4f%%:\n"), max_rprct);
+    printf("%*s   %*s %*s %*s\n",
+           len1, header1b,
+           len2, header2,
+           len3, header3,
+           len4, header4);
+
+    rprct = 0;
+    for (int i = 0; i < NUM_DIRECT; i++)
+    {
+        if (direct_histogram[i])
+        {
+            double        prct = (double) direct_histogram[i] * 100 / loop_count;
+            bool        print_it = !stopped;
+
+            rprct += prct;
+
+            /* if largest diff is < NUM_DIRECT, be sure we print it */
+            if (i == largest_diff)
+            {
+                if (stopped)
+                    printf("...\n");
+                print_it = true;
+            }
+
+            if (print_it)
+                printf("%*d   %*.4f %*.4f %*lld\n",
+                       len1, i,
+                       len2, prct,
+                       len3, rprct,
+                       len4, direct_histogram[i]);
+            if (rprct >= max_rprct)
+                stopped = true;
+        }
+    }
+
+    /* print largest diff when it's outside the array range */
+    if (largest_diff >= NUM_DIRECT)
+    {
+        double        prct = (double) largest_diff_count * 100 / loop_count;
+
+        printf("...\n");
+        printf("%*d   %*.4f %*.4f %*lld\n",
+               len1, largest_diff,
+               len2, prct,
+               len3, 100.0,
+               len4, largest_diff_count);
+    }
 }
diff --git a/src/bin/pg_test_timing/t/001_basic.pl b/src/bin/pg_test_timing/t/001_basic.pl
index 6554cd981af..9912acc052a 100644
--- a/src/bin/pg_test_timing/t/001_basic.pl
+++ b/src/bin/pg_test_timing/t/001_basic.pl
@@ -25,5 +25,22 @@ command_fails_like(
     [ 'pg_test_timing', '--duration' => '0' ],
     qr/\Qpg_test_timing: --duration must be in range 1..4294967295\E/,
     'pg_test_timing: --duration must be in range');
+command_fails_like(
+    [ 'pg_test_timing', '--cutoff' => '101' ],
+    qr/\Qpg_test_timing: --cutoff must be in range 0..100\E/,
+    'pg_test_timing: --cutoff must be in range');
+
+#########################################
+# We obviously can't check for specific output, but we can
+# do a simple run and make sure it produces something.
+
+command_like(
+    [ 'pg_test_timing', '--duration' => '1' ],
+    qr/
+\QTesting timing overhead for 1 second.\E.*
+\QHistogram of timing durations:\E.*
+\QObserved timing durations up to 99.9900%:\E
+/sx,
+    'pg_test_timing: sanity check');

 done_testing();

Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
On Mon, Jul 7, 2025 at 11:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>

> > Also added a flag to select number of direct values to show
>
> Hmm ... I agree with having a way to control the length of that output,
> but I don't think that specifying a count is the most useful way to
> do it.  Particularly with a default of only 10, it seems way too
> likely to cut off important information.
>
> What do you think of instead specifying the limit as the maximum
> running-percentage to print, with a default of say 99.99%?  That
> gives me results like

I agree that percentage covered is a much better metric indeed.
And I am equally ok with a default of either 99.9% or 99.99%.

I briefly thought of it but decided a simple count would be simpler to
explain, especially given some potential corner cases of the percentage
approach.  But as pg_test_timing is not part of the server, we really
do not need to worry about rare edge cases.


> Observed timing durations up to 99.9900%:
>       ns   % of total  running %      count
>       15       4.5452     4.5452    8313178
>       16      58.3785    62.9237  106773354
>       17      33.6840    96.6078   61607584
>       18       3.1151    99.7229    5697480
>       19       0.2638    99.9867     482570
>       20       0.0093    99.9960      17054
>
> In the attached I also made it print the largest observed
> duration, which seems like it might be useful information.

Yes, a useful piece of information indeed.

> As previously threatened, I also added a test case to
> improve the code coverage.

Thanks!



Re: What is a typical precision of gettimeofday()?

From
Vik Fearing
Date:
On 19/03/2024 09:28, Peter Eisentraut wrote:
> Does anyone know what precision we can expect from gettimeofday()?  
> Can we expect the full microsecond precision usually? 


Having read through this thread, is there any chance at all that we 
might be able to implement feature F555, “Enhanced seconds precision”?


I feel we may have dug ourselves into a hole with integer timestamps 
that span ridiculously long ranges.

-- 

Vik Fearing




Re: What is a typical precision of gettimeofday()?

From
Vik Fearing
Date:
On 08/07/2025 02:37, Tom Lane wrote:
> Vik Fearing <vik@postgresfriends.org> writes:
>> Having read through this thread, is there any chance at all that we
>> might be able to implement feature F555, “Enhanced seconds precision”?
> Don't see how we could do that without an on-disk compatibility break
> for timestamps.


Yeah, I don't see how either.


> (If we did decide to break compatibility, my own first priority
> would be to make timestamptz actually include a timezone ...)


That comes with its own set of issues, but I don't want to hijack this 
thread more than I already have.

-- 

Vik Fearing




Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
BTW, returning to the original topic of this thread:

The new exact-delays table from pg_test_timing is really quite
informative.  For example, on my M4 Macbook:

Observed timing durations up to 99.9900%:
      ns   % of total  running %      count
       0      62.2124    62.2124  118127987
      41      12.5826    74.7951   23891661
      42      25.1653    99.9604   47783489
      83       0.0194    99.9797      36744
      84       0.0096    99.9893      18193
     125       0.0020    99.9913       3784
...
   31042       0.0000   100.0000          1

The fact that the clock tick is about 40ns is extremely 
obvious in this presentation.

Even more interesting is what I got from an ancient PPC Macbook
(mamba's host, running NetBSD):

Testing timing overhead for 3 seconds.
Per loop time including overhead: 731.26 ns
...
Observed timing durations up to 99.9900%:
      ns   % of total  running %      count
     705      39.9162    39.9162    1637570
     706      17.6040    57.5203     722208
     759      18.6797    76.2000     766337
     760      23.7851    99.9851     975787
     813       0.0002    99.9853          9
     814       0.0004    99.9857         17
     868       0.0001    99.9858          4
     922       0.0001    99.9859          3
...
  564950       0.0000   100.0000          1

I had previously reported that that machine had microsecond
timing precision, but this shows that the real problem is that
it takes most of a microsecond to go 'round the timing loop.
It seems clear that the system clock ticks about every 50ns,
even on this decades-old hardware.
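
(For anyone who wants to reproduce this outside the tool, here is a
minimal sketch of the same delta-sampling idea --- not pg_test_timing
itself --- using POSIX clock_gettime():)

#include <stdio.h>
#include <stdint.h>
#include <time.h>

/*
 * Read the monotonic clock in a tight loop and tally the exact nanosecond
 * differences between successive readings.  The smallest recurring deltas
 * expose the clock tick.
 */
int
main(void)
{
    enum { NUM_DIRECT = 1024, LOOPS = 10 * 1000 * 1000 };
    static long counts[NUM_DIRECT];
    struct timespec ts;
    int64_t     prev,
                cur;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    prev = (int64_t) ts.tv_sec * 1000000000 + ts.tv_nsec;

    for (int i = 0; i < LOOPS; i++)
    {
        clock_gettime(CLOCK_MONOTONIC, &ts);
        cur = (int64_t) ts.tv_sec * 1000000000 + ts.tv_nsec;
        if (cur - prev >= 0 && cur - prev < NUM_DIRECT)
            counts[cur - prev]++;
        prev = cur;
    }

    for (int d = 0; d < NUM_DIRECT; d++)
        if (counts[d])
            printf("%4d ns: %ld\n", d, counts[d]);
    return 0;
}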

            regards, tom lane



Re: What is a typical precision of gettimeofday()?

From
Andrey Borodin
Date:

> On 8 Jul 2025, at 23:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> The fact that the clock tick is about 40ns is extremely
> obvious in this presentation.

FWIW, while working on UUID v7, Masahiko found that if we try to read real time with clock_gettime(CLOCK_REALTIME, ...),
the measurement is truncated to microseconds.


Best regards, Andrey Borodin.


Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Andrey Borodin <x4mmm@yandex-team.ru> writes:
>> On 8 Jul 2025, at 23:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> The fact that the clock tick is about 40ns is extremely
>> obvious in this presentation.

> FWIW, while working on UUID v7, Masahiko found that if we try to read real time with clock_gettime(CLOCK_REALTIME, ...),
> the measurement is truncated to microseconds.

Yeah.  Bear in mind that instr_time.h uses CLOCK_MONOTONIC_RAW on
macOS, so that's what we're looking at here.
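
To see what a given clock ID merely advertises as its granularity,
a quick sketch with POSIX clock_getres() (illustrative; the advertised
resolution often differs from the step sizes actually observed):

#include <stdio.h>
#include <time.h>

/*
 * Print the nominal resolution of a few clock IDs.  What clock_getres()
 * reports is only the advertised granularity; the deltas measured by
 * pg_test_timing show what you actually get.
 */
int
main(void)
{
    struct
    {
        clockid_t   id;
        const char *name;
    }           clocks[] = {
        {CLOCK_REALTIME, "CLOCK_REALTIME"},
        {CLOCK_MONOTONIC, "CLOCK_MONOTONIC"},
#ifdef CLOCK_MONOTONIC_RAW
        {CLOCK_MONOTONIC_RAW, "CLOCK_MONOTONIC_RAW"},
#endif
    };

    for (int i = 0; i < (int) (sizeof(clocks) / sizeof(clocks[0])); i++)
    {
        struct timespec res;

        if (clock_getres(clocks[i].id, &res) == 0)
            printf("%-20s %ld.%09ld s\n", clocks[i].name,
                   (long) res.tv_sec, res.tv_nsec);
    }
    return 0;
}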

            regards, tom lane



Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
On Tue, Jul 8, 2025 at 8:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> BTW, returning to the original topic of this thread:
>
> The new exact-delays table from pg_test_timing is really quite
> informative.

Maybe we should collect some of it in the PostgreSQL Wiki for easy reference?

I had some interesting results with a RISC-V SBC which were similar
to ARM, but seemed to indicate that the chosen "CPU CLOCK" used by the
rdtsc-equivalent instruction was counting in exact multiples of a
nanosecond.

And "the rtdsc-equivalent instruction" on both ARM and RISC-V is
reading a special register which exposes the faked "clock counter".
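
On AArch64, for example, that register is the virtual counter; a
minimal sketch of reading it directly (AArch64-only, illustrative):

#include <stdio.h>
#include <stdint.h>

/*
 * AArch64-only sketch: read the virtual counter (cntvct_el0) and its
 * frequency (cntfrq_el0) directly, the way the kernel's clock source
 * does.  The counter frequency is typically far below the CPU clock,
 * which is why successive readings advance in multi-nanosecond steps.
 */
#if defined(__aarch64__)
static inline uint64_t
read_virtual_counter(void)
{
    uint64_t    val;

    __asm__ volatile("mrs %0, cntvct_el0" : "=r" (val));
    return val;
}

static inline uint64_t
counter_frequency(void)
{
    uint64_t    freq;

    __asm__ volatile("mrs %0, cntfrq_el0" : "=r" (freq));
    return freq;
}

int
main(void)
{
    uint64_t    freq = counter_frequency();
    uint64_t    t0 = read_virtual_counter();
    uint64_t    t1 = read_virtual_counter();

    printf("counter freq: %llu Hz, one read took ~%llu ns\n",
           (unsigned long long) freq,
           (unsigned long long) ((t1 - t0) * 1000000000 / freq));
    return 0;
}
#endif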


> For example, on my M4 Macbook:
>
> Observed timing durations up to 99.9900%:
>       ns   % of total  running %      count
>        0      62.2124    62.2124  118127987
>       41      12.5826    74.7951   23891661
>       42      25.1653    99.9604   47783489
>       83       0.0194    99.9797      36744
>       84       0.0096    99.9893      18193
>      125       0.0020    99.9913       3784
> ...
>    31042       0.0000   100.0000          1
>
> The fact that the clock tick is about 40ns is extremely
> obvious in this presentation.
>
> Even more interesting is what I got from an ancient PPC Macbook
> (mamba's host, running NetBSD):
>
> Testing timing overhead for 3 seconds.
> Per loop time including overhead: 731.26 ns
> ...
> Observed timing durations up to 99.9900%:
>       ns   % of total  running %      count
>      705      39.9162    39.9162    1637570
>      706      17.6040    57.5203     722208
>      759      18.6797    76.2000     766337
>      760      23.7851    99.9851     975787
>      813       0.0002    99.9853          9
>      814       0.0004    99.9857         17
>      868       0.0001    99.9858          4
>      922       0.0001    99.9859          3

Do we have a fencepost error in the limit code, so that it stops before
printing out the row that crosses the 99.9900% limit?

The docs in your latest patch indicated that it prints rows until the
running percentage reaches or exceeds 99.9900%.

---
Hannu



Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Hannu Krosing <hannuk@google.com> writes:
> On Tue, Jul 8, 2025 at 8:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Even more interesting is what I got from an ancient PPC Macbook
>> (mamba's host, running NetBSD):
>> 
>> Testing timing overhead for 3 seconds.
>> Per loop time including overhead: 731.26 ns
>> ...
>> Observed timing durations up to 99.9900%:
>> ns   % of total  running %      count
>> 705      39.9162    39.9162    1637570
>> 706      17.6040    57.5203     722208
>> 759      18.6797    76.2000     766337
>> 760      23.7851    99.9851     975787
>> 813       0.0002    99.9853          9
>> 814       0.0004    99.9857         17
>> 868       0.0001    99.9858          4
>> 922       0.0001    99.9859          3

> Do we have a fencepost error in the limit code so that it stops before
> printing out the 99.9900% limit row ?

No, I think what's happening there is that we get to NUM_DIRECT before
reaching the 99.99% mark.  Running the test a bit longer, I do get a
hit at the next plausible 50ns step:

$ ./pg_test_timing -d 10
Testing timing overhead for 10 seconds.
Per loop time including overhead: 729.79 ns
Histogram of timing durations:
   <= ns   % of total  running %      count
       0       0.0000     0.0000          0
       1       0.0000     0.0000          0
       3       0.0000     0.0000          0
       7       0.0000     0.0000          0
      15       0.0000     0.0000          0
      31       0.0000     0.0000          0
      63       0.0000     0.0000          0
     127       0.0000     0.0000          0
     255       0.0000     0.0000          0
     511       0.0000     0.0000          0
    1023      99.9879    99.9879   13700887
    2047       0.0000    99.9880          2
    4095       0.0063    99.9942        859
    8191       0.0019    99.9962        267
   16383       0.0017    99.9978        227
   32767       0.0012    99.9990        166
   65535       0.0001    99.9992         16
  131071       0.0007    99.9998         90
  262143       0.0000    99.9998          5
  524287       0.0001    99.9999         11
 1048575       0.0001   100.0000         10

Observed timing durations up to 99.9900%:
      ns   % of total  running %      count
     705      40.7623    40.7623    5585475
     706      17.9732    58.7355    2462787
     759      18.1392    76.8747    2485525
     760      23.1129    99.9876    3167060
     813       0.0000    99.9877          5
     814       0.0002    99.9878         23
     868       0.0000    99.9879          5
     869       0.0000    99.9879          1
     922       0.0000    99.9879          3
     923       0.0000    99.9879          2
     976       0.0000    99.9879          1
...
  625444       0.0000   100.0000          1

and the next step after that would be 1026 ns, which is past
the NUM_DIRECT array size.

I considered raising NUM_DIRECT some more, but I think it'd be
overkill.  This machine is surely an order of magnitude slower
than anything anyone would consider of practical interest today.
Just for fun, though, I tried a run with NUM_DIRECT = 10240:

$ ./pg_test_timing -d 10
Testing timing overhead for 10 seconds.
Per loop time including overhead: 729.23 ns
Histogram of timing durations:
   <= ns   % of total  running %      count
       0       0.0000     0.0000          0
       1       0.0000     0.0000          0
       3       0.0000     0.0000          0
       7       0.0000     0.0000          0
      15       0.0000     0.0000          0
      31       0.0000     0.0000          0
      63       0.0000     0.0000          0
     127       0.0000     0.0000          0
     255       0.0000     0.0000          0
     511       0.0000     0.0000          0
    1023      99.9878    99.9878   13711494
    2047       0.0000    99.9878          5
    4095       0.0062    99.9941        854
    8191       0.0021    99.9962        289
   16383       0.0017    99.9979        236
   32767       0.0011    99.9990        153
   65535       0.0002    99.9992         24
  131071       0.0006    99.9998         85
  262143       0.0001    99.9999          8
  524287       0.0001    99.9999          9
 1048575       0.0001   100.0000          7
 2097151       0.0000   100.0000          0
 4194303       0.0000   100.0000          0
 8388607       0.0000   100.0000          0
16777215       0.0000   100.0000          0
33554431       0.0000   100.0000          1
67108863       0.0000   100.0000          1

Observed timing durations up to 99.9900%:
      ns   % of total  running %      count
     705      50.3534    50.3534    6905051
     706      22.1988    72.5522    3044153
     759      12.0613    84.6135    1653990
     760      15.3732    99.9867    2108150
     813       0.0000    99.9867          2
     814       0.0002    99.9869         27
     868       0.0006    99.9875         85
     869       0.0000    99.9876          2
     922       0.0001    99.9877         20
     923       0.0001    99.9878          9
     976       0.0000    99.9878          2
     977       0.0000    99.9878          3
    1031       0.0000    99.9878          4
    1248       0.0000    99.9878          1
    2550       0.0002    99.9880         26
    2604       0.0008    99.9889        114
    2605       0.0002    99.9891         30
    2658       0.0005    99.9896         75
    2659       0.0004    99.9901         61
...
65362171       0.0000   100.0000          1

This is probably showing something interesting about the
behavior of NetBSD's scheduler, but I dunno what exactly.

            regards, tom lane



Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Hannu Krosing <hannuk@google.com> writes:
> On Tue, Jul 8, 2025 at 10:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> No, I think what's happening there is that we get to NUM_DIRECT before
>> reaching the 99.99% mark.

> Makes sense.
> Should we change the header to something like
> "Showing values covering up to 99.9900% of total observed timing
> durations below 1 us. "

I think that'd just confuse people more not less.

> And maybe 10,001 would be a better value for collection, especially on
> older machines which could in fact have worse timer resolution?

I don't have any objection to boosting NUM_DIRECT to 10K or so.
It will cause the stats display loop to take microscopically longer,
but that shouldn't matter, even on slow machines.

One other thing that bothers me as I look at the output is

    Per loop time including overhead: 731.26 ns

That's stated in a way that makes it sound like that is a
very solid number, when in fact it's just the average.
We see from these test cases that there are frequently
a few outliers that are far from the average.  I'm tempted
to rephrase as

    Average loop time including overhead: 731.26 ns

or some variant of that.  Thoughts?

            regards, tom lane



Re: What is a typical precision of gettimeofday()?

From
Laurenz Albe
Date:
On Tue, 2025-07-08 at 18:17 -0400, Tom Lane wrote:
> One other thing that bothers me as I look at the output is
>
>     Per loop time including overhead: 731.26 ns
>
> That's stated in a way that makes it sound like that is a
> very solid number, when in fact it's just the average.
> We see from these test cases that there are frequently
> a few outliers that are far from the average.  I'm tempted
> to rephrase as
>
>     Average loop time including overhead: 731.26 ns
>
> or some variant of that.  Thoughts?

I think that is a good idea.

Yours,
Laurenz Albe



Re: What is a typical precision of gettimeofday()?

From
Hannu Krosing
Date:
Yes, this should say average.

On Wed, Jul 9, 2025 at 8:42 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:
>
> On Tue, 2025-07-08 at 18:17 -0400, Tom Lane wrote:
> > One other thing that bothers me as I look at the output is
> >
> >       Per loop time including overhead: 731.26 ns
> >
> > That's stated in a way that makes it sound like that is a
> > very solid number, when in fact it's just the average.
> > We see from these test cases that there are frequently
> > a few outliers that are far from the average.  I'm tempted
> > to rephrase as
> >
> >       Average loop time including overhead: 731.26 ns
> >
> > or some variant of that.  Thoughts?
>
> I think that is a good idea.
>
> Yours,
> Laurenz Albe



Re: What is a typical precision of gettimeofday()?

From
Tom Lane
Date:
Hannu Krosing <hannuk@google.com> writes:
> Yes, this should say average

OK, I tweaked that and increased NUM_DIRECT to 10000,
along with a couple of nitpicky code improvements.

            regards, tom lane