From 4a84bb6980fa05ae3bde84179b8427799d6ee538 Mon Sep 17 00:00:00 2001
From: John Naylor
Date: Fri, 3 Feb 2023 16:54:39 +0700
Subject: [PATCH v4 4/4] Rationalize platform support in AllocSetFreeIndex

Previously, we directly tested for HAVE__BUILTIN_CLZ and copied the
internals of pg_leftmost_one_pos32(), with a special fallback that does
less work than the general fallback for that function. In the wake of
, we have MSVC support for bitscan intrinsics, and a more general way
to test for them, so just call pg_leftmost_one_pos32() directly.

On gcc at least, there is no difference in the binary, showing that
compilers can do constant folding across the boundary of an inlined
function.
---
 src/backend/utils/mmgr/aset.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c
index 740729b5d0..d12977b7b5 100644
--- a/src/backend/utils/mmgr/aset.c
+++ b/src/backend/utils/mmgr/aset.c
@@ -289,7 +289,7 @@ AllocSetFreeIndex(Size size)
 	 * or equivalently
 	 *		pg_leftmost_one_pos32(size - 1) - ALLOC_MINBITS + 1
 	 *
-	 * However, rather than just calling that function, we duplicate the
+	 * However, for platforms without intrinsic support, we duplicate the
 	 * logic here, allowing an additional optimization. It's reasonable
 	 * to assume that ALLOC_CHUNK_LIMIT fits in 16 bits, so we can unroll
 	 * the byte-at-a-time loop in pg_leftmost_one_pos32 and just handle
@@ -299,8 +299,8 @@ AllocSetFreeIndex(Size size)
 	 * much trouble.
 	 *----------
 	 */
-#ifdef HAVE__BUILTIN_CLZ
-	idx = 31 - __builtin_clz((uint32) size - 1) - ALLOC_MINBITS + 1;
+#ifdef HAVE_BITSCAN_REVERSE
+	idx = pg_leftmost_one_pos32(size - 1) - ALLOC_MINBITS + 1;
 #else
 	uint32		t,
 				tsize;
-- 
2.39.1