Re: Another bug introduced by fastpath patch - Mailing list pgsql-hackers
From:           Tom Lane
Subject:        Re: Another bug introduced by fastpath patch
Date:
Msg-id:         13105.1385601701@sss.pgh.pa.us
In response to: Re: Another bug introduced by fastpath patch  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses:      Re: Another bug introduced by fastpath patch
List:           pgsql-hackers
I wrote:
> We could still do this if we were willing to actually reject requests
> for session level locks on fast-path-eligible lock types.  At the moment
> that costs us nothing really.  If anyone ever did have a use case, we
> could consider adding the extra logic to support it.

Nope, that *still* doesn't work, because in non-allLocks mode the main
loop won't clear any locks that have been promoted from fastpath to
regular.  Sigh.

For the moment I'm proposing that we just re-fetch the list header after
acquiring the lock.  The attached patch is slightly more verbose than
that, because I took the opportunity to reformulate the while() loop as
a for() loop and thereby eliminate some goto's.

			regards, tom lane


diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 072b276..75df457 100644
*** a/src/backend/storage/lmgr/lock.c
--- b/src/backend/storage/lmgr/lock.c
*************** LockReleaseAll(LOCKMETHODID lockmethodid
*** 2098,2116 ****
  	{
  		LWLockId	partitionLock = FirstLockMgrLock + partition;
  		SHM_QUEUE  *procLocks = &(MyProc->myProcLocks[partition]);
  
! 		proclock = (PROCLOCK *) SHMQueueNext(procLocks, procLocks,
! 											 offsetof(PROCLOCK, procLink));
! 
! 		if (!proclock)
  			continue;			/* needn't examine this partition */
  
  		LWLockAcquire(partitionLock, LW_EXCLUSIVE);
  
! 		while (proclock)
  		{
  			bool		wakeupNeeded = false;
- 			PROCLOCK   *nextplock;
  
  			/* Get link first, since we may unlink/delete this proclock */
  			nextplock = (PROCLOCK *)
--- 2098,2134 ----
  	{
  		LWLockId	partitionLock = FirstLockMgrLock + partition;
  		SHM_QUEUE  *procLocks = &(MyProc->myProcLocks[partition]);
+ 		PROCLOCK   *nextplock;
  
! 		/*
! 		 * If the proclock list for this partition is empty, we can skip
! 		 * acquiring the partition lock.  This optimization is trickier than
! 		 * it looks, because another backend could be in process of adding
! 		 * something to our proclock list due to promoting one of our
! 		 * fast-path locks.  However, any such lock must be one that we
! 		 * decided not to delete above, so it's okay to skip it again now;
! 		 * we'd just decide not to delete it again.  We must, however, be
! 		 * careful to re-fetch the list header once we've acquired the
! 		 * partition lock, to be sure we see the up to date version.
! 		 *
! 		 * XXX This argument assumes that the locallock table correctly
! 		 * represents all of our fast-path locks.  While allLocks mode
! 		 * guarantees to clean up all of our normal locks regardless of the
! 		 * locallock situation, we lose that guarantee for fast-path locks.
! 		 * This is not ideal.
! 		 */
! 		if (SHMQueueNext(procLocks, procLocks,
! 						 offsetof(PROCLOCK, procLink)) == NULL)
  			continue;			/* needn't examine this partition */
  
  		LWLockAcquire(partitionLock, LW_EXCLUSIVE);
  
! 		for (proclock = (PROCLOCK *) SHMQueueNext(procLocks, procLocks,
! 											   offsetof(PROCLOCK, procLink));
! 			 proclock;
! 			 proclock = nextplock)
  		{
  			bool		wakeupNeeded = false;
  
  			/* Get link first, since we may unlink/delete this proclock */
  			nextplock = (PROCLOCK *)
*************** LockReleaseAll(LOCKMETHODID lockmethodid
*** 2123,2129 ****
  
  			/* Ignore items that are not of the lockmethod to be removed */
  			if (LOCK_LOCKMETHOD(*lock) != lockmethodid)
! 				goto next_item;
  
  			/*
  			 * In allLocks mode, force release of all locks even if locallock
--- 2141,2147 ----
  
  			/* Ignore items that are not of the lockmethod to be removed */
  			if (LOCK_LOCKMETHOD(*lock) != lockmethodid)
! 				continue;
  
  			/*
  			 * In allLocks mode, force release of all locks even if locallock
*************** LockReleaseAll(LOCKMETHODID lockmethodid
*** 2139,2145 ****
  			 * holdMask == 0 and are therefore recyclable
  			 */
  			if (proclock->releaseMask == 0 && proclock->holdMask != 0)
! 				goto next_item;
  
  			PROCLOCK_PRINT("LockReleaseAll", proclock);
  			LOCK_PRINT("LockReleaseAll", lock, 0);
--- 2157,2163 ----
  			 * holdMask == 0 and are therefore recyclable
  			 */
  			if (proclock->releaseMask == 0 && proclock->holdMask != 0)
! 				continue;
  
  			PROCLOCK_PRINT("LockReleaseAll", proclock);
  			LOCK_PRINT("LockReleaseAll", lock, 0);
*************** LockReleaseAll(LOCKMETHODID lockmethodid
*** 2168,2176 ****
  							lockMethodTable,
  							LockTagHashCode(&lock->tag),
  							wakeupNeeded);
- 
- 	next_item:
- 			proclock = nextplock;
  		}						/* loop over PROCLOCKs within this partition */
  
  		LWLockRelease(partitionLock);
--- 2186,2191 ----
*************** PostPrepare_Locks(TransactionId xid)
*** 3142,3160 ****
  	{
  		LWLockId	partitionLock = FirstLockMgrLock + partition;
  		SHM_QUEUE  *procLocks = &(MyProc->myProcLocks[partition]);
  
! 		proclock = (PROCLOCK *) SHMQueueNext(procLocks, procLocks,
! 											 offsetof(PROCLOCK, procLink));
! 
! 		if (!proclock)
  			continue;			/* needn't examine this partition */
  
  		LWLockAcquire(partitionLock, LW_EXCLUSIVE);
  
! 		while (proclock)
  		{
- 			PROCLOCK   *nextplock;
- 
  			/* Get link first, since we may unlink/relink this proclock */
  			nextplock = (PROCLOCK *)
  				SHMQueueNext(procLocks, &proclock->procLink,
--- 3157,3183 ----
  	{
  		LWLockId	partitionLock = FirstLockMgrLock + partition;
  		SHM_QUEUE  *procLocks = &(MyProc->myProcLocks[partition]);
+ 		PROCLOCK   *nextplock;
  
! 		/*
! 		 * If the proclock list for this partition is empty, we can skip
! 		 * acquiring the partition lock.  This optimization is safer than the
! 		 * situation in LockReleaseAll, because we got rid of any fast-path
! 		 * locks during AtPrepare_Locks, so there cannot be any case where
! 		 * another backend is adding something to our lists now.  For safety,
! 		 * though, we code this the same way as in LockReleaseAll.
! 		 */
! 		if (SHMQueueNext(procLocks, procLocks,
! 						 offsetof(PROCLOCK, procLink)) == NULL)
  			continue;			/* needn't examine this partition */
  
  		LWLockAcquire(partitionLock, LW_EXCLUSIVE);
  
! 		for (proclock = (PROCLOCK *) SHMQueueNext(procLocks, procLocks,
! 											   offsetof(PROCLOCK, procLink));
! 			 proclock;
! 			 proclock = nextplock)
  		{
  			/* Get link first, since we may unlink/relink this proclock */
  			nextplock = (PROCLOCK *)
  				SHMQueueNext(procLocks, &proclock->procLink,
*************** PostPrepare_Locks(TransactionId xid)
*** 3166,3172 ****
  
  			/* Ignore VXID locks */
  			if (lock->tag.locktag_type == LOCKTAG_VIRTUALTRANSACTION)
! 				goto next_item;
  
  			PROCLOCK_PRINT("PostPrepare_Locks", proclock);
  			LOCK_PRINT("PostPrepare_Locks", lock, 0);
--- 3189,3195 ----
  
  			/* Ignore VXID locks */
  			if (lock->tag.locktag_type == LOCKTAG_VIRTUALTRANSACTION)
! 				continue;
  
  			PROCLOCK_PRINT("PostPrepare_Locks", proclock);
  			LOCK_PRINT("PostPrepare_Locks", lock, 0);
*************** PostPrepare_Locks(TransactionId xid)
*** 3177,3183 ****
  
  			/* Ignore it if nothing to release (must be a session lock) */
  			if (proclock->releaseMask == 0)
! 				goto next_item;
  
  			/* Else we should be releasing all locks */
  			if (proclock->releaseMask != proclock->holdMask)
--- 3200,3206 ----
  
  			/* Ignore it if nothing to release (must be a session lock) */
  			if (proclock->releaseMask == 0)
! 				continue;
  
  			/* Else we should be releasing all locks */
  			if (proclock->releaseMask != proclock->holdMask)
*************** PostPrepare_Locks(TransactionId xid)
*** 3219,3227 ****
  							   &proclock->procLink);
  
  			PROCLOCK_PRINT("PostPrepare_Locks: updated", proclock);
- 
- 	next_item:
- 			proclock = nextplock;
  		}						/* loop over PROCLOCKs within this partition */
  
  		LWLockRelease(partitionLock);
--- 3242,3247 ----
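
[Editor's note] To make the structural change easier to see outside the
context-diff markup, here is a minimal standalone sketch of the same two
ideas: test the list header without the lock, re-fetch it after acquiring
the lock, and drive the scan with a for() loop whose saved next pointer
lets continue replace the old goto next_item.  The node type, list_head,
and pthread mutex are illustrative stand-ins only, not the
PROCLOCK/SHM_QUEUE/LWLock machinery from lock.c.

/*
 * Standalone sketch only; not PostgreSQL code.  "node", "list_head",
 * and the pthread mutex stand in for PROCLOCK, the per-partition
 * proclock SHM_QUEUE, and the partition LWLock.
 */
#include <pthread.h>
#include <stdlib.h>

typedef struct node
{
	struct node *next;
	int			keep;			/* nonzero: leave this entry in place */
} node;

static node *list_head;			/* shared list header */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static void
release_all(void)
{
	node	   *cur;
	node	   *next;
	node	  **prevp;

	/*
	 * Unlocked emptiness test, analogous to the patch's unlocked
	 * SHMQueueNext() check: if the list looks empty, skip taking the
	 * lock.  Anything another thread is adding concurrently would be
	 * an entry we'd decide to keep anyway.
	 */
	if (list_head == NULL)
		return;

	pthread_mutex_lock(&list_lock);

	/*
	 * Re-fetch the header now that we hold the lock, and save the next
	 * pointer before possibly unlinking the current entry.  Using a
	 * for() loop lets "continue" do what the old code needed a goto to
	 * a next_item label for.
	 */
	for (cur = list_head, prevp = &list_head; cur != NULL; cur = next)
	{
		next = cur->next;

		if (cur->keep)
		{
			prevp = &cur->next;
			continue;			/* skip this entry, keep scanning */
		}

		*prevp = next;			/* unlink and release the entry */
		free(cur);
	}

	pthread_mutex_unlock(&list_lock);
}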