Just an idea: as another option, how about blocking new requests to the target process (e.g., having them fail with an error or return NULL with a warning) if a previous request is still pending? Users can simply retry the request if it fails. IMO, failing quickly seems preferable to getting stuck for a while when there are concurrent requests.
Thank you for the suggestion. I agree that it is better to fail early and avoid waiting for a timeout in such cases. I will add a "pending request" tracker for this in shared memory. That should prevent a concurrent request from being sent while a request for the same backend is still being processed.
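Roughly, what I have in mind is something like the sketch below: one slot per backend in shared memory with an atomic "pending" flag, claimed with a compare-and-exchange before sending a request. The names (MemCtxRequestSlot, TryStartRequest, FinishRequest) are just placeholders for illustration, not what the patch will necessarily use:

#include "postgres.h"
#include "port/atomics.h"

typedef struct MemCtxRequestSlot
{
    pg_atomic_uint32 request_pending;   /* 0 = idle, 1 = request in progress */
} MemCtxRequestSlot;

/* one slot per backend, allocated in shared memory at startup */
static MemCtxRequestSlot *requestSlots;

/*
 * Try to mark a request for the target backend as pending.  Returns false
 * if another request is still being processed, so the caller can fail
 * fast (ERROR, or WARNING + NULL) instead of waiting for a timeout.
 */
static bool
TryStartRequest(int target_slot)
{
    uint32 expected = 0;

    return pg_atomic_compare_exchange_u32(&requestSlots[target_slot].request_pending,
                                          &expected, 1);
}

/* Called once the target backend has published its statistics. */
static void
FinishRequest(int target_slot)
{
    pg_atomic_write_u32(&requestSlots[target_slot].request_pending, 0);
}

The compare-and-exchange ensures only one requester can claim the slot at a time; a requester that loses the race can report that immediately rather than blocking.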
IMO, one downside of throwing an error in such cases is that users might wonder whether they need to take corrective action, even though the issue will resolve itself and they just need to retry. Therefore, issuing a warning or displaying the previously updated statistics might be a better alternative to throwing an error.