Rate Limiting
The PassEntry API implements rate limiting to ensure service stability and fair usage across all users. Rate limits are applied separately to each combination of endpoint and HTTP method. For example, you can make up to 10 GET requests per second to /passes and, at the same time, 10 POST requests per second to the same endpoint.
Rate Limit Implementation
We use a sliding window algorithm to track and enforce rate limits. While the rate limit is 10 requests per second, we recommend implementing a buffer in your applications to account for network latency and potential timing variations.
As a best practice, limit your traffic to 9 requests per second, leaving headroom for retries and network delays. This helps prevent edge cases where requests are counted in different time windows.
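The sketch below shows one way to keep client traffic under that 9 requests per second ceiling. It assumes a fetch-based client; the helper name and slot-reservation approach are illustrative and not part of any PassEntry SDK.

```typescript
// Minimal client-side throttle that spaces requests at least 1000 / 9 ms apart,
// keeping traffic under the recommended 9 requests per second.
// The helper name and fetch-based usage are illustrative, not part of any PassEntry SDK.
const MIN_INTERVAL_MS = 1000 / 9;

let lastScheduled = 0; // timestamp (ms) reserved by the most recently scheduled request

async function throttledFetch(url: string, init?: RequestInit): Promise<Response> {
  // Reserve the next free slot synchronously, so concurrent callers queue behind each other.
  const now = Date.now();
  lastScheduled = Math.max(now, lastScheduled + MIN_INTERVAL_MS);
  const waitMs = lastScheduled - now;

  if (waitMs > 0) {
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
  return fetch(url, init);
}
```

Swapping throttledFetch in for fetch at your call sites smooths out bursts without any other changes to request logic.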
Response Headers
The API includes rate limit information in the response headers:
| Header | Description |
| --- | --- |
| x-rate-limit-limit | Maximum number of requests allowed per second (10) |
| x-rate-limit-remaining | Number of requests remaining in the current window |
| x-rate-limit-reset | Unix timestamp when the rate limit window resets |
| x-rate-limit-url | The endpoint path this rate limit applies to |
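These headers can be read directly from each response to decide whether it is safe to keep sending requests. The helper below is a sketch that assumes a fetch Response object and that x-rate-limit-reset is a Unix timestamp in seconds; it is not part of any official client.

```typescript
// Sketch: read the documented rate limit headers from a fetch Response.
// Assumes x-rate-limit-reset is a Unix timestamp in seconds.
function remainingBudget(response: Response): { remaining: number; resetsAt: Date | null } {
  const remaining = Number(response.headers.get("x-rate-limit-remaining") ?? "0");
  const reset = response.headers.get("x-rate-limit-reset");
  const resetsAt = reset !== null ? new Date(Number(reset) * 1000) : null;
  return { remaining, resetsAt };
}
```

Pausing when remaining reaches zero until resetsAt has passed avoids tripping the limit in the first place.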
Rate Limit Exceeded
When you exceed the rate limit, the API responds with a 429 Too Many Requests status code and includes an additional header:
| Header | Description |
| --- | --- |
| x-rate-limit-retry-after | Time in seconds (decimal) to wait before retrying |
Example Rate Limited Response
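A rate limited response carries the 429 status code together with the headers described above. The header values below are purely illustrative, and the response body is omitted:

```
HTTP/1.1 429 Too Many Requests
x-rate-limit-limit: 10
x-rate-limit-remaining: 0
x-rate-limit-reset: 1717171717
x-rate-limit-url: /passes
x-rate-limit-retry-after: 0.85
```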
Handling Rate Limits
Best Practices
- Implement exponential backoff with jitter for retries (see the sketch after this list)
- Cache responses when possible to reduce API calls
- Use bulk operations where available instead of multiple single requests
- Retry requests that fail due to rate limiting, waiting at least the x-rate-limit-retry-after value before each attempt
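A minimal retry sketch combining these practices is shown below. It assumes a fetch-based client; the attempt cap, base delay, and helper name are illustrative choices rather than API requirements.

```typescript
// Sketch: retry on 429 with exponential backoff and jitter,
// preferring the server's x-rate-limit-retry-after hint when present.
// Attempt cap, base delay, and helper name are illustrative choices.
async function fetchWithBackoff(
  url: string,
  init?: RequestInit,
  maxAttempts = 5,
): Promise<Response> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429) {
      return response;
    }

    const retryAfter = response.headers.get("x-rate-limit-retry-after");
    let delayMs: number;
    if (retryAfter !== null) {
      // The API reports a decimal number of seconds; add a little jitter on top.
      delayMs = Number(retryAfter) * 1000 + Math.random() * 100;
    } else {
      // Fall back to exponential backoff with full jitter.
      delayMs = Math.random() * (2 ** attempt * 100);
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Still rate limited after ${maxAttempts} attempts: ${url}`);
}
```

Combining this with a client-side throttle keeps most traffic under the limit and recovers gracefully when a 429 does slip through.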