Introduction

Tyk’s Request Throttling feature provides a mechanism to manage traffic spikes by queuing and automatically retrying client requests that exceed rate limits, rather than immediately rejecting them. This helps protect upstream services from sudden bursts and improves the resilience of API interactions during temporary congestion.

Quick Start

Overview

In this tutorial, we will configure Request Throttling on a Tyk Security Policy to protect a backend service from sudden traffic spikes. We’ll start by defining a basic rate limit on a policy, then enable throttling with specific retry settings to handle bursts exceeding that limit, associate a key with the policy, and finally test the behaviour using simulated traffic. This guide primarily uses the Tyk Dashboard for configuration.

Prerequisites

  • Working Tyk Environment: You need access to a running Tyk instance that includes both the Tyk Gateway and Tyk Dashboard components. For setup instructions using Docker, please refer to the Tyk Quick Start.
  • Curl, seq and xargs: These tools will be used for testing.

Instructions

Create an API

  1. Create an API:
    1. Log in to your Tyk Dashboard.
    2. Navigate to API Management > APIs
    3. Click Add New API
    4. Click Import
    5. Select Import Type as Tyk API
    6. Copy the Tyk OAS definition below into the text box and click Import API to create the API

Configure Policy and Rate Limit

2. Create and Configure a Security Policy with Rate Limiting:
  1. Associate an Access Key with the Policy:
  2. Test the Rate Limit: So far, we’ve created a policy for an API definition and a key that complies with that policy. Before enabling throttling, let’s observe the standard rate limiting behaviour. We’ll send 10 requests in parallel using xargs to simulate a burst that exceeds our configured limit (5 requests per 10 seconds).
    1. Open your terminal.
    2. Execute the following command, replacing <replace-with-key-id> with the API Key ID you saved earlier:
      seq 10 | xargs -n1 -P10 -I {} bash -c 'curl -s -I -H "Authorization: <replace-with-key-id>" http://tyk-gateway.localhost:8080/request-throttling-test/ | head -n 1'
      
    3. Expected Observation: You should see some requests succeed with HTTP/1.1 200 OK and others fail with HTTP/1.1 429 Too Many Requests, as the rate limit is enforced immediately. The order of 200s and 429s may vary depending on processing time, but you will see immediate rejections once the limit is hit.
    Sample Output (Illustrative):
    HTTP/1.1 429 Too Many Requests
    HTTP/1.1 429 Too Many Requests
    HTTP/1.1 429 Too Many Requests
    HTTP/1.1 429 Too Many Requests
    HTTP/1.1 429 Too Many Requests
    HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    

Configure Throttling

Now that the policy enforces a basic rate limit, we will enable and configure Request Throttling. This adds the queue-and-retry behavior for requests that exceed the limit, preventing immediate rejection and helping to smooth out traffic spikes.
  1. Configure Request Throttling by Updating the Security Policy
    1. Navigate to API Security > Policies in the Tyk Dashboard sidebar
    2. Click on the Request Throttling Policy
    3. Under the 1. Access Rights tab, in the Global Limits and Quota section, set the following values for Throttling:
      • Uncheck the Disable Throttling checkbox
      • Enter 3 into the Throttle retries (or connection attempts) field
      • Enter 5 into the Per (seconds) field
    4. Click the Update button
    5. A pop-up window will appear to confirm the changes. Click Update to close the pop-up
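
If you want to confirm the change outside the UI, you can fetch the policy through the Tyk Dashboard API. The sketch below is illustrative only: the Dashboard URL, Dashboard API key, and policy ID are placeholders you need to substitute. It simply retrieves the policy and prints the two throttling fields you just set (3 retries, 5 seconds apart).

```bash
# Placeholders: substitute your own Dashboard URL, Dashboard API key, and policy ID.
DASH_URL="http://tyk-dashboard.localhost:3000"
AUTH="<replace-with-dashboard-api-key>"
POLICY_ID="<replace-with-policy-id>"

# Fetch the policy and show the throttling fields configured in the UI above.
curl -s -H "Authorization: ${AUTH}" \
  "${DASH_URL}/api/portal/policies/${POLICY_ID}" | \
  grep -Eo '"throttle_(interval|retry_limit)": *-?[0-9.]+'
```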

Testing

  1. Test Request Throttling
    1. Repeat the Test: Open your terminal and execute the same command you used when testing the rate limit earlier:
      seq 10 | xargs -n1 -P10 -I {} bash -c 'curl -s -I -H "Authorization: <replace-with-key-id>" http://tyk-gateway.localhost:8080/request-throttling-test/ | head -n 1'
      
    2. Expected Observation:
      • You will still see the first ~5 requests return HTTP/1.1 200 OK quickly
      • Critically, the subsequent requests (6 through 10) will not immediately return 429. Instead, you should observe a delay before their status lines appear
      • After the delay (throttle_interval), Tyk will retry the queued requests. Some might now succeed (return 200 OK) if the rate limit window allows
      • If a request is retried throttle_retry_limit (3) times and still encounters the rate limit, then it will finally return HTTP/1.1 429 Too Many Requests
      • Overall, you might see more 200 OK responses compared to the previous test, and any 429 responses will appear significantly later
    Sample Output (Illustrative - timing is key):
    HTTP/1.1 200 OK  # Appears quickly
    HTTP/1.1 200 OK  # Appears quickly
    HTTP/1.1 200 OK  # Appears quickly
    HTTP/1.1 200 OK  # Appears quickly
    HTTP/1.1 200 OK  # Appears quickly
    # --- Noticeable pause here ---
    HTTP/1.1 200 OK
    # --- Noticeable pause here ---
    HTTP/1.1 200 OK
    # --- Noticeable pause here ---
    HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    
    (The exact mix of 200s and 429s on the delayed requests depends heavily on timing relative to the 10-second rate limit window reset and the retry attempts).
This comparison clearly shows how Request Throttling changes the behaviour from immediate rejection to queued retries, smoothing the traffic flow and potentially allowing more requests to succeed during bursts.
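
With the values configured above (3 retries, 5 seconds apart), a queued request that never clears the rate limit is held for roughly 3 × 5 = 15 seconds before its final 429. To make that delay visible, you can run a variation of the test that prints each response’s status code together with its total time, using curl’s built-in `-w` formatting (replace the key placeholder as before):

```bash
# Same burst as before, but print the HTTP status and total time per request
# so the queuing delay on throttled requests is easy to spot.
seq 10 | xargs -n1 -P10 -I {} bash -c \
  'curl -s -o /dev/null -w "%{http_code} after %{time_total}s\n" \
     -H "Authorization: <replace-with-key-id>" \
     http://tyk-gateway.localhost:8080/request-throttling-test/'
```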

Configuration Options

Request Throttling is configured within Tyk Security Policies or directly on individual Access Keys. The configuration involves setting two specific fields:
  • throttle_interval: Defines the wait time (in seconds) between retry attempts for a queued request. (Note: Do not set it to 0. If you do, no delay is applied and the request is retried immediately, creating a “busy waiting” scenario that consumes more resources than a positive interval value.)
  • throttle_retry_limit: Sets the maximum number of retry attempts before the request is rejected. (Note: Do not set it to 0; a value of 0 means no throttling is applied to the request.)
To enable throttling, both fields must be set to a value greater than 0.

Disable throttling

The default value for both fields is -1, which means throttling is disabled by default. Setting throttle_interval and throttle_retry_limit to any value smaller than 0 ensures the feature is disabled. You can configure these settings using either the Tyk Dashboard UI or the Tyk Dashboard API.

Configure via UI

The Tyk Dashboard provides a straightforward interface to set throttling parameters on both Security Policies and Access Keys.
The image below shows a policy with throttling configured. Any key using this policy will inherit the throttling settings and behave as follows: wait 2 seconds between retries for queued requests, attempting up to 3 times before failing (so a total of 6 seconds before the client receives another 429 error response).

Configure via API

These are the fields that you can set directly in the Policy object or the Access Key:
``` expandable
{
  // Partial policy/session object fields
  "throttle_interval": 2,       // Wait 2 seconds between retries
  "throttle_retry_limit": 3,    // Attempt a maximum of 3 retries
  // ... more config follows
}
```

<Tabs>

<Tab title="Security Policy">

To update the policy, do the following:
1. Retrieve the policy object using `GET /api/portal/policies/{POLICY_ID}`
2. Add or modify the `throttle_interval` and `throttle_retry_limit` fields within the policy JSON object
3. Update the policy using `PUT /api/portal/policies/{POLICY_ID}` with the modified object, or create a new one using `POST /api/portal/policies/`

**Explanation:**
The above adds throttling to a policy. Any key using this policy will inherit the throttling settings and behave as follows: wait 2 seconds between retries for queued requests, attempting up to 3 times before failing (so a total of 6 seconds before the client receives another 429 error response).
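
Putting those three steps together, a minimal end-to-end sketch using curl and jq (jq is not listed in the prerequisites above; the Dashboard URL, Dashboard API key, and policy ID are placeholders):

```bash
DASH_URL="http://tyk-dashboard.localhost:3000"
AUTH="<replace-with-dashboard-api-key>"
POLICY_ID="<replace-with-policy-id>"

# 1. Retrieve the existing policy object.
curl -s -H "Authorization: ${AUTH}" \
  "${DASH_URL}/api/portal/policies/${POLICY_ID}" > policy.json

# 2. Add or modify the throttling fields (2 second interval, up to 3 retries).
jq '.throttle_interval = 2 | .throttle_retry_limit = 3' policy.json > policy-updated.json

# 3. Update the policy with the modified object.
curl -s -X PUT -H "Authorization: ${AUTH}" -H "Content-Type: application/json" \
  --data @policy-updated.json \
  "${DASH_URL}/api/portal/policies/${POLICY_ID}"
```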

</Tab>

<Tab title="Access Key">

Note: Direct key configuration overrides policy settings only for that specific key.

To update the access key, do the following:
1. Retrieve the key's session object using `GET /api/keys/{KEY_ID}`
2. Add or modify the `throttle_interval` and `throttle_retry_limit` fields within the session object JSON
3. Update the key using `PUT /api/keys/{KEY_ID}` with the modified session object


**Explanation:**
The above adds throttling to a key. Any request made with the key will behave as follows: wait 2 seconds between retries for queued requests, attempting up to 3 times before failing (so a total of 6 seconds before the client receives another 429 error response).
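
The equivalent sketch for a single key, following the same placeholder conventions as the policy example (note: depending on your Dashboard version, the GET response may wrap the session object in a data field, in which case extract it before editing):

```bash
DASH_URL="http://tyk-dashboard.localhost:3000"
AUTH="<replace-with-dashboard-api-key>"
KEY_ID="<replace-with-key-id>"

# 1. Retrieve the key's session object.
curl -s -H "Authorization: ${AUTH}" \
  "${DASH_URL}/api/keys/${KEY_ID}" > key.json

# 2. Add or modify the throttling fields on the session object.
jq '.throttle_interval = 2 | .throttle_retry_limit = 3' key.json > key-updated.json

# 3. Update the key with the modified session object.
curl -s -X PUT -H "Authorization: ${AUTH}" -H "Content-Type: application/json" \
  --data @key-updated.json \
  "${DASH_URL}/api/keys/${KEY_ID}"
```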

</Tab>

</Tabs>

---
## How It Works

```mermaid
flowchart LR
    A[Client Request] --> GW(Tyk Gateway);

    subgraph Rate Limits
        GW --> RL{Rate Limit OK?};
        RL -- Yes --> Q{Quota OK?};
        RL -- No --> T{Throttle Enabled?};
        Q -- Yes --> Fwd[Forward Request];
        Q -- No --> Reject[Reject Request];
    end

    subgraph Throttling Logic
        T -- No --> Reject;
        T -- Yes --> Queue[Queue Request];
        Queue --> Wait[Wait ThrottleInterval];
        Wait --> RetryL{Retry Limit Reached?};
        RetryL -- Yes --> Reject;
        RetryL -- No --> Recheck(Re-evaluate Rate Limit Only);
        %% Loop back to rate limit check only %%
        Recheck --> RL;
    end

    Fwd --> Backend((Upstream Service));
    Backend --> Success((Success Response));
    Success --> Client;
    Reject --> Failure((Failure Response));
    Failure --> Client;
```

Tyk’s Request Throttling intercepts API requests after they have exceeded a configured Rate Limit. Instead of immediately rejecting these requests with a 429 Too Many Requests error (which is the default rate-limiting behaviour), the Gateway temporarily holds them in a queue. After waiting for a specified duration (throttle_interval), Tyk attempts to process the request again, re-checking the rate limit status. This retry cycle repeats until either the request can be successfully processed (if capacity becomes available) or a configured maximum number of retries (throttle_retry_limit) is reached. Only after exhausting all retries does Tyk return the 429 error to the client.

Think of it like trying to access a service with a restriction on how many people can enter per minute (Rate Limit). If you arrive when the per-minute limit is full, standard behaviour is to turn you away immediately. With Throttling enabled, the service instead asks you to wait briefly (the interval) and tries your entry again shortly, checking if the rate limit has freed up capacity, repeating this a few times (the retry limit) before finally turning you away if access is still restricted.

FAQ

**What is Request Throttling in Tyk?**

Request Throttling in Tyk is a mechanism that allows for graceful handling of rate limit violations. Instead of immediately rejecting requests that exceed rate limits, throttling gives them a chance to be retried after a specified delay.

**How does Request Throttling differ from Rate Limiting?**

Rate Limiting is a mechanism to restrict the number of requests a client can make in a given time period (e.g., 100 requests per minute). Request Throttling is an extension of rate limiting that provides a retry mechanism when rate limits are exceeded. Instead of immediately failing with a 429 status code, throttling allows the gateway to wait and retry the request internally.

**Does Request Throttling work with request quotas?**

No, Request Throttling in Tyk is exclusively linked to rate limits and does not work with request quotas. When a quota is exceeded, the request is immediately rejected without any throttling or retry attempts. Throttling is only applied when rate limits are exceeded.

**How does Request Throttling affect response times?**

Request Throttling can increase response times for requests that exceed rate limits, as the gateway will wait for the specified ThrottleInterval between retry attempts. The maximum additional latency would be ThrottleInterval × ThrottleRetryLimit seconds. This trade-off provides better success rates at the cost of potentially longer response times for some requests.
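
For example, with throttle_interval: 2 and throttle_retry_limit: 3 (the values used in the configuration examples above), a request that never clears the rate limit is delayed by up to 2 × 3 = 6 seconds before the final 429 response is returned.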

**Can I monitor throttled requests?**

Yes, Tyk tracks throttled requests in its health check metrics. You can monitor the ThrottledRequestsPS (throttled requests per second) metric to see how often requests are being throttled. Additionally, when a request is throttled, Tyk emits a RateLimitExceeded event that can be captured in your monitoring system.

**Is Request Throttling enabled by default?**

No, Request Throttling is not enabled by default. To enable throttling, you need to explicitly set ThrottleRetryLimit to a value greater than 0 and configure an appropriate ThrottleInterval. These settings can be applied through policies or directly in access keys.