Provider Routing
OpenRouter routes requests to the best available providers for your model. By default, requests are load balanced across the top providers to maximize uptime.
You can customize how your requests are routed using the provider object in the request body for Chat Completions and Completions.
The provider object can contain the following fields:

- order: A list of provider slugs to try in order (e.g. ["openai", "together"]).
- allow_fallbacks: Whether to allow backup providers when your chosen providers are unavailable.
- require_parameters: Only route to providers that support all parameters in your request.
- data_collection: Control whether to use providers that may store data ("allow" or "deny").
- zdr: Restrict routing to only Zero Data Retention endpoints.
- enforce_distillable_text: Restrict routing to models whose authors allow text distillation.
- only: A list of provider slugs to allow for this request.
- ignore: A list of provider slugs to skip for this request.
- quantizations: A list of quantization levels to filter by (e.g. ["int4", "int8"]).
- sort: Sort the providers by price, throughput, or latency instead of load balancing.
- max_price: The maximum pricing you are willing to pay for this request.

Each of these fields is described in detail in the sections below.
EU data residency (Enterprise)
OpenRouter supports EU in-region routing for enterprise customers. When enabled, prompts and completions are processed entirely within the EU. Learn more in our Privacy docs. To contact our enterprise team, fill out our enterprise contact form.
Price-Based Load Balancing (Default Strategy)
For each model in your request, OpenRouter’s default behavior is to load balance requests across providers, prioritizing price.
If you are more sensitive to throughput than price, you can use the sort field to explicitly prioritize throughput.
When you send a request with tools or tool_choice, OpenRouter will only route to providers that support tool use. Similarly, if you set max_tokens, OpenRouter will only route to providers that can support a response of that length.
Here is OpenRouter’s default load balancing strategy:
- Prioritize providers that have not seen significant outages in the last 30 seconds.
- For the stable providers, look at the lowest-cost candidates and select one weighted by inverse square of the price (example below).
- Use the remaining providers as fallbacks.
A Load Balancing Example
Suppose Provider A costs $1 per million tokens, Provider B costs $2, and Provider C costs $3, and Provider B has recently seen a few outages. Then:
- Your request is 9x more likely to be routed first to Provider A than to Provider C, because Provider A's weight is (1/1)² = 1 while Provider C's is (1/3)² = 1/9 (inverse square of the price).
- If Provider A fails, then Provider C will be tried next.
- If Provider C also fails, Provider B will be tried last.
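For concreteness, here is a minimal sketch of that weighting in TypeScript. This is an illustration of the inverse-square rule described above, not OpenRouter's actual implementation:

```typescript
// Illustrative sketch of inverse-square price weighting.
// Provider B is excluded from the stable set due to its recent outages.
const stableProviders = [
  { name: "A", pricePerMillionTokens: 1 },
  { name: "C", pricePerMillionTokens: 3 },
];

// Weight each stable provider by the inverse square of its price.
const weights = stableProviders.map((p) => 1 / p.pricePerMillionTokens ** 2); // [1, 1/9]
const total = weights.reduce((sum, w) => sum + w, 0);

// Normalized selection probabilities: A = 0.9, C = 0.1 (A is 9x more likely).
const probabilities = weights.map((w) => w / total);
console.log(probabilities);
```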
If you have sort or order set in your provider preferences, load balancing will be disabled.
Provider Sorting
As described above, OpenRouter load balances based on price, while taking uptime into account.
If you instead want to explicitly prioritize a particular provider attribute, you can include the sort field in the provider preferences. Load balancing will be disabled, and the router will try providers in order.
The three sort options are:
"price": prioritize lowest price"throughput": prioritize highest throughput"latency": prioritize lowest latency
To always prioritize low prices, and not apply any load balancing, set sort to "price".
To always prioritize low latency, and not apply any load balancing, set sort to "latency".
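For example, here is a minimal sketch of a request that disables load balancing and always sorts by throughput (the model slug and prompt are illustrative):

```typescript
// Sketch: disable load balancing and sort providers by throughput.
// The model slug and prompt are illustrative.
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "meta-llama/llama-3.3-70b-instruct",
    messages: [{ role: "user", content: "Hello" }],
    provider: {
      sort: "throughput", // or "price" / "latency"
    },
  }),
});
console.log(await response.json());
```

Later examples in this document show only the request body; they are sent the same way.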
Nitro Shortcut
You can append :nitro to any model slug as a shortcut to sort by throughput. This is exactly equivalent to setting provider.sort to "throughput".
Floor Price Shortcut
You can append :floor to any model slug as a shortcut to sort by price. This is exactly equivalent to setting provider.sort to "price".
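A quick sketch of both shortcuts (the base model slug is illustrative):

```typescript
// The :nitro suffix is equivalent to setting provider.sort to "throughput".
const nitroBody = {
  model: "meta-llama/llama-3.3-70b-instruct:nitro",
  messages: [{ role: "user", content: "Hello" }],
};

// The :floor suffix is equivalent to setting provider.sort to "price".
const floorBody = {
  model: "meta-llama/llama-3.3-70b-instruct:floor",
  messages: [{ role: "user", content: "Hello" }],
};
```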
Advanced Sorting with Partition
When using model fallbacks, the sort field can be specified as an object with additional options to control how endpoints are sorted across multiple models.
By default, when you specify multiple models (fallbacks), OpenRouter groups endpoints by model before sorting. This means the primary model’s endpoints are always tried first, regardless of their performance characteristics. Setting partition to "none" removes this grouping, allowing endpoints to be sorted globally across all models.
preferred_max_latency and preferred_min_throughput do not guarantee you will get a provider or model with this performance level. However, providers and models that hit your thresholds will be preferred. Specifying these preferences should therefore never prevent your request from being executed. This is different from max_price, which will prevent your request from running if no provider is available at or below that price.
Use Case 1: Route to the Highest Throughput or Lowest Latency Model
When you have multiple acceptable models and want to use whichever has the best performance right now, use partition: "none" with throughput or latency sorting. This is useful when you care more about speed than using a specific model.
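A minimal sketch of such a request body follows. The model slugs are illustrative, and the object form of sort is assumed to take by and partition keys:

```typescript
// Sketch: sort endpoints globally across all listed models by current throughput.
// Model slugs are illustrative; the object form of `sort` is an assumed shape.
const body = {
  models: [
    "anthropic/claude-3.5-sonnet",
    "openai/gpt-4o",
    "google/gemini-2.0-flash-001",
  ],
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    sort: {
      by: "throughput",
      partition: "none", // do not group endpoints by model before sorting
    },
  },
};
```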
In this example, OpenRouter will route to whichever endpoint across all three models currently has the highest throughput, rather than always trying Claude first.
Performance Thresholds
You can set minimum throughput or maximum latency thresholds to filter endpoints. Endpoints that don’t meet these thresholds are deprioritized (moved to the end of the list) rather than excluded entirely.
How Percentiles Work
OpenRouter tracks latency and throughput metrics for each model and provider using percentile statistics calculated over a rolling 5-minute window. The available percentiles are:
- p50 (median): 50% of requests perform better than this value
- p75: 75% of requests perform better than this value
- p90: 90% of requests perform better than this value
- p99: 99% of requests perform better than this value
Higher percentiles (like p90 or p99) give you more confidence about worst-case performance, while lower percentiles (like p50) reflect typical performance. For example, if a model-provider pair has a p90 latency of 2 seconds, that means 90% of requests complete in under 2 seconds.
When you specify multiple percentile cutoffs, all specified cutoffs must be met for a model and provider to be in the preferred group. This allows you to set both typical and worst-case performance requirements.
When to Use Percentile Preferences
Percentile-based routing is useful when you need predictable performance characteristics:
- Real-time applications: Use p90 or p99 latency thresholds to ensure consistent response times for user-facing features
- Batch processing: Use p50 throughput thresholds when you care more about average performance than worst-case scenarios
- SLA compliance: Use multiple percentile cutoffs to ensure providers meet your service level agreements across different performance tiers
- Cost optimization: Combine with sort: "price" to get the cheapest provider that still meets your performance requirements
Use Case 2: Find the Cheapest Model Meeting Performance Requirements
Combine partition: "none" with performance thresholds to find the cheapest option across multiple models that meets your performance requirements. This is useful when you have a performance floor but want to minimize costs.
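A minimal sketch of such a request body (model slugs illustrative; the percentile field shape is assumed from the surrounding description):

```typescript
// Sketch: cheapest endpoint across all models that sustains at least
// 50 tokens/second at the p90 level. Field shapes are assumed.
const body = {
  models: [
    "anthropic/claude-3.5-sonnet",
    "openai/gpt-4o",
    "google/gemini-2.0-flash-001",
  ],
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    sort: {
      by: "price",
      partition: "none",
      preferred_min_throughput: { p90: 50 }, // tokens/second
    },
  },
};
```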
In this example, OpenRouter will find the cheapest model and provider across all three models that has at least 50 tokens/second throughput at the p90 level (meaning 90% of requests achieve this throughput or better). Models and providers below this threshold are still available as fallbacks if all preferred options fail.
You can also use preferred_max_latency to set a maximum acceptable latency:
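For example (a sketch; the field shape mirrors preferred_min_throughput, and the latency unit is assumed to be seconds):

```typescript
// Sketch: prefer endpoints whose p90 latency is at most 2 seconds.
const body = {
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    sort: {
      by: "price",
      preferred_max_latency: { p90: 2 }, // seconds (assumed unit)
    },
  },
};
```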
Example: Using Multiple Percentile Cutoffs
You can specify multiple percentile cutoffs to set both typical and worst-case performance requirements. All specified cutoffs must be met for a model and provider to be in the preferred group.
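A sketch combining typical (p50) and worst-case (p90) cutoffs; the numbers are illustrative:

```typescript
// Sketch: require both typical (p50) and worst-case (p90) performance.
// All specified cutoffs must be met for an endpoint to be preferred.
const body = {
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    sort: {
      by: "price",
      preferred_min_throughput: { p50: 80, p90: 50 }, // tokens/second
      preferred_max_latency: { p50: 1, p90: 2 }, // seconds
    },
  },
};
```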
Use Case 3: Maximize BYOK Usage Across Models
If you use Bring Your Own Key (BYOK) and want to maximize usage of your own API keys, partition: "none" can help. When your primary model doesn’t have a BYOK provider available, OpenRouter can route to a fallback model that does support BYOK.
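A sketch of such a request (model slugs illustrative; assumes a BYOK key is configured for OpenAI but not Anthropic):

```typescript
// Sketch: allow BYOK prioritization to work across model boundaries.
// Assumes a BYOK key is configured for OpenAI but not for Anthropic.
const body = {
  models: ["anthropic/claude-3.5-sonnet", "openai/gpt-4o"],
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    sort: {
      by: "price",
      partition: "none", // the key setting: endpoints are not grouped by model
    },
  },
};
```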
In this example, if you have a BYOK key configured for OpenAI but not for Anthropic, OpenRouter can route to the GPT-4o endpoint using your own key even though Claude is listed first. Without partition: "none", the router would always try Claude’s endpoints first before falling back to GPT-4o.
BYOK endpoints are automatically prioritized when you have API keys configured for a provider. The partition: "none" setting allows this prioritization to work across model boundaries.
Ordering Specific Providers
You can set the providers that OpenRouter will prioritize for your request using the order field.
The router will prioritize providers in this list, and in this order, for the model you’re using. If you don’t set this field, the router will load balance across the top providers to maximize uptime.
You can use the copy button next to provider names on model pages to get the exact provider slug, including any variants like “/turbo”. See Targeting Specific Provider Endpoints for details.
OpenRouter will try them one at a time and proceed to other providers if none are operational. If you don’t want to allow any other providers, you should disable fallbacks as well.
Example: Specifying providers with fallbacks
This example skips over OpenAI (which doesn’t host Mixtral), tries Together, and then falls back to the normal list of providers on OpenRouter:
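A sketch of the request body:

```typescript
// Sketch: try OpenAI first (skipped, since it does not host Mixtral),
// then Together, then fall back to the normal provider list.
const body = {
  model: "mistralai/mixtral-8x7b-instruct",
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    order: ["openai", "together"],
  },
};
```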
Example: Specifying providers with fallbacks disabled
Here’s an example with allow_fallbacks set to false that skips over OpenAI (which doesn’t host Mixtral), tries Together, and then fails if Together fails:
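A sketch of the request body:

```typescript
// Sketch: same provider order, but fail instead of falling back
// to other providers if Together fails.
const body = {
  model: "mistralai/mixtral-8x7b-instruct",
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    order: ["openai", "together"],
    allow_fallbacks: false,
  },
};
```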
Targeting Specific Provider Endpoints
Each provider on OpenRouter may host multiple endpoints for the same model, such as a default endpoint and a specialized “turbo” endpoint. To target a specific endpoint, you can use the copy button next to the provider name on the model detail page to obtain the exact provider slug.
For example, DeepInfra offers DeepSeek R1 through multiple endpoints:
- Default endpoint with slug deepinfra
- Turbo endpoint with slug deepinfra/turbo
By copying the exact provider slug and using it in your request’s order array, you can ensure your request is routed to the specific endpoint you want:
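A sketch of the request body:

```typescript
// Sketch: pin the request to DeepInfra's Turbo endpoint for DeepSeek R1.
const body = {
  model: "deepseek/deepseek-r1",
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    order: ["deepinfra/turbo"],
    allow_fallbacks: false, // optional: fail rather than route elsewhere
  },
};
```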
This approach is especially useful when you want to consistently use a specific variant of a model from a particular provider.
Requiring Providers to Support All Parameters
You can restrict requests only to providers that support all parameters in your request using the require_parameters field.
With the default routing strategy, providers that don’t support all the LLM parameters specified in your request can still receive the request, but will ignore unknown parameters. When you set require_parameters to true, the request won’t even be routed to that provider.
Example: Excluding providers that don’t support JSON formatting
For example, to only use providers that support JSON formatting:
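A sketch of the request body:

```typescript
// Sketch: only route to providers that support every parameter in the
// request, including response_format.
const body = {
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Reply with a JSON object" }],
  provider: { require_parameters: true },
  response_format: { type: "json_object" },
};
```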
Requiring Providers to Comply with Data Policies
You can restrict requests only to providers that comply with your data policies using the data_collection field.
- allow: (default) allow providers which store user data non-transiently and may train on it
- deny: use only providers which do not collect user data
Some model providers may log prompts, so we display them with a Data Policy tag on model pages. This is not a definitive source of third-party data policies, but represents our best knowledge.
Account-Wide Data Policy Filtering
This is also available as an account-wide setting in your privacy settings. You can disable third party model providers that store inputs for training.
Example: Excluding providers that don’t comply with data policies
To exclude providers that don’t comply with your data policies, set data_collection to deny:
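A sketch of the request body:

```typescript
// Sketch: skip providers that may store prompts and train on user data.
const body = {
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  provider: { data_collection: "deny" },
};
```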
Zero Data Retention Enforcement
You can enforce Zero Data Retention (ZDR) on a per-request basis using the zdr parameter, ensuring your request only routes to endpoints that do not retain prompts.
When zdr is set to true, the request will only be routed to endpoints that have a Zero Data Retention policy. When zdr is false or not provided, it has no effect on routing.
Account-Wide ZDR Setting
This is also available as an account-wide setting in your privacy settings. The per-request zdr parameter operates as an "OR" with your account-wide ZDR setting: if either is enabled, ZDR enforcement will be applied. The request-level parameter can only ensure ZDR is enabled, not override account-wide enforcement.
Example: Enforcing ZDR for a specific request
To ensure a request only uses ZDR endpoints, set zdr to true:
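A sketch of the request body (the zdr flag is assumed to live in the provider object alongside the other routing preferences):

```typescript
// Sketch: only route this request to Zero Data Retention endpoints.
const body = {
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  provider: { zdr: true },
};
```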
This is useful for customers who don’t want to globally enforce ZDR but need to ensure specific requests only route to ZDR endpoints.
Distillable Text Enforcement
You can enforce distillable text filtering on a per-request basis using the enforce_distillable_text parameter, ensuring your request only routes to models where the author has allowed text distillation.
When enforce_distillable_text is set to true, the request will only be routed to models where the author has explicitly enabled text distillation. When enforce_distillable_text is false or not provided, it has no effect on routing.
This parameter is useful for applications that need to ensure their requests only use models that allow text distillation for training purposes, such as when building datasets for model fine-tuning or distillation workflows.
Example: Enforcing distillable text for a specific request
To ensure a request only uses models that allow text distillation, set enforce_distillable_text to true:
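A sketch of the request body (enforce_distillable_text is assumed to live in the provider object alongside the other routing preferences):

```typescript
// Sketch: only route to models whose authors allow text distillation.
const body = {
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  provider: { enforce_distillable_text: true },
};
```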
Disabling Fallbacks
To guarantee that your request is only served by the top (lowest-cost) provider, you can disable fallbacks.
This can be combined with the order field from Ordering Specific Providers to restrict the providers that OpenRouter will use to just your chosen list.
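A sketch of the request body:

```typescript
// Sketch: restrict the request to a single chosen provider and fail
// instead of falling back if that provider is unavailable.
const body = {
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    order: ["openai"],
    allow_fallbacks: false,
  },
};
```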
Allowing Only Specific Providers
You can allow only specific providers for a request by setting the only field in the provider object.
Only allowing some providers may significantly reduce fallback options and limit request recovery.
Account-Wide Allowed Providers
You can allow providers for all account requests in your privacy settings. This configuration applies to all API requests and chatroom messages.
Note that when you allow providers for a specific request, the list of allowed providers is merged with your account-wide allowed providers.
Example: Allowing Azure for a request calling GPT-4 Omni
Here’s an example that will only use Azure for a request calling GPT-4 Omni:
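A sketch of the request body:

```typescript
// Sketch: only allow Azure to serve this GPT-4 Omni request.
const body = {
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  provider: { only: ["azure"] },
};
```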
Ignoring Providers
You can ignore providers for a request by setting the ignore field in the provider object.
Ignoring multiple providers may significantly reduce fallback options and limit request recovery.
Account-Wide Ignored Providers
You can ignore providers for all account requests in your privacy settings. This configuration applies to all API requests and chatroom messages.
Note that when you ignore providers for a specific request, the list of ignored providers is merged with your account-wide ignored providers.
Example: Ignoring DeepInfra for a request calling Llama 3.3 70b
Here’s an example that will ignore DeepInfra for a request calling Llama 3.3 70b:
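A sketch of the request body:

```typescript
// Sketch: never route this Llama 3.3 70B request to DeepInfra.
const body = {
  model: "meta-llama/llama-3.3-70b-instruct",
  messages: [{ role: "user", content: "Hello" }],
  provider: { ignore: ["deepinfra"] },
};
```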
Quantization
Quantization reduces model size and computational requirements while aiming to preserve performance. Most LLMs today use FP16 or BF16 for training and inference, cutting memory requirements in half compared to FP32. Some optimizations use FP8 or lower-precision quantization (e.g., INT8, INT4) to reduce size further.
Quantized models may exhibit degraded performance for certain prompts, depending on the method used.
Providers can support various quantization levels for open-weight models.
Quantization Levels
By default, requests are load-balanced across all available providers, ordered by price. To filter providers by quantization level, specify the quantizations field in the provider parameter with the following values:
- int4: Integer (4-bit)
- int8: Integer (8-bit)
- fp4: Floating point (4-bit)
- fp6: Floating point (6-bit)
- fp8: Floating point (8-bit)
- fp16: Floating point (16-bit)
- bf16: Brain floating point (16-bit)
- fp32: Floating point (32-bit)
- unknown: Unknown
Example: Requesting FP8 Quantization
Here’s an example that will only use providers that support FP8 quantization:
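A sketch of the request body:

```typescript
// Sketch: only use providers serving an FP8 quantization of this model.
const body = {
  model: "meta-llama/llama-3.3-70b-instruct",
  messages: [{ role: "user", content: "Hello" }],
  provider: { quantizations: ["fp8"] },
};
```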
Max Price
To filter providers by price, specify the max_price field in the provider parameter with a JSON object specifying the highest provider pricing you will accept.
For example, the value {"prompt": 1, "completion": 2} will route to any provider charging at most $1/M prompt tokens and at most $2/M completion tokens.
Some providers support per-request pricing, in which case you can use the request attribute of max_price. Lastly, image is also available, which specifies the maximum price per image you will accept.
Practically, this field is often combined with a provider sort to express, for example, “Use the provider with the highest throughput, as long as it doesn’t cost more than $x/m tokens.”
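For example, a sketch that pairs max_price with a throughput sort:

```typescript
// Sketch: use the highest-throughput provider that charges at most
// $1/M prompt tokens and $2/M completion tokens.
const body = {
  model: "meta-llama/llama-3.3-70b-instruct",
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    sort: "throughput",
    max_price: { prompt: 1, completion: 2 }, // USD per million tokens
  },
};
```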
Provider-Specific Headers
Some providers support beta features that can be enabled through special headers. OpenRouter allows you to pass through certain provider-specific beta headers when making requests.
Anthropic Beta Features
When using Anthropic models (Claude), you can request specific beta features by including the x-anthropic-beta header in your request. OpenRouter will pass through supported beta features to Anthropic.
Supported Beta Features
OpenRouter manages other Anthropic beta features (like prompt caching and extended context) automatically based on model capabilities. You only need to specify beta headers for features that require explicit opt-in, such as those shown in the examples below.
Example: Enabling Fine-Grained Tool Streaming
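A sketch of such a request; the beta token string follows Anthropic's published naming and may change, so check Anthropic's documentation for the current value:

```typescript
// Sketch: pass Anthropic's fine-grained tool streaming beta header
// through OpenRouter. The beta token string may change over time.
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
    "x-anthropic-beta": "fine-grained-tool-streaming-2025-05-14",
  },
  body: JSON.stringify({
    model: "anthropic/claude-3.5-sonnet",
    messages: [{ role: "user", content: "Hello" }],
    stream: true,
  }),
});
```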
Example: Enabling Interleaved Thinking
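The same pattern applies; only the header value changes (again, the beta token string follows Anthropic's naming and may change):

```typescript
// Sketch: headers enabling Anthropic's interleaved thinking beta.
const headers = {
  Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
  "Content-Type": "application/json",
  "x-anthropic-beta": "interleaved-thinking-2025-05-14",
};
```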
Combining Multiple Beta Features
You can enable multiple beta features by separating them with commas:
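A sketch of the combined header (token strings illustrative):

```typescript
// Sketch: enable multiple beta features with a comma-separated value.
const headers = {
  Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
  "Content-Type": "application/json",
  "x-anthropic-beta":
    "fine-grained-tool-streaming-2025-05-14,interleaved-thinking-2025-05-14",
};
```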
Beta features are experimental and may change or be deprecated by Anthropic. Check Anthropic’s documentation for the latest information on available beta features.
Terms of Service
You can view the terms of service for each provider on its provider page. You may not violate the terms of service or policies of third-party providers that power the models on OpenRouter.