pydantic_ai.models.openrouter
Setup
For details on how to set up authentication with this model, see model configuration for OpenRouter.
KnownOpenRouterProviders
module-attribute
KnownOpenRouterProviders = Literal[
"z-ai",
"cerebras",
"venice",
"moonshotai",
"morph",
"stealth",
"wandb",
"klusterai",
"openai",
"sambanova",
"amazon-bedrock",
"mistral",
"nextbit",
"atoma",
"ai21",
"minimax",
"baseten",
"anthropic",
"featherless",
"groq",
"lambda",
"azure",
"ncompass",
"deepseek",
"hyperbolic",
"crusoe",
"cohere",
"mancer",
"avian",
"perplexity",
"novita",
"siliconflow",
"switchpoint",
"xai",
"inflection",
"fireworks",
"deepinfra",
"inference-net",
"inception",
"atlas-cloud",
"nvidia",
"alibaba",
"friendli",
"infermatic",
"targon",
"ubicloud",
"aion-labs",
"liquid",
"nineteen",
"cloudflare",
"nebius",
"chutes",
"enfer",
"crofai",
"open-inference",
"phala",
"gmicloud",
"meta",
"relace",
"parasail",
"together",
"google-ai-studio",
"google-vertex",
]
Known providers in the OpenRouter marketplace.
OpenRouterProviderName
module-attribute
OpenRouterProviderName = str | KnownOpenRouterProviders
Possible OpenRouter provider names.
Since OpenRouter is constantly updating its list of providers, we explicitly list some known providers but allow any name in the type hints. See the OpenRouter API for a full list.
OpenRouterTransforms
module-attribute
OpenRouterTransforms = Literal['middle-out']
Available messages transforms for OpenRouter models with limited token windows.
Currently only 'middle-out' is supported, but the list is expected to grow in the future.
OpenRouterProviderConfig
Bases: TypedDict
Represents the 'Provider' object from the OpenRouter API.
Source code in pydantic_ai_slim/pydantic_ai/models/openrouter.py
order
instance-attribute
order: list[OpenRouterProviderName]
List of provider slugs to try in order (e.g. ["anthropic", "openai"]). See details
allow_fallbacks
instance-attribute
allow_fallbacks: bool
Whether to allow backup providers when the primary is unavailable. See details
require_parameters
instance-attribute
require_parameters: bool
Only use providers that support all parameters in your request.
data_collection
instance-attribute
data_collection: Literal['allow', 'deny']
Control whether to use providers that may store data. See details
zdr
instance-attribute
zdr: bool
Restrict routing to only ZDR (Zero Data Retention) endpoints. See details
only
instance-attribute
only: list[OpenRouterProviderName]
List of provider slugs to allow for this request. See details
ignore
instance-attribute
ignore: list[OpenRouterProviderName]
List of provider slugs to skip for this request. See details
quantizations
instance-attribute
quantizations: list[
Literal[
"int4",
"int8",
"fp4",
"fp6",
"fp8",
"fp16",
"bf16",
"fp32",
"unknown",
]
]
List of quantization levels to filter by (e.g. ["int4", "int8"]). See details
sort
instance-attribute
sort: Literal['price', 'throughput', 'latency']
Sort providers by price, throughput, or latency (e.g. "price" or "throughput"). See details
max_price
instance-attribute
max_price: _OpenRouterMaxPrice
The maximum pricing you want to pay for this request. See details
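As a sketch of how these fields fit together, the following builds a provider routing object like the one documented above, using a plain dict for illustration (with Pydantic AI you would pass this via the `openrouter_provider` model setting; the field values here are illustrative, not recommendations):

```python
# Sketch of an OpenRouter 'provider' routing object, using only the fields
# documented above. Values are examples, not recommendations.
provider_config = {
    # Try Anthropic first, then OpenAI, and allow other providers as backups.
    'order': ['anthropic', 'openai'],
    'allow_fallbacks': True,
    # Only use providers that support every parameter in the request.
    'require_parameters': True,
    # Never route to providers that may store prompt data.
    'data_collection': 'deny',
    # Only accept half- or full-precision endpoints.
    'quantizations': ['bf16', 'fp16', 'fp32'],
    # Prefer the cheapest eligible provider.
    'sort': 'price',
}
```

Because `OpenRouterProviderConfig` is a `TypedDict`, every key is optional at runtime; you only include the routing constraints you care about.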
OpenRouterReasoning
Bases: TypedDict
Configuration for reasoning tokens in OpenRouter requests.
Reasoning tokens allow models to show their step-by-step thinking process. You can configure this using either OpenAI-style effort levels or Anthropic-style token limits, but not both simultaneously.
Source code in pydantic_ai_slim/pydantic_ai/models/openrouter.py
effort
instance-attribute
effort: Literal['high', 'medium', 'low']
OpenAI-style reasoning effort level. Cannot be used with max_tokens.
max_tokens
instance-attribute
max_tokens: int
Anthropic-style specific token limit for reasoning. Cannot be used with effort.
exclude
instance-attribute
exclude: bool
Whether to exclude reasoning tokens from the response. Default is False. All models support this.
enabled
instance-attribute
enabled: bool
Whether to enable reasoning with default parameters. Default is inferred from effort or max_tokens.
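Since `effort` and `max_tokens` are mutually exclusive, a small validation helper can catch misconfigured reasoning objects before a request is sent. This is an illustrative sketch, not part of the library:

```python
def check_reasoning_config(reasoning: dict) -> dict:
    """Reject OpenRouter reasoning configs that set both styles at once.

    Illustrative helper (not part of pydantic_ai): it mirrors the documented
    rule that OpenAI-style 'effort' and Anthropic-style 'max_tokens'
    cannot be combined in one request.
    """
    if 'effort' in reasoning and 'max_tokens' in reasoning:
        raise ValueError("Use either 'effort' or 'max_tokens', not both")
    if 'effort' in reasoning and reasoning['effort'] not in ('high', 'medium', 'low'):
        raise ValueError(f"Unknown effort level: {reasoning['effort']!r}")
    return reasoning

check_reasoning_config({'effort': 'low', 'exclude': True})     # OK: OpenAI style
check_reasoning_config({'max_tokens': 2000, 'enabled': True})  # OK: Anthropic style
```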
OpenRouterUsageConfig
Bases: TypedDict
Configuration for OpenRouter usage.
Source code in pydantic_ai_slim/pydantic_ai/models/openrouter.py
OpenRouterModelSettings
Bases: ModelSettings
Settings used for an OpenRouter model request.
Source code in pydantic_ai_slim/pydantic_ai/models/openrouter.py
openrouter_models
instance-attribute
openrouter_models: list[str]
A list of fallback models.
These models will be tried, in order, if the main model returns an error. See details
openrouter_provider
instance-attribute
openrouter_provider: OpenRouterProviderConfig
OpenRouter routes requests to the best available providers for your model. By default, requests are load balanced across the top providers to maximize uptime.
You can customize how your requests are routed using the provider object. See more
openrouter_preset
instance-attribute
openrouter_preset: str
Presets allow you to separate your LLM configuration from your code.
Create and manage presets through the OpenRouter web application to control provider routing, model selection, system prompts, and other parameters, then reference them in OpenRouter API requests. See more
openrouter_transforms
instance-attribute
openrouter_transforms: list[OpenRouterTransforms]
To help with prompts that exceed the maximum context size of a model.
Transforms work by removing or truncating messages from the middle of the prompt, until the prompt fits within the model's context window. See more
openrouter_reasoning
instance-attribute
openrouter_reasoning: OpenRouterReasoning
To control the reasoning tokens in the request.
The reasoning config object consolidates settings for controlling reasoning strength across different models. See more
openrouter_usage
instance-attribute
openrouter_usage: OpenRouterUsageConfig
To control usage accounting for the request.
The usage config object consolidates settings for enabling detailed usage information. See more
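Putting these settings together, a request might combine fallback models, provider routing, prompt transforms, and reasoning controls. The snippet below builds such a settings mapping as a plain dict (a sketch only; with pydantic_ai installed you would pass these keys via `OpenRouterModelSettings` to the model or agent, and the model names are illustrative examples):

```python
# Sketch of an OpenRouter settings mapping using the keys documented above.
# Model and provider names are illustrative examples.
settings = {
    # Fallback models tried in order if the primary model returns an error.
    'openrouter_models': ['openai/gpt-4o-mini', 'mistralai/mistral-small'],
    # Restrict routing to two providers and require Zero Data Retention.
    'openrouter_provider': {'only': ['openai', 'mistral'], 'zdr': True},
    # Compress over-long prompts from the middle outward.
    'openrouter_transforms': ['middle-out'],
    # Ask for low reasoning effort but keep reasoning tokens in the response.
    'openrouter_reasoning': {'effort': 'low', 'exclude': False},
}
```

Because `OpenRouterModelSettings` extends `ModelSettings`, the usual keys such as `temperature` or `max_tokens` can be mixed into the same mapping alongside the `openrouter_`-prefixed ones.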
OpenRouterModel
Bases: OpenAIChatModel
Extends OpenAIChatModel to capture extra metadata for OpenRouter.
Source code in pydantic_ai_slim/pydantic_ai/models/openrouter.py
__init__
__init__(
model_name: str,
*,
provider: (
Literal["openrouter"] | Provider[AsyncOpenAI]
) = "openrouter",
profile: ModelProfileSpec | None = None,
settings: ModelSettings | None = None
)
Initialize an OpenRouter model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model_name` | `str` | The name of the model to use. | *required* |
| `provider` | `Literal['openrouter'] \| Provider[AsyncOpenAI]` | The provider to use for authentication and API access. If not provided, a new provider will be created with the default settings. | `'openrouter'` |
| `profile` | `ModelProfileSpec \| None` | The model profile to use. Defaults to a profile picked by the provider based on the model name. | `None` |
| `settings` | `ModelSettings \| None` | Model-specific settings that will be used as defaults for this model. | `None` |
Source code in pydantic_ai_slim/pydantic_ai/models/openrouter.py
OpenRouterStreamedResponse
dataclass
Bases: OpenAIStreamedResponse
Implementation of StreamedResponse for OpenRouter models.
Source code in pydantic_ai_slim/pydantic_ai/models/openrouter.py