# Built-in Tools
Built-in tools are native tools provided by LLM providers that can be used to enhance your agent's capabilities. Unlike common tools, which are custom implementations that Pydantic AI executes, built-in tools are executed directly by the model provider.
## Overview

Pydantic AI supports the following built-in tools:

- `WebSearchTool`: Allows agents to search the web
- `CodeExecutionTool`: Enables agents to execute code in a secure environment
- `ImageGenerationTool`: Enables agents to generate images
- `UrlContextTool`: Enables agents to pull URL contents into their context
- `MemoryTool`: Enables agents to use memory

These tools are passed to the agent via the `builtin_tools` parameter and are executed by the model provider's infrastructure.
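Since `builtin_tools` takes a list, multiple built-in tools can be passed together. A minimal sketch, assuming the chosen provider supports both tools (see the support tables below):

```python
from pydantic_ai import Agent, CodeExecutionTool, WebSearchTool

# Multiple built-in tools in one list; actual availability
# depends on the provider, per the tables below.
agent = Agent(
    'anthropic:claude-sonnet-4-0',
    builtin_tools=[WebSearchTool(), CodeExecutionTool()],
)
```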
## Provider Support

Not all model providers support built-in tools. If you use a built-in tool with an unsupported provider, Pydantic AI will raise a `UserError` when you try to run the agent.

If a provider supports a built-in tool that is not currently supported by Pydantic AI, please file an issue.
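For example, here is a minimal sketch of that failure mode (the model name is illustrative; per the table below, Mistral does not support web search):

```python
from pydantic_ai import Agent, WebSearchTool
from pydantic_ai.exceptions import UserError

# Mistral doesn't support the web search built-in tool,
# so the error surfaces when the agent runs.
agent = Agent('mistral:mistral-large-latest', builtin_tools=[WebSearchTool()])

try:
    agent.run_sync('What is the biggest news in AI this week?')
except UserError as exc:
    print(exc)
```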
## Web Search Tool

The `WebSearchTool` allows your agent to search the web, making it ideal for queries that require up-to-date data.
### Provider Support

| Provider | Supported | Notes |
|---|---|---|
| OpenAI Responses | ✅ | Full feature support. To include search results on the `BuiltinToolReturnPart` that's available via `ModelResponse.builtin_tool_calls`, enable the `OpenAIResponsesModelSettings.openai_include_web_search_sources` model setting (see the example at the end of the Usage section below). |
| Anthropic | ✅ | Full feature support |
| Google | ✅ | No parameter support. No `BuiltinToolCallPart` or `BuiltinToolReturnPart` is generated when streaming. Using built-in tools and user tools (including output tools) at the same time is not supported; to use structured output, use `PromptedOutput` instead. |
| Groq | ✅ | Limited parameter support. To use web search capabilities with Groq, you need to use the compound models. |
| OpenAI Chat Completions | ❌ | Not supported |
| Bedrock | ❌ | Not supported |
| Mistral | ❌ | Not supported |
| Cohere | ❌ | Not supported |
| HuggingFace | ❌ | Not supported |
### Usage

```python
from pydantic_ai import Agent, WebSearchTool

agent = Agent('anthropic:claude-sonnet-4-0', builtin_tools=[WebSearchTool()])
result = agent.run_sync('Give me a sentence with the biggest news in AI this week.')
print(result.output)
#> Scientists have developed a universal AI detector that can identify deepfake videos.
```

*(This example is complete, it can be run "as is")*

With OpenAI, you must use their Responses API to access the web search tool.

```python
from pydantic_ai import Agent, WebSearchTool

agent = Agent('openai-responses:gpt-4.1', builtin_tools=[WebSearchTool()])
result = agent.run_sync('Give me a sentence with the biggest news in AI this week.')
print(result.output)
#> Scientists have developed a universal AI detector that can identify deepfake videos.
```

*(This example is complete, it can be run "as is")*
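To capture the search results themselves, a minimal sketch enabling the `openai_include_web_search_sources` setting mentioned in the provider table above:

```python
from pydantic_ai import Agent, WebSearchTool
from pydantic_ai.models.openai import OpenAIResponsesModelSettings

agent = Agent(
    'openai-responses:gpt-4.1',
    builtin_tools=[WebSearchTool()],
    model_settings=OpenAIResponsesModelSettings(
        # Includes the search results on the BuiltinToolReturnPart
        openai_include_web_search_sources=True,
    ),
)
result = agent.run_sync('Give me a sentence with the biggest news in AI this week.')
print(result.response.builtin_tool_calls)
```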
### Configuration Options

The `WebSearchTool` supports several configuration parameters:

```python
from pydantic_ai import Agent, WebSearchTool, WebSearchUserLocation

agent = Agent(
    'anthropic:claude-sonnet-4-0',
    builtin_tools=[
        WebSearchTool(
            search_context_size='high',
            user_location=WebSearchUserLocation(
                city='San Francisco',
                country='US',
                region='CA',
                timezone='America/Los_Angeles',
            ),
            blocked_domains=['example.com', 'spam-site.net'],
            allowed_domains=None,  # Cannot use both blocked_domains and allowed_domains with Anthropic
            max_uses=5,  # Anthropic only: limit tool usage
        )
    ],
)
result = agent.run_sync('Use the web to get the current time.')
print(result.output)
#> In San Francisco, it's 8:21:41 pm PDT on Wednesday, August 6, 2025.
```

*(This example is complete, it can be run "as is")*
### Provider Support

| Parameter | OpenAI | Anthropic | Groq |
|---|---|---|---|
| `search_context_size` | ✅ | ❌ | ❌ |
| `user_location` | ✅ | ✅ | ❌ |
| `blocked_domains` | ❌ | ✅ | ✅ |
| `allowed_domains` | ❌ | ✅ | ✅ |
| `max_uses` | ❌ | ✅ | ❌ |
### Anthropic Domain Filtering

With Anthropic, you can only use either `blocked_domains` or `allowed_domains`, not both.
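For instance, a minimal sketch of the allow-list variant (the domain values are illustrative):

```python
from pydantic_ai import Agent, WebSearchTool

# Allow-list variant: restrict search to the given domains and
# leave blocked_domains unset, since Anthropic accepts only one of the two.
agent = Agent(
    'anthropic:claude-sonnet-4-0',
    builtin_tools=[WebSearchTool(allowed_domains=['pydantic.dev', 'python.org'])],
)
```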
## Code Execution Tool

The `CodeExecutionTool` enables your agent to execute code in a secure environment, making it perfect for computational tasks, data analysis, and mathematical operations.
### Provider Support

| Provider | Supported | Notes |
|---|---|---|
| OpenAI | ✅ | To include code execution output on the `BuiltinToolReturnPart` that's available via `ModelResponse.builtin_tool_calls`, enable the `OpenAIResponsesModelSettings.openai_include_code_execution_outputs` model setting. If the code execution generated images, like charts, they will be available on `ModelResponse.images` as `BinaryImage` objects. The generated image can also be used as image output for the agent run. |
| Google | ✅ | Using built-in tools and user tools (including output tools) at the same time is not supported; to use structured output, use `PromptedOutput` instead (see the sketch below this table). |
| Anthropic | ✅ | |
| Groq | ❌ | |
| Bedrock | ❌ | |
| Mistral | ❌ | |
| Cohere | ❌ | |
| HuggingFace | ❌ | |
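For the Google limitation noted above, here is a minimal sketch of combining `CodeExecutionTool` with `PromptedOutput` for structured output (the output model is illustrative):

```python
from pydantic import BaseModel

from pydantic_ai import Agent, CodeExecutionTool, PromptedOutput


class Answer(BaseModel):
    value: int


# Google doesn't support output tools alongside built-in tools,
# so structured output is requested via the prompt instead.
agent = Agent(
    'google-gla:gemini-2.5-flash',
    builtin_tools=[CodeExecutionTool()],
    output_type=PromptedOutput(Answer),
)
result = agent.run_sync('Calculate the factorial of 15.')
print(result.output.value)
#> 1307674368000
```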
### Usage

```python
from pydantic_ai import Agent, CodeExecutionTool

agent = Agent('anthropic:claude-sonnet-4-0', builtin_tools=[CodeExecutionTool()])
result = agent.run_sync('Calculate the factorial of 15.')
print(result.output)
#> The factorial of 15 is **1,307,674,368,000**.

print(result.response.builtin_tool_calls)
"""
[
    (
        BuiltinToolCallPart(
            tool_name='code_execution',
            args={
                'code': 'import math\n\n# Calculate factorial of 15\nresult = math.factorial(15)\nprint(f"15! = {result}")\n\n# Let\'s also show it in a more readable format with commas\nprint(f"15! = {result:,}")'
            },
            tool_call_id='srvtoolu_017qRH1J3XrhnpjP2XtzPCmJ',
            provider_name='anthropic',
        ),
        BuiltinToolReturnPart(
            tool_name='code_execution',
            content={
                'content': [],
                'return_code': 0,
                'stderr': '',
                'stdout': '15! = 1307674368000\n15! = 1,307,674,368,000',
                'type': 'code_execution_result',
            },
            tool_call_id='srvtoolu_017qRH1J3XrhnpjP2XtzPCmJ',
            timestamp=datetime.datetime(...),
            provider_name='anthropic',
        ),
    )
]
"""
```

*(This example is complete, it can be run "as is")*
In addition to text output, code execution with OpenAI can generate images as part of the response. Accessing such an image via `ModelResponse.images` or image output requires the `OpenAIResponsesModelSettings.openai_include_code_execution_outputs` model setting to be enabled.

```python
from pydantic_ai import Agent, BinaryImage, CodeExecutionTool
from pydantic_ai.models.openai import OpenAIResponsesModelSettings

agent = Agent(
    'openai-responses:gpt-5',
    builtin_tools=[CodeExecutionTool()],
    output_type=BinaryImage,
    model_settings=OpenAIResponsesModelSettings(openai_include_code_execution_outputs=True),
)
result = agent.run_sync('Generate a chart of y=x^2 for x=-5 to 5.')
assert isinstance(result.output, BinaryImage)
```

*(This example is complete, it can be run "as is")*
## Image Generation Tool

The `ImageGenerationTool` enables your agent to generate images.
### Provider Support

| Provider | Supported | Notes |
|---|---|---|
| OpenAI Responses | ✅ | Full feature support. Only supported by models newer than `gpt-4o`. Metadata about the generated image, like the `revised_prompt` sent to the underlying image model, is available on the `BuiltinToolReturnPart` that's available via `ModelResponse.builtin_tool_calls`. |
| Google | ✅ | No parameter support. Only supported by image generation models like `gemini-2.5-flash-image`. These models do not support structured output or function tools. These models will always generate images, even if this built-in tool is not explicitly specified. |
| Anthropic | ❌ | |
| Groq | ❌ | |
| Bedrock | ❌ | |
| Mistral | ❌ | |
| Cohere | ❌ | |
| HuggingFace | ❌ | |
### Usage

Generated images are available on `ModelResponse.images` as `BinaryImage` objects:

```python
from pydantic_ai import Agent, BinaryImage, ImageGenerationTool

agent = Agent('openai-responses:gpt-5', builtin_tools=[ImageGenerationTool()])
result = agent.run_sync('Tell me a two-sentence story about an axolotl with an illustration.')
print(result.output)
"""
Once upon a time, in a hidden underwater cave, lived a curious axolotl named Pip who loved to explore. One day, while venturing further than usual, Pip discovered a shimmering, ancient coin that granted wishes!
"""
assert isinstance(result.response.images[0], BinaryImage)
```

*(This example is complete, it can be run "as is")*
Image generation with Google image generation models does not require the `ImageGenerationTool` built-in tool to be explicitly specified:

```python
from pydantic_ai import Agent, BinaryImage

agent = Agent('google-gla:gemini-2.5-flash-image')
result = agent.run_sync('Tell me a two-sentence story about an axolotl with an illustration.')
print(result.output)
"""
Once upon a time, in a hidden underwater cave, lived a curious axolotl named Pip who loved to explore. One day, while venturing further than usual, Pip discovered a shimmering, ancient coin that granted wishes!
"""
assert isinstance(result.response.images[0], BinaryImage)
```

*(This example is complete, it can be run "as is")*
The `ImageGenerationTool` can be used together with `output_type=BinaryImage` to get image output. If the `ImageGenerationTool` built-in tool is not explicitly specified, it will be enabled automatically:

```python
from pydantic_ai import Agent, BinaryImage

agent = Agent('openai-responses:gpt-5', output_type=BinaryImage)
result = agent.run_sync('Generate an image of an axolotl.')
assert isinstance(result.output, BinaryImage)
```

*(This example is complete, it can be run "as is")*
### Configuration Options

The `ImageGenerationTool` supports several configuration parameters:

```python
from pydantic_ai import Agent, BinaryImage, ImageGenerationTool

agent = Agent(
    'openai-responses:gpt-5',
    builtin_tools=[
        ImageGenerationTool(
            background='transparent',
            input_fidelity='high',
            moderation='low',
            output_compression=100,
            output_format='png',
            partial_images=3,
            quality='high',
            size='1024x1024',
        )
    ],
    output_type=BinaryImage,
)
result = agent.run_sync('Generate an image of an axolotl.')
assert isinstance(result.output, BinaryImage)
```

*(This example is complete, it can be run "as is")*

For more details, check the API documentation.
### Provider Support

| Parameter | OpenAI | Google |
|---|---|---|
| `background` | ✅ | ❌ |
| `input_fidelity` | ✅ | ❌ |
| `moderation` | ✅ | ❌ |
| `output_compression` | ✅ | ❌ |
| `output_format` | ✅ | ❌ |
| `partial_images` | ✅ | ❌ |
| `quality` | ✅ | ❌ |
| `size` | ✅ | ❌ |
## URL Context Tool

The `UrlContextTool` enables your agent to pull URL contents into its context, giving it access to up-to-date information from the web.
### Provider Support

| Provider | Supported | Notes |
|---|---|---|
| Google | ✅ | No `BuiltinToolCallPart` or `BuiltinToolReturnPart` is currently generated; please submit an issue if you need this. Using built-in tools and user tools (including output tools) at the same time is not supported; to use structured output, use `PromptedOutput` instead. |
| OpenAI | ❌ | |
| Anthropic | ❌ | |
| Groq | ❌ | |
| Bedrock | ❌ | |
| Mistral | ❌ | |
| Cohere | ❌ | |
| HuggingFace | ❌ | |
### Usage

```python
from pydantic_ai import Agent, UrlContextTool

agent = Agent('google-gla:gemini-2.5-flash', builtin_tools=[UrlContextTool()])
result = agent.run_sync('What is this? https://ai.pydantic.dev')
print(result.output)
#> A Python agent framework for building Generative AI applications.
```

*(This example is complete, it can be run "as is")*
## Memory Tool

The `MemoryTool` enables your agent to use memory.
### Provider Support

| Provider | Supported | Notes |
|---|---|---|
| Anthropic | ✅ | Requires a tool named `memory` to be defined that implements specific sub-commands. You can use a subclass of `anthropic.lib.tools.BetaAbstractMemoryTool` as documented below. |
| Google | ❌ | |
| OpenAI | ❌ | |
| Groq | ❌ | |
| Bedrock | ❌ | |
| Mistral | ❌ | |
| Cohere | ❌ | |
| HuggingFace | ❌ | |
### Usage

The Anthropic SDK provides an abstract `BetaAbstractMemoryTool` class that you can subclass to create your own memory storage solution (e.g., database, cloud storage, or encrypted files). Their `LocalFilesystemMemoryTool` example can serve as a starting point.

The following example uses a subclass that hard-codes a specific memory. The bits specific to Pydantic AI are the `MemoryTool` built-in tool and the `memory` tool definition that forwards commands to the `call` method of the `BetaAbstractMemoryTool` subclass.
```python
from typing import Any

from anthropic.lib.tools import BetaAbstractMemoryTool
from anthropic.types.beta import (
    BetaMemoryTool20250818CreateCommand,
    BetaMemoryTool20250818DeleteCommand,
    BetaMemoryTool20250818InsertCommand,
    BetaMemoryTool20250818RenameCommand,
    BetaMemoryTool20250818StrReplaceCommand,
    BetaMemoryTool20250818ViewCommand,
)

from pydantic_ai import Agent, MemoryTool


class FakeMemoryTool(BetaAbstractMemoryTool):
    def view(self, command: BetaMemoryTool20250818ViewCommand) -> str:
        return 'The user lives in Mexico City.'

    def create(self, command: BetaMemoryTool20250818CreateCommand) -> str:
        return f'File created successfully at {command.path}'

    def str_replace(self, command: BetaMemoryTool20250818StrReplaceCommand) -> str:
        return f'File {command.path} has been edited'

    def insert(self, command: BetaMemoryTool20250818InsertCommand) -> str:
        return f'Text inserted at line {command.insert_line} in {command.path}'

    def delete(self, command: BetaMemoryTool20250818DeleteCommand) -> str:
        return f'File deleted: {command.path}'

    def rename(self, command: BetaMemoryTool20250818RenameCommand) -> str:
        return f'Renamed {command.old_path} to {command.new_path}'

    def clear_all_memory(self) -> str:
        return 'All memory cleared'


fake_memory = FakeMemoryTool()

agent = Agent('anthropic:claude-sonnet-4-5', builtin_tools=[MemoryTool()])


@agent.tool_plain
def memory(**command: Any) -> Any:
    return fake_memory.call(command)


result = agent.run_sync('Remember that I live in Mexico City')
print(result.output)
"""
Got it! I've recorded that you live in Mexico City. I'll remember this for future reference.
"""

result = agent.run_sync('Where do I live?')
print(result.output)
#> You live in Mexico City.
```

*(This example is complete, it can be run "as is")*
## API Reference

For complete API documentation, see the API Reference.