# Agent User Interaction (AG-UI)

Example of using Pydantic AI agents with the AG-UI Dojo example app.

See the AG-UI docs for more information about the AG-UI integration.

Demonstrates:

- Agentic Chat
- Agentic Generative UI
- Human in the Loop
- Predictive State Updates
- Shared State
- Tool Based Generative UI

## Prerequisites

- The Pydantic AI example dependencies installed
- An OpenAI API key set in your environment
- A clone of the AG-UI repository for the Dojo frontend (see below)

## Running the Example

With dependencies installed and environment variables set, you will need two command line windows.
### Pydantic AI AG-UI backend

Set up your OpenAI API key:

```bash
export OPENAI_API_KEY=<your api key>
```

Start the Pydantic AI AG-UI example backend:

```bash
python -m pydantic_ai_examples.ag_ui
```

or, with uv:

```bash
uv run -m pydantic_ai_examples.ag_ui
```
### AG-UI Dojo example frontend

Next, run the AG-UI Dojo example frontend:

1. Clone the AG-UI repository:

   ```bash
   git clone https://github.com/ag-ui-protocol/ag-ui.git
   ```

2. Change into the `ag-ui/typescript-sdk` directory:

   ```bash
   cd ag-ui/typescript-sdk
   ```

3. Run the Dojo app following the official instructions.
4. Visit http://localhost:3000/pydantic-ai.
5. Select View `Pydantic AI` from the sidebar.
## Feature Examples

### Agentic Chat

This demonstrates a basic agent interaction, including Pydantic AI server-side tools and AG-UI client-side tools.

View the Agentic Chat example.

#### Agent Tools

- `current_time` - Pydantic AI tool to check the current time for a time zone
- `background` - AG-UI tool to set the background color of the client window

#### Agent Prompts

```text
What is the time in New York?
```

```text
Change the background to blue
```
A complex example which mixes both AG-UI and Pydantic AI tools:

```text
Perform the following steps, waiting for the response of each step before continuing:
1. Get the time
2. Set the background to red
3. Get the time
4. Report how long the background set took by diffing the two times
```
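Step 4 of the prompt above amounts to diffing two ISO 8601 timestamps as returned by the time tool; a minimal stdlib sketch of that calculation (illustrative timestamps, not part of the example code):

```python
from datetime import datetime

# Two ISO-format timestamps, in the format the current_time tool returns.
before = '2025-01-01T12:00:00.000000+00:00'
after = '2025-01-01T12:00:02.500000+00:00'

# fromisoformat handles the UTC offset, so the subtraction is timezone-aware.
elapsed = datetime.fromisoformat(after) - datetime.fromisoformat(before)
print(elapsed.total_seconds())  # 2.5
```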
#### Agentic Chat - Code

```python
"""Agentic Chat feature."""

from __future__ import annotations

from datetime import datetime
from zoneinfo import ZoneInfo

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o-mini')
app = agent.to_ag_ui()


@agent.tool_plain
async def current_time(timezone: str = 'UTC') -> str:
    """Get the current time in ISO format.

    Args:
        timezone: The timezone to use.

    Returns:
        The current time in ISO format string.
    """
    tz: ZoneInfo = ZoneInfo(timezone)
    return datetime.now(tz=tz).isoformat()
```
### Agentic Generative UI

Demonstrates a long-running task where the agent sends updates to the frontend to let the user know what's happening.

View the Agentic Generative UI example.

#### Plan Prompts

```text
Create a plan for breakfast and execute it
```
#### Agentic Generative UI - Code

```python
"""Agentic Generative UI feature."""

from __future__ import annotations

from textwrap import dedent
from typing import Any, Literal

from pydantic import BaseModel, Field

from ag_ui.core import EventType, StateDeltaEvent, StateSnapshotEvent
from pydantic_ai import Agent

StepStatus = Literal['pending', 'completed']


class Step(BaseModel):
    """Represents a step in a plan."""

    description: str = Field(description='The description of the step')
    status: StepStatus = Field(
        default='pending',
        description='The status of the step (e.g., pending, completed)',
    )


class Plan(BaseModel):
    """Represents a plan with multiple steps."""

    steps: list[Step] = Field(default_factory=list, description='The steps in the plan')


class JSONPatchOp(BaseModel):
    """A class representing a JSON Patch operation (RFC 6902)."""

    op: Literal['add', 'remove', 'replace', 'move', 'copy', 'test'] = Field(
        description='The operation to perform: add, remove, replace, move, copy, or test',
    )
    path: str = Field(description='JSON Pointer (RFC 6901) to the target location')
    value: Any = Field(
        default=None,
        description='The value to apply (for add, replace operations)',
    )
    from_: str | None = Field(
        default=None,
        alias='from',
        description='Source path (for move, copy operations)',
    )


agent = Agent(
    'openai:gpt-4o-mini',
    instructions=dedent(
        """
        When planning use tools only, without any other messages.
        IMPORTANT:
        - Use the `create_plan` tool to set the initial state of the steps
        - Use the `update_plan_step` tool to update the status of each step
        - Do NOT repeat the plan or summarise it in a message
        - Do NOT confirm the creation or updates in a message
        - Do NOT ask the user for additional information or next steps

        Only one plan can be active at a time, so do not call the `create_plan` tool
        again until all the steps in current plan are completed.
        """
    ),
)


@agent.tool_plain
def create_plan(steps: list[str]) -> StateSnapshotEvent:
    """Create a plan with multiple steps.

    Args:
        steps: List of step descriptions to create the plan.

    Returns:
        StateSnapshotEvent containing the initial state of the steps.
    """
    plan: Plan = Plan(
        steps=[Step(description=step) for step in steps],
    )
    return StateSnapshotEvent(
        type=EventType.STATE_SNAPSHOT,
        snapshot=plan.model_dump(),
    )


@agent.tool_plain
def update_plan_step(
    index: int, description: str | None = None, status: StepStatus | None = None
) -> StateDeltaEvent:
    """Update the plan with new steps or changes.

    Args:
        index: The index of the step to update.
        description: The new description for the step.
        status: The new status for the step.

    Returns:
        StateDeltaEvent containing the changes made to the plan.
    """
    changes: list[JSONPatchOp] = []
    if description is not None:
        changes.append(
            JSONPatchOp(
                op='replace', path=f'/steps/{index}/description', value=description
            )
        )
    if status is not None:
        changes.append(
            JSONPatchOp(op='replace', path=f'/steps/{index}/status', value=status)
        )
    return StateDeltaEvent(
        type=EventType.STATE_DELTA,
        delta=changes,
    )


app = agent.to_ag_ui()
```
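The `StateDeltaEvent` carries standard RFC 6902 operations that the frontend applies to its copy of the state. A minimal stdlib sketch of applying the `replace` operations emitted by `update_plan_step` (illustrative only, not the Dojo frontend's actual implementation):

```python
def apply_replace(state: dict, op: dict) -> None:
    """Apply a single RFC 6902 'replace' operation to `state` in place.

    The path is an RFC 6901 JSON Pointer, e.g. '/steps/1/status';
    list indices appear in the pointer as decimal strings.
    """
    assert op['op'] == 'replace'
    parts = op['path'].lstrip('/').split('/')
    target = state
    for part in parts[:-1]:
        target = target[int(part)] if isinstance(target, list) else target[part]
    last = parts[-1]
    if isinstance(target, list):
        target[int(last)] = op['value']
    else:
        target[last] = op['value']


plan = {'steps': [{'description': 'Boil water', 'status': 'pending'}]}
apply_replace(plan, {'op': 'replace', 'path': '/steps/0/status', 'value': 'completed'})
print(plan['steps'][0]['status'])  # completed
```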
### Human in the Loop

Demonstrates a simple human in the loop workflow where the agent comes up with a plan and the user can approve it using checkboxes.

#### Task Planning Tools

- `generate_task_steps` - AG-UI tool to generate and confirm steps

#### Task Planning Prompt

```text
Generate a list of steps for cleaning a car for me to review
```
Human in the Loop - Code
"""Human in the Loop Feature.
No special handling is required for this feature.
"""
from __future__ import annotations
from textwrap import dedent
from pydantic_ai import Agent
agent = Agent(
'openai:gpt-4o-mini',
instructions=dedent(
"""
When planning tasks use tools only, without any other messages.
IMPORTANT:
- Use the `generate_task_steps` tool to display the suggested steps to the user
- Never repeat the plan, or send a message detailing steps
- If accepted, confirm the creation of the plan and the number of selected (enabled) steps only
- If not accepted, ask the user for more information, DO NOT use the `generate_task_steps` tool again
"""
),
)
app = agent.to_ag_ui()
### Predictive State Updates

Demonstrates how to use the predictive state updates feature to update the state of the UI based on agent responses, including user interaction via user confirmation.

View the Predictive State Updates example.

#### Story Tools

- `write_document` - AG-UI tool to write the document to a window
- `document_predict_state` - Pydantic AI tool that enables document state prediction for the `write_document` tool

This also shows how to use custom instructions based on shared state information.

#### Story Example

Starting document text:

```text
Bruce was a good dog,
```

Agent prompt:

```text
Help me complete my story about Bruce the dog; it should be no longer than a sentence.
```
Predictive State Updates - Code
"""Predictive State feature."""
from __future__ import annotations
from textwrap import dedent
from pydantic import BaseModel
from ag_ui.core import CustomEvent, EventType
from pydantic_ai import Agent, RunContext
from pydantic_ai.ag_ui import StateDeps
class DocumentState(BaseModel):
"""State for the document being written."""
document: str = ''
agent = Agent('openai:gpt-4o-mini', deps_type=StateDeps[DocumentState])
# Tools which return AG-UI events will be sent to the client as part of the
# event stream, single events and iterables of events are supported.
@agent.tool_plain
def document_predict_state() -> list[CustomEvent]:
"""Enable document state prediction.
Returns:
CustomEvent containing the event to enable state prediction.
"""
return [
CustomEvent(
type=EventType.CUSTOM,
name='PredictState',
value=[
{
'state_key': 'document',
'tool': 'write_document',
'tool_argument': 'document',
},
],
),
]
@agent.instructions()
def story_instructions(ctx: RunContext[StateDeps[DocumentState]]) -> str:
"""Provide instructions for writing document if present.
Args:
ctx: The run context containing document state information.
Returns:
Instructions string for the document writing agent.
"""
return dedent(
f"""You are a helpful assistant for writing documents.
Before you start writing, you MUST call the `document_predict_state`
tool to enable state prediction.
To present the document to the user for review, you MUST use the
`write_document` tool.
When you have written the document, DO NOT repeat it as a message.
If accepted briefly summarize the changes you made, 2 sentences
max, otherwise ask the user to clarify what they want to change.
This is the current document:
{ctx.deps.state.document}
"""
)
app = agent.to_ag_ui(deps=StateDeps(DocumentState()))
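The `PredictState` event's value is plain JSON describing which streamed tool argument should be mirrored into which state key; a small sketch of that mapping payload (field names taken from the example above, comments are our reading of the mechanism):

```python
import json

# While the write_document tool call is still streaming, the frontend can
# mirror its 'document' argument into the 'document' state key, so the
# user watches the text appear before the tool call completes.
predict_state_value = [
    {
        'state_key': 'document',      # key in the shared state to update
        'tool': 'write_document',     # tool whose streamed args drive the update
        'tool_argument': 'document',  # which tool argument to mirror
    },
]
print(json.dumps(predict_state_value, indent=2))
```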
### Shared State

Demonstrates how to use shared state between the UI and the agent.

State sent to the agent is validated against a custom Pydantic model, then used by a function-based instruction to build the instructions the agent follows; the result is sent back to the client using an AG-UI tool.

View the Shared State example.

#### Recipe Tools

- `display_recipe` - AG-UI tool to display the recipe in a graphical format

#### Recipe Example

1. Customise the basic settings of your recipe
2. Click `Improve with AI`
Shared State - Code
"""Shared State feature."""
from __future__ import annotations
from enum import StrEnum
from textwrap import dedent
from pydantic import BaseModel, Field
from ag_ui.core import EventType, StateSnapshotEvent
from pydantic_ai import Agent, RunContext
from pydantic_ai.ag_ui import StateDeps
class SkillLevel(StrEnum):
"""The level of skill required for the recipe."""
BEGINNER = 'Beginner'
INTERMEDIATE = 'Intermediate'
ADVANCED = 'Advanced'
class SpecialPreferences(StrEnum):
"""Special preferences for the recipe."""
HIGH_PROTEIN = 'High Protein'
LOW_CARB = 'Low Carb'
SPICY = 'Spicy'
BUDGET_FRIENDLY = 'Budget-Friendly'
ONE_POT_MEAL = 'One-Pot Meal'
VEGETARIAN = 'Vegetarian'
VEGAN = 'Vegan'
class CookingTime(StrEnum):
"""The cooking time of the recipe."""
FIVE_MIN = '5 min'
FIFTEEN_MIN = '15 min'
THIRTY_MIN = '30 min'
FORTY_FIVE_MIN = '45 min'
SIXTY_PLUS_MIN = '60+ min'
class Ingredient(BaseModel):
"""A class representing an ingredient in a recipe."""
icon: str = Field(
default='ingredient',
description="The icon emoji (not emoji code like '\x1f35e', but the actual emoji like 🥕) of the ingredient",
)
name: str
amount: str
class Recipe(BaseModel):
"""A class representing a recipe."""
skill_level: SkillLevel = Field(
default=SkillLevel.BEGINNER,
description='The skill level required for the recipe',
)
special_preferences: list[SpecialPreferences] = Field(
default_factory=list,
description='Any special preferences for the recipe',
)
cooking_time: CookingTime = Field(
default=CookingTime.FIVE_MIN, description='The cooking time of the recipe'
)
ingredients: list[Ingredient] = Field(
default_factory=list,
description='Ingredients for the recipe',
)
instructions: list[str] = Field(
default_factory=list, description='Instructions for the recipe'
)
class RecipeSnapshot(BaseModel):
"""A class representing the state of the recipe."""
recipe: Recipe = Field(
default_factory=Recipe, description='The current state of the recipe'
)
agent = Agent('openai:gpt-4o-mini', deps_type=StateDeps[RecipeSnapshot])
@agent.tool_plain
def display_recipe(recipe: Recipe) -> StateSnapshotEvent:
"""Display the recipe to the user.
Args:
recipe: The recipe to display.
Returns:
StateSnapshotEvent containing the recipe snapshot.
"""
return StateSnapshotEvent(
type=EventType.STATE_SNAPSHOT,
snapshot={'recipe': recipe},
)
@agent.instructions
def recipe_instructions(ctx: RunContext[StateDeps[RecipeSnapshot]]) -> str:
"""Instructions for the recipe generation agent.
Args:
ctx: The run context containing recipe state information.
Returns:
Instructions string for the recipe generation agent.
"""
return dedent(
f"""
You are a helpful assistant for creating recipes.
IMPORTANT:
- Create a complete recipe using the existing ingredients
- Append new ingredients to the existing ones
- Use the `display_recipe` tool to present the recipe to the user
- Do NOT repeat the recipe in the message, use the tool instead
Once you have created the updated recipe and displayed it to the user,
summarise the changes in one sentence, don't describe the recipe in
detail or send it as a message to the user.
The current state of the recipe is:
{ctx.deps.state.recipe.model_dump_json(indent=2)}
""",
)
app = agent.to_ag_ui(deps=StateDeps(RecipeSnapshot()))
### Tool Based Generative UI

Demonstrates customised rendering for tool output with user confirmation.

View the Tool Based Generative UI example.

#### Haiku Tools

- `generate_haiku` - AG-UI tool to display a haiku in English and Japanese

#### Haiku Prompt

```text
Generate a haiku about formula 1
```
Tool Based Generative UI - Code
"""Tool Based Generative UI feature.
No special handling is required for this feature.
"""
from __future__ import annotations
from pydantic_ai import Agent
agent = Agent('openai:gpt-4o-mini')
app = agent.to_ag_ui()