Workshop: Building Durable Deep Research Agents
AI Infrastructure, Agentic AI, Workshop
Using PydanticAI, Temporal, and Pydantic Logfire
Simple LLM interactions work fine, but as you build more complex systems with longer-running workflows, failures become expensive. When your agent has completed several time-consuming steps (searches, data processing, analyses) and then crashes, losing all that progress creates a terrible user experience and wastes significant compute resources. Durable execution solves this by automatically saving workflow state as your agent runs. If interrupted, the system replays completed steps instantly using cached results, then continues from the exact point of failure. No restarting from scratch, no lost compute, no frustrated users.
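To make this concrete, here is a minimal sketch of the pattern using the Temporal Python SDK (temporalio); the activity and workflow names are illustrative, not the workshop's actual code:

```python
from datetime import timedelta

from temporalio import activity, workflow


@activity.defn
async def search_web(query: str) -> str:
    # Side-effecting work (web searches, LLM calls) lives in activities.
    # Each completed activity's result is recorded in Temporal's event history.
    return f"results for {query!r}"


@workflow.defn
class ResearchWorkflow:
    @workflow.run
    async def run(self, topic: str) -> str:
        # If the worker crashes after this activity completes, Temporal replays
        # the workflow and reuses the recorded result instead of re-running it.
        return await workflow.execute_activity(
            search_web,
            topic,
            start_to_close_timeout=timedelta(minutes=2),
        )
```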
In this workshop you will build a production-ready Deep Research Agent that plays 20 Questions, but instead of a human guessing, multiple LLM agents work together to find the answer. You will learn how to use Pydantic AI to build a toy multi-agent system that structures a research plan, runs agents in parallel to gather information, and synthesizes the results into conclusions. You will use Temporal agent wrappers so the system can recover from failures, and add observability with Pydantic Logfire so you can see into every step of execution. You will also learn how to evaluate different models' performance with Pydantic AI Evals and visualize the results in Pydantic Logfire.
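As a taste of the building blocks, here is a rough sketch of a single Pydantic AI agent with structured output (assuming a recent pydantic-ai release; the model string and prompts are placeholders, since the workshop provides inference through the Pydantic AI Gateway):

```python
from pydantic import BaseModel
from pydantic_ai import Agent


class ResearchPlan(BaseModel):
    questions: list[str]      # the next yes/no questions to ask
    current_best_guess: str


# The model string is a placeholder; any supported provider works.
planner = Agent(
    'openai:gpt-4o',
    output_type=ResearchPlan,  # older pydantic-ai releases call this result_type
    system_prompt='You are planning questions for a game of 20 Questions.',
)

result = planner.run_sync('So far we know the answer is an animal that lives in water.')
print(result.output)           # a validated ResearchPlan instance
```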
Workshop Outcomes
By the end, you will:
Understand when and how to implement durable execution for AI agents
Have working code for a resilient multi-agent system you can adapt
Know how to add observability to debug and monitor agent behavior
Be able to handle failures gracefully without expensive re-computations
Learn how to gauge model performance using Pydantic AI Evals
Setup Requirements
Software Installation
Python 3.10 or higher (3.12+ recommended)
Code editor (VS Code, PyCharm, Zed, or similar)
Git (for cloning the workshop repository)
Accounts to Create
Pydantic Logfire - Free tier available, sign up before the workshop
No LLM API keys needed! - We will provide free inference through the Pydantic AI Gateway
What to Review Before the Workshop
Required Knowledge
Python fundamentals - async/await, type hints, decorators, context managers
Pydantic Validation basics - models, validation, field types, serialization
LLM concepts - prompts, function/tool calling, structured outputs
Basic async patterns - task groups, concurrent execution, error handling (a short self-check snippet follows this list)
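If you want a quick self-check on those prerequisites, the snippet below combines type hints, a Pydantic model, and an asyncio task group. It is a standalone illustration, not part of the workshop code, and asyncio.TaskGroup requires Python 3.11+:

```python
import asyncio

from pydantic import BaseModel


class Finding(BaseModel):
    source: str
    summary: str


async def gather_finding(source: str) -> Finding:
    await asyncio.sleep(0.1)  # stand-in for a web search or LLM call
    return Finding(source=source, summary=f'notes from {source}')


async def main() -> None:
    # Run several "research" tasks concurrently; TaskGroup needs Python 3.11+.
    async with asyncio.TaskGroup() as tg:
        tasks = [tg.create_task(gather_finding(s)) for s in ('wiki', 'news')]
    print([task.result() for task in tasks])


asyncio.run(main())
```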
Recommended Reading:
PydanticAI documentation (especially the introduction and agents section)
Temporal core concepts (workflows vs. activities)
Link to GitHub repository

Samuel Colvin is a Python and Rust developer and the founder of Pydantic Inc., backed by Sequoia to build Pydantic Logfire, developer-first observability. The Pydantic library, which he created, is downloaded over 290 million times a month and is a dependency of many GenAI Python libraries, including the OpenAI SDK, the Anthropic SDK, LangChain, and LlamaIndex.