In this tutorial, we show how to design a contract-first agent decision system using PydanticAI, treating structured schemas as non-negotiable governance contracts rather than optional output formats. We show how to define rigorous decision models that encode policy compliance, risk assessment, confidence calibration, and executable next steps directly into the agent's output schema. Pydantic validators, combined with PydanticAI's retry and self-correction mechanisms, ensure that agents cannot make logically inconsistent or non-compliant decisions. Throughout the workflow, we focus on building enterprise-grade decision-making agents that reason under constraints, making them suitable for real-world risk, compliance, and governance scenarios rather than toy prompt-based demos. Check out the full code here.
!pip -q install -U pydantic-ai pydantic openai nest_asyncio

import os
import time
import asyncio
import getpass
from dataclasses import dataclass
from typing import List, Literal

import nest_asyncio

nest_asyncio.apply()

from pydantic import BaseModel, Field, field_validator
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    try:
        from google.colab import userdata
        OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    except Exception:
        OPENAI_API_KEY = None
if not OPENAI_API_KEY:
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()
We set up the execution environment by installing the required libraries and configuring Google Colab for asynchronous execution. We securely load the OpenAI API key and ensure that the runtime can handle asynchronous agent calls. This establishes a stable foundation for running contract-first agents without environment-related issues. Check out the full code here.
class RiskItem(BaseModel):
    risk: str = Field(..., min_length=8)
    severity: Literal["low", "medium", "high"]
    mitigation: str = Field(..., min_length=12)


class DecisionOutput(BaseModel):
    # identified_risks and compliance_passed are declared first so the
    # cross-field validators below can see them via info.data
    # (Pydantic validates fields in declaration order).
    identified_risks: List[RiskItem] = Field(..., min_length=2)
    compliance_passed: bool
    decision: Literal["approve", "approve_with_conditions", "reject"]
    confidence: float = Field(..., ge=0.0, le=1.0)
    rationale: str = Field(..., min_length=80)
    conditions: List[str] = Field(default_factory=list)
    next_steps: List[str] = Field(..., min_length=3)
    timestamp_unix: int = Field(default_factory=lambda: int(time.time()))

    @field_validator("confidence")
    @classmethod
    def confidence_vs_risk(cls, v, info):
        risks = info.data.get("identified_risks") or []
        if any(r.severity == "high" for r in risks) and v > 0.70:
            raise ValueError("confidence too high given high-severity risks")
        return v

    @field_validator("decision")
    @classmethod
    def reject_if_non_compliant(cls, v, info):
        if info.data.get("compliance_passed") is False and v != "reject":
            raise ValueError("non-compliant decisions must be reject")
        return v

    @field_validator("conditions")
    @classmethod
    def conditions_required_for_conditional_approval(cls, v, info):
        d = info.data.get("decision")
        if d == "approve_with_conditions" and (not v or len(v) < 2):
            raise ValueError("approve_with_conditions requires at least 2 conditions")
        if d == "approve" and v:
            raise ValueError("approve must not include conditions")
        return v
We define our core decision contract as a rigorous Pydantic model that precisely describes what a valid decision looks like. We encode logical constraints, such as confidence-versus-risk calibration, compliance-driven rejections, and conditional approvals, directly into the schema. This forces the agent's output to satisfy business logic as well as syntactic structure. Check out the full code here.
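To see the contract reject an inconsistent decision without calling any model, you can validate a payload directly with plain Pydantic. Below is a minimal sketch using a trimmed-down version of the schema; `MiniDecision` is illustrative and not part of the tutorial code:

```python
from typing import Literal

from pydantic import BaseModel, ValidationError, field_validator


class MiniDecision(BaseModel):
    # compliance_passed is declared before decision so the validator
    # can see it in info.data (fields validate in declaration order).
    compliance_passed: bool
    decision: Literal["approve", "approve_with_conditions", "reject"]

    @field_validator("decision")
    @classmethod
    def reject_if_non_compliant(cls, v, info):
        if info.data.get("compliance_passed") is False and v != "reject":
            raise ValueError("non-compliant decisions must be reject")
        return v


# An approval that contradicts a failed compliance check never survives
# validation, so downstream systems cannot observe it.
try:
    MiniDecision(compliance_passed=False, decision="approve")
except ValidationError as err:
    print("contract violation:", err.errors()[0]["msg"])
```

The same mechanism is what gives PydanticAI its leverage: a structurally invalid decision is an exception, not a string to be parsed.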
@dataclass
class DecisionContext:
    company_policy: str
    risk_threshold: float = 0.6


model = OpenAIChatModel(
    "gpt-5",
    provider=OpenAIProvider(api_key=OPENAI_API_KEY),
)

agent = Agent(
    model=model,
    deps_type=DecisionContext,
    output_type=DecisionOutput,
    system_prompt="""
You are a corporate decision review agent.
You must evaluate risk, compliance, and uncertainty.
All outputs must strictly satisfy the DecisionOutput schema.
""",
)
We inject the business context through a typed dependency object and initialize the PydanticAI agent backed by OpenAI. We configure the agent to produce only structured decision output that adheres to the predefined contract. This step formalizes the separation between business context and model inference. Check out the full code here.
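Because the dependency object is plainly typed, the policy text can be rendered into the prompt deterministically instead of being hard-coded. Here is a minimal sketch of a hypothetical helper; `render_policy_prompt` is our own name, and in PydanticAI this kind of rendering is typically wired up as a `@agent.system_prompt` function receiving a `RunContext[DecisionContext]`:

```python
from dataclasses import dataclass


@dataclass
class DecisionContext:
    company_policy: str
    risk_threshold: float = 0.6


def render_policy_prompt(deps: DecisionContext) -> str:
    # Hypothetical helper: serialize the typed dependency into prompt text,
    # so the policy travels with each run instead of living in a global.
    return (
        "Company policy under evaluation:\n"
        f"{deps.company_policy}\n"
        f"Flag any decision whose confidence falls below {deps.risk_threshold:.2f}."
    )


ctx = DecisionContext(company_policy="All PII must be encrypted at rest.")
print(render_policy_prompt(ctx))
```

Keeping the policy in the dependency object means the same agent can evaluate different policies per run without edits to the system prompt.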
# ModelRetry tells PydanticAI to send the failure message back to the
# model and retry, rather than raising to the caller.
from pydantic_ai import ModelRetry


@agent.output_validator
def ensure_risk_quality(result: DecisionOutput) -> DecisionOutput:
    if len(result.identified_risks) < 2:
        raise ModelRetry("minimum two risks required")
    if not any(r.severity in ("medium", "high") for r in result.identified_risks):
        raise ModelRetry("at least one medium or high risk required")
    return result


@agent.output_validator
def enforce_policy_controls(result: DecisionOutput) -> DecisionOutput:
    text = " ".join(
        [result.rationale, *result.next_steps, *result.conditions]
    ).lower()
    if result.compliance_passed:
        if not any(k in text for k in ["encryption", "audit", "logging", "access control", "key management"]):
            raise ModelRetry("missing concrete security controls")
    return result
We add output validators that act as a governance checkpoint after the model produces a response. They force the agent to identify meaningful risks and to explicitly reference specific security controls whenever it makes a compliance claim. Violating these constraints triggers automatic retries and forces self-correction. Check out the full code here.
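The keyword scan inside the compliance validator can be factored out and unit-tested in isolation, with no model call at all. A minimal sketch, where `mentions_concrete_controls` is a hypothetical helper mirroring the validator's check:

```python
from typing import List

# The same control keywords the output validator looks for.
REQUIRED_CONTROLS = ["encryption", "audit", "logging", "access control", "key management"]


def mentions_concrete_controls(rationale: str, next_steps: List[str], conditions: List[str]) -> bool:
    # Mirror the validator: a compliance claim must cite at least one control.
    text = " ".join([rationale, *next_steps, *conditions]).lower()
    return any(control in text for control in REQUIRED_CONTROLS)


print(mentions_concrete_controls("Enable audit logging first.", [], []))  # True
print(mentions_concrete_controls("Looks fine to me.", [], []))            # False
```

Keeping governance rules as plain functions like this makes them testable in CI, independent of any LLM.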
async def run_decision():
    global CURRENT_DEPS
    CURRENT_DEPS = DecisionContext(
        company_policy=(
            "No deployment of systems handling personal data or transaction metadata "
            "without encryption, audit logging, and least-privilege access control."
        )
    )
    prompt = """
Decision request:
Deploy an AI-powered customer analytics dashboard using a third-party cloud vendor.
The system processes user behavior and transaction metadata.
Audit logging is not implemented and customer-managed keys are uncertain.
"""
    result = await agent.run(prompt, deps=CURRENT_DEPS)
    return result.output


decision = asyncio.run(run_decision())

from pprint import pprint

pprint(decision.model_dump())
We run the agent against a realistic decision request and capture validated, structured output. This shows how the agent weighs risk, policy compliance, and confidence before committing to a final decision, completing the end-to-end contract-first decision-making workflow.
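Because the result is a validated Pydantic model, downstream systems can act on it mechanically rather than parsing prose. A minimal sketch of a hypothetical deployment gate consuming the serialized decision; `should_deploy` and the example payload are illustrative, not part of the tutorial code:

```python
import json


def should_deploy(payload: str, risk_threshold: float = 0.6) -> bool:
    # Hypothetical downstream gate: only a compliant, confident approval passes.
    d = json.loads(payload)
    if d["decision"] != "approve" or not d["compliance_passed"]:
        return False
    return d["confidence"] >= risk_threshold


payload = json.dumps(
    {"decision": "approve", "compliance_passed": True, "confidence": 0.72}
)
print(should_deploy(payload))  # True
```

In practice this payload would come from `decision.model_dump_json()`, so the gate only ever sees output that already passed every schema and governance check.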
In conclusion, we show how PydanticAI can be used to move from free-form LLM output to a controlled and reliable decision-making system. By enforcing hard contracts at the schema level, decisions are automatically aligned with policy requirements, risk severity, and confidence calibration, without manual prompt tuning. This approach lets us build agents that fail safely, self-correct when constraints are violated, and produce structured, auditable output that downstream systems can trust. Ultimately, we demonstrate that contract-first agent design allows agentic AI to be deployed as a trusted decision-making layer within production and enterprise environments.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of Marktechpost, an artificial intelligence media platform that stands out for its in-depth coverage of machine learning and deep learning news that is technically sound and easily understandable to a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.

