This tutorial walks you through building a robust multi-tool AI agent with LangGraph and Claude, optimized for a range of tasks including mathematical calculations, web search, weather inquiries, text analysis, and real-time information retrieval. It begins with a simple dependency installation so that even beginners can get set up easily. You will then implement a set of specialized tools: a safe calculator, an efficient web-search utility backed by DuckDuckGo, a simulated weather provider, a detailed text analyzer, and a time-fetching function. The tutorial shows how these tools are wired into an agent architecture built with LangGraph, demonstrates practical use through interactive examples and clear explanations, and suits both beginners and advanced developers who want to deploy a custom multi-function AI agent quickly.
import subprocess
import sys

def install_packages():
    packages = [
        "langgraph",
        "langchain",
        "langchain-anthropic",
        "langchain-community",
        "requests",
        "python-dotenv",
        "duckduckgo-search"
    ]
    for package in packages:
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
            print(f"✓ Installed {package}")
        except subprocess.CalledProcessError:
            print(f"✗ Failed to install {package}")

print("Installing required packages...")
install_packages()
print("Installation complete!\n")
The install_packages function automates installation of the Python packages required to build the LangGraph-based multi-tool agent. It uses subprocess to run pip quietly and reports whether each package, from the LangChain components to the web-search and HTTP tools, installed successfully. This keeps environment setup in one place and makes the notebook portable and beginner-friendly.
import os
import json
import math
import requests
from typing import Dict, List, Any, Annotated, TypedDict
from datetime import datetime
import operator
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from duckduckgo_search import DDGS
These imports pull in everything needed to build the multi-tool agent: standard-library modules such as json, math, operator, and datetime; requests for HTTP calls; and duckduckgo_search for web search. The LangChain and LangGraph ecosystems contribute message types, the tool decorator, state-graph components, and checkpointing utilities, while ChatAnthropic provides the integration with the Claude model that powers the conversation. Together they form the building blocks for defining tools, the agent workflow, and its interactions.
os.environ["ANTHROPIC_API_KEY"] = "Use Your API Key Here"
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
This sets and retrieves the Anthropic API key needed to authenticate against the Claude model. The os.environ line assigns the key (replace the placeholder with a valid key), and os.getenv reads it back for later use during model initialization. This makes the key available throughout the script without hardcoding it in multiple places.
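Since python-dotenv is already in the dependency list, a safer pattern is to keep the key out of the source file entirely. A minimal sketch under that assumption (the helper name get_api_key is illustrative, not part of the tutorial code; it reads a key exported in the shell or loaded earlier from a .env file via python-dotenv's load_dotenv()):

```python
import os

def get_api_key(var: str = "ANTHROPIC_API_KEY") -> str:
    # Prefer a key already present in the environment; return "" if absent
    # so callers can branch cleanly instead of checking a placeholder string.
    key = os.getenv(var, "")
    if not key:
        print(f"Warning: {var} is not set; the agent will fall back to the mock LLM.")
    return key
```

With a helper like this, create_llm() can branch on an empty string rather than on a hardcoded placeholder left in the source.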
from typing import TypedDict

class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]

@tool
def calculator(expression: str) -> str:
    """
    Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more.
    Args:
        expression: Mathematical expression as a string (e.g., "2 + 3 * 4", "sin(3.14159/2)")
    Returns:
        Result of the calculation as a string
    """
    try:
        allowed_names = {
            'abs': abs, 'round': round, 'min': min, 'max': max,
            'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
            'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
            'log': math.log, 'log10': math.log10, 'exp': math.exp,
            'pi': math.pi, 'e': math.e
        }
        expression = expression.replace('^', '**')
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return f"Result: {result}"
    except Exception as e:
        return f"Error in calculation: {str(e)}"
This section defines the agent's internal state and implements a safe calculator. AgentState uses TypedDict to describe the agent's memory, tracking the messages exchanged during a conversation; the operator.add annotation tells LangGraph to append new messages rather than overwrite them. The calculator function, decorated with @tool, registers itself as an available utility. Evaluation is restricted to a whitelist of names drawn from the math module, and the common ^ syntax is rewritten to Python's ** exponent operator, so the tool handles both simple arithmetic and functions such as trigonometry and logarithms while blocking unsafe code execution.
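The whitelist-plus-empty-builtins pattern can be tried in isolation. This standalone sketch (a trimmed version of the calculator above, with only a few names whitelisted) shows both the caret rewrite and the sandboxing:

```python
import math

# Only these names are visible to the evaluated expression.
ALLOWED = {'sqrt': math.sqrt, 'sin': math.sin, 'pi': math.pi, 'abs': abs}

def safe_eval(expression: str) -> float:
    # Rewrite caret exponentiation ("2^3") into Python's ** operator.
    expression = expression.replace('^', '**')
    # An empty __builtins__ dict blocks open(), __import__(), and friends.
    return eval(expression, {"__builtins__": {}}, ALLOWED)

print(safe_eval("2^3 + sqrt(16)"))   # 12.0
print(safe_eval("sin(pi / 2)"))      # 1.0
```

Anything outside the whitelist, such as safe_eval("open('x')"), raises a NameError instead of executing, which is the whole point of passing an empty __builtins__ mapping.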
@tool
def web_search(query: str, num_results: int = 3) -> str:
    """
    Search the web for information using DuckDuckGo.
    Args:
        query: Search query string
        num_results: Number of results to return (default: 3, max: 10)
    Returns:
        Search results as a formatted string
    """
    try:
        num_results = min(max(num_results, 1), 10)
        with DDGS() as ddgs:
            results = list(ddgs.text(query, max_results=num_results))
        if not results:
            return f"No search results found for: {query}"
        formatted_results = f"Search results for '{query}':\n\n"
        for i, result in enumerate(results, 1):
            formatted_results += f"{i}. **{result['title']}**\n"
            formatted_results += f"   {result['body']}\n"
            formatted_results += f"   Source: {result['href']}\n\n"
        return formatted_results
    except Exception as e:
        return f"Error performing web search: {str(e)}"
The web_search tool lets the agent retrieve real-time information from the web through the duckduckgo_search Python package. It accepts a search query and an optional num_results parameter, clamps the result count to between 1 and 10, opens a DDGS session, fetches the results, and formats them neatly for display. If no results are found or an error occurs, the function handles it gracefully by returning a helpful message instead of failing, equipping the agent with live search capability.
@tool
def weather_info(city: str) -> str:
    """
    Get current weather information for a city using OpenWeatherMap API.
    Note: This is a mock implementation for demo purposes.
    Args:
        city: Name of the city
    Returns:
        Weather information as a string
    """
    mock_weather = {
        "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
        "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
        "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
        "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
    }
    city_lower = city.lower()
    if city_lower in mock_weather:
        weather = mock_weather[city_lower]
        return (f"Weather in {city}:\n"
                f"Temperature: {weather['temp']}°C\n"
                f"Condition: {weather['condition']}\n"
                f"Humidity: {weather['humidity']}%")
    else:
        return f"Weather data not available for {city}. (This is a demo with limited cities: New York, London, Tokyo, Paris)"
The weather_info tool simulates retrieval of current weather for a given city. Rather than calling a live weather API, it looks the city up in a predefined dictionary of mock data for New York, London, Tokyo, and Paris. The city name is normalized to lowercase; if it is found, the tool returns temperature, conditions, and humidity in an easy-to-read format, and otherwise it tells the user that weather data is unavailable. The function acts as a placeholder that can later be upgraded to fetch live data from a real weather API.
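When you are ready to replace the mock, the same return format can be kept while fetching real data. A hedged sketch against OpenWeatherMap's current-weather endpoint, using only the standard library (the tutorial's requests import would work equally well; live_weather and format_weather are illustrative names, a free OpenWeatherMap API key is assumed, and the fields read below follow that API's documented JSON shape):

```python
import json
import urllib.parse
import urllib.request

def format_weather(city: str, data: dict) -> str:
    # data follows OpenWeatherMap's /data/2.5/weather JSON payload shape.
    return (f"Weather in {city}:\n"
            f"Temperature: {data['main']['temp']}°C\n"
            f"Condition: {data['weather'][0]['description'].title()}\n"
            f"Humidity: {data['main']['humidity']}%")

def live_weather(city: str, api_key: str) -> str:
    params = urllib.parse.urlencode({"q": city, "appid": api_key, "units": "metric"})
    url = f"https://api.openweathermap.org/data/2.5/weather?{params}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return format_weather(city, json.load(resp))
    except Exception:
        return f"Weather data not available for {city}."
```

Splitting the formatting out of the network call keeps the output identical to the mock tool's and makes the formatter easy to test without hitting the API.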
@tool
def text_analyzer(text: str) -> str:
    """
    Analyze text and provide statistics like word count, character count, etc.
    Args:
        text: Text to analyze
    Returns:
        Text analysis results
    """
    if not text.strip():
        return "Please provide text to analyze."
    import re
    words = text.split()
    sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
    analysis = f"Text Analysis Results:\n"
    analysis += f"• Characters (with spaces): {len(text)}\n"
    analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
    analysis += f"• Words: {len(words)}\n"
    analysis += f"• Sentences: {len(sentences)}\n"
    analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
    analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"
    return analysis
The text_analyzer tool produces detailed statistics for a given piece of text: character counts with and without spaces, word count, sentence count, average words per sentence, and the most frequently occurring word. Empty input is handled gracefully by asking the user to provide valid text. It relies on simple string manipulation plus Python's set and max built-ins to extract meaningful metrics, making it a handy utility for language analysis or content-quality checks in an AI agent toolkit.
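The most-common-word line uses max(set(words), key=words.count), which rescans the word list once per distinct word; collections.Counter does the same job in a single pass. A standalone sketch (most_common_word is an illustrative helper, not part of the tutorial code):

```python
from collections import Counter

def most_common_word(text: str) -> str:
    words = text.lower().split()
    if not words:
        return 'N/A'
    # Counter tallies every word in one pass, then returns the top entry.
    word, count = Counter(words).most_common(1)[0]
    return word

print(most_common_word("the cat sat on the mat"))  # the
```

For short inputs the difference is negligible, but Counter scales linearly if you later point the analyzer at whole documents.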
@tool
def current_time() -> str:
    """
    Get the current date and time.
    Returns:
        Current date and time as a formatted string
    """
    now = datetime.now()
    return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"
The current_time tool provides an easy way to get the current system date and time in a human-readable format. It uses Python's datetime module to capture the current moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for timestamping responses and answering user questions about the current date and time within the agent's interaction flow.
tools = [calculator, web_search, weather_info, text_analyzer, current_time]

def create_llm():
    if ANTHROPIC_API_KEY:
        return ChatAnthropic(
            model="claude-3-haiku-20240307",
            temperature=0.1,
            max_tokens=1024
        )
    else:
        class MockLLM:
            def invoke(self, messages):
                last_message = messages[-1].content if messages else ""
                if any(word in last_message.lower() for word in ['calculate', 'math', '+', '-', '*', '/', 'sqrt', 'sin', 'cos']):
                    import re
                    numbers = re.findall(r'[\d+\-*/.()\s\w]+', last_message)
                    expr = numbers[0] if numbers else "2+2"
                    return AIMessage(content="I'll help you with that calculation.",
                                     tool_calls=[{"name": "calculator", "args": {"expression": expr.strip()}, "id": "calc1"}])
                elif any(word in last_message.lower() for word in ['search', 'find', 'look up', 'information about']):
                    query = last_message.replace('search for', '').replace('find', '').replace('look up', '').strip()
                    if not query or len(query) < 3:
                        query = "python programming"
                    return AIMessage(content="I'll search for that information.",
                                     tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}])
                elif any(word in last_message.lower() for word in ['weather', 'temperature']):
                    city = "New York"
                    words = last_message.lower().split()
                    for i, word in enumerate(words):
                        if word == 'in' and i + 1 < len(words):
                            city = words[i + 1].title()
                            break
                    return AIMessage(content="I'll get the weather information.",
                                     tool_calls=[{"name": "weather_info", "args": {"city": city}, "id": "weather1"}])
                elif any(word in last_message.lower() for word in ['time', 'date']):
                    return AIMessage(content="I'll get the current time.",
                                     tool_calls=[{"name": "current_time", "args": {}, "id": "time1"}])
                elif any(word in last_message.lower() for word in ['analyze', 'analysis']):
                    text = last_message.replace('analyze this text:', '').replace('analyze', '').strip()
                    if not text:
                        text = "Sample text for analysis"
                    return AIMessage(content="I'll analyze that text for you.",
                                     tool_calls=[{"name": "text_analyzer", "args": {"text": text}, "id": "analyze1"}])
                else:
                    return AIMessage(content="Hello! I'm a multi-tool agent powered by Claude. I can help with:\n• Mathematical calculations\n• Web searches\n• Weather information\n• Text analysis\n• Current time/date\n\nWhat would you like me to help you with?")

            def bind_tools(self, tools):
                return self

        print("⚠️ Note: Using mock LLM for demo. Add your ANTHROPIC_API_KEY for full functionality.")
        return MockLLM()

llm = create_llm()
llm_with_tools = llm.bind_tools(tools)
The create_llm function initializes the language model that drives the agent. If a valid Anthropic API key is available, it returns the Claude 3 Haiku model with a low temperature for consistent, high-quality responses. Without a key, it falls back to a MockLLM class that simulates basic tool routing through keyword matching, so the agent still functions offline with limited capability. The bind_tools call attaches the defined tools to the model so it can invoke them when needed.
def agent_node(state: AgentState) -> Dict[str, Any]:
    """Main agent node that processes messages and decides on tool usage."""
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    """Determine whether to continue with tool calls or end."""
    last_message = state["messages"][-1]
    if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
        return "tools"
    return END
These two functions define the agent's core decision logic. agent_node takes the current state, invokes the tool-bound language model on the accumulated messages, and returns the model's response as a state update. should_continue then inspects that response: if it contains tool calls, control is routed to the tool-execution node; otherwise the interaction ends. Together they enable dynamic, conditional transitions within the agent's workflow.
def create_agent_graph():
    tool_node = ToolNode(tools)
    workflow = StateGraph(AgentState)
    workflow.add_node("agent", agent_node)
    workflow.add_node("tools", tool_node)
    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
    workflow.add_edge("tools", "agent")
    memory = MemorySaver()
    app = workflow.compile(checkpointer=memory)
    return app

print("Creating LangGraph Multi-Tool Agent...")
agent = create_agent_graph()
print("✓ Agent created successfully!\n")
The create_agent_graph function builds the LangGraph workflow that defines the agent's operational structure. A ToolNode handles tool execution, and a StateGraph organizes the flow between agent decisions and tool usage: execution starts at the agent, routes conditionally to the tools, and loops back after each tool run. A MemorySaver checkpointer is attached for persistent state tracking across turns, and the graph is compiled into an executable app, yielding a structured, memory-aware multi-tool agent ready for deployment.
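The compiled graph's control flow, agent node, conditional edge, tool node, loop back, can be illustrated without LangGraph installed. A dependency-free sketch of the same loop (run_loop and the toy step functions are illustrative; in the tutorial the equivalent routing is done by add_conditional_edges above):

```python
def run_loop(agent_step, tool_step, should_continue, state):
    # Mirrors the graph: START -> agent, then tools <-> agent until the
    # agent's latest output carries no pending tool calls.
    state = agent_step(state)
    while should_continue(state) == "tools":
        state = tool_step(state)
        state = agent_step(state)
    return state

# Toy steps: the "agent" requests one tool call on its first turn, then stops.
def toy_agent(state):
    state["turns"] += 1
    state["pending_tool"] = state["turns"] < 2
    return state

def toy_tool(state):
    state["tool_runs"] += 1
    return state

final = run_loop(toy_agent, toy_tool,
                 lambda s: "tools" if s["pending_tool"] else "end",
                 {"turns": 0, "tool_runs": 0, "pending_tool": False})
print(final)  # {'turns': 2, 'tool_runs': 1, 'pending_tool': False}
```

The loop terminates exactly when should_continue stops returning "tools", which is the same stopping condition the real graph checks via the tool_calls attribute on the last message.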
def test_agent():
    """Test the agent with various queries."""
    config = {"configurable": {"thread_id": "test-thread"}}
    test_queries = [
        "What's 15 * 7 + 23?",
        "Search for information about Python programming",
        "What's the weather like in Tokyo?",
        "What time is it?",
        "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'"
    ]
    print("🧪 Testing the agent with sample queries...\n")
    for i, query in enumerate(test_queries, 1):
        print(f"Query {i}: {query}")
        print("-" * 50)
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Response: {last_message.content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")
The test_agent function is a validation utility that checks the agent across different use cases. It runs predefined queries covering arithmetic, web search, weather, time, and text analysis, and prints each response. Using a consistent thread_id in the configuration keeps every query in the same conversation thread, letting developers see the tool integration and conversational logic working before moving on to interactive or production use.
def chat_with_agent():
    """Interactive chat function."""
    config = {"configurable": {"thread_id": "interactive-thread"}}
    print("🤖 Multi-Tool Agent Chat")
    print("Available tools: Calculator, Web Search, Weather Info, Text Analyzer, Current Time")
    print("Type 'quit' to exit, 'help' for available commands\n")
    while True:
        try:
            user_input = input("You: ").strip()
            if user_input.lower() in ['quit', 'exit', 'q']:
                print("Goodbye!")
                break
            elif user_input.lower() == 'help':
                print("\nAvailable commands:")
                print("• Calculator: 'Calculate 15 * 7 + 23' or 'What's sin(pi/2)?'")
                print("• Web Search: 'Search for Python tutorials' or 'Find information about AI'")
                print("• Weather: 'Weather in Tokyo' or 'What's the temperature in London?'")
                print("• Text Analysis: 'Analyze this text: [your text]'")
                print("• Current Time: 'What time is it?' or 'Current date'")
                print("• quit: Exit the chat\n")
                continue
            elif not user_input:
                continue
            response = agent.invoke(
                {"messages": [HumanMessage(content=user_input)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Agent: {last_message.content}\n")
        except KeyboardInterrupt:
            print("\nGoodbye!")
            break
        except Exception as e:
            print(f"Error: {str(e)}\n")
The chat_with_agent function provides an interactive command-line interface for real-time conversation with the LangGraph multi-tool agent. It supports natural-language queries and recognizes commands such as 'help' for usage guidance and 'quit' to exit. Each user input is passed to the agent, which dynamically selects and invokes the appropriate tool, giving a conversational feel to everything from arithmetic and web searches to weather lookups, text analysis, and time queries.
if __name__ == "__main__":
    test_agent()
    print("=" * 60)
    print("🎉 LangGraph Multi-Tool Agent is ready!")
    print("=" * 60)
    chat_with_agent()

def quick_demo():
    """Quick demonstration of agent capabilities."""
    config = {"configurable": {"thread_id": "demo"}}
    demos = [
        ("Math", "Calculate the square root of 144 plus 5 times 3"),
        ("Search", "Find recent news about artificial intelligence"),
        ("Time", "What's the current date and time?")
    ]
    print("🚀 Quick Demo of Agent Capabilities\n")
    for category, query in demos:
        print(f"[{category}] Query: {query}")
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            print(f"Response: {response['messages'][-1].content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")

print("\n" + "=" * 60)
print("🔧 Usage Instructions:")
print("1. Add your ANTHROPIC_API_KEY to use the Claude model")
print("   os.environ['ANTHROPIC_API_KEY'] = 'your-anthropic-api-key'")
print("2. Run quick_demo() for a quick demonstration")
print("3. Run chat_with_agent() for interactive chat")
print("4. The agent supports: calculations, web search, weather, text analysis, and time")
print("5. Example: 'Calculate 15*7+23' or 'Search for Python tutorials'")
print("=" * 60)
Finally, this block coordinates execution of the LangGraph multi-tool agent. When the script is run directly, test_agent() validates functionality against the sample queries, and then chat_with_agent() launches interactive mode for real-time conversation. The quick_demo() function additionally offers a brief tour of the agent's math, search, and time capabilities. Clear usage instructions are printed at the end to guide the user through configuring the API key, running the demo, and interacting with the agent, providing a smooth onboarding experience for exploring and extending its functionality.
In conclusion, this step-by-step tutorial offers practical insight into building an effective multi-tool AI agent that leverages LangGraph and Claude. Through brief explanations and working demonstrations, it shows how to integrate a variety of utilities into a cohesive, interactive system, from complex calculations to dynamic information retrieval. The testing and interactive chat features ground the concepts in real-world use and enable immediate application in a range of contexts, and developers can confidently extend and customize their own agents from this foundation.
Check out the GitHub Notebook. All credit for this research goes to the researchers of this project.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of MarkTechPost, an artificial-intelligence media platform distinguished by its in-depth coverage of machine learning and deep learning news that remains accessible to a broad, technically minded audience. The platform draws over 2 million monthly views, reflecting its popularity among readers.


