TruGraph Tutorial: Instrumenting LangGraph Applications with OTel¶
This notebook demonstrates how to use TruGraph to instrument LangGraph applications for evaluation and monitoring.
Overview¶
TruGraph provides:
- Automatic detection of LangGraph applications
- Combined instrumentation of both LangChain and LangGraph components
- Multi-agent evaluation capabilities
- Automatic @task instrumentation with intelligent attribute extraction
Installation¶
First, make sure you have the required packages installed:
# Install required packages
#!pip install trulens-apps-langgraph langgraph langchain-core langchain-openai langchain-community
import os
from trulens.core.session import TruSession
os.environ["TRULENS_OTEL_TRACING"] = "1"
session = TruSession()
session.reset_database()
# Check if LangGraph is available
try:
    from langgraph.graph import StateGraph, MessagesState, END
    from langchain_core.messages import HumanMessage, AIMessage
    from trulens.apps.langgraph import TruGraph

    print("✅ LangGraph and TruGraph are available!")
    LANGGRAPH_AVAILABLE = True
except ImportError as e:
    raise ImportError(f"❌ LangGraph not available: {e}")
Example 1: Simple Multi-Agent Workflow¶
Let's create a basic multi-agent workflow with a researcher and writer:
def research_agent(state):
    """Agent that performs research on a topic."""
    messages = state.get("messages", [])
    if messages:
        last_message = messages[-1]
        if hasattr(last_message, "content"):
            query = last_message.content
        else:
            query = str(last_message)
        # Simulate research (in a real app, this would call external APIs)
        research_results = f"Research findings for '{query}': This is a comprehensive analysis of the topic."
        return {"messages": [AIMessage(content=research_results)]}
    return {"messages": [AIMessage(content="No research query provided")]}
def writer_agent(state):
    """Agent that writes articles based on research."""
    messages = state.get("messages", [])
    if messages:
        last_message = messages[-1]
        if hasattr(last_message, "content"):
            research_content = last_message.content
        else:
            research_content = str(last_message)
        # Simulate article writing
        article = f"Article: Based on the research - {research_content[:100]}..."
        return {"messages": [AIMessage(content=article)]}
    return {"messages": [AIMessage(content="No research content provided")]}
# Create the workflow
workflow = StateGraph(MessagesState)
workflow.add_node("researcher", research_agent)
workflow.add_node("writer", writer_agent)
workflow.add_edge("researcher", "writer")
workflow.add_edge("writer", END)
workflow.set_entry_point("researcher")
# Compile the graph
graph = workflow.compile()
print("✅ Multi-agent workflow created successfully!")
print(f"Graph type: {type(graph)}")
print(f"Graph module: {graph.__module__}")
config = {
    "configurable": {
        "thread_id": "1"
    }
}
tru_simple_graph = TruGraph(graph, app_name="tru_simple_graph", app_version="v1.0", run_name="run_1")
with tru_simple_graph as recording:
    graph.invoke({"messages": [HumanMessage(content="cats")]}, config)
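To see what was captured, you can pull the record from the recorder context and check the session leaderboard. A minimal sketch, assuming the standard TruLens record helpers (recording.get() and session.get_leaderboard()):
# Sketch: inspect what was recorded by the invoke above.
record = recording.get()  # the Record produced inside the context manager
print(record.app_id)

# Aggregate view of every app recorded in this session.
session.get_leaderboard()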
Example 2: Test Auto-Detection¶
Let's check that TruGraph automatically detects and instruments our LangGraph application:
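Before diving into the @task details below, here is a minimal sanity-check sketch: wrap the compiled graph with no extra configuration and inspect what was wrapped. The app attribute is an assumption about the recorder's public surface, so treat this as illustrative.
# Sketch: wrap the compiled graph and confirm it was recognized as a LangGraph app.
tru_detected = TruGraph(graph, app_name="auto_detect_check", app_version="v1.0")
print(type(tru_detected.app))  # expected: the compiled LangGraph graph type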
Automatic @task Detection¶
One of the key features of TruGraph is its ability to automatically detect and instrument functions decorated with LangGraph's @task decorator. This means you can use standard LangGraph patterns without any additional instrumentation code.
How it works:¶
- Automatic Detection: TruGraph automatically scans for functions decorated with @task
- Smart Attribute Extraction: It intelligently extracts information from function arguments:
  - Handles BaseChatModel and BaseModel objects
  - Extracts data from dataclasses and Pydantic models
  - Skips non-serializable objects like LLM pools
  - Captures return values and exceptions
- Seamless Integration: No additional decorators or code changes required
Example Usage:¶
from langgraph.func import task
@task # This is automatically detected and instrumented by TruGraph
def my_agent_function(state, config):
    # Your agent logic here
    return updated_state
The instrumentation happens automatically when you create a TruGraph instance - no manual setup required!
@task Example¶
Create a real LangGraph application using the @task decorator to see automatic instrumentation in action:
import os
os.environ["TRULENS_OTEL_TRACING"] = "1"
import pandas as pd
from trulens.apps.langgraph import TruGraph
import time
import uuid
from langgraph.func import entrypoint, task
from langgraph.types import interrupt
from langgraph.checkpoint.memory import MemorySaver
from trulens.core.session import TruSession
session = TruSession()
session.reset_database()
@task
def write_essay(topic: str) -> str:
"""Write an essay about the given topic."""
time.sleep(2) # This is a placeholder for a long-running task.
return f"An essay about topic: {topic}"
@entrypoint(checkpointer=MemorySaver())
def workflow(topic: str) -> dict:
"""A simple workflow that writes an essay and asks for a review."""
essay = write_essay("cat").result()
is_approved = interrupt({
# Any json-serializable payload provided to interrupt as argument.
# It will be surfaced on the client side as an Interrupt when streaming data
# from the workflow.
"essay": essay, # The essay we want reviewed.
# We can add any additional information that we need.
# For example, introduce a key called "action" with some instructions.
"action": "Please approve/reject the essay",
})
return {
"essay": essay, # The essay that was generated
"is_approved": is_approved, # Response from HIL
}
thread_id = str(uuid.uuid4())
config = {
    "configurable": {
        "thread_id": thread_id
    }
}
class ComplexRAGAgent:
    def __init__(self):
        self.workflow = workflow

    def run(self, topic: str) -> dict:
        return self.workflow.invoke(topic, config)
complex_agent = ComplexRAGAgent()
tru_graph_complex_agent = TruGraph(complex_agent, app_name="essay_writer", app_version="v1.0", run_name="run_1")
with tru_graph_complex_agent as app:
    complex_agent.run("cat")
session.force_flush()
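Because the workflow calls interrupt, the run above pauses at the review step. The following sketch resumes it with LangGraph's Command, reusing the same thread_id; the resume value True is just a simulated reviewer decision and is not required for the tracing itself.
from langgraph.types import Command

# Resume the interrupted workflow with the (simulated) reviewer's decision.
resumed = complex_agent.workflow.invoke(Command(resume=True), config)
print(resumed)  # expected: {"essay": ..., "is_approved": True}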
from trulens.dashboard import run_dashboard
run_dashboard(session)
🎯 Key Benefits of Custom Class Support¶
1. Automatic Detection:
- TruGraph automatically finds LangGraph components within your custom classes
- No need to manually specify what to instrument
2. Flexible Method Selection:
- Auto-detects common methods like run(), invoke(), execute(), call(), __call__()
- Or explicitly specify: TruGraph(app, main_method=app.custom_method)
3. Comprehensive Tracing:
- Instruments both your custom orchestration logic AND internal LangGraph workflows
- Captures the full execution flow across multiple LangGraph invocations
4. Multi-Workflow Support:
- Perfect for complex agents with planning → retrieval → synthesis patterns
- Handles parallel workflow execution
- Maintains trace relationships across workflow boundaries
5. Business Logic Integration:
- Your custom preprocessing/postprocessing steps are included in traces
- Evaluate end-to-end performance, not just individual LangGraph components
- Better insights into real-world application behavior
Usage Patterns:
# Simple case - auto-detect everything
tru_app = TruGraph(my_custom_agent)
# Explicit main method
tru_app = TruGraph(my_custom_agent, main_method=my_custom_agent.process)
# Multiple methods instrumented separately
tru_fast = TruGraph(agent, main_method=agent.quick_mode, app_version="fast")
tru_full = TruGraph(agent, main_method=agent.full_mode, app_version="comprehensive")
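After recording a run with each version, the session leaderboard lets you compare them side by side. A brief sketch continuing the placeholder names above (agent, quick_mode, and full_mode are illustrative):
# Record one run per version, then compare the two app versions.
with tru_fast as recording:
    agent.quick_mode("example input")
with tru_full as recording:
    agent.full_mode("example input")

session.get_leaderboard()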