TruGraph Tutorial: Instrumenting LangGraph Applications with OTel¶
This notebook demonstrates how to use TruGraph to instrument LangGraph applications for evaluation and monitoring.
Overview¶
TruGraph provides:
- Automatic detection of LangGraph applications
- Combined instrumentation of both LangChain and LangGraph components
- Multi-agent evaluation capabilities
- Automatic @task instrumentation with intelligent attribute extraction
Installation¶
First, make sure you have the required packages installed:
# Install required packages
#!pip install trulens-apps-langgraph langgraph langchain-core langchain-openai langchain-community
Example 1: Simple Multi-Agent Workflow¶
Let's create a basic multi-agent workflow to generate topics and jokes.
import operator
import os
from langgraph.graph import StateGraph, START, END
from langgraph.types import Send
from typing_extensions import TypedDict, Annotated
from trulens.core.session import TruSession
os.environ["TRULENS_OTEL_TRACING"] = "1"
session = TruSession()
session.reset_database()
# Check if LangGraph is available
try:
    from langgraph.graph import StateGraph, MessagesState, END, START
    from langchain_core.messages import HumanMessage, AIMessage
    from trulens.apps.langgraph import TruGraph

    print("✅ LangGraph and TruGraph are available!")
    LANGGRAPH_AVAILABLE = True
except ImportError as e:
    raise ImportError(f"❌ LangGraph not available: {e}")
class OverallState(TypedDict):
    topic: str
    subjects: list[str]
    jokes: Annotated[list[str], operator.add]
    best_selected_joke: str
def generate_topics(state: OverallState):
    return {"subjects": ["lions", "elephants", "penguins"]}

class JokeState(TypedDict):
    # State for the fanned-out joke nodes; Send passes {"subject": s} below.
    subject: str

def generate_joke(state: JokeState):
    joke_map = {
        "lions": "Why don't lions like fast food? Because they can't catch it!",
        "elephants": "Why don't elephants use computers? They're afraid of the mouse!",
        "penguins": "Why don't penguins like talking to strangers at parties? Because they find it hard to break the ice.",
    }
    return {"jokes": [joke_map[state["subject"]]]}

def continue_to_jokes(state: OverallState):
    return [Send("generate_joke", {"subject": s}) for s in state["subjects"]]

def best_joke(state: OverallState):
    return {"best_selected_joke": "penguins"}
builder = StateGraph(OverallState)
builder.add_node("generate_topics", generate_topics)
builder.add_node("generate_joke", generate_joke)
builder.add_node("best_joke", best_joke)
builder.add_edge(START, "generate_topics")
builder.add_conditional_edges("generate_topics", continue_to_jokes, ["generate_joke"])
builder.add_edge("generate_joke", "best_joke")
builder.add_edge("best_joke", END)
graph = builder.compile()
tru_simple_graph = TruGraph(graph, app_name="tru_simple_graph", app_version="v1.0", run_name="run_1")
with tru_simple_graph as recording:
    graph.invoke({"topic": "animals"})
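Before looking at the traces, it may help to picture what the graph does. Below is a plain-Python sketch (no LangGraph) of the same fan-out/reduce flow — sequential here, whereas LangGraph may run the Send branches in parallel; the stub joke strings are illustrative only:

```python
def run_workflow_sketch(topic: str) -> dict:
    """Sketch of the map-reduce flow: fan out one joke per subject, then reduce."""
    state = {"topic": topic, "jokes": []}
    state["subjects"] = ["lions", "elephants", "penguins"]  # generate_topics
    for subject in state["subjects"]:  # continue_to_jokes fan-out (Send)
        # Concatenating lists plays the role of the operator.add reducer
        # declared on the "jokes" key of OverallState.
        state["jokes"] = state["jokes"] + [f"A joke about {subject}"]
    state["best_selected_joke"] = "penguins"  # best_joke
    return state
```

The `Annotated[list[str], operator.add]` reducer is what lets the parallel `generate_joke` branches each return a one-element `jokes` list that LangGraph merges back into a single accumulated list.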
View the traces in the Streamlit dashboard¶
from trulens.dashboard import run_dashboard
run_dashboard(session)
Example 2: Test Auto-Detection¶
Let's test whether TruGraph can automatically detect and instrument our LangGraph application:
Automatic @task Detection¶
One of the key features of TruGraph is its ability to automatically detect and instrument functions decorated with LangGraph's @task
decorator. This means you can use standard LangGraph patterns without any additional instrumentation code.
How it works:¶
- Automatic Detection: TruGraph automatically scans for functions decorated with @task
- Smart Attribute Extraction: It intelligently extracts information from function arguments:
  - Handles BaseChatModel and BaseModel objects
  - Extracts data from dataclasses and Pydantic models
  - Skips non-serializable objects like LLM pools
  - Captures return values and exceptions
- Seamless Integration: No additional decorators or code changes required
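The attribute-extraction idea can be sketched in plain Python. This is a hypothetical helper illustrating the behavior described above, not TruGraph's actual implementation: dataclass arguments are flattened, and anything that can't be serialized is skipped rather than breaking the trace.

```python
import dataclasses
import json

def extract_attributes(kwargs: dict) -> dict:
    """Best-effort span-attribute extraction from function arguments."""
    attrs = {}
    for name, value in kwargs.items():
        if dataclasses.is_dataclass(value) and not isinstance(value, type):
            value = dataclasses.asdict(value)  # flatten dataclass instances
        try:
            json.dumps(value)  # keep only JSON-serializable values
        except TypeError:
            continue  # skip non-serializable objects (e.g. LLM client pools)
        attrs[name] = value
    return attrs
```

For example, a dataclass config argument would land in the span as a plain dict, while a raw client-pool object would simply be omitted.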
Example Usage:¶
from langgraph.func import task
@task  # This is automatically detected and instrumented by TruGraph
def my_agent_function(state, config):
    # Your agent logic here
    return updated_state
The instrumentation happens automatically when you create a TruGraph instance - no manual setup required!
@task Example¶
Create a real LangGraph application using the @task decorator to see automatic instrumentation in action:
import os
os.environ["TRULENS_OTEL_TRACING"] = "1"
import pandas as pd
from trulens.apps.langgraph import TruGraph
import time
import uuid
from langgraph.func import entrypoint, task
from langgraph.types import interrupt
from langgraph.checkpoint.memory import MemorySaver
from trulens.core.session import TruSession
session = TruSession()
session.reset_database()
@task
def write_essay(topic: str) -> str:
    """Write an essay about the given topic."""
    time.sleep(2)  # This is a placeholder for a long-running task.
    return f"An essay about topic: {topic}"

@entrypoint(checkpointer=MemorySaver())
def workflow(topic: str) -> dict:
    """A simple workflow that writes an essay and asks for a review."""
    essay = write_essay(topic).result()
    is_approved = interrupt({
        # Any JSON-serializable payload passed to interrupt is surfaced on
        # the client side as an Interrupt when streaming data from the workflow.
        "essay": essay,  # The essay we want reviewed.
        # We can add any additional information that we need, for example
        # an "action" key with instructions for the reviewer.
        "action": "Please approve/reject the essay",
    })
    return {
        "essay": essay,  # The essay that was generated
        "is_approved": is_approved,  # Response from HIL
    }
thread_id = str(uuid.uuid4())
config = {
    "configurable": {
        "thread_id": thread_id,
    },
}
class ComplexRAGAgent:
    def __init__(self):
        self.workflow = workflow

    def run(self, topic: str) -> dict:
        return self.workflow.invoke(topic, config)
complex_agent = ComplexRAGAgent()
tru_graph_complex_agent = TruGraph(complex_agent, app_name="essay_writer", app_version="v1.0", run_name="run_1")
with tru_graph_complex_agent as app:
    complex_agent.run("cat")

session.force_flush()
from trulens.dashboard import run_dashboard
run_dashboard(session)
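The interrupt/resume pattern above can be pictured with a plain Python generator. This is a hypothetical sketch of the control flow only — LangGraph actually persists the paused state through the checkpointer (keyed by thread_id) rather than holding a live generator — but the shape is the same: the workflow pauses at the review point, the payload reaches the human, and their answer is sent back in to resume execution.

```python
def workflow_sketch(topic: str):
    """Generator-based sketch of pause-for-review, then resume with the answer."""
    essay = f"An essay about topic: {topic}"
    # Pausing at `yield` plays the role of interrupt(): the payload goes out
    # to the reviewer, and the value sent back in becomes is_approved.
    is_approved = yield {"essay": essay, "action": "Please approve/reject the essay"}
    return {"essay": essay, "is_approved": is_approved}
```

Driving it: `next(gen)` runs up to the pause and yields the review payload; `gen.send(True)` resumes with the human's decision, and the final result arrives as `StopIteration.value`.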
🎯 Key Benefits of Custom Class Support¶
1. Automatic Detection:
- TruGraph automatically finds LangGraph components within your custom classes
- No need to manually specify what to instrument
2. Flexible Method Selection:
- Auto-detects common methods like run(), invoke(), execute(), call(), __call__()
- Or explicitly specify: TruGraph(app, main_method=app.custom_method)
3. Comprehensive Tracing:
- Instruments both your custom orchestration logic AND internal LangGraph workflows
- Captures the full execution flow across multiple LangGraph invocations
4. Multi-Workflow Support:
- Perfect for complex agents with planning → retrieval → synthesis patterns
- Handles parallel workflow execution
- Maintains trace relationships across workflow boundaries
5. Business Logic Integration:
- Your custom preprocessing/postprocessing steps are included in traces
- Evaluate end-to-end performance, not just individual LangGraph components
- Better insights into real-world application behavior
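The method auto-detection in point 2 can be sketched as a simple lookup over candidate names. This is a hypothetical illustration of the idea, not TruGraph's actual implementation:

```python
CANDIDATE_METHODS = ("run", "invoke", "execute", "call", "__call__")

def find_main_method(app):
    """Return the first callable candidate method found on the app."""
    for name in CANDIDATE_METHODS:
        method = getattr(app, name, None)
        if callable(method):
            return method
    raise ValueError("No common main method found; pass main_method explicitly.")
```

If your agent class uses a non-standard entry point (say, `process()`), this lookup would fail — which is exactly when you would pass `main_method` explicitly, as shown in the usage patterns below.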
Usage Patterns:
# Simple case - auto-detect everything
tru_app = TruGraph(my_custom_agent)
# Explicit main method
tru_app = TruGraph(my_custom_agent, main_method=my_custom_agent.process)
# Multiple methods instrumented separately
tru_fast = TruGraph(agent, main_method=agent.quick_mode, app_version="fast")
tru_full = TruGraph(agent, main_method=agent.full_mode, app_version="comprehensive")