# LangChain Integration
TruLens provides TruChain, a deep integration with LangChain to allow you to inspect and evaluate the internals of your application built using LangChain.
TruChain captures all of the metrics and metadata listed in the instrumentation overview. In addition, TruChain instruments the following LangChain classes:
## Instrumented Classes
- `langchain.chains.base.Chain`
- `langchain.vectorstores.base.BaseRetriever`
- `langchain.schema.BaseRetriever`
- `langchain.llms.base.BaseLLM`
- `langchain.prompts.base.BasePromptTemplate`
- `langchain.schema.BaseMemory`
- `langchain.schema.BaseChatMessageHistory`
## Example Usage
Below is a quick example of usage. First, we'll create a standard LLMChain.
```python
# required imports
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    PromptTemplate,
)

from trulens_eval import TruChain

# typical LangChain setup
full_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template=(
            "Provide a helpful response with relevant background information "
            "for the following: {prompt}"
        ),
        input_variables=["prompt"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([full_prompt])

llm = OpenAI(temperature=0.9, max_tokens=128)

chain = LLMChain(llm=llm, prompt=chat_prompt_template, verbose=True)
```
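The `{prompt}` placeholder in the template above is filled in at call time. Plain Python string formatting illustrates the substitution (an illustrative sketch, not LangChain's internals):

```python
# The template string used above, with its {prompt} placeholder.
template = (
    "Provide a helpful response with relevant background information "
    "for the following: {prompt}"
)

# At call time, the chain substitutes the input variable into the template.
filled = template.format(prompt="What is 1+2?")
print(filled)
```

The `input_variables=["prompt"]` argument simply declares which placeholder names the template expects.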
To instrument an LLM chain, all that's required is to wrap it using TruChain.
```python
# instrument with TruChain
tru_recorder = TruChain(chain)
```
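The wrapped chain is then used as a context manager: calls made inside the `with` block are captured as records. The pattern can be sketched in plain Python (a hypothetical stand-in for illustration only; `SketchRecorder` and `Recording` are invented names, not TruChain's implementation):

```python
class Recording:
    """Collects records captured while the recorder context is active."""
    def __init__(self):
        self.records = []

    def get(self):
        # Return the most recent record.
        return self.records[-1]

class SketchRecorder:
    """Hypothetical stand-in for TruChain: wraps an app callable and
    logs each call made inside a `with` block."""
    def __init__(self, app):
        self.app = app
        self.recording = None

    def __enter__(self):
        self.recording = Recording()
        return self.recording

    def __exit__(self, *exc):
        return False  # do not suppress exceptions

    def __call__(self, inputs):
        output = self.app(inputs)
        if self.recording is not None:
            self.recording.records.append({"input": inputs, "output": output})
        return output

# Toy "chain" that uppercases its prompt, standing in for a real LLMChain.
recorder = SketchRecorder(lambda s: s.upper())
with recorder as recording:
    result = recorder("what is 1+2?")

print(result)
print(recording.get()["input"])
```

The real TruChain records far more than inputs and outputs (spans, costs, and intermediate calls of the instrumented classes listed above), but the context-manager shape of the API is the same.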
You can find the full quickstart here: LangChain Quickstart
## Async Support
TruChain also provides async support for LangChain through the `acall` method. This allows you to track and evaluate async and streaming LangChain applications.
As an example, below is an LLM chain set up with an async callback.
```python
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chains import LLMChain
from langchain.chat_models.openai import ChatOpenAI
from langchain.prompts import PromptTemplate

from trulens_eval import TruChain

# Set up an async callback.
callback = AsyncIteratorCallbackHandler()

# Set up a simple question/answer chain with streaming ChatOpenAI.
prompt = PromptTemplate.from_template("Honestly answer this question: {question}.")
llm = ChatOpenAI(
    temperature=0.0,
    streaming=True,  # important
    callbacks=[callback],  # the callback can be passed here or to acall below
)
async_chain = LLMChain(llm=llm, prompt=prompt)
```
Once you have created the async LLM chain you can instrument it just as before.
```python
async_tc_recorder = TruChain(async_chain)

with async_tc_recorder as recording:
    await async_chain.acall(
        inputs=dict(question="What is 1+2? Explain your answer.")
    )
```
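While the chain streams, the async callback exposes tokens through a queue-backed async iterator as they arrive. The general pattern can be sketched with plain `asyncio` (illustrative only; `TokenStream` is an invented name, not the LangChain handler):

```python
import asyncio

class TokenStream:
    """Minimal sketch of the queue-based pattern behind an async iterator
    callback: a producer pushes tokens, a consumer iterates over them."""
    def __init__(self):
        self.queue: asyncio.Queue = asyncio.Queue()
        self._done = object()  # sentinel marking end of stream

    def on_new_token(self, token: str) -> None:
        # Called by the producer (the streaming LLM) for each token.
        self.queue.put_nowait(token)

    def on_end(self) -> None:
        # Called once the producer finishes streaming.
        self.queue.put_nowait(self._done)

    async def aiter(self):
        # Yield tokens until the end-of-stream sentinel is seen.
        while True:
            item = await self.queue.get()
            if item is self._done:
                break
            yield item

async def main():
    stream = TokenStream()
    for tok in ["1", "+", "2", "=", "3"]:  # stand-in for streamed LLM tokens
        stream.on_new_token(tok)
    stream.on_end()
    return [tok async for tok in stream.aiter()]

tokens = asyncio.run(main())
print(tokens)
```

In a real application the producer and consumer run concurrently, so tokens can be processed (e.g., displayed to the user) before the full response is complete, while TruChain records the call alongside.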
For more usage examples, check out the LangChain examples directory.