AWS Bedrock¶
Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.
In this quickstart you will learn how to use AWS Bedrock with all the power of tracking + eval with TruLens.
Note: this example assumes you are logged in with the AWS CLI. Different authentication methods may change the initial client setup, but the rest should remain the same. To retrieve credentials using AWS SSO, install the AWS CLI and run:
aws sso login
aws configure export-credentials
The second command prints the temporary credentials (access key ID, secret access key, and session token) you need.
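If you want to wire those credentials into your environment programmatically, a minimal sketch (assuming the CLI's default JSON output with `AccessKeyId`, `SecretAccessKey`, and `SessionToken` fields; the helper name is hypothetical) might look like:

```python
import json
import os

def export_credentials_to_env(raw_json: str) -> dict:
    """Map the CLI's JSON fields onto the standard AWS env var names
    that boto3 picks up automatically (hypothetical helper)."""
    creds = json.loads(raw_json)
    env = {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }
    os.environ.update(env)
    return env

# Placeholder values, not real credentials:
sample = '{"AccessKeyId": "AKIA...", "SecretAccessKey": "abc...", "SessionToken": "token..."}'
env = export_credentials_to_env(sample)
```

In practice you would pipe the output of `aws configure export-credentials` into this helper instead of the placeholder string.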
Import from TruLens, LangChain, and Boto3¶
In [ ]:
# !pip install trulens trulens-apps-langchain trulens-providers-bedrock langchain==0.0.305 boto3==1.28.59
In [ ]:
import boto3
client = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")
In [ ]:
from langchain import LLMChain
from langchain.llms.bedrock import Bedrock
from langchain.prompts.chat import AIMessagePromptTemplate
from langchain.prompts.chat import ChatPromptTemplate
from langchain.prompts.chat import HumanMessagePromptTemplate
from langchain.prompts.chat import SystemMessagePromptTemplate
Create the Bedrock client and the Bedrock LLM¶
In [ ]:
bedrock_llm = Bedrock(model_id="amazon.titan-tg1-large", client=client)
Set up standard langchain app with Bedrock LLM¶
In [ ]:
template = "You are a helpful assistant."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages(
[system_message_prompt, example_human, example_ai, human_message_prompt]
)
chain = LLMChain(llm=bedrock_llm, prompt=chat_prompt, verbose=True)
print(chain.run("What's the capital of the USA?"))
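The chat prompt above primes the model with a one-shot example (a "Hi" / "Argh me mateys" exchange) before the real user input. A stdlib-only sketch of the message sequence it assembles (the roles and helper name here are illustrative, not LangChain's API):

```python
def build_messages(user_text: str) -> list:
    """Illustrative stand-in for the chat prompt: a system message,
    a one-shot example exchange, then the actual user input."""
    return [
        ("system", "You are a helpful assistant."),
        ("human", "Hi"),            # one-shot example input
        ("ai", "Argh me mateys"),   # one-shot example response
        ("human", user_text),       # the actual user prompt
    ]

messages = build_messages("What's the capital of the USA?")
```

Because the example response is pirate-speak, the model tends to answer the real question in the same voice.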
Initialize Feedback Function(s)¶
In [ ]:
from trulens.core import Feedback
from trulens.core import TruSession
from trulens.apps.langchain import TruChain
from trulens.providers.bedrock import Bedrock
session = TruSession()
session.reset_database()
In [ ]:
# Initialize the Bedrock-based feedback provider:
bedrock = Bedrock(model_id="amazon.titan-tg1-large", region_name="us-east-1")
# Define an answer relevance feedback function using the Bedrock provider.
f_qa_relevance = Feedback(
    bedrock.relevance_with_cot_reasons, name="Answer Relevance"
).on_input_output()
# By default this will evaluate relevance between the main app input and the
# main app output.
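Conceptually, a feedback function is just a callable that scores an (input, output) pair on a 0-1 scale; `relevance_with_cot_reasons` asks the Bedrock LLM to produce that score with chain-of-thought reasoning. A toy stdlib stand-in (keyword overlap instead of an LLM grader, purely for intuition):

```python
def toy_relevance(prompt: str, response: str) -> float:
    """Naive keyword-overlap stand-in for an LLM-graded relevance
    score in [0, 1]. Not the TruLens implementation."""
    p = set(prompt.lower().split())
    r = set(response.lower().split())
    return len(p & r) / max(len(p), 1)

score = toy_relevance(
    "capital of the USA",
    "The capital of the USA is Washington, D.C.",
)
```

The real feedback function replaces this heuristic with a model call, but it plugs into `Feedback(...)` the same way: any callable taking the input and output and returning a score.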
Instrument chain for logging with TruLens¶
In [ ]:
tru_recorder = TruChain(
chain, app_name="Chain1_ChatApplication", feedbacks=[f_qa_relevance]
)
In [ ]:
with tru_recorder as recording:
llm_response = chain.run("What's the capital of the USA?")
display(llm_response)
Explore in a Dashboard¶
In [ ]:
from trulens.dashboard import run_dashboard
run_dashboard(session) # open a local streamlit app to explore
# stop_dashboard(session) # stop if needed
Or view results directly in your notebook¶
In [ ]:
session.get_records_and_feedback()[0]