
AWS Bedrock Provider

Below is how you can instantiate AWS Bedrock as a provider. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through a single API, so you can choose from a wide range of FMs to find the model best suited for your use case.
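For example, a minimal instantiation might look like the following sketch. It assumes AWS credentials are already configured through the standard boto3 mechanisms (environment variables, `~/.aws/credentials`, or an IAM role); `region_name` is an illustrative keyword argument that is passed through `BedrockEndpoint` to the boto3 client constructor, so it is not runnable without an AWS account.

```python
# Sketch of instantiating the Bedrock provider. Assumes AWS credentials
# are already configured via the usual boto3 mechanisms (environment
# variables, ~/.aws/credentials, or an IAM role).
from trulens_eval.feedback.provider.bedrock import Bedrock

bedrock = Bedrock(
    model_id="amazon.titan-text-express-v1",  # the default model id
    region_name="us-east-1",                  # example region; forwarded to boto3
)
```

The provider instance can then be passed to any feedback function defined on the base LLMProvider class.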

All feedback functions listed in the base LLMProvider class can be run with AWS Bedrock.

trulens_eval.feedback.provider.bedrock.Bedrock

Bases: LLMProvider

A set of AWS Feedback Functions.

Parameters:

  • model_id (str, optional): The specific model id. Defaults to "amazon.titan-text-express-v1".

  • All other args/kwargs passed to BedrockEndpoint and subsequently to boto3 client constructor.

Functions

generate_score

generate_score(system_prompt: str, user_prompt: Optional[str] = None, normalize: float = 10.0, temperature: float = 0.0) -> float

Base method to generate a score only, used for evaluation.

PARAMETER DESCRIPTION
system_prompt

A pre-formatted system prompt.

TYPE: str

user_prompt

An optional user prompt. Defaults to None.

TYPE: Optional[str] DEFAULT: None

normalize

The normalization factor for the score. Defaults to 10.0.

TYPE: float DEFAULT: 10.0

temperature

The temperature for the LLM response. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

RETURNS DESCRIPTION
float

The score on a 0-1 scale.

TYPE: float
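The normalize factor maps the model's raw integer score onto the 0-1 scale. A rough sketch of that normalization step (an illustration of the documented behavior, not the actual trulens implementation):

```python
def normalize_llm_score(raw_completion: str, normalize: float = 10.0) -> float:
    """Sketch of mapping a raw LLM score onto a 0-1 scale.

    Assumes the completion is a bare integer such as "8"; the real
    provider also handles surrounding text before parsing.
    """
    score = float(raw_completion.strip())
    # Clamp to [0, 1] so an out-of-range completion cannot escape the scale.
    return min(1.0, max(0.0, score / normalize))
```

For instance, a completion of "8" with the default normalize of 10.0 yields a score of 0.8.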

generate_score_and_reasons

generate_score_and_reasons(system_prompt: str, user_prompt: Optional[str] = None, normalize: float = 10.0, temperature: float = 0.0) -> Tuple[float, Dict]

Base method to generate a score and reason, used for evaluation.

PARAMETER DESCRIPTION
system_prompt

A pre-formatted system prompt.

TYPE: str

user_prompt

An optional user prompt. Defaults to None.

TYPE: Optional[str] DEFAULT: None

normalize

The normalization factor for the score. Defaults to 10.0.

TYPE: float DEFAULT: 10.0

temperature

The temperature for the LLM response. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

RETURNS DESCRIPTION
Tuple[float, Dict]

The score on a 0-1 scale, and a dict of reason metadata if the LLM returned reasons.
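Put together, a call might look like the following sketch. The prompts here are hypothetical (the built-in feedback functions construct these prompts for you), and the example requires a configured Bedrock provider with AWS credentials, so it is not runnable offline.

```python
from trulens_eval.feedback.provider.bedrock import Bedrock

bedrock = Bedrock(model_id="amazon.titan-text-express-v1")

# Hypothetical relevance-style prompt; the real feedback functions build
# these system/user prompts internally.
score, reasons = bedrock.generate_score_and_reasons(
    system_prompt=(
        "Rate the relevance of the answer to the question "
        "on a scale of 0 to 10."
    ),
    user_prompt=(
        "Question: What is Amazon Bedrock?\n"
        "Answer: A managed foundation-model service."
    ),
)
print(score)    # a float on a 0-1 scale
print(reasons)  # reason metadata dict, if the model returned reasons
```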