AWS Bedrock Provider¶
Below is how you can instantiate AWS Bedrock as a provider. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model best suited to your use case.
All feedback functions listed in the base LLMProvider class can be run with AWS Bedrock.
trulens_eval.feedback.provider.bedrock.Bedrock¶
Bases: LLMProvider
A set of AWS Feedback Functions.
Parameters:

- model_id (str, optional): The specific model id. Defaults to "amazon.titan-text-express-v1".
- All other args/kwargs passed to BedrockEndpoint and subsequently to the boto3 client constructor.
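Putting the constructor arguments together, a minimal instantiation sketch (this assumes `trulens_eval` is installed and that AWS credentials are configured for your account; the `region_name` value below is illustrative, not prescribed by the docs):

```python
from trulens_eval.feedback.provider.bedrock import Bedrock

# model_id defaults to "amazon.titan-text-express-v1"; extra kwargs such as
# region_name are forwarded through BedrockEndpoint to the boto3 client.
bedrock = Bedrock(
    model_id="amazon.titan-text-express-v1",
    region_name="us-east-1",  # illustrative; use your own region
)
```

The resulting `bedrock` object can then be passed wherever a feedback provider is expected.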
Functions¶
generate_score¶
generate_score(system_prompt: str, user_prompt: Optional[str] = None, normalize: float = 10.0, temperature: float = 0.0) -> float
Base method to generate a score only, used for evaluation.
| PARAMETER | DESCRIPTION |
|---|---|
| `system_prompt` | A pre-formatted system prompt. **TYPE:** `str` |
| `user_prompt` | An optional user prompt. **TYPE:** `Optional[str]` **DEFAULT:** `None` |
| `normalize` | The normalization factor for the score. **TYPE:** `float` **DEFAULT:** `10.0` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | The score on a 0-1 scale. |
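The `normalize` parameter controls how the model's raw numeric reply is mapped onto the 0-1 scale. A minimal sketch of that step in isolation (`scale_score` is a hypothetical helper, not part of the provider; it assumes the LLM replies with a bare number):

```python
def scale_score(raw_response: str, normalize: float = 10.0) -> float:
    """Hypothetical sketch: map a raw numeric LLM reply (e.g. "8")
    onto a 0-1 scale, as generate_score's normalize factor does."""
    return float(raw_response.strip()) / normalize

print(scale_score("8"))       # 0.8
print(scale_score("3", 5.0))  # 0.6
```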
generate_score_and_reasons¶
generate_score_and_reasons(system_prompt: str, user_prompt: Optional[str] = None, normalize: float = 10.0, temperature: float = 0.0) -> Union[float, Tuple[float, Dict]]
Base method to generate a score and reason, used for evaluation.
| PARAMETER | DESCRIPTION |
|---|---|
| `system_prompt` | A pre-formatted system prompt. **TYPE:** `str` |
| `user_prompt` | An optional user prompt. **TYPE:** `Optional[str]` **DEFAULT:** `None` |
| `normalize` | The normalization factor for the score. **TYPE:** `float` **DEFAULT:** `10.0` |

| RETURNS | DESCRIPTION |
|---|---|
| `Union[float, Tuple[float, Dict]]` | The score on a 0-1 scale. |
| `Union[float, Tuple[float, Dict]]` | Reason metadata, if returned by the LLM. |
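Because the return type is a Union, callers need to handle both a bare score and a (score, reasons) pair. A small defensive sketch (`unpack_result` is a hypothetical helper, not part of trulens_eval):

```python
from typing import Dict, Tuple, Union

def unpack_result(result: Union[float, Tuple[float, Dict]]) -> Tuple[float, Dict]:
    """Hypothetical helper: coerce generate_score_and_reasons' Union
    return into a uniform (score, reasons) pair."""
    if isinstance(result, tuple):
        score, reasons = result
        return score, reasons
    return result, {}  # the LLM returned no reason metadata

print(unpack_result(0.7))                         # (0.7, {})
print(unpack_result((0.9, {"reason": "clear"})))  # (0.9, {'reason': 'clear'})
```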