Feedback Providers¶
TruLens constructs feedback functions by combining a more general model, known as the feedback provider, with a feedback implementation made up of carefully constructed prompts and custom logic tailored to perform a particular evaluation task.
This page documents the feedback providers available in TruLens.
These providers fall into two main categories, classification-based and generation-based; there are also provider combinations that build on one or more of these providers to offer additional feedback function capabilities.
Classification-based Providers¶
Some feedback functions rely on classification models that are typically tailor-made for evaluation tasks, in contrast to general-purpose LLMs.
- Hugging Face provider, containing a variety of classification-based feedback functions runnable on the remote Hugging Face API.
- Hugging Face Local provider, containing a variety of classification-based feedback functions runnable locally.
- OpenAI provider (and subclasses), featuring moderation feedback functions.
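Below is a minimal sketch of calling a classification-based feedback function directly, using the Hugging Face provider and its `language_match` function. It assumes an API key is supplied via the `HUGGINGFACE_API_KEY` environment variable; check the provider's documentation for the key setup in your installation:

```python
from trulens.providers.huggingface import Huggingface

# Assumes HUGGINGFACE_API_KEY is set so the provider can reach the
# remote Hugging Face inference API.
provider = Huggingface()

# language_match is a classification-based feedback function: it should
# score near 1.0 when both texts are in the same language.
score = provider.language_match(
    "Hello, how are you?",
    "Hi, how is it going?",
)
print(score)
```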
Generation-based Providers¶
Providers that use large language models for feedback evaluation:
- OpenAI provider or AzureOpenAI provider
- Google provider
- Bedrock provider
- LiteLLM provider
- LangChain provider
Feedback functions common to these providers are found in the abstract class LLMProvider.
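Because these functions live on the shared base class, the same call works across providers. A minimal sketch using `relevance`, one of the functions defined on LLMProvider, assuming `OPENAI_API_KEY` is set in the environment (the model name is illustrative):

```python
from trulens.core import Feedback
from trulens.providers.openai import OpenAI

# Assumes OPENAI_API_KEY is set; the model name is illustrative.
provider = OpenAI(model_engine="gpt-4o-mini")

# relevance is defined on the abstract LLMProvider, so swapping in
# Bedrock, LiteLLM, etc. leaves this line unchanged.
f_relevance = Feedback(provider.relevance).on_input_output()
```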
Using LiteLLM with a Custom Endpoint¶
The LiteLLM provider supports 100+ models through LiteLLM, including local models served by Ollama.
When connecting to a model served at a custom URL (e.g. a remote Ollama instance), there are three options:
1. **Specify a custom base URL.** Pass `api_base` directly to the provider constructor:

    ```python
    from trulens.providers.litellm import LiteLLM

    provider = LiteLLM(
        model_engine="ollama/llama3.1:8b",
        api_base="http://my-ollama-host:11434",
    )
    ```
2. **Set a provider-specific environment variable.** litellm will read it automatically; for Ollama, this is `OLLAMA_API_BASE`:

    ```python
    import os

    os.environ["OLLAMA_API_BASE"] = "http://my-ollama-host:11434"

    from trulens.providers.litellm import LiteLLM

    provider = LiteLLM(model_engine="ollama/llama3.1:8b")
    ```

    See the litellm docs for the environment variable names for each provider.
3. **Pass extra arguments through `completion_kwargs`.** Anything in `completion_kwargs` is forwarded to `litellm.completion()`:

    ```python
    from trulens.providers.litellm import LiteLLM

    provider = LiteLLM(
        model_engine="ollama/llama3.1:8b",
        completion_kwargs={
            "api_base": "http://my-ollama-host:11434",
        },
    )
    ```
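Whichever option you choose, a quick way to confirm the endpoint is reachable is to call one of the common LLMProvider feedback functions directly (a minimal sketch; the prompt and response strings are illustrative):

```python
# relevance is inherited from LLMProvider, so it works with the LiteLLM
# provider configured above; a working endpoint should return a float
# score in [0.0, 1.0].
score = provider.relevance(
    prompt="What is the capital of France?",
    response="Paris is the capital of France.",
)
print(score)
```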