LiteLLM APIs
Below is how you can instantiate LiteLLM as a provider. LiteLLM supports 100+ models from OpenAI, Cohere, Anthropic, HuggingFace, Meta, and more. You can find more information about the available models in the LiteLLM documentation at https://docs.litellm.ai/docs/providers.

All feedback functions listed in the base LLMProvider class can be run with LiteLLM, as shown in the sketch below.
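For example, instantiation looks like the following. This is a minimal sketch: `claude-2` stands in for any model string LiteLLM accepts, and the exact strings available depend on your LiteLLM version and the API keys you have configured.

```python
from trulens_eval.feedback.provider.litellm import LiteLLM

# Defaults to the gpt-3.5-turbo completion model.
litellm_provider = LiteLLM()

# Any LiteLLM-supported model string can be used instead, e.g. an
# Anthropic model (assumes ANTHROPIC_API_KEY is set in the environment):
claude_provider = LiteLLM(model_engine="claude-2")
```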
Bases: LLMProvider
Out of the box feedback functions calling LiteLLM API.
Source code in trulens_eval/trulens_eval/feedback/provider/litellm.py
````python
class LiteLLM(LLMProvider):
    """Out of the box feedback functions calling LiteLLM API.
    """
    model_engine: str
    endpoint: Endpoint

    def __init__(
        self, *args, endpoint=None, model_engine="gpt-3.5-turbo", **kwargs
    ):
        # NOTE(piotrm): pydantic adds endpoint to the signature of this
        # constructor if we don't include it explicitly, even though we set it
        # down below. Adding it as None here as a temporary hack.
        """
        Create a LiteLLM Provider with out of the box feedback functions.

        **Usage:**
        ```
        from trulens_eval.feedback.provider.litellm import LiteLLM
        litellm_provider = LiteLLM()
        ```

        Args:
            model_engine (str): The LiteLLM completion model. Defaults to
                `gpt-3.5-turbo`.
            endpoint (Endpoint): Internal usage for DB serialization.
        """
        # TODO: why was self_kwargs required here independently of kwargs?
        self_kwargs = dict()
        self_kwargs.update(**kwargs)
        self_kwargs['model_engine'] = model_engine
        self_kwargs['endpoint'] = LiteLLMEndpoint(*args, **kwargs)

        super().__init__(
            **self_kwargs
        )  # need to include pydantic.BaseModel.__init__

    def _create_chat_completion(
        self,
        prompt: Optional[str] = None,
        messages: Optional[Sequence[Dict]] = None,
        **kwargs
    ) -> str:
        from litellm import completion

        if prompt is not None:
            comp = completion(
                model=self.model_engine,
                messages=[{
                    "role": "system",
                    "content": prompt
                }],
                **kwargs
            )
        elif messages is not None:
            comp = completion(
                model=self.model_engine, messages=messages, **kwargs
            )
        else:
            raise ValueError("`prompt` or `messages` must be specified.")

        assert isinstance(comp, dict)

        return comp["choices"][0]["message"]["content"]
````
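Once constructed, the provider can back any of the base `LLMProvider` feedback functions. The sketch below is illustrative, using `relevance` as one such function together with the standard `Feedback` wrapper from `trulens_eval`:

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.litellm import LiteLLM

litellm_provider = LiteLLM()

# relevance is one of the feedback functions inherited from LLMProvider;
# on_input_output() wires it to the app's main input and output.
f_relevance = Feedback(litellm_provider.relevance).on_input_output()
```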
__init__(*args, endpoint=None, model_engine='gpt-3.5-turbo', **kwargs)

Create a LiteLLM Provider with out of the box feedback functions.

**Usage:**

```python
from trulens_eval.feedback.provider.litellm import LiteLLM
litellm_provider = LiteLLM()
```

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_engine` | `str` | The LiteLLM completion model. Defaults to `gpt-3.5-turbo`. | `'gpt-3.5-turbo'` |
| `endpoint` | `Endpoint` | Internal usage for DB serialization. | `None` |