OpenAI Provider

Below is how to instantiate OpenAI as a provider, along with the feedback functions available only from OpenAI.

Additionally, all feedback functions listed in the base LLMProvider class can be run with OpenAI.

trulens_eval.feedback.provider.openai.OpenAI

Bases: LLMProvider

Out-of-the-box feedback functions calling OpenAI APIs.

Create an OpenAI Provider with out-of-the-box feedback functions.

Example

from trulens_eval.feedback.provider.openai import OpenAI 
openai_provider = OpenAI()
PARAMETER DESCRIPTION
model_engine

The OpenAI completion model. Defaults to gpt-3.5-turbo.

TYPE: Optional[str] DEFAULT: None

**kwargs

Additional arguments to pass to the OpenAIEndpoint, which are then passed to OpenAIClient and finally to the underlying OpenAI client.

TYPE: dict DEFAULT: {}
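As a sketch of this pass-through, the classes below are simplified stand-ins (illustrative only, not the actual trulens_eval implementation) showing how **kwargs given to the provider flow layer by layer down to the underlying client:

```python
# Simplified stand-ins illustrating how **kwargs are forwarded layer by layer:
# OpenAI provider -> OpenAIEndpoint -> OpenAIClient -> underlying OpenAI client.
# These classes are illustrative only, not the real trulens_eval classes.

class UnderlyingClientSketch:
    """Stands in for the openai.OpenAI client."""
    def __init__(self, **kwargs):
        self.config = kwargs  # e.g. api_key, timeout, base_url

class OpenAIClientSketch:
    """Stands in for the OpenAIClient wrapper; forwards kwargs onward."""
    def __init__(self, **kwargs):
        self.client = UnderlyingClientSketch(**kwargs)

class OpenAIEndpointSketch:
    """Stands in for OpenAIEndpoint; forwards kwargs onward."""
    def __init__(self, **kwargs):
        self.client = OpenAIClientSketch(**kwargs)

endpoint = OpenAIEndpointSketch(api_key="sk-example", timeout=30)
print(endpoint.client.client.config)
```

The practical upshot is that any keyword argument accepted by the OpenAI client (such as an API key or timeout) can be supplied directly to the provider constructor.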

Functions

moderation_hate

moderation_hate(text: str) -> float

Uses OpenAI's Moderation API to check whether text contains hate speech.

Example

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_hate, higher_is_better=False
).on_output()

The on_output() selector can be changed. See Feedback Function Guide

PARAMETER DESCRIPTION
text

Text to evaluate.

TYPE: str

RETURNS DESCRIPTION
float

A value between 0.0 (not hate) and 1.0 (hate).

TYPE: float
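Because higher_is_better=False for these moderation feedbacks, lower scores are better. A minimal sketch of turning such a score into a pass/fail decision (the helper name and the 0.5 threshold are illustrative assumptions, not part of trulens_eval):

```python
def passes_moderation(score: float, threshold: float = 0.5) -> bool:
    """Return True when a moderation score (0.0 = clean, 1.0 = flagged)
    stays below the threshold. The threshold choice is application-specific."""
    return score < threshold

print(passes_moderation(0.02))  # low score: content passes
print(passes_moderation(0.91))  # high score: content is flagged
```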

moderation_hatethreatening

moderation_hatethreatening(text: str) -> float

Uses OpenAI's Moderation API to check whether text contains threatening speech.

Example

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_hatethreatening, higher_is_better=False
).on_output()

The on_output() selector can be changed. See Feedback Function Guide

PARAMETER DESCRIPTION
text

Text to evaluate.

TYPE: str

RETURNS DESCRIPTION
float

A value between 0.0 (not threatening) and 1.0 (threatening).

TYPE: float

moderation_selfharm

moderation_selfharm(text: str) -> float

Uses OpenAI's Moderation API to check whether text is about self-harm.

Example

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_selfharm, higher_is_better=False
).on_output()

The on_output() selector can be changed. See Feedback Function Guide

PARAMETER DESCRIPTION
text

Text to evaluate.

TYPE: str

RETURNS DESCRIPTION
float

A value between 0.0 (not self-harm) and 1.0 (self-harm).

TYPE: float

moderation_sexual

moderation_sexual(text: str) -> float

Uses OpenAI's Moderation API to check whether text contains sexual content.

Example

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_sexual, higher_is_better=False
).on_output()

The on_output() selector can be changed. See Feedback Function Guide

PARAMETER DESCRIPTION
text

Text to evaluate.

TYPE: str

RETURNS DESCRIPTION
float

A value between 0.0 (not sexual) and 1.0 (sexual).

TYPE: float

moderation_sexualminors

moderation_sexualminors(text: str) -> float

Uses OpenAI's Moderation API to check whether text contains sexual content involving minors.

Example

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_sexualminors, higher_is_better=False
).on_output()

The on_output() selector can be changed. See Feedback Function Guide

PARAMETER DESCRIPTION
text

Text to evaluate.

TYPE: str

RETURNS DESCRIPTION
float

A value between 0.0 (not sexual minors) and 1.0 (sexual minors).

TYPE: float

moderation_violence

moderation_violence(text: str) -> float

Uses OpenAI's Moderation API to check whether text is about violence.

Example

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_violence, higher_is_better=False
).on_output()

The on_output() selector can be changed. See Feedback Function Guide

PARAMETER DESCRIPTION
text

Text to evaluate.

TYPE: str

RETURNS DESCRIPTION
float

A value between 0.0 (not violence) and 1.0 (violence).

TYPE: float

moderation_violencegraphic

moderation_violencegraphic(text: str) -> float

Uses OpenAI's Moderation API to check whether text is about graphic violence.

Example

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_violencegraphic, higher_is_better=False
).on_output()

The on_output() selector can be changed. See Feedback Function Guide

PARAMETER DESCRIPTION
text

Text to evaluate.

TYPE: str

RETURNS DESCRIPTION
float

A value between 0.0 (not graphic violence) and 1.0 (graphic violence).

TYPE: float

moderation_harassment

moderation_harassment(text: str) -> float

Uses OpenAI's Moderation API to check whether text contains harassment.

Example

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_harassment, higher_is_better=False
).on_output()

The on_output() selector can be changed. See Feedback Function Guide

PARAMETER DESCRIPTION
text

Text to evaluate.

TYPE: str

RETURNS DESCRIPTION
float

A value between 0.0 (not harassment) and 1.0 (harassment).

TYPE: float

moderation_harassment_threatening

moderation_harassment_threatening(text: str) -> float

Uses OpenAI's Moderation API to check whether text contains threatening harassment.

Example

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_harassment_threatening, higher_is_better=False
).on_output()

The on_output() selector can be changed. See Feedback Function Guide

PARAMETER DESCRIPTION
text

Text to evaluate.

TYPE: str

RETURNS DESCRIPTION
float

A value between 0.0 (not harassment/threatening) and 1.0 (harassment/threatening).

TYPE: float
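All of the moderation functions above return scores in [0.0, 1.0] where lower is better, so their results can be combined into a single worst-case check. A minimal sketch (the dict of scores is fabricated for illustration; in practice the values would come from the moderation feedback functions above):

```python
def worst_moderation_score(scores: dict[str, float]) -> tuple[str, float]:
    """Return the category with the highest (i.e. worst) moderation score."""
    category = max(scores, key=scores.get)
    return category, scores[category]

# Illustrative scores only; real values come from the moderation feedbacks.
scores = {
    "hate": 0.01,
    "violence": 0.72,
    "harassment": 0.05,
}
category, score = worst_moderation_score(scores)
print(category, score)
```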