OpenAI APIs

Below is how you can instantiate OpenAI as a provider, along with feedback functions available only from OpenAI.

Additionally, all feedback functions listed in the base LLMProvider class can be run with OpenAI.
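For example, a provider-only moderation check and a feedback function inherited from LLMProvider can both be built from the same provider instance. A minimal sketch, assuming the standard trulens_eval Feedback API; the choice of `relevance` and the selectors shown are illustrative:

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()

# OpenAI-only feedback function backed by the Moderation API;
# lower scores are better, so mark the feedback accordingly.
f_hate = Feedback(
    openai_provider.moderation_hate, higher_is_better=False
).on_output()

# Inherited from the base LLMProvider class (uses chat completions).
f_relevance = Feedback(openai_provider.relevance).on_input_output()
```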

Bases: LLMProvider

Out of the box feedback functions calling OpenAI APIs.

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
class OpenAI(LLMProvider):
    """Out of the box feedback functions calling OpenAI APIs.
    """
    # model_engine: str # LLMProvider

    endpoint: Endpoint

    def __init__(
        self, *args, endpoint=None, model_engine="gpt-3.5-turbo", **kwargs
    ):
        # NOTE(piotrm): pydantic adds endpoint to the signature of this
        # constructor if we don't include it explicitly, even though we set it
        # down below. Adding it as None here as a temporary hack.
        """
        Create an OpenAI Provider with out of the box feedback functions.

        **Usage:**
        ```python
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()
        ```

        Args:
            model_engine (str): The OpenAI completion model. Defaults to
                `gpt-3.5-turbo`
            endpoint (Endpoint): Internal Usage for DB serialization
        """
        # TODO: why was self_kwargs required here independently of kwargs?
        self_kwargs = dict()
        self_kwargs.update(**kwargs)
        self_kwargs['model_engine'] = model_engine
        self_kwargs['endpoint'] = OpenAIEndpoint(*args, **kwargs)

        super().__init__(
            **self_kwargs
        )  # need to include pydantic.BaseModel.__init__

    # LLMProvider requirement
    def _create_chat_completion(
        self,
        prompt: Optional[str] = None,
        messages: Optional[Sequence[Dict]] = None,
        **kwargs
    ) -> str:
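        # Fill in defaults without overriding anything the caller passed:
        # use the provider's configured model and low-variance sampling
        # (temperature 0.0 with a fixed seed) so feedback scores are repeatable.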

        if 'model' not in kwargs:
            kwargs['model'] = self.model_engine

        if 'temperature' not in kwargs:
            kwargs['temperature'] = 0.0

        if 'seed' not in kwargs:
            kwargs['seed'] = 123

        if prompt is not None:
            completion = self.endpoint.client.chat.completions.create(
                messages=[{
                    "role": "system",
                    "content": prompt
                }], **kwargs
            )
        elif messages is not None:
            completion = self.endpoint.client.chat.completions.create(
                messages=messages, **kwargs
            )

        else:
            raise ValueError("`prompt` or `messages` must be specified.")

        return completion.choices[0].message.content

    def _moderation(self, text: str):
        # See https://platform.openai.com/docs/guides/moderation/overview .
        moderation_response = self.endpoint.run_me(
            lambda: self.endpoint.client.moderations.create(input=text)
        )
        return moderation_response.results[0]

    # TODEP
    def moderation_hate(self, text: str) -> float:
        """
        Uses OpenAI's Moderation API. A function that checks if text is hate
        speech.

        **Usage:**
        ```python
        from trulens_eval import Feedback
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()

        feedback = Feedback(
            openai_provider.moderation_hate, higher_is_better=False
        ).on_output()
        ```

        The `on_output()` selector can be changed. See [Feedback Function
        Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

        Args:
            text (str): Text to evaluate.

        Returns:
            float: A value between 0.0 (not hate) and 1.0 (hate).
        """
        openai_response = self._moderation(text)
        return float(openai_response.category_scores.hate)

    # TODEP
    def moderation_hatethreatening(self, text: str) -> float:
        """
        Uses OpenAI's Moderation API. A function that checks if text is
        threatening speech.

        **Usage:**
        ```python
        from trulens_eval import Feedback
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()

        feedback = Feedback(
            openai_provider.moderation_hatethreatening, higher_is_better=False
        ).on_output()
        ```

        The `on_output()` selector can be changed. See [Feedback Function
        Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

        Args:
            text (str): Text to evaluate.

        Returns:
            float: A value between 0.0 (not threatening) and 1.0 (threatening).
        """
        openai_response = self._moderation(text)

        return float(openai_response.category_scores.hate_threatening)

    # TODEP
    def moderation_selfharm(self, text: str) -> float:
        """
        Uses OpenAI's Moderation API. A function that checks if text is about
        self harm.

        **Usage:**
        ```python
        from trulens_eval import Feedback
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()

        feedback = Feedback(
            openai_provider.moderation_selfharm, higher_is_better=False
        ).on_output()
        ```

        The `on_output()` selector can be changed. See [Feedback Function
        Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

        Args:
            text (str): Text to evaluate.

        Returns:
            float: A value between 0.0 (not self harm) and 1.0 (self harm).
        """
        openai_response = self._moderation(text)

        return float(openai_response.category_scores.self_harm)

    # TODEP
    def moderation_sexual(self, text: str) -> float:
        """
        Uses OpenAI's Moderation API. A function that checks if text is sexual
        speech.

        **Usage:**
        ```python
        from trulens_eval import Feedback
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()

        feedback = Feedback(
            openai_provider.moderation_sexual, higher_is_better=False
        ).on_output()
        ```
        The `on_output()` selector can be changed. See [Feedback Function
        Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

        Args:
            text (str): Text to evaluate.

        Returns:
            float: A value between 0.0 (not sexual) and 1.0 (sexual).
        """
        openai_response = self._moderation(text)

        return float(openai_response.category_scores.sexual)

    # TODEP
    def moderation_sexualminors(self, text: str) -> float:
        """
        Uses OpenAI's Moderation API. A function that checks if text is about
        sexual minors.

        **Usage:**
        ```python
        from trulens_eval import Feedback
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()

        feedback = Feedback(
            openai_provider.moderation_sexualminors, higher_is_better=False
        ).on_output()
        ```

        The `on_output()` selector can be changed. See [Feedback Function
        Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

        Args:
            text (str): Text to evaluate.

        Returns:
            float: A value between 0.0 (not sexual minors) and 1.0 (sexual
            minors).
        """

        openai_response = self._moderation(text)

        return float(openai_response.category_scores.sexual_minors)

    # TODEP
    def moderation_violence(self, text: str) -> float:
        """
        Uses OpenAI's Moderation API. A function that checks if text is about
        violence.

        **Usage:**
        ```python
        from trulens_eval import Feedback
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()

        feedback = Feedback(
            openai_provider.moderation_violence, higher_is_better=False
        ).on_output()
        ```

        The `on_output()` selector can be changed. See [Feedback Function
        Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

        Args:
            text (str): Text to evaluate.

        Returns:
            float: A value between 0.0 (not violence) and 1.0 (violence).
        """
        openai_response = self._moderation(text)

        return float(openai_response.category_scores.violence)

    # TODEP
    def moderation_violencegraphic(self, text: str) -> float:
        """
        Uses OpenAI's Moderation API. A function that checks if text is about
        graphic violence.

        **Usage:**
        ```python
        from trulens_eval import Feedback
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()

        feedback = Feedback(
            openai_provider.moderation_violencegraphic, higher_is_better=False
        ).on_output()
        ```

        The `on_output()` selector can be changed. See [Feedback Function
        Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

        Args:
            text (str): Text to evaluate.

        Returns:
            float: A value between 0.0 (not graphic violence) and 1.0 (graphic
            violence).
        """
        openai_response = self._moderation(text)

        return float(openai_response.category_scores.violence_graphic)

    # TODEP
    def moderation_harassment(self, text: str) -> float:
        """
        Uses OpenAI's Moderation API. A function that checks if text is
        harassment.

        **Usage:**
        ```python
        from trulens_eval import Feedback
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()

        feedback = Feedback(
            openai_provider.moderation_harassment, higher_is_better=False
        ).on_output()
        ```

        The `on_output()` selector can be changed. See [Feedback Function
        Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

        Args:
            text (str): Text to evaluate.

        Returns:
            float: A value between 0.0 (not harassment) and 1.0 (harassment).
        """
        openai_response = self._moderation(text)

        return float(openai_response.category_scores.harassment)

    def moderation_harassment_threatening(self, text: str) -> float:
        """
        Uses OpenAI's Moderation API. A function that checks if text is
        threatening harassment.

        **Usage:**
        ```python
        from trulens_eval import Feedback
        from trulens_eval.feedback.provider.openai import OpenAI
        openai_provider = OpenAI()

        feedback = Feedback(
            openai_provider.moderation_harassment_threatening, higher_is_better=False
        ).on_output()
        ```

        The `on_output()` selector can be changed. See [Feedback Function
        Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

        Args:
            text (str): Text to evaluate.

        Returns:
            float: A value between 0.0 (not harassment/threatening) and 1.0 (harassment/threatening).
        """
        openai_response = self._moderation(text)

        return float(openai_response.category_scores.harassment_threatening)

__init__(*args, endpoint=None, model_engine='gpt-3.5-turbo', **kwargs)

Create an OpenAI Provider with out of the box feedback functions.

Usage:

from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model_engine | str | The OpenAI completion model. Defaults to gpt-3.5-turbo. | 'gpt-3.5-turbo' |
| endpoint | Endpoint | Internal usage for DB serialization. | None |
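The defaults can be overridden at construction time; for instance, a different completion model can be selected via model_engine. A minimal sketch; "gpt-4" here stands in for whichever OpenAI chat model you have access to:

```python
from trulens_eval.feedback.provider.openai import OpenAI

# Use a different chat model for all LLM-based feedback functions.
openai_provider = OpenAI(model_engine="gpt-4")
```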
Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def __init__(
    self, *args, endpoint=None, model_engine="gpt-3.5-turbo", **kwargs
):
    # NOTE(piotrm): pydantic adds endpoint to the signature of this
    # constructor if we don't include it explicitly, even though we set it
    # down below. Adding it as None here as a temporary hack.
    """
    Create an OpenAI Provider with out of the box feedback functions.

    **Usage:**
    ```python
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()
    ```

    Args:
        model_engine (str): The OpenAI completion model. Defaults to
            `gpt-3.5-turbo`
        endpoint (Endpoint): Internal Usage for DB serialization
    """
    # TODO: why was self_kwargs required here independently of kwargs?
    self_kwargs = dict()
    self_kwargs.update(**kwargs)
    self_kwargs['model_engine'] = model_engine
    self_kwargs['endpoint'] = OpenAIEndpoint(*args, **kwargs)

    super().__init__(
        **self_kwargs
    )  # need to include pydantic.BaseModel.__init__

moderation_harassment(text)

Uses OpenAI's Moderation API. A function that checks if text is harassment.

Usage:

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_harassment, higher_is_better=False
).on_output()

The `on_output()` selector can be changed. See the [Feedback Function Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| text | str | Text to evaluate. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | A value between 0.0 (not harassment) and 1.0 (harassment). |

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def moderation_harassment(self, text: str) -> float:
    """
    Uses OpenAI's Moderation API. A function that checks if text is
    harassment.

    **Usage:**
    ```python
    from trulens_eval import Feedback
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()

    feedback = Feedback(
        openai_provider.moderation_harassment, higher_is_better=False
    ).on_output()
    ```

    The `on_output()` selector can be changed. See [Feedback Function
    Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

    Args:
        text (str): Text to evaluate.

    Returns:
        float: A value between 0.0 (not harassment) and 1.0 (harassment).
    """
    openai_response = self._moderation(text)

    return float(openai_response.category_scores.harassment)

moderation_harassment_threatening(text)

Uses OpenAI's Moderation API. A function that checks if text is threatening harassment.

Usage:

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_harassment_threatening, higher_is_better=False
).on_output()

The `on_output()` selector can be changed. See the [Feedback Function Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| text | str | Text to evaluate. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | A value between 0.0 (not harassment/threatening) and 1.0 (harassment/threatening). |

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def moderation_harassment_threatening(self, text: str) -> float:
    """
    Uses OpenAI's Moderation API. A function that checks if text is
    threatening harassment.

    **Usage:**
    ```python
    from trulens_eval import Feedback
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()

    feedback = Feedback(
        openai_provider.moderation_harassment_threatening, higher_is_better=False
    ).on_output()
    ```

    The `on_output()` selector can be changed. See [Feedback Function
    Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

    Args:
        text (str): Text to evaluate.

    Returns:
        float: A value between 0.0 (not harassment/threatening) and 1.0 (harassment/threatening).
    """
    openai_response = self._moderation(text)

    return float(openai_response.category_scores.harassment_threatening)

moderation_hate(text)

Uses OpenAI's Moderation API. A function that checks if text is hate speech.

Usage:

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_hate, higher_is_better=False
).on_output()

The `on_output()` selector can be changed. See the [Feedback Function Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| text | str | Text to evaluate. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | A value between 0.0 (not hate) and 1.0 (hate). |

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def moderation_hate(self, text: str) -> float:
    """
    Uses OpenAI's Moderation API. A function that checks if text is hate
    speech.

    **Usage:**
    ```python
    from trulens_eval import Feedback
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()

    feedback = Feedback(
        openai_provider.moderation_hate, higher_is_better=False
    ).on_output()
    ```

    The `on_output()` selector can be changed. See [Feedback Function
    Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

    Args:
        text (str): Text to evaluate.

    Returns:
        float: A value between 0.0 (not hate) and 1.0 (hate).
    """
    openai_response = self._moderation(text)
    return float(openai_response.category_scores.hate)

moderation_hatethreatening(text)

Uses OpenAI's Moderation API. A function that checks if text is threatening speech.

Usage:

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_hatethreatening, higher_is_better=False
).on_output()

The `on_output()` selector can be changed. See the [Feedback Function Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| text | str | Text to evaluate. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | A value between 0.0 (not threatening) and 1.0 (threatening). |

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def moderation_hatethreatening(self, text: str) -> float:
    """
    Uses OpenAI's Moderation API. A function that checks if text is
    threatening speech.

    **Usage:**
    ```python
    from trulens_eval import Feedback
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()

    feedback = Feedback(
        openai_provider.moderation_hatethreatening, higher_is_better=False
    ).on_output()
    ```

    The `on_output()` selector can be changed. See [Feedback Function
    Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

    Args:
        text (str): Text to evaluate.

    Returns:
        float: A value between 0.0 (not threatening) and 1.0 (threatening).
    """
    openai_response = self._moderation(text)

    return float(openai_response.category_scores.hate_threatening)

moderation_selfharm(text)

Uses OpenAI's Moderation API. A function that checks if text is about self harm.

Usage:

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_selfharm, higher_is_better=False
).on_output()

The `on_output()` selector can be changed. See the [Feedback Function Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| text | str | Text to evaluate. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | A value between 0.0 (not self harm) and 1.0 (self harm). |

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def moderation_selfharm(self, text: str) -> float:
    """
    Uses OpenAI's Moderation API. A function that checks if text is about
    self harm.

    **Usage:**
    ```python
    from trulens_eval import Feedback
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()

    feedback = Feedback(
        openai_provider.moderation_selfharm, higher_is_better=False
    ).on_output()
    ```

    The `on_output()` selector can be changed. See [Feedback Function
    Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

    Args:
        text (str): Text to evaluate.

    Returns:
        float: A value between 0.0 (not self harm) and 1.0 (self harm).
    """
    openai_response = self._moderation(text)

    return float(openai_response.category_scores.self_harm)

moderation_sexual(text)

Uses OpenAI's Moderation API. A function that checks if text is sexual speech.

Usage:

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_sexual, higher_is_better=False
).on_output()

The `on_output()` selector can be changed. See the [Feedback Function Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| text | str | Text to evaluate. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | A value between 0.0 (not sexual) and 1.0 (sexual). |

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def moderation_sexual(self, text: str) -> float:
    """
    Uses OpenAI's Moderation API. A function that checks if text is sexual
    speech.

    **Usage:**
    ```python
    from trulens_eval import Feedback
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()

    feedback = Feedback(
        openai_provider.moderation_sexual, higher_is_better=False
    ).on_output()
    ```
    The `on_output()` selector can be changed. See [Feedback Function
    Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

    Args:
        text (str): Text to evaluate.

    Returns:
        float: A value between 0.0 (not sexual) and 1.0 (sexual).
    """
    openai_response = self._moderation(text)

    return float(openai_response.category_scores.sexual)

moderation_sexualminors(text)

Uses OpenAI's Moderation API. A function that checks if text is about sexual minors.

Usage:

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_sexualminors, higher_is_better=False
).on_output()

The `on_output()` selector can be changed. See the [Feedback Function Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| text | str | Text to evaluate. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | A value between 0.0 (not sexual minors) and 1.0 (sexual minors). |

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def moderation_sexualminors(self, text: str) -> float:
    """
    Uses OpenAI's Moderation API. A function that checks if text is about
    sexual minors.

    **Usage:**
    ```python
    from trulens_eval import Feedback
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()

    feedback = Feedback(
        openai_provider.moderation_sexualminors, higher_is_better=False
    ).on_output()
    ```

    The `on_output()` selector can be changed. See [Feedback Function
    Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

    Args:
        text (str): Text to evaluate.

    Returns:
        float: A value between 0.0 (not sexual minors) and 1.0 (sexual
        minors).
    """

    openai_response = self._moderation(text)

    return float(openai_response.category_scores.sexual_minors)

moderation_violence(text)

Uses OpenAI's Moderation API. A function that checks if text is about violence.

Usage:

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_violence, higher_is_better=False
).on_output()

The `on_output()` selector can be changed. See the [Feedback Function Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| text | str | Text to evaluate. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | A value between 0.0 (not violence) and 1.0 (violence). |

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def moderation_violence(self, text: str) -> float:
    """
    Uses OpenAI's Moderation API. A function that checks if text is about
    violence.

    **Usage:**
    ```python
    from trulens_eval import Feedback
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()

    feedback = Feedback(
        openai_provider.moderation_violence, higher_is_better=False
    ).on_output()
    ```

    The `on_output()` selector can be changed. See [Feedback Function
    Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

    Args:
        text (str): Text to evaluate.

    Returns:
        float: A value between 0.0 (not violence) and 1.0 (violence).
    """
    openai_response = self._moderation(text)

    return float(openai_response.category_scores.violence)

moderation_violencegraphic(text)

Uses OpenAI's Moderation API. A function that checks if text is about graphic violence.

Usage:

from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI
openai_provider = OpenAI()

feedback = Feedback(
    openai_provider.moderation_violencegraphic, higher_is_better=False
).on_output()

The `on_output()` selector can be changed. See the [Feedback Function Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| text | str | Text to evaluate. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | A value between 0.0 (not graphic violence) and 1.0 (graphic violence). |

Source code in trulens_eval/trulens_eval/feedback/provider/openai.py
def moderation_violencegraphic(self, text: str) -> float:
    """
    Uses OpenAI's Moderation API. A function that checks if text is about
    graphic violence.

    **Usage:**
    ```python
    from trulens_eval import Feedback
    from trulens_eval.feedback.provider.openai import OpenAI
    openai_provider = OpenAI()

    feedback = Feedback(
        openai_provider.moderation_violencegraphic, higher_is_better=False
    ).on_output()
    ```

    The `on_output()` selector can be changed. See [Feedback Function
    Guide](https://www.trulens.org/trulens_eval/feedback_function_guide/)

    Args:
        text (str): Text to evaluate.

    Returns:
        float: A value between 0.0 (not graphic violence) and 1.0 (graphic
        violence).
    """
    openai_response = self._moderation(text)

    return float(openai_response.category_scores.violence_graphic)
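
All of the moderation functions above share the same signature and scoring direction, so they can be collected into a single list of feedbacks in one place. A minimal sketch; the selection of categories is illustrative:

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()

# Moderation scores measure harmful content, so lower is better for all of them.
moderation_feedbacks = [
    Feedback(fn, higher_is_better=False).on_output()
    for fn in (
        openai_provider.moderation_hate,
        openai_provider.moderation_harassment,
        openai_provider.moderation_violence,
    )
]
```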