trulens.core.guardrails.base¶
Classes¶
context_filter¶
Provides a decorator to filter contexts based on a given feedback and threshold.
| PARAMETER | DESCRIPTION |
|---|---|
| `feedback` | The feedback object to use for filtering. TYPE: `Feedback` |
| `threshold` | The minimum feedback value required for a context to be included. TYPE: `float` |
| `keyword_for_prompt` | Name of the keyword argument of the decorated function whose value is used as the prompt. |
Example
```python
from trulens.core import Feedback
from trulens.core.guardrails.base import context_filter

# `provider` and `vector_store` are assumed to be defined elsewhere
# (a TruLens feedback provider and a vector database client).
feedback = Feedback(provider.context_relevance, name="Context Relevance")


class RAG_from_scratch:
    ...

    @context_filter(feedback, 0.5, "query")
    def retrieve(self, *, query: str) -> list:
        results = vector_store.query(
            query_texts=query,
            n_results=3,
        )
        return [doc for sublist in results["documents"] for doc in sublist]

    ...
```
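A minimal usage sketch, assuming the class above is instantiated with a populated `vector_store` and a configured feedback `provider`; the query string is hypothetical:

```python
# Hypothetical usage; the query string and surrounding setup are placeholders.
rag = RAG_from_scratch()

# Each retrieved document is scored with the "Context Relevance" feedback
# against the value of `query`; documents scoring below 0.5 are filtered out.
contexts = rag.retrieve(query="What is TruLens?")
print(contexts)
```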
block_input¶
Provides a decorator to block input based on a given feedback and threshold.
| PARAMETER | DESCRIPTION |
|---|---|
| `feedback` | The feedback object to use for blocking. TYPE: `Feedback` |
| `threshold` | The feedback score threshold used to decide whether the input is blocked. TYPE: `float` |
| `keyword_for_prompt` | Name of the keyword argument of the decorated function whose value is used as the prompt. |
| `return_value` | The value to return if the input is blocked. Defaults to None. |
Example
```python
from openai import OpenAI

from trulens.core import Feedback
from trulens.core.guardrails.base import block_input

oai_client = OpenAI()

# `provider` is assumed to be a TruLens feedback provider defined elsewhere;
# `instrument` is TruLens's instrumentation decorator (imported per your app setup).
feedback = Feedback(provider.criminality, higher_is_better=False)


class safe_input_chat_app:
    @instrument
    @block_input(
        feedback=feedback,
        threshold=0.9,
        keyword_for_prompt="question",
        return_value="I couldn't find an answer to your question.",
    )
    def generate_completion(self, question: str) -> str:
        completion = (
            oai_client.chat.completions.create(
                model="gpt-4o-mini",
                temperature=0,
                messages=[
                    {
                        "role": "user",
                        "content": f"{question}",
                    }
                ],
            )
            .choices[0]
            .message.content
        )
        return completion
```
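A minimal usage sketch under the same assumptions (a configured `provider` and OpenAI client); the question strings are placeholders:

```python
# Hypothetical usage; the questions below are placeholders.
app = safe_input_chat_app()

# A benign question passes the guardrail and is sent to the model.
print(app.generate_completion(question="What is the capital of France?"))

# A question the criminality feedback flags against the 0.9 threshold is
# blocked: the configured `return_value` is returned and the model is never called.
print(app.generate_completion(question="How do I break into a locked car?"))
```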
block_output¶
Provides a decorator to block output based on a given feedback and threshold.
| PARAMETER | DESCRIPTION |
|---|---|
| `feedback` | The feedback object to use for blocking. It must take only a single argument. TYPE: `Feedback` |
| `threshold` | The feedback score threshold used to decide whether the output is blocked. TYPE: `float` |
| `return_value` | The value to return if the output is blocked. Defaults to None. |
Example
```python
from openai import OpenAI

from trulens.core import Feedback
from trulens.core.guardrails.base import block_output

oai_client = OpenAI()

# `provider` is assumed to be a TruLens feedback provider defined elsewhere;
# `instrument` is TruLens's instrumentation decorator (imported per your app setup).
feedback = Feedback(provider.criminality, higher_is_better=False)


class safe_output_chat_app:
    @instrument
    @block_output(
        feedback=feedback,
        threshold=0.5,
        return_value="Sorry, I couldn't find an answer to your question.",
    )
    def chat(self, question: str) -> str:
        completion = (
            oai_client.chat.completions.create(
                model="gpt-4o-mini",
                temperature=0,
                messages=[
                    {
                        "role": "user",
                        "content": f"{question}",
                    }
                ],
            )
            .choices[0]
            .message.content
        )
        return completion
```
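A minimal usage sketch under the same assumptions; the question string is a placeholder:

```python
# Hypothetical usage; the question below is a placeholder.
app = safe_output_chat_app()

# The model's completion is scored with the criminality feedback before being
# returned; if it is flagged against the 0.5 threshold, the caller receives
# the configured `return_value` instead of the raw completion.
print(app.chat(question="Summarize the plot of a heist movie."))
```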