📓 LlamaIndex Quickstart
In this quickstart you will create a simple LlamaIndex app and learn how to log it and get feedback on an LLM response.
You'll also learn how to use feedback results as guardrails by filtering retrieved context.
For evaluation, we will leverage the RAG triad of groundedness, context relevance, and answer relevance.
# !pip install trulens trulens-apps-llamaindex trulens-providers-openai llama_index openai
Add API keys
For this quickstart, you will need an OpenAI API key. The key is used for embeddings, completions, and evaluation.
import os
os.environ["OPENAI_API_KEY"] = "sk-..."
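If you prefer not to hardcode the key, a minimal sketch using Python's built-in getpass prompts for it at runtime instead:

import os
from getpass import getpass

# Prompt for the key only if it is not already set in the environment
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key: ")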
Import from TruLens
from trulens.core import TruSession
session = TruSession()
session.reset_database()
Download data
This example uses the text of Paul Graham's essay, "What I Worked On", and is the canonical LlamaIndex example.
The easiest way to get it is to download it and save it in a folder called data. You can do so with the following code:
import os
import urllib.request
url = "https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt"
file_path = "data/paul_graham_essay.txt"
if not os.path.exists("data"):
os.makedirs("data")
if not os.path.exists(file_path):
urllib.request.urlretrieve(url, file_path)
Create Simple LLM Application
This example uses LlamaIndex, which internally uses an OpenAI LLM.
from llama_index.core import Settings
from llama_index.core import SimpleDirectoryReader
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
# Configure chunking and the default LLM
Settings.chunk_size = 128
Settings.chunk_overlap = 16
Settings.llm = OpenAI()

# Load the essay, build a vector index, and create a query engine
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=3)
Send your first request
response = query_engine.query("What did the author do growing up?")
print(response)
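To see which chunks were retrieved for this answer, you can inspect the response's source nodes, a standard LlamaIndex attribute (the slicing below is only to keep the output short):

# Each source node carries a retrieved chunk and its similarity score
for node in response.source_nodes:
    print(node.score, node.text[:100].replace("\n", " "))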
Initialize Feedback Function(s)
import numpy as np
from trulens.apps.llamaindex import TruLlama
from trulens.core import Feedback
from trulens.providers.openai import OpenAI  # Note: shadows the LlamaIndex OpenAI imported above

# Initialize provider class
provider = OpenAI()

# Select context to be used in feedback. The location of context is app-specific.
context = TruLlama.select_context(query_engine)
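If you're curious where this selector points, you can print it; it is a path into the app's recorded trace, and the exact path varies by app:

# Inspect the app-specific selector path TruLens will use for context
print(context)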
# Define a groundedness feedback function
f_groundedness = (
    Feedback(
        provider.groundedness_measure_with_cot_reasons, name="Groundedness"
    )
    .on(context.collect())  # Collect context chunks into a list
    .on_output()
)

# Question/answer relevance between overall question and answer.
f_answer_relevance = Feedback(
    provider.relevance_with_cot_reasons, name="Answer Relevance"
).on_input_output()

# Question/statement relevance between question and each context chunk.
f_context_relevance = (
    Feedback(
        provider.context_relevance_with_cot_reasons, name="Context Relevance"
    )
    .on_input()
    .on(context)
    .aggregate(np.mean)  # Average over all retrieved chunks
)
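Before wiring these into a recorder, you can sanity-check a provider method directly; the _with_cot_reasons variants return a score in [0, 1] together with a reasons dict. The example strings below are made up for illustration, and the call makes a real OpenAI request:

# Sanity check of a single feedback call (hypothetical inputs)
score, reasons = provider.relevance_with_cot_reasons(
    "What did the author do growing up?",
    "The author wrote short stories and programmed on an IBM 1401.",
)
print(score)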
Instrument app for logging with TruLens
tru_query_engine_recorder = TruLlama(
    query_engine,
    app_name="LlamaIndex_App",
    app_version="base",
    feedbacks=[f_groundedness, f_answer_relevance, f_context_relevance],
)
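app_name and app_version are worth setting deliberately: later in this quickstart we register a second, filtered version under the same app name, which lets the leaderboard compare the "base" and "filtered" versions side by side.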
# Record the app invocation by running it inside the recorder's context manager
with tru_query_engine_recorder as recording:
    query_engine.query("What did the author do growing up?")
Use guardrails
In addition to informing iteration, we can also use feedback results directly as guardrails at inference time. In particular, here we show how to use the context relevance score as a guardrail to filter out irrelevant context before it gets passed to the LLM. This both reduces hallucination and improves efficiency.
Below, you can see the TruLens feedback display with the context relevance score of each chunk retrieved by our RAG.
from trulens.dashboard.display import get_feedback_result
last_record = recording.records[-1]
get_feedback_result(last_record, "Context Relevance")
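Assuming get_feedback_result returns a pandas DataFrame with one row per chunk (the "score" column name below is a guess for illustration), you could count how many retrieved chunks fall short of a 0.5 threshold:

# Hypothetical: count low-relevance chunks, assuming a "score" column
feedback_df = get_feedback_result(last_record, "Context Relevance")
print((feedback_df["score"] < 0.5).sum(), "chunks below 0.5")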
Wouldn't it be great if we could automatically filter out context chunks with relevance scores below 0.5?
We can do so with the TruLens guardrail, WithFeedbackFilterNodes. All we have to do is wrap our query engine with it, passing in the feedback function and threshold we want to use. At query time, each retrieved chunk is scored, and chunks scoring below the threshold are dropped before they reach the LLM.
from trulens.apps.llamaindex.guardrails import WithFeedbackFilterNodes
# Note: the feedback function used for a guardrail must return only a score, not reasons
f_context_relevance_score = Feedback(provider.context_relevance)

filtered_query_engine = WithFeedbackFilterNodes(
    query_engine, feedback=f_context_relevance_score, threshold=0.5
)
Then we can operate as normal:
tru_recorder = TruLlama(
    filtered_query_engine,
    app_name="LlamaIndex_App",
    app_version="filtered",
    feedbacks=[f_answer_relevance, f_context_relevance, f_groundedness],
)
with tru_recorder as recording:
    llm_response = filtered_query_engine.query(
        "What did the author do growing up?"
    )

display(llm_response)
See the power of context filters!
If we inspect the context relevance of our retrieval now, we see only relevant context chunks!
from trulens.dashboard.display import get_feedback_result
last_record = recording.records[-1]
get_feedback_result(last_record, "Context Relevance")
session.get_leaderboard()
Retrieve records and feedback
# The record of the app invocation can be retrieved from the `recording`:
rec = recording.get() # use .get if only one record
# recs = recording.records # use .records if multiple
display(rec)
from trulens.dashboard import run_dashboard
run_dashboard(session)
# The results of the feedback functions can be retrieved from
# `Record.feedback_results` or using the `wait_for_feedback_result` method. The
# results if retrieved directly are `Future` instances (see
# `concurrent.futures`). You can use `as_completed` to wait until they have
# finished evaluating or use the utility method:
for feedback, feedback_result in rec.wait_for_feedback_results().items():
    print(feedback.name, feedback_result.result)
# See more about wait_for_feedback_results:
# help(rec.wait_for_feedback_results)
records, feedback = session.get_records_and_feedback()
records.head()
session.get_leaderboard()
Explore in a Dashboard
run_dashboard(session) # open a local streamlit app to explore
# stop_dashboard(session) # stop if needed
Alternatively, you can run trulens from a command line in the same folder to start the dashboard.