QA RAG II with self-assessment
This variation modifies the evaluation step. Along with the question and answer pair, we also pass the retrieved context to the evaluator LLM.
To do this, add an itemgetter function to the second RunnableParallel to collect the context string and pass it on to the new qa_eval_prompt_with_context prompt template.
rag_chain = (
    RunnableParallel(context=retriever | format_docs, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question"), context=itemgetter("context")) |
    qa_eval_prompt_with_context |
    llm_selfeval |
    json_parser
)
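To make the data flow concrete, here is a stdlib-only sketch of what each RunnableParallel step produces. The retriever, prompt, and LLM are replaced with hypothetical pure-Python stand-ins; only the shape of the dict passed between steps reflects the chain above.

```python
from operator import itemgetter

# Hypothetical stand-ins for the real chain components.
def retrieve_and_format(question):
    # stands in for `retriever | format_docs`
    return "Paris is the capital of France."

def answer_question(inputs):
    # stands in for `qa_prompt | llm | retrieve_answer`
    return "Paris"

# Step 1: RunnableParallel(context=..., question=RunnablePassthrough())
def step_one(question):
    return {"context": retrieve_and_format(question), "question": question}

# Step 2: RunnableParallel(answer=..., question=itemgetter("question"),
#         context=itemgetter("context"))
def step_two(inputs):
    return {
        "answer": answer_question(inputs),
        "question": itemgetter("question")(inputs),
        "context": itemgetter("context")(inputs),
    }

state = step_two(step_one("What is the capital of France?"))
# The evaluator prompt now receives the question, the answer, AND the context.
print(sorted(state))  # ['answer', 'context', 'question']
```

The key point is that itemgetter("context") re-attaches the retrieved context to the second step's output, so the evaluator prompt template can reference it.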
Implementation flowchart:
One common problem when using chain implementations like LCEL is the difficulty of accessing intermediate variables, which matters for pipeline debugging. We'll look at some options that let you access the intermediate variables of interest using LCEL operations.
Carry forward intermediate outputs using RunnableParallel
As we saw earlier, RunnableParallel lets you pass multiple arguments to the next step in the chain. We use this feature of RunnableParallel to carry the required intermediate values through to the end.
The example below modifies the original self-evaluation RAG chain to output the retrieved context text together with the final self-evaluation output. The main change is adding a RunnableParallel object at every step of the chain to carry over the context variable.
Additionally, use the itemgetter function to explicitly specify the inputs for subsequent steps. For example, in the last two RunnableParallel objects, itemgetter("input") ensures that only the input argument from the previous step is passed to the LLM/JSON parser object.
rag_chain = (
    RunnableParallel(context=retriever | format_docs, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question"), context=itemgetter("context")) |
    RunnableParallel(input=qa_eval_prompt, context=itemgetter("context")) |
    RunnableParallel(input=itemgetter("input") | llm_selfeval, context=itemgetter("context")) |
    RunnableParallel(input=itemgetter("input") | json_parser, context=itemgetter("context"))
)
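To see why itemgetter("input") is needed here: each RunnableParallel step emits a dict with both an "input" key and a "context" key, but the LLM and JSON parser expect only the payload, not the whole dict. A stdlib-only illustration with a hypothetical step output:

```python
from operator import itemgetter

# Shape of the dict a RunnableParallel step emits: the payload plus the
# carried-forward context (values here are hypothetical placeholders).
step_output = {
    "input": "formatted evaluation prompt",
    "context": "retrieved context text",
}

# itemgetter("input") extracts only the value the next component should see;
# the "context" key is re-attached alongside it by the enclosing RunnableParallel.
get_input = itemgetter("input")
print(get_input(step_output))  # formatted evaluation prompt
```

Without the itemgetter, the downstream LLM would receive the full dict instead of just the formatted prompt.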
The output from this chain looks like this:
A more concise variation:
rag_chain = (
    RunnableParallel(context=retriever | format_docs, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question"), context=itemgetter("context")) |
    RunnableParallel(input=qa_eval_prompt | llm_selfeval | json_parser, context=itemgetter("context"))
)
Use global variables to save intermediate steps
This approach essentially follows the logger principle: introduce a new function that saves its input to a global variable, so that the intermediate value can later be accessed through that global variable.
global context

def save_context(x):
    global context
    context = x
    return x
rag_chain = (
    RunnableParallel(context=retriever | format_docs | save_context, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question")) |
    qa_eval_prompt |
    llm_selfeval |
    json_parser
)
Here we define a global variable called context and a function called save_context that stores its input value in the global context variable before returning the same input unchanged. In the chain, save_context is piped in as the last part of the context-retrieval step.
This option lets you access intermediate steps without making major modifications to the chain.
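Because the tap function returns its input unchanged, inserting it anywhere in a pipeline does not alter the chain's behavior. A stdlib-only sketch of the same pattern, with a toy function standing in for `retriever | format_docs`:

```python
context = None  # module-level variable that will hold the tapped value

def save_context(x):
    # Store the intermediate value globally, then pass it through unchanged.
    global context
    context = x
    return x

# A toy pipeline standing in for `retriever | format_docs | save_context | ...`.
def pipeline(question):
    docs = f"docs for: {question}"      # stand-in for retrieval + formatting
    return save_context(docs).upper()   # downstream step still sees the value

result = pipeline("what is LCEL?")
print(result)   # DOCS FOR: WHAT IS LCEL?
print(context)  # docs for: what is LCEL?  (captured intermediate value)
```

The trade-off is that a global variable holds only the most recent value and is not safe under concurrent invocations of the chain.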
Using callbacks
Attaching callbacks to a chain is another common technique for logging intermediate variable values. There's a lot to cover on the subject of LangChain callbacks, so I'll be covering this in more detail in another post.

