maeser.graphs.pipeline_rag module#
Module for creating a pipeline Retrieval-Augmented Generation (RAG) graph using LangChain.
This RAG graph accepts multiple vector stores, allowing the chatbot to dynamically choose the most relevant vector store when answering a user’s question. However, only one vector store can be accessed per response.
Note: In almost all cases, universal_rag is a better option compared to pipeline_rag.
- maeser.graphs.pipeline_rag.get_pipeline_rag(vectorstore_config: Dict[str, str], memory_filepath: str, api_key: str | None = None, system_prompt_text: str = 'You are a helpful teacher helping a student with course material.\nYou will answer a question based on the context provided.\nIf the question is unrelated to the topic or the context, politely inform the user that their question is outside the context of your resources.\n\n{context}\n', model: str = 'gpt-4o-mini') → langgraph.graph.graph.CompiledGraph [source]#
Creates a pipeline Retrieval-Augmented Generation (RAG) graph.
Note: In almost all cases, universal_rag.get_universal_rag() is a better option compared to pipeline_rag.get_pipeline_rag().
A pipeline RAG graph is a dynamic RAG graph that performs topic extraction, conditional routing to retrieval nodes, and answer generation. The returned object is a compiled graph (with memory checkpoint).
This RAG graph accepts multiple vector stores, allowing the chatbot to dynamically choose the most relevant vector store when answering a user’s question. However, only one vector store can be accessed per response.
The following system prompt is used if none is provided:
"""
You are a helpful teacher helping a student with course material.
You will answer a question based on the context provided.
If the question is unrelated to the topic or the context, politely inform the user that their question is outside the context of your resources.

{context}
"""
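If a custom prompt is supplied via system_prompt_text, it should keep the {context} placeholder, which is presumably where the retrieved documents are injected during answer generation. A minimal sketch of a custom prompt (the wording and course name are illustrative, not part of the library):

```python
# Illustrative custom prompt; the wording and course name are examples only.
# Keep the {context} placeholder so the retrieved documents can be inserted.
custom_prompt = (
    "You are a teaching assistant for an introductory programming course.\n"
    "Answer the question using only the context provided below. If the answer "
    "is not in the context, say so politely.\n"
    "\n"
    "{context}\n"
)
```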
- Parameters:
vectorstore_config (Dict[str, str]) –
Mapping of topic name to vector store path.
> WARNING: The topic name must be all lowercase due to a limitation in the current implementation.
memory_filepath (str) – Path for the memory checkpoint (SQLite database).
api_key (str | None) – API key for the language model. Defaults to None, in which case the OPENAI_API_KEY environment variable is used.
system_prompt_text (str) – System prompt template for answer generation. Defaults to a helpful teacher prompt.
model (str) – Model name to use. Defaults to ‘gpt-4o-mini’.
- Returns:
A compiled state graph (with memory checkpoint) ready for execution.
- Return type:
CompiledGraph
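A sketch of building and invoking the graph. The topic names, vector store paths, and SQLite filename are placeholders; the "messages" input key and the thread_id-based config follow common LangGraph checkpointer conventions and are assumptions about the graph's state schema rather than documented behavior:

```python
from maeser.graphs.pipeline_rag import get_pipeline_rag

# Placeholder topic-to-path mapping; topic names must be all lowercase (see the warning above).
vectorstore_config = {
    "homework": "vectorstores/homework",
    "labs": "vectorstores/labs",
}

graph = get_pipeline_rag(
    vectorstore_config=vectorstore_config,
    memory_filepath="chat_memory.db",  # SQLite checkpoint for conversation memory
    api_key=None,                      # falls back to the OPENAI_API_KEY environment variable
    model="gpt-4o-mini",
)

# LangGraph compiled graphs with a checkpointer are typically invoked with a thread_id
# so that memory persists across turns. The "messages" input key is an assumption here,
# not something documented above.
config = {"configurable": {"thread_id": "example-session"}}
result = graph.invoke({"messages": [("user", "How do I submit lab 3?")]}, config)
print(result)
```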