RAG Sandbox

The RAG Sandbox is an interactive platform for end-to-end testing of the vector indexes generated during your RAG (Retrieval-Augmented Generation) evaluation. It provides a hands-on environment for experimenting with different embedding models, chunking strategies, and vector search indexes in real time.

Using the RAG Sandbox, you can:

  • Select Vector Search Index: Choose from multiple vectorization configurations (embedding model, chunk size, overlap, and dimensions) to see how different setups affect retrieval accuracy and relevance (see the chunking sketch after this list).

  • Adjust Prompts and Questions: Enter your own questions or choose from the sample questions to query the vector database. The system behavior can be configured to match your requirements, making it easy to test different retrieval and generation strategies.

  • View Results: When a question is submitted, the sandbox retrieves relevant context from the vector database and presents the resulting LLM-generated response. The interface reports key metrics such as relevancy, NDCG, and cosine similarity to help you evaluate how well retrieval and generation performed (illustrated in the metrics sketch after this list).
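
To make the vectorization options concrete, here is a minimal, illustrative sketch of how a chunk size and overlap setting could translate into chunks before embedding. The character-based chunk_text helper below is an assumption for illustration only, not the platform's actual chunking implementation.

```python
from typing import List

def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> List[str]:
    """Split text into fixed-size character chunks with a sliding overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Two hypothetical configurations applied to the same document: larger chunks
# produce fewer vectors to index; smaller chunks produce more, finer-grained ones.
doc = "RAG pipelines retrieve relevant context before generating an answer. " * 20
print(len(chunk_text(doc, chunk_size=512, overlap=64)))
print(len(chunk_text(doc, chunk_size=256, overlap=32)))
```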

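The metrics shown in the results view follow standard definitions. The sketch below shows how cosine similarity and NDCG are commonly computed; the exact formulations and relevance grading used by the sandbox may differ.

```python
import math
from typing import List, Sequence

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine of the angle between a query embedding and a chunk embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def ndcg_at_k(relevances: List[float], k: int) -> float:
    """NDCG@k: discounted gain of the returned ranking relative to the ideal ranking."""
    def dcg(scores: List[float]) -> float:
        return sum(s / math.log2(i + 2) for i, s in enumerate(scores[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Hypothetical relevance grades of the top four retrieved chunks, in retrieval order.
print(round(ndcg_at_k([3.0, 1.0, 2.0, 0.0], k=4), 3))
print(round(cosine_similarity([0.1, 0.7, 0.2], [0.2, 0.6, 0.1]), 3))
```
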
The sandbox is central to optimizing your RAG process, allowing you to iteratively refine both the vectorization strategy and the retrieval model. For a more detailed walkthrough, refer to the RAG Sandbox Usage Guide.
