Create a RAG Pipeline with Pinecone
Approximate time to complete: 5-10 minutes, excluding prerequisites
This quickstart will walk you through creating and scheduling a pipeline that collects data from an Amazon S3 bucket, creates vector embeddings using an OpenAI embedding model, and writes the vectors to your Pinecone search index.
Before starting, ensure you have access to the credentials, connection parameters, and API keys as appropriate for the following:
A Vectorize account (Create one free here ↗)
An Amazon S3 bucket & IAM access keys (How to article)
An OpenAI API Key (How to article)
A Pinecone account (Create one on Pinecone ↗ )
Navigate to the Pinecone application console ↗.
Go to the Indexes section from the left sidebar, under Database. Click the Create Index button on the top right.
In the Create a new index page, enter the following details:
Index Name: Enter the name of your index (e.g., my-test-index).
Dimensions: Set the dimension size to 1536. For this quickstart we'll use the OpenAI v3 small embedding model (text-embedding-3-small), which produces 1536-dimensional vectors (see the sketch after these steps).
Metric: Select the similarity metric (e.g., cosine).
Choose Serverless for the Capacity mode (the default option), then click Create Index to complete the process.
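If you'd like to confirm the dimension count before creating the index, here is a minimal sketch using the OpenAI Python SDK (this assumes the openai package is installed and OPENAI_API_KEY is set in your environment; it is a side check, not part of the Vectorize setup itself):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# text-embedding-3-small returns 1536-dimensional vectors by default,
# which must match the dimension you set on the Pinecone index
resp = client.embeddings.create(
    model="text-embedding-3-small",
    input="dimension check",
)
print(len(resp.data[0].embedding))  # 1536
```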
Once the index is created, you'll be redirected to the index overview page. Click on the API keys item in the left menu.
To configure an integration with Vectorize, you'll need an API key. Click Create API key.
In the Create a new API key window, enter a name for the key (e.g., testkey) and click Create API key.
After creating the key, click the copy icon next to the key to copy it and store it safely. You'll need it for accessing the index through the API.
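To sanity-check the key and index before wiring them into Vectorize, a quick sketch with the Pinecone Python SDK (assumes the pinecone package is installed and that you named the index my-test-index):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")  # the key you just copied

# Confirm the index exists with the settings chosen earlier
desc = pc.describe_index("my-test-index")
print(desc.dimension, desc.metric)  # expect: 1536 cosine
```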
Open the Vectorize Application Console ↗
From the dashboard, click on + New RAG Pipeline under the "RAG Pipelines" section.
Enter a name for your pipeline, for example quickstart-pipeline. Then click on + New Vector DB to create a new vector database.
Select Pinecone from the list of vector databases.
In the Pinecone configuration screen, enter a descriptive name for the Pinecone integration, then follow the desired authentication approach.
There are two options for configuring your Pinecone integration:
Use your Pinecone API key.
Use Pinecone Connect.
Authenticate with your Pinecone API key
Finding Required Information in Pinecone
Enter the integration name and your Pinecone API key, then click Create Pinecone Integration.
To find your Pinecone API Key:
Log in to your Pinecone Console.
Navigate to the API Keys section.
Copy your API key or generate a new one if needed.
Authenticate via Pinecone Connect
Enter the integration name, then click Authenticate with Pinecone.
Log in to your Pinecone account.
Confirm the organization and project, then click Authorize.
Back in Vectorize, click Create Pinecone Integration.
Click on + New AI Platform.
Select OpenAI from the AI platform options.
In the OpenAI configuration screen:
Enter a descriptive name for your OpenAI integration.
Enter your OpenAI API Key.
Leave the default values for embedding model, chunk size, and chunk overlap for the quickstart. Then click Next: Source Connector(s) to continue.
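Chunk size and overlap control how each document is split before embedding: consecutive chunks share an overlapping window so that sentences spanning a boundary keep their surrounding context. A character-based sketch of the idea (Vectorize's actual chunker and default values may differ; the numbers below are illustrative):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks, where each chunk repeats the
    last `overlap` characters of the previous one."""
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

chunks = chunk_text("x" * 1200)
print(len(chunks))     # 3 overlapping chunks
print(len(chunks[0]))  # 500
```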
Click + Add source connector to add a source connector to your pipeline.
Choose Amazon S3 from the list of source connector options.
In the Amazon S3 configuration screen:
Name your integration. It can be the same as your bucket name, but it doesn't have to be.
Enter your Bucket Name exactly as it appears in AWS.
Provide the Access Key and Secret Key for your AWS IAM user.
Accept the default values for file extensions and other options.
Click Save Configuration.
After configuring the S3 integration, you should see it listed under Source Connectors.
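If the connector fails to find documents later, a common cause is a typo in the bucket name or keys. You can verify them independently with boto3 (assumes the boto3 package is installed; replace the placeholder values with your own):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY",       # same values entered above
    aws_secret_access_key="YOUR_SECRET_KEY",
)

try:
    # List up to five objects; bad credentials or a wrong bucket name raise ClientError
    resp = s3.list_objects_v2(Bucket="your-bucket-name", MaxKeys=5)
    for obj in resp.get("Contents", []):
        print(obj["Key"])
except ClientError as err:
    print("S3 access failed:", err)
```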
Click Next: Schedule Pipeline to continue.
Set the schedule type and frequency for the pipeline.
Leave the default values for the pipeline schedule for now.
Click Create RAG Pipeline.
After clicking Create RAG Pipeline, you will see the pipeline creation progress.
The stages include:
Creating pipeline
Deploying pipeline
Starting backfilling process
Once the pipeline is created and deployed, it will begin the backfilling process.
You can monitor the pipeline status and view the progress of document ingestion and vector creation.
If your S3 bucket is empty, the pipeline will show 0 Documents, 0 Chunks, and 0 Vectors.
Download the friends-scripts.zip file from the following location:
After downloading the friends-scripts.zip file, extract it to a location on your local machine.
On most operating systems, you can do this by right-clicking the zip file and selecting Extract All or Unzip.
Log into your AWS S3 account and navigate to the Buckets section.
Filter to find your bucket by typing its name in the search bar.
Click on your bucket name to open the detailed bucket view.
Click on the Upload button in the top right corner of the bucket's detail view.
You can either drag and drop the extracted files from the friends-scripts directory into the upload area, or click on Add files to browse your local machine and select them manually.
After adding the files, you should see them listed under the Files and folders section of the upload screen.
Once you've confirmed that all the files are listed, click on the Upload button at the bottom of the screen to start the upload process.
Your files will now be uploaded to your S3 bucket.
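If you prefer to script the upload instead of using the console, here is a boto3 sketch that pushes every file from the extracted directory (assumes your AWS credentials are available via the environment or ~/.aws/credentials, and that the bucket and directory names below are replaced with yours):

```python
import os
import boto3

s3 = boto3.client("s3")        # picks up credentials from env/config
bucket = "your-bucket-name"    # replace with your bucket
local_dir = "friends-scripts"  # the directory extracted from the zip

for name in sorted(os.listdir(local_dir)):
    path = os.path.join(local_dir, name)
    if os.path.isfile(path):
        s3.upload_file(path, bucket, name)
        print("uploaded", name)
```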
Within a few seconds of the upload completing, you should see the content of your files begin to populate the RAG pipeline.
The backfilling process will show progress as it reads and processes the documents from your S3 bucket.
Total Documents and Total Chunks will increase as the documents are embedded and processed.
You can track the number of documents being embedded and vectors being written.
After a minute or two of processing, you should see the total number of uploaded documents reflected in the pipeline's statistics.
If you used the Friends Scripts documents as recommended, you will see 228 documents displayed in the Total Documents field.
From the main pipeline overview, click on the RAG Pipelines menu item to view your active pipelines.
Find your pipeline in the list of pipelines.
Click on the magnifying glass icon under the RAG Sandbox column to open the sandbox for your selected pipeline.
In the sandbox, you can ask questions about the data you've ingested.
Type a question related to your dataset in the Question field. For example, "What characteristics define the relationship between Ross and Monica?" if you're working with the Friends TV show scripts.
Click Submit to send the question.
After submitting your question, the sandbox will retrieve relevant chunks from your vector database and display them in the Retrieved Context section.
The response from the language model (LLM) will be displayed in the LLM Response section.
The Retrieved Context section shows the chunks that were matched with your question.
The LLM Response section provides the final output based on the retrieved chunks.
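Under the hood, a retrieval-augmented query looks roughly like the sketch below: embed the question with the same model used at ingest time, fetch the nearest chunks from Pinecone, and hand them to an LLM as context. This is an illustrative reconstruction, not Vectorize's actual implementation; the metadata field name ("text") and the chat model are assumptions:

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("my-test-index")

question = "What characteristics define the relationship between Ross and Monica?"

# 1. Embed the question with the same model used for ingestion
q_vec = openai_client.embeddings.create(
    model="text-embedding-3-small", input=question
).data[0].embedding

# 2. Retrieve the closest chunks from the index
results = index.query(vector=q_vec, top_k=5, include_metadata=True)
context = "\n\n".join((m.metadata or {}).get("text", "") for m in results.matches)

# 3. Ask the LLM to answer using only the retrieved context
answer = openai_client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model works here
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```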
You can continue to ask different questions or refine your queries to explore your dataset further.
The sandbox allows for dynamic interactions with the data stored in your vector database.
That's it! You're now able to explore your data using the RAG Sandbox.