Build Your First Retrieval Pipeline for LLMs and Agents

In this guide, you'll build a retrieval pipeline that can provide your connected LLM or agent framework with structured, document-based context. You'll upload documents, create a pipeline with retrieval capabilities, and connect it to an LLM — all with runnable code examples.

What You'll Build

By the end of this guide, you'll have:

  • An agent-ready pipeline that transforms your content into structured context
  • A chatbot that can answer complex questions about your content via your connected LLM
  • Familiarity with core Vectorize concepts

Prerequisites

Before you begin, you'll need:

  1. A Vectorize account
  2. An API access token (how to create one)
  3. Your organization ID (see below)

Finding your Organization ID

Your organization ID is in the Vectorize platform URL:

https://platform.vectorize.io/organization/[YOUR-ORG-ID]

For example, if your URL is:

https://platform.vectorize.io/organization/ecf3fa1d-30d0-4df1-8af6-f4852bc851cb

Your organization ID is: ecf3fa1d-30d0-4df1-8af6-f4852bc851cb

API Client Setup
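All of the snippets in this guide use the Vectorize Python SDK. If you haven't installed it yet, the package is typically available on PyPI as vectorize-client (pip install vectorize-client). Initialize the client once with your credentials and reuse it throughout: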

import vectorize_client as v
import os

# Get credentials from environment variables
organization_id = os.environ.get("VECTORIZE_ORGANIZATION_ID")
api_key = os.environ.get("VECTORIZE_API_KEY")

if not organization_id or not api_key:
    raise ValueError("Please set VECTORIZE_ORGANIZATION_ID and VECTORIZE_API_KEY environment variables")

# Initialize the API client
configuration = v.Configuration(
    host="https://api.vectorize.io",
    api_key={"ApiKeyAuth": api_key}
)
api = v.ApiClient(configuration)

print(f"✅ API client initialized for organization: {organization_id}")

How your LLM (or agent) uses your data

Retrieval-Augmented Generation (RAG) provides the foundation that enables your LLM (and any agent framework you use) to access and use your specific data. Instead of relying solely on general knowledge, agents powered by RAG can:

  1. Access your documents through intelligent retrieval
  2. Use structured context to interpret relationships within your content
  3. Support reasoning across multiple sources via the connected LLM
  4. Generate informed responses grounded in your actual data

This transforms AI from a general-purpose tool into an intelligent agent workflow that uses your organization's knowledge to provide more relevant, grounded responses.

Step 1: Create a File Upload Connector

A source connector is how you get data into Vectorize. For this guide, we'll use a File Upload connector to upload documents directly:

import vectorize_client as v

# Create the connectors API client
connectors_api = v.SourceConnectorsApi(api)

try:
    # Create a file upload connector
    file_upload = v.FileUpload(
        name="my-document-upload",
        type="FILE_UPLOAD",
        config={}
    )

    request = v.CreateSourceConnectorRequest(file_upload)
    response = connectors_api.create_source_connector(
        organization_id,
        request
    )

    # Save the connector ID; the following steps use it as source_connector_id
    source_connector_id = response.connector.id
    print(f"✅ Created file upload connector: {source_connector_id}")

except Exception as e:
    print(f"❌ Error creating connector: {e}")
    raise

Step 2: Upload Your First Document

Now let's upload a document. In this example, we're uploading a simple .txt file, but you can upload PDFs, Word docs, or any other supported format; the upload process is the same regardless of file type.
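If you don't have a document handy, you can create a small sample text file to follow along. The file name and contents below are just placeholders:

# Create a small sample document to upload (placeholder name and content)
file_path = "my-document.txt"

with open(file_path, "w", encoding="utf-8") as f:
    f.write(
        "Vectorize quick start notes.\n"
        "To call the API, initialize the client with your organization ID and API key, "
        "then use the pipelines API to retrieve documents.\n"
    )

print(f"Created sample file: {file_path}")

With a file on disk, request an upload URL from the connector and PUT the file contents to it: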

import vectorize_client as v
import os
import urllib3

# Create uploads API client
uploads_api = v.UploadsApi(api)

# Path and name of the file to upload (created above, or point this at your own document)
file_path = "my-document.txt"
file_name = os.path.basename(file_path)

try:
    # Step 1: Get upload URL
    upload_request = v.StartFileUploadToConnectorRequest(
        name=file_name,
        content_type="text/plain"
    )

    start_response = uploads_api.start_file_upload_to_connector(
        organization_id,
        source_connector_id,
        start_file_upload_to_connector_request=upload_request
    )

    # Step 2: Upload file to the URL
    http = urllib3.PoolManager()
    with open(file_path, "rb") as f:
        response = http.request(
            "PUT",
            start_response.upload_url,
            body=f,
            headers={
                "Content-Type": "text/plain",
                "Content-Length": str(os.path.getsize(file_path))
            }
        )

    if response.status == 200:
        print(f"✅ Successfully uploaded: {file_name}")
    else:
        print(f"❌ Upload failed: {response.status}")

except Exception as e:
    print(f"❌ Error uploading file: {e}")
    raise
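The same two-step flow (request an upload URL, then PUT the file) works for any number of documents. As a rough sketch, reusing the uploads_api client and source_connector_id from above (the file names here are placeholders), you could loop over several local files and guess each content type:

import mimetypes
import os
import urllib3
import vectorize_client as v

http = urllib3.PoolManager()

# Placeholder list of local files to upload to the same connector
files_to_upload = ["notes.txt", "handbook.pdf"]

for path in files_to_upload:
    content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"

    # Request an upload URL for this file
    start_response = uploads_api.start_file_upload_to_connector(
        organization_id,
        source_connector_id,
        start_file_upload_to_connector_request=v.StartFileUploadToConnectorRequest(
            name=os.path.basename(path),
            content_type=content_type
        )
    )

    # PUT the file contents to the returned URL
    with open(path, "rb") as f:
        upload = http.request(
            "PUT",
            start_response.upload_url,
            body=f,
            headers={
                "Content-Type": content_type,
                "Content-Length": str(os.path.getsize(path))
            }
        )

    status = "✅" if upload.status == 200 else f"❌ ({upload.status})"
    print(f"{status} {path}")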

Step 3: Create Your Pipeline

A pipeline transforms your raw documents into structured context that your connected LLM or agent can use for retrieval and answering. Vectorize provides built-in processing and vector storage to enable agent capabilities:

import vectorize_client as v

# Create pipelines API client
pipelines_api = v.PipelinesApi(api)

try:
    # Configure your pipeline
    pipeline_config = v.PipelineConfigurationSchema(
        pipeline_name="My First Pipeline",
        source_connectors=[
            v.PipelineSourceConnectorSchema(
                id=source_connector_id,
                type="FILE_UPLOAD",
                config={}
            )
        ],
        ai_platform_connector=v.PipelineAIPlatformConnectorSchema(
            id=ai_platform_connector_id,  # ID of Vectorize's built-in AI platform connector
            type="VECTORIZE",
            config={}
        ),
        destination_connector=v.PipelineDestinationConnectorSchema(
            id=destination_connector_id,  # ID of Vectorize's built-in vector store
            type="VECTORIZE",
            config={}
        ),
        schedule=v.ScheduleSchema(type="manual")
    )

    # Create the pipeline
    response = pipelines_api.create_pipeline(
        organization_id,
        pipeline_config
    )

    pipeline_id = response.data.id
    print(f"✅ Created pipeline: {pipeline_id}")

except Exception as e:
    print(f"❌ Error creating pipeline: {e}")
    raise

What's Happening Here?

When you create a pipeline, you’re building the infrastructure your connected LLM or agent will use for retrieval and context.

  1. Source Connector: Feeds documents into your pipeline’s retrieval index
  2. AI Platform Connector: Converts documents into vector embeddings and structured metadata for retrieval
  3. Destination Connector: Maintains structured, queryable indexes for retrieval (Vectorize's built-in vector store or an external destination)
  4. Schedule: Controls when your pipeline’s data is refreshed. Changes to source content trigger automatic reprocessing.

This pipeline enables your LLM to not just locate relevant information, but to use richer context for grounded answers.

Step 4: Wait for Processing

Your pipeline needs a few moments to process the uploaded document. Let's monitor its progress:

import vectorize_client as v
import time

# Create pipelines API client
pipelines_api = v.PipelinesApi(api)

print("Waiting for pipeline to process your document...")
max_wait_time = 300  # 5 minutes
start_time = time.time()

while True:
    try:
        # Check pipeline status
        pipeline = pipelines_api.get_pipeline(organization_id, pipeline_id)
        status = pipeline.data.status

        # Check if ready
        if status == "LISTENING":
            print("✅ Pipeline is ready!")
            break
        elif status == "PROCESSING":
            print("⚙️ Still processing...")
        elif status in ["ERROR_DEPLOYING", "SHUTDOWN"]:
            print(f"❌ Pipeline error: {status}")
            break

        # Check timeout
        if time.time() - start_time > max_wait_time:
            print("⏰ Timeout waiting for pipeline")
            break

        time.sleep(10)  # Check every 10 seconds

    except Exception as e:
        print(f"❌ Error checking status: {e}")
        break

Pipeline States

  • DEPLOYING: Pipeline is being set up
  • PROCESSING: Actively processing documents
  • LISTENING: Ready and waiting for queries

For a complete list of pipeline states, see Understanding Pipeline Status.

Step 5: Query Your Pipeline

Once the pipeline is ready, your connected LLM can use it to retrieve relevant context and respond to questions about your content:

import vectorize_client as v

# Create pipelines API client
pipelines_api = v.PipelinesApi(api)

try:
    # Query the pipeline
    response = pipelines_api.retrieve_documents(
        organization_id,
        pipeline_id,
        v.RetrieveDocumentsRequest(
            question="How to call the API?",
            num_results=5
        )
    )

    # Display results
    print(f"Found {len(response.documents)} relevant documents:\n")
    for i, doc in enumerate(response.documents, 1):
        print(f"Result {i}:")
        print(f"  Content: {doc.text[:200]}...")      # chunk text is in the 'text' field
        print(f"  Relevance Score: {doc.relevancy}")  # similarity score is in the 'relevancy' field
        print(f"  Document ID: {doc.id}")
        print()

except Exception as e:
    print(f"❌ Error querying pipeline: {e}")
    raise

How Your Pipeline + LLM Process Queries

When you submit a query, the pipeline and LLM work together:

  1. Interprets Query: Your question is embedded so the pipeline can search for semantically related content
  2. Retrieves Context: The pipeline finds the most relevant information across your documents
  3. Combines Context: The connected LLM synthesizes the retrieved information from multiple sources
  4. Generates Insight: The LLM produces a grounded answer that goes beyond simple retrieval

With sufficient retrieved context, your connected LLM can:

  • Answer "why" and "how" questions that require reasoning
  • Identify patterns and relationships in your data
  • Provide recommendations based on your content
  • Synthesize insights from disparate sources

Try these types of questions to see retrieval + reasoning in action:

  • "What are the implications of...?"
  • "How do these concepts relate to each other?"
  • "What should we prioritize based on...?"

Understanding Your Results

The retrieval response includes:

  • Content: The text of each matching document chunk, the sources your answer will draw on
  • Relevance score: Based on embedding similarity, indicates how closely the retrieved content matched your query
  • Document ID and metadata: Additional information about where each chunk came from

Your connected LLM then turns these chunks into an answer to your question, as you'll see in the chatbot you build next.

Step 6: Build Your Custom Chatbot

Now that your pipeline is working, let's create a chatbot that connects your pipeline to an LLM for interactive Q&A.

Download a Custom Chatbot Application

Vectorize can generate a complete chatbot application that showcases your pipeline's capabilities:

  1. Navigate to your pipeline in the Vectorize platform
  2. Go to the AI Integrations tab
  3. Click on Chatbot
  4. Select your preferred LLM provider (e.g., OpenAI) and model (e.g., gpt-4o)
  5. Click Download Chatbot ZIP

The downloaded application includes:

  • Pre-configured connection to your pipeline
  • Your organization ID and endpoints already set up
  • Choice of LLM for responses
  • Clean, customizable Next.js interface

Note: This application uses your selected LLM provider’s API — you’ll need a valid API key for that provider, and usage may incur costs.

Running Your Chatbot

After downloading:

  1. Unzip the file and navigate to the project folder
  2. Configure your environment variables in .env.development:
    OPENAI_API_KEY=sk-...
    VECTORIZE_TOKEN=your-vectorize-token
  3. Install and run:
    npm install
    npm run dev
  4. Open http://localhost:3000 to interact with your chatbot!

You now have a fully functional chatbot that can query your documents via your pipeline and use your connected LLM to generate grounded answers.

What's Next?

Congratulations! You've built your first agent-ready pipeline with Vectorize.

From here, you can enhance your pipeline's capabilities by adding more source connectors, sending output to an external vector database destination, or exploring the other AI integrations available for your pipeline.
