SingleStore Quickstart
Approximate time to complete: 5-10 minutes, excluding prerequisites
This quickstart will walk you through creating a pipeline that prepares your data for AI agents. You'll set up a pipeline that transforms content from the Vectorize documentation into structured, searchable context in SingleStore, giving agents the foundation they need to reason over your data, not just retrieve it.
Before you begin
Before starting, ensure you have access to the credentials, connection parameters, and API keys as appropriate for the following:
- A Vectorize account (Create one free here ↗)
- An OpenAI API key (see the how-to article)
- A SingleStore account (Create one on SingleStore ↗)
Step 1: Create a SingleStore Deployment
SingleStore offers two types of workspaces. Starter Workspaces are best for small-scale or experimental projects, while Standard Workspaces are designed for applications that need higher resources, scalability, and support for production environments.
When you create your SingleStore account, a Starter Workspace and a database will be created and deployed for you. You can use these for this quickstart, or you can create a Standard Workspace to work with instead.
If you're using a Starter Workspace:
- The Starter Workspace and database are automatically created when you create your SingleStore account.
- Select the workspace, then click Access.
- Save the username, then click the 3 dots and select Reset Password. Copy and securely save the password.
- Go to the Overview tab, then click Connect and select Your App.
- Save the host and the port from the connection string. You'll use these when you create your data pipeline in Vectorize.
If you're using a Standard Workspace:
- Navigate to the SingleStore Cloud Portal and click + New Deployment.
- Name your Workspace Group, select your cloud provider and region, and click Next.
- Name the Workspace, optionally adjust the size and settings, then click Create Workspace.
- If you're on SingleStore's free trial, a database containing MarTech data will be automatically added to the workspace you just created. You can ignore this for the purposes of the Vectorize quickstart. If you'd like to remove it, click on the 3 dots, then click Drop Database.
- Click + Create Database.
- Name your database, make sure it's attached to the correct workspace, then click + Create Database.
- Go to your workspace, then click Access.
- Copy and save the username. Click Reset Password to set the password, then securely save it.
- Go to Workspaces, click the 3 dots, then click Connect Directly.
- Save the host and the port from the connection string. You'll use these when you create your data pipeline in Vectorize (a quick way to verify the credentials is sketched just below).
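Whichever workspace type you chose, it's worth verifying the saved credentials before entering them into Vectorize. Below is a minimal connection check using SingleStore's official Python client (`pip install singlestoredb`); every value shown is a placeholder for the host, port, username, password, and database name you just saved.

```python
import singlestoredb as s2

# All values below are placeholders -- substitute the ones you saved
# from your workspace's connection string and Access tab.
conn = s2.connect(
    host="svc-xxxx.svc.singlestore.com",
    port=3306,
    user="admin",
    password="YOUR_PASSWORD",
    database="quickstart_db",
)

cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())  # (1,) means the credentials and host/port are good
conn.close()
```

If this prints `(1,)`, the same five values will work in the Vectorize integration form in Step 2.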
Step 2: Create a data pipeline on Vectorize
Create a New Data Pipeline
- Open the Vectorize Application Console ↗
- From the dashboard, click on + New RAG Pipeline under the "RAG Pipelines" section.
- Enter a name for your pipeline. For example, you can name it `quickstart-pipeline`.
- Click on + New Vector DB to create a new vector database.
- Select SingleStore from the list of vector databases.
- In the SingleStore configuration screen, enter the parameters in the form using the SingleStore Parameters table below as a guide, then click Create SingleStore Integration.
SingleStore Parameters
| Field | Description | Required |
| --- | --- | --- |
| Name | A descriptive name to identify the integration within Vectorize. | Yes |
| Host | The host URL from your workspace's connection string. | Yes |
| Port | The port from your workspace's connection string. | Yes |
| Database | The name of your database. | Yes |
| Username | The username to access your database. | Yes |
| Password | The password you'll use to access this database. | Yes |
Configure AI Platform
- Click on + New AI Platform.
- Select OpenAI from the AI platform options.
- In the OpenAI configuration screen:
  - Enter a descriptive name for your OpenAI integration.
  - Enter your OpenAI API Key.
- Leave the default values for the embedding model, chunk size, and chunk overlap for this quickstart (see the sketch after this list for what chunk size and overlap control).
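Chunk size and overlap determine how each ingested document is split before embedding: overlapping chunks keep sentences that straddle a boundary intact in at least one chunk. Vectorize handles this internally, but a character-based sketch of the idea (the real chunker and its units may differ) looks like this:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks, each sharing `overlap`
    characters with the previous chunk."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "Vectorize turns unstructured content into searchable context. " * 40
chunks = chunk_text(doc)
print(len(chunks), "chunks; chunk 2 starts with:", chunks[1][:40])
```

Larger chunks give the model more surrounding context per match; more overlap reduces the chance that a relevant sentence is cut in half, at the cost of some duplicated storage.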
Add Source Connectors
- Click on Add Source Connector.
- Choose the type of source connector you'd like to use. In this example, select Web Crawler.
Configure Web Crawler Integration
- Name your web crawler source connector, e.g., vectorize-docs.
- Set Seed URL(s) to `https://docs.vectorize.io`.
- Click Create Web Crawler Integration to proceed.
Configure Web Crawler Pipeline
- Accept all the default values for the web crawler pipeline configuration (a toy sketch of how these limits interact follows this list):
  - Throttle Wait Between Requests: 500 ms
  - Maximum Error Count: 5
  - Maximum URLs: 1000
  - Maximum Depth: 50
  - Reindex Interval: 3600 seconds
- Click Save Configuration.
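To make those defaults concrete, here is a toy breadth-first crawler showing how each setting bounds the crawl. This is purely illustrative and not Vectorize's implementation; `fetch` and `extract_links` are hypothetical stand-ins for real HTTP and HTML-parsing code.

```python
import time
from collections import deque

def crawl(seed, fetch, extract_links,
          throttle_ms=500, max_errors=5, max_urls=1000, max_depth=50):
    """Toy crawl bounded by the same knobs as the pipeline config."""
    queue = deque([(seed, 0)])          # (url, depth) frontier
    seen, crawled, errors = {seed}, 0, 0
    while queue and crawled < max_urls and errors < max_errors:
        url, depth = queue.popleft()
        try:
            page = fetch(url)
        except Exception:
            errors += 1                 # Maximum Error Count halts a failing crawl
            continue
        crawled += 1                    # Maximum URLs caps total pages fetched
        if depth < max_depth:           # Maximum Depth bounds link-following
            for link in extract_links(page):
                if link not in seen:
                    seen.add(link)
                    queue.append((link, depth + 1))
        time.sleep(throttle_ms / 1000)  # Throttle Wait Between Requests
        yield url, page
```

The Reindex Interval is simply how often (every 3600 seconds by default) the crawl is re-run to pick up changed pages.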
Verify Source Connector and Schedule Pipeline
- Verify that your web crawler connector is visible under Source Connectors.
- Click Next: Schedule RAG Pipeline to continue.
Schedule Data Pipeline
- Accept the default schedule configuration.
- Click Create RAG Pipeline.
Step 3: Monitor and Test Your Pipeline
Monitor Pipeline Creation and Backfilling
- The system will now create, deploy, and backfill the pipeline.
- You can monitor the status changes from Creating Pipeline to Deploying Pipeline and Starting Backfilling Process.
- Once the initial deployment is complete, the pipeline will begin crawling the Vectorize docs and writing vectors to your SingleStore database.
View Data Pipeline Status
- Once the website crawling is complete, your data pipeline will switch to the Listening state, where it will stay until more updates are available.
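While the pipeline backfills, you can watch rows arrive in SingleStore directly. The table name below is hypothetical; run `SHOW TABLES` first to find the table Vectorize actually created in your database.

```python
import singlestoredb as s2

# Same placeholder credentials as before -- use your saved values.
conn = s2.connect(host="svc-xxxx.svc.singlestore.com", port=3306,
                  user="admin", password="YOUR_PASSWORD",
                  database="quickstart_db")
cur = conn.cursor()

cur.execute("SHOW TABLES")          # find the table the pipeline created
print(cur.fetchall())

# `vectorize_chunks` is a hypothetical name; substitute the real one.
cur.execute("SELECT COUNT(*) FROM vectorize_chunks")
print(cur.fetchone())               # row count should grow as the crawl runs
conn.close()
```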
Step 4: Test Your Pipeline in the RAG Sandbox
Access the RAG Sandbox
- From the main pipeline overview, click on the RAG Pipelines menu item to view your active pipelines.
- Find your pipeline in the list of pipelines.
- Click on the magnifying glass icon under the RAG Sandbox column to open the sandbox for your selected pipeline.
Query Your Data
- In the sandbox, you can ask questions about the data you've ingested.
- Type a question related to your dataset in the Question field. For example, "What is Vectorize?" since you're working with the Vectorize documentation.
- Click Submit to send the question.
Review Results
- After you submit a question, the sandbox retrieves the chunks from your vector database that best match it and displays them in the Retrieved Context section.
- The LLM Response section shows the language model's final answer, generated from those retrieved chunks.
- You can continue to ask different questions or refine your queries to explore your dataset further; the sandbox supports dynamic interaction with the data stored in your vector database.
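Behind the scenes, "retrieving relevant chunks" is a vector similarity query: the sandbox embeds your question with the pipeline's embedding model and ranks stored chunks by similarity. Here is a rough sketch of that step, with hypothetical table and column names (`vectorize_chunks`, `text`, `embedding`) that you'd replace with the schema Vectorize actually created; depending on how the embedding column is stored, the query may need a cast.

```python
import json
import singlestoredb as s2
from openai import OpenAI

question = "What is Vectorize?"

# Embed the question with the same model the pipeline used for documents
# (the model name here is an assumption -- check your AI platform settings).
client = OpenAI()  # reads OPENAI_API_KEY from the environment
vec = client.embeddings.create(
    model="text-embedding-3-small", input=question
).data[0].embedding

conn = s2.connect(host="svc-xxxx.svc.singlestore.com", port=3306,
                  user="admin", password="YOUR_PASSWORD",
                  database="quickstart_db")
cur = conn.cursor()
cur.execute(
    """SELECT text, DOT_PRODUCT(embedding, JSON_ARRAY_PACK(%s)) AS score
       FROM vectorize_chunks
       ORDER BY score DESC
       LIMIT 5""",
    (json.dumps(vec),),
)
for text, score in cur.fetchall():
    print(f"{score:.3f}  {text[:80]}")
conn.close()
```

The top-scoring rows are what you see in Retrieved Context; the sandbox then passes them, along with your question, to the LLM to produce the LLM Response.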
That's it! You've successfully created a data pipeline that transforms your content into structured context, ready for AI agents to reason over and make intelligent decisions.