
Cursor Rules Template: LlamaIndex + Qdrant Vector Search (Python)

A copyable Cursor rules template for a Python stack with LlamaIndex and Qdrant vector search, enabling Cursor AI-guided retrieval-augmented generation.


Target User

Developers integrating LlamaIndex and Qdrant in Python with Cursor AI

Use Cases

  • RAG over document collections
  • Vector search over embeddings
  • Cursor AI instruction-guided generation


Overview

This Cursor rules template provides a copyable .cursorrules configuration for a Python stack using LlamaIndex for indexing, Qdrant for vector search, and Cursor AI for guided generation. Paste it into your project root to bootstrap a safe, repeatable setup.

The template concentrates prompts, constraints, and project structure in one place so that retrieval-augmented generation stays aligned with indexed data while preserving security and observability.

When to Use These Cursor Rules

  • Building a Python-based search app that uses LlamaIndex to index documents and Qdrant to store embeddings for vector search.
  • When you want Cursor AI to enforce retrieval-first prompts and keep generated content grounded to retrieved results.
  • When you need a repeatable project bootstrap with clear directory structure and testing pipelines.

Copyable .cursorrules Configuration

# Cursor Rules configuration for Python stack (LlamaIndex + Qdrant + Cursor AI)
FRAMEWORK: Python
FRAMEWORK_VERSION: 3.11
STACK: llamaindex, qdrant, openai
ROLES:
  - Context: You are a development assistant guiding the construction of a Retrieval-Augmented Generation (RAG) system on Python.
  - Decision: Follow the LlamaIndex + Qdrant integration pattern with Cursor AI prompting.
  - Retrieval: Use Qdrant as the vector store; use LlamaIndex to build and search the index.
CODE_STYLE: pep8, type-hints, docstrings
ARCHITECTURE:
  - src/llamaindex_qdrant_cursor/
  - src/vector_store/qdrant/
  - src/retrieval/
  - prompts/
  - tests/
AUTH:
  - Secrets must be environment variables (QDRANT_URL, OPENAI_API_KEY)
  - Do not commit API keys
SECURITY:
  - Never log secrets
  - Validate user-provided queries
DATABASE:
  - QDRANT_COLLECTION: docs_rag
  - EMBEDDING_MODEL: text-embedding-ada-002
ORM_PATTERNS:
  - Use direct SDK calls; avoid heavy ORMs
TESTS:
  - pytest for unit tests; integration tests against a local Qdrant
  - pre-commit hooks for linting
LINTING:
  - mypy, flake8
PROHIBITED:
  - Eval/exec in prompts
  - Blindly trusting external prompts
  - Writing to production disks without safeguards

Recommended Project Structure

project/
├── src/
│   └── llamaindex_qdrant_cursor/
│       ├── __init__.py
│       ├── indexer.py
│       ├── vector_store.py
│       ├── retrieval.py
│       ├── prompts/
│           └── retrieval_prompt.py
│       └── tests/
│           └── test_integration.py
├── tests/
│   └── unit/
│       └── test_indexer.py
├── requirements.txt
└── .env.example
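A minimal sketch of what `indexer.py` might contain, wiring LlamaIndex to Qdrant with the collection name centralized and secrets read from environment variables. The module layout and function names are illustrative; the third-party imports assume `llama-index` (0.10+) and `qdrant-client` are installed.

```python
# indexer.py -- illustrative sketch of the indexing entry point.
import os

QDRANT_COLLECTION = "docs_rag"  # centralized, per the rules above


def require_env(name: str) -> str:
    """Fail fast if a required secret is missing (never hard-code keys)."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


def build_index(data_dir: str):
    # Deferred third-party imports so the module can be inspected without
    # the full stack installed; these packages are assumed dependencies.
    from qdrant_client import QdrantClient
    from llama_index.core import (
        SimpleDirectoryReader,
        StorageContext,
        VectorStoreIndex,
    )
    from llama_index.vector_stores.qdrant import QdrantVectorStore

    client = QdrantClient(url=require_env("QDRANT_URL"))
    vector_store = QdrantVectorStore(
        client=client, collection_name=QDRANT_COLLECTION
    )
    storage_context = StorageContext.from_defaults(vector_store=vector_store)
    documents = SimpleDirectoryReader(data_dir).load_data()
    return VectorStoreIndex.from_documents(
        documents, storage_context=storage_context
    )
```

Keeping `QDRANT_COLLECTION` as a module-level constant gives retrieval code a single import point, which is what the "centralized in config" rule below asks for.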

Core Engineering Principles

  • Explicit contracts between data providers (LlamaIndex) and the vector store (Qdrant) to ensure predictable results.
  • Idempotent operations for data ingest, indexing, and prompts.
  • Clear data flow with well-scoped modules for indexing, retrieval, and prompting.
  • Observability and traceability: structured logs, metrics, and retries.
  • Safe AI prompts with guardrails, validation, and result grounding.
  • Reproducible builds with pinned dependencies and deterministic tests.
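The idempotency principle can be made concrete for ingest: if the vector-store point ID is derived deterministically from stable document identity, re-running ingestion upserts the same points instead of duplicating them. A small sketch of such a helper (the Qdrant upsert that would consume these IDs is assumed, not shown):

```python
import hashlib
import uuid


def stable_point_id(doc_id: str, chunk_index: int) -> str:
    """Same (doc_id, chunk_index) always maps to the same UUID,
    so repeated ingestion overwrites rather than duplicates."""
    digest = hashlib.sha256(f"{doc_id}:{chunk_index}".encode("utf-8")).digest()
    return str(uuid.UUID(bytes=digest[:16]))
```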

Code Construction Rules

  • Use Python type hints and simple data models for prompts and responses.
  • All secrets must live in environment variables; never hard-code keys in source.
  • Keep the QDRANT_COLLECTION name constant and centralized in config.
  • Encapsulate indexing in a dedicated module; retrieval must call into a single entry-point.
  • Prompts must be defined as reusable templates with strict context windows.
  • Maintain tests for both unit and integration with a local Qdrant instance.
  • Do not perform arbitrary file system writes in production code paths.
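The "reusable templates with strict context windows" rule might look like the following sketch: a single template constant plus a builder that enforces a hard context budget. A character budget stands in for a real tokenizer here; in practice you would swap in a token counter such as tiktoken.

```python
RETRIEVAL_PROMPT = (
    "Answer ONLY from the context below. If the answer is not in the "
    "context, say you don't know.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)


def build_prompt(
    question: str, chunks: list[str], max_context_chars: int = 4000
) -> str:
    """Pack retrieved chunks into the prompt until the budget is hit."""
    context_parts: list[str] = []
    used = 0
    for chunk in chunks:
        if used + len(chunk) > max_context_chars:
            break  # strict window: drop chunks that would overflow
        context_parts.append(chunk)
        used += len(chunk)
    return RETRIEVAL_PROMPT.format(
        context="\n---\n".join(context_parts), question=question
    )
```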

Security and Production Rules

  • Validate all user queries against a safe schema before sending to the LLM.
  • Redact or avoid logging secrets; use secret managers when possible.
  • Run Qdrant behind authentication and restrict access by IP or API keys.
  • Implement rate limiting and input size guards to prevent abuse.
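The query-validation and input-size-guard rules can be sketched as a single gate that every user query passes through before reaching the LLM; the limits below are illustrative defaults, not prescriptions.

```python
MAX_QUERY_CHARS = 1000  # illustrative input size guard


def validate_query(query: str) -> str:
    """Reject empty, oversized, or control-character-laden queries."""
    if not isinstance(query, str):
        raise TypeError("query must be a string")
    query = query.strip()
    if not query:
        raise ValueError("query must not be empty")
    if len(query) > MAX_QUERY_CHARS:
        raise ValueError(f"query exceeds {MAX_QUERY_CHARS} characters")
    if any(ord(ch) < 32 and ch not in "\n\t" for ch in query):
        raise ValueError("query contains control characters")
    return query
```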

Testing Checklist

  • Unit tests for indexing, prompt construction, and answer grounding.
  • Integration tests against a local Qdrant instance and a mock LLM.
  • End-to-end tests for a sample document corpus and a retrieval-augmented query.
  • CI checks with linting, type checks, and unit/integration tests on push.
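A grounding check with a mock LLM, as the checklist suggests, might look like this pytest-style unit test; `answer_query` is a hypothetical stand-in for your real retrieval entry point.

```python
def answer_query(question, retriever, llm) -> str:
    """Toy retrieval entry point: retrieve, then prompt the LLM."""
    chunks = retriever(question)
    context = "\n".join(chunks)
    return llm(f"Context:\n{context}\n\nQuestion: {question}")


def test_answer_is_grounded_in_retrieved_chunks():
    retriever = lambda q: ["Qdrant stores vectors."]
    llm = lambda prompt: "Qdrant stores vectors."  # mock LLM echoes context
    answer = answer_query("What does Qdrant do?", retriever, llm)
    # Grounding assertion: the answer must come from retrieved text.
    assert answer in "".join(retriever("What does Qdrant do?"))
```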

Common Mistakes to Avoid

  • Mixing data access and prompt logic in a single monolith.
  • Hard-coding prompts or embedding dimensions in code.
  • Disabling prompt grounding and trusting retrieved text verbatim without checks.
  • Using the wrong Qdrant collection or embedding model without normalization.
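One way to avoid the collection/embedding mismatch is to centralize those settings in a single frozen config object and validate vector dimensions at the boundary. The class name is illustrative; 1536 is the documented output dimension of text-embedding-ada-002.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class VectorConfig:
    collection: str = "docs_rag"
    embedding_model: str = "text-embedding-ada-002"
    embedding_dim: int = 1536  # must match the model that built the collection

    def validate_dim(self, vector: list[float]) -> None:
        if len(vector) != self.embedding_dim:
            raise ValueError(
                f"expected {self.embedding_dim}-dim vector, got {len(vector)}"
            )
```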

FAQ

What is this Cursor Rules Template for LlamaIndex + Qdrant (Python)?

This template provides a copyable .cursorrules block for configuring a Python stack that uses LlamaIndex for indexing, Qdrant for vector search, and Cursor AI to guide generation while grounding results to indexed content.

How do I paste and start using the rules in my project?

Copy the code block under Copyable .cursorrules Configuration and paste it into the root of your project as .cursorrules. Then set environment variables and adjust paths to your repo.

Can Cursor Rules enforce grounding to retrieved results?

Yes. The rules include grounding constraints and prompts to ensure outputs reflect retrieved data from the LlamaIndex index and Qdrant vectors.

What should I test locally?

Test indexing, vector store integration, retrieval quality, and end-to-end prompts against a local Qdrant instance and a mock LLM.

How can I adapt this template to other embeddings or LLMs?

Parameterize embedding models and LLMs via environment variables; the rules reference these variables so you can swap implementations without changing code.