Cursor Rules Template

Cursor Rules Template: Ollama Local LLM + LangGraph Integration

Cursor Rules Template for Ollama Local LLM with LangGraph integration, featuring a copyable .cursorrules block and stack-specific guidance for Cursor AI.

Tags: .cursorrules template, cursor-rules-template, ollama, langgraph, cursor ai, local llm, local model, stack-specific, Cursor Rules

Target User

Developers building local LLM apps with Ollama and LangGraph via Cursor AI

Use Cases

  • Local LLM prototyping with Ollama
  • LangGraph graph-based prompts
  • Cursor AI workflow orchestration with local models

Markdown Template

Cursor Rules Template: Ollama Local LLM + LangGraph Integration

Overview

This Cursor rules template documents a local Ollama LLM integration with LangGraph for Cursor AI. It targets a locally hosted language model workflow with graph-based prompts, routing, and safety guardrails. The configuration provides a copyable .cursorrules block and stack-specific guidance to help developers implement and validate a secure local LLM pipeline.

When to Use These Cursor Rules

- Prototype local LLM apps with Ollama and LangGraph without cloud dependencies
- Define graph-driven prompt orchestration and routing within Cursor AI
- Enforce sandboxed, offline-first execution with reproducible results
- Document architecture and testing practices for local model deployments

Copyable .cursorrules Configuration

Copy the block below into a new file named .cursorrules at the project root.

# Cursor Rules Template for Ollama + LangGraph integration
cursorrules: 0.1

# Framework Role & Context
framework: Ollama-LangGraph
stack: Ollama Local LLM + LangGraph
context: You are a Cursor AI agent configured to assist with building local LLM apps using Ollama and LangGraph graphs
codeStyle: Follow standard Python and TypeScript style guidelines; align with project lint rules
architecture: root/llm/ollama ; root/langgraph ; prompts ; tests
authentication: Local only; no external keys; use environment variables for any sensitive data
security: No network calls; local runtime only; limit file-system access to the project root
database: None required for prompts; if persistence is needed, use a local SQLite database with restricted permissions
testing: pytest for unit and integration tests; lint with flake8 or ruff; run pre-commit checks
prohibited: Do not call remote APIs; do not fetch external code; do not write secrets to disk; do not spawn shell commands without sanitization; do not use libraries that conflict with Ollama or LangGraph

Recommended Project Structure

Stack-specific directory tree for the Ollama LangGraph Cursor Rules workflow

my-ollama-langgraph-app/
  config/
  prompts/
  llm/
    ollama/
      models/
      runtimes/
  langgraph/
    graphs/
  rules/
    cursor/
  tests/

Core Engineering Principles

- Local-first development with explicit boundaries between local models and tooling
- Deterministic prompts and graph-driven routing for reproducible results
- Explicit separation of concerns between LLM invocation, graph logic, and orchestration
- Transparent, versioned prompts and guardrails to reduce drift
- Environment-isolated deployments with pinned model versions
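To make the "deterministic prompts, reproducible results" principle concrete, the sketch below calls the local Ollama runtime's /api/generate endpoint with pinned sampling options: greedy decoding, a fixed seed, and a bounded output length. It uses only the standard library; the model name, seed value, and output bound are illustrative choices, not requirements.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build a deterministic generation request: greedy sampling, fixed seed, bounded output."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "temperature": 0,    # greedy decoding for reproducible results
            "seed": 42,          # pin the RNG seed
            "num_predict": 256,  # bound output length for offline runtimes
        },
    }

def generate(model: str, prompt: str) -> str:
    """Send the request to the local Ollama runtime; no external network access is needed."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With temperature 0 and a pinned seed, repeated calls against the same model version return the same output, which is what makes graph-level regression testing practical.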

Code Construction Rules

- Use only Ollama for local LLM inference and LangGraph for graph operations
- Keep prompts in prompts/ with a clear file-naming convention
- Place all Cursor rules in a root-level .cursorrules file
- Do not embed API keys or tokens in code or prompts
- Validate prompts against a schema and unit-test prompt templates
- Ensure output length and token usage are bounded for offline runtimes
- Respect directory boundaries and avoid writing to protected paths

Security and Production Rules

- Run Ollama in a restricted user context with limited file permissions
- Disable external network access for the runtime; use local resources only
- Pin model versions and LangGraph dependencies to specific, tested revisions
- Store secrets in environment variables, not in code or prompts
- Audit prompts and graphs for safety and bias concerns; implement watchdog constraints
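Pinning can be as simple as exact versions in requirements.txt plus a recorded model tag. The version numbers and model tag below are placeholders, not recommendations:

```text
# requirements.txt — pin exact, tested revisions (placeholder versions)
langgraph==0.2.60
langchain-ollama==0.2.2

# Record the exact model tag your graphs were tested against, e.g.:
#   ollama pull llama3.1:8b-instruct-q4_K_M
```

Committing both pins alongside .cursorrules keeps every environment on the same tested model/graph combination.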

Testing Checklist

- Unit tests for prompt rendering and graph routing
- Integration tests that exercise Ollama invocation and LangGraph graph processing
- End-to-end tests that cover a full Cursor AI workflow with a local model
- Linting and style checks integrated into CI; ensure deterministic outputs
- Validate that no external network calls are made during tests
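The "no external network calls" check can be enforced mechanically rather than by inspection. Below is a minimal stdlib-only guard (the name no_network is illustrative; the pytest-socket plugin offers a ready-made equivalent):

```python
import socket
from contextlib import contextmanager

@contextmanager
def no_network():
    """Fail any code that tries to open a socket while the guard is active."""
    real_socket = socket.socket

    def guarded(*args, **kwargs):
        raise RuntimeError("network access attempted during an offline-only test")

    socket.socket = guarded
    try:
        yield
    finally:
        socket.socket = real_socket
```

Note that calls to a local Ollama server also go through sockets, so stub the model layer in unit tests that use this guard, and lift it only for integration tests that deliberately exercise the local runtime.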

Common Mistakes to Avoid

- Assuming cloud APIs are available in local mode
- Omitting prompt guardrails, leading to unsafe output or data leakage
- Mixing up model versions or graph configurations across environments
- Neglecting to commit the .cursorrules and prompt templates
- Ignoring test coverage for local-only workflows

FAQ

What is the purpose of this Cursor Rules Template for Ollama and LangGraph?

This template provides concrete Cursor AI instructions to configure a local Ollama LLM with LangGraph graphs, including a copyable .cursorrules block, stack-specific file layout, and guardrails to keep AI-assisted development safe.

What stack does this template target?

It targets a local Ollama LLM integrated with LangGraph for graph-based prompting and routing, leveraging Cursor AI rules to ensure secure, reproducible deployments.

How do I apply the .cursorrules block?

Copy the code block from the Copyable .cursorrules Configuration section and paste it into a new file at the project root named .cursorrules. Cursor AI will read this block during local execution.

Where should I place the template in my project?

Place the .cursorrules file and supporting scaffolding under your project root as defined by the Recommended Project Structure, ensuring it is committed to version control.
