Cursor SDK (released April 29, 2026 in public beta) lets developers run Cursor's coding agents from their own code, scripts, CI/CD pipelines, or products. Each agent inherits the same harness that powers the Cursor IDE — semantic codebase indexing, MCP server integration, automatic skill discovery from `.cursor/skills/`, lifecycle hooks via `.cursor/hooks.json`, and named subagents for delegated work. Agents run on sandboxed cloud VMs identical to those backing Cursor's Cloud Agents product. Pricing is token-based at the API level rather than seat-based — Composer 2 Standard runs $0.50/M input tokens and $2.50/M output tokens — making CI/CD parallelization economical without per-agent seat costs. Launch customers Faire, Rippling, Notion, and C3 AI are deploying it primarily for ticket-to-PR automation: agents pick up Linear or Jira tickets, generate implementations, write tests, and open draft PRs for human review.
Cursor SDK turns the Cursor coding harness into a programmable runtime. Where Cursor itself is an IDE, the SDK is the same engine without the editor surface — exposed as a TypeScript API that you wire into CI/CD pipelines, internal automation platforms, or customer-facing products. The strategic shift is that Cursor is no longer just a developer tool; it’s a runtime substrate that engineering teams deploy on.
The launch context matters. The Codex `/goal` primitive collapsed the agent-harness category into a CLI feature in late April. Two days later, Cursor responded by exposing its harness as a paid SDK. The signal: when the loop becomes a commodity, the surface that monetizes is the runtime (sandboxed compute, codebase indexing, skill discovery, subagent orchestration), not the loop itself.
Programmatic agent creation: `npm install @cursor/sdk`, instantiate an agent, give it a goal, and it runs against your codebase with the same context engine that powers Cursor’s IDE features. No need to recreate codebase indexing, semantic search, or grep tooling.
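The flow described here might look like the following sketch. The published `@cursor/sdk` surface is not documented in this note, so a local stub stands in for the package; every name in it (`Agent`, `run`, the option keys, the model string) is an assumption for illustration, not the real API.

```typescript
// Hypothetical call shape for programmatic agent creation.
// A local stub stands in for the real @cursor/sdk so this sketch is
// self-contained; all names here are assumptions.
interface AgentResult {
  status: "completed" | "failed";
  summary: string;
}

class Agent {
  constructor(private opts: { repo: string; model?: string }) {}

  // In the real SDK this would dispatch the goal to a sandboxed cloud VM
  // with the indexed codebase; the stub just echoes what it was asked.
  async run(goal: string): Promise<AgentResult> {
    return {
      status: "completed",
      summary: `ran "${goal}" against ${this.opts.repo}`,
    };
  }
}

// Usage: point an agent at a repo, give it a goal, await the result.
const agent = new Agent({ repo: "github.com/acme/api", model: "composer-2-standard" });
agent.run("add input validation to the /orders endpoint").then((r) => {
  console.log(r.status, "-", r.summary);
});
```

In a real integration the stub would be replaced by the package import, and the result object would carry whatever the SDK actually returns (diffs, PR links, logs).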
MCP-first tool integration: Agents pick up MCP servers configured for the project automatically. The same MCP definitions that power your IDE workflow are available to programmatic agents — no duplicate config.
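A minimal sketch of that shared config, assuming the project-level `.cursor/mcp.json` shape the Cursor IDE reads (`mcpServers` keyed by server name); the filesystem server entry is illustrative:

```json
{
  "mcpServers": {
    "project-files": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./"]
    }
  }
}
```

The point is that this file is written once for the IDE and programmatic agents inherit it; there is no second tool registry to maintain.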
Skills and hooks: `.cursor/skills/` (markdown skill files following the `SKILL.md` spec) and `.cursor/hooks.json` (lifecycle observers and gates) are loaded automatically. Skills written for Cursor SDK are largely portable to Claude Code via the cross-runtime skills format.
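A minimal skill file, assuming the `SKILL.md` convention of YAML frontmatter (`name`, `description`) followed by markdown instructions; the skill itself is illustrative:

```markdown
---
name: conventional-commits
description: Format commit messages using the Conventional Commits style.
---

When committing, write messages as `type(scope): summary`, for example
`fix(billing): handle zero-amount invoices`. Use `feat`, `fix`, `chore`,
`docs`, or `refactor` as the type.
```

Because the file is plain markdown plus frontmatter, the same skill can be dropped into any harness that reads the cross-runtime format.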
Subagents: Named subagents accept custom prompts and per-task model selection. Useful for splitting a large task across specialist agents — one for refactoring, one for test generation, one for documentation.
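Delegation to named subagents could be sketched as a planning step like the following. The type, function, and model names here are all assumptions for illustration, since the SDK's actual subagent API is not shown in this note.

```typescript
// Hypothetical fan-out of one large task across specialist subagents,
// each with its own prompt and per-task model selection.
interface SubagentSpec {
  name: string;   // the named subagent to delegate to
  model: string;  // per-task model choice (placeholder names)
  prompt: string; // custom prompt for this slice of the work
}

// Split one task into specialist slices: refactor, tests, docs.
function planSubagents(task: string): SubagentSpec[] {
  return [
    { name: "refactorer", model: "model-large", prompt: `Refactor the code needed for: ${task}` },
    { name: "test-writer", model: "model-large", prompt: `Write tests covering: ${task}` },
    { name: "doc-writer", model: "model-small", prompt: `Update docs affected by: ${task}` },
  ];
}

const plan = planSubagents("extract the billing module");
console.log(plan.map((s) => `${s.name} (${s.model})`).join(", "));
```

The design point is cost-shaping: routine slices like documentation can run on a cheaper model while the refactor runs on the default.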
Sandboxed cloud VMs: Agents run on the same Cloud Agents infrastructure backing Cursor’s hosted offerings. Each agent gets a siloed environment with controlled file, tool, and code access.
Token-based billing: Composer 2 Standard (the default model) runs $0.50 per million input tokens and $2.50 per million output tokens. There is no per-seat surcharge; you only pay for the tokens agents actually consume. This is the structural advantage Cursor is pressing against seat-priced incumbents: a thousand parallel CI agents do not require a thousand Cursor seats.
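The rate math is easy to sanity-check. A minimal cost helper using the quoted rates; the fleet size and per-agent token counts below are illustrative, not measured figures:

```typescript
// Composer 2 Standard rates quoted above, in USD per million tokens.
const INPUT_PER_M = 0.5;
const OUTPUT_PER_M = 2.5;

// Cost of a single agent run given its token consumption.
function runCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1e6) * INPUT_PER_M + (outputTokens / 1e6) * OUTPUT_PER_M;
}

// A hypothetical CI fleet: 100 agents, each ~2M input / 100k output tokens.
const perAgent = runCostUSD(2_000_000, 100_000); // $1.25 per agent
console.log(`per agent: $${perAgent.toFixed(2)}, fleet of 100: $${(perAgent * 100).toFixed(2)}`);
```

Under these assumptions the whole fleet costs about $125 per run, with no seat count attached to the parallelism.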
Skills (`SKILL.md` files) port cleanly to Claude Code and other harnesses that follow the open spec. Subagent and hook configurations are Cursor-specific: moving off Cursor SDK requires replacing `.cursor/hooks.json` with the equivalent in your target harness. The exit ramp via `cc-switch` and the broader cross-runtime tooling layer is real but costs ~1–2 weeks of adapter work per major flow.
Cursor SDK is the right substrate when your distribution graph is engineers and your deployment shape is asynchronous CI/CD work. It is not the right substrate when your value lives on the consumer side (use the OpenAI Apps SDK) or when your agent has to operate inside an authenticated browser session (compose with Browserbase Skills).
Skills and capabilities that work with Cursor SDK:

- Open-source AI pair programming tool that works in your terminal to edit code across your entire repository.
- AWS's AI-powered coding assistant that helps developers build, deploy, and optimize applications on AWS with code generation and transformation.
- Open-source AI coding harness builder that makes AI coding workflows deterministic and repeatable via YAML-defined DAG workflows.