February 18, 2026

Stop Fighting Your AI: How to Write Prompts That Actually Ship Features

The constraint sandwich technique ends endless revision loops—give your AI tool the right context, clear boundaries, and a definition of done.


You open your AI coding tool, describe the feature you want, and hit enter. Twelve iterations later, you're further from working code than when you started.

This isn't a tool problem. It's a prompt structure problem—and it's completely fixable with one repeatable technique.

What Is the Constraint Sandwich Technique?

The constraint sandwich is a 3-layer prompt structure (context, boundaries, and a success condition) that gives your AI tool enough information to be useful, without the ambiguity that lets it go off the rails.

Most prompts fail at one of two extremes: too vague (the AI invents requirements) or too prescriptive (the AI ignores your architecture and rewrites everything). The sandwich threads that needle.

| Layer | What It Contains | What It Prevents |
| --- | --- | --- |
| Context (top bread) | Existing stack, relevant files, current behavior | AI inventing incompatible solutions |
| Constraints (the filling) | What NOT to touch, style rules, dependency limits | Collateral rewrites and dependency sprawl |
| Success condition (bottom bread) | Observable done state, acceptance criteria | Over-generation and scope creep |
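
The three layers collapse into a fill-in template you can keep as a snippet. The TASK line, which names the change itself, sits between the two bread layers:

```
CONTEXT: [stack, relevant files, current behavior]
TASK: [the single change you want made]
CONSTRAINTS: [what NOT to touch, style rules, dependency limits]
STOP WHEN: [observable done state you can verify without interpretation]
```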

Why Do Both Vague and Prescriptive Prompts Fail?

Vague prompts create ambiguity that the AI fills with assumptions—often reasonable ones that don't match your project. Prescriptive prompts (line-by-line instructions) leave no room for the AI's actual strength: pattern synthesis.

According to a 2025 GitHub Copilot usage study, developers who provide structured context in their prompts needed 2.3x fewer interactions to complete a feature than those relying on natural-language descriptions alone.

The Vague Prompt Death Spiral

Iteration 1: AI generates a solution. You correct one thing. Iteration 2: AI "fixes" it but shifts the architecture. You correct that. By iteration 6, the AI is solving a different problem than you started with—because each correction changed the implicit context it's working from.

Before/After: Real Prompt Examples

Here's the same feature request written two ways—first as a typical vague prompt, then as a constraint sandwich.

Before (vague — 11 iterations to ship):

Add a file upload button to the dashboard that lets users upload CSVs.

After (constraint sandwich — 2 iterations to ship):

CONTEXT:
- Next.js 14 app, TypeScript, Tailwind CSS
- Dashboard is at /app/dashboard/page.tsx
- Existing upload logic is in /lib/storage.ts (uploadFile function)
- We use shadcn/ui components throughout

TASK:
Add a CSV upload button to the dashboard header area.
Call uploadFile() from /lib/storage.ts with the selected file.
Show a toast notification on success using the existing useToast hook.

CONSTRAINTS:
- Do NOT modify /lib/storage.ts
- Do NOT install new packages
- Button must use the existing <Button> component from shadcn/ui
- Accept only .csv files

STOP WHEN:
The button appears in the dashboard header, accepts only .csv files,
calls uploadFile(), and shows a success toast. No other behavior needed.

The second prompt is longer to write but produces working code on the first or second attempt. The time you spend writing the sandwich is less than the time you'd spend on iteration 3 of the vague version.
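
Under those constraints the implementation is tightly pinned down. Here's a sketch of the selection handler the sandwich describes; `uploadFile` and `toast` are local stand-ins for the project's real helpers (the actual code would import them), so the snippet is self-contained:

```typescript
// Stand-in for uploadFile() from /lib/storage.ts (CONSTRAINT: don't modify the real one)
async function uploadFile(file: { name: string }): Promise<void> {}

// Stand-in for the toast function from the existing useToast hook
function toast(message: string): void {
  console.log(message);
}

// The handler the TASK describes: csv-only, upload, then success toast
async function handleCsvSelected(file: { name: string }): Promise<boolean> {
  // CONSTRAINT: accept only .csv files
  if (!file.name.toLowerCase().endsWith(".csv")) {
    return false;
  }
  await uploadFile(file); // TASK: reuse the existing upload logic
  toast("CSV uploaded");  // TASK: success toast
  return true;            // STOP WHEN: no other behavior needed
}
```

Every branch in the sketch maps directly back to a line in the prompt, which is exactly what makes the output reviewable in one pass.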

How to Write a "Definition of Done" Inside Your Prompt

A definition of done (DoD) inside a prompt is an observable, specific condition that tells the AI exactly when to stop generating. Without one, AI tools continue adding "helpful" features you didn't ask for.

The Anthropic Claude documentation notes that Claude performs significantly better on tasks with explicit completion criteria—it reduces unnecessary output and keeps scope tight.

The pattern is simple:

STOP WHEN: [specific observable state]

Examples:
STOP WHEN: The form submits without console errors and the success message appears.
STOP WHEN: The table displays 10 rows of mock data with sortable column headers.
STOP WHEN: The API route returns { status: "ok" } for valid input and 400 for missing fields.

Avoid abstract done states like "STOP WHEN: it works correctly." Make it observable—something you can verify in a browser or test output without interpretation.
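
The third example is mechanically checkable. Here's a minimal sketch of a handler that satisfies it; the input shape and field names are hypothetical, not from any real API:

```typescript
// Hypothetical handler for the STOP WHEN condition:
// returns { status: "ok" } for valid input and 400 for missing fields.
type HandlerResult = { httpStatus: number; body: { status?: string; error?: string } };

function handleSubmit(input: { email?: string; query?: string }): HandlerResult {
  // Missing fields -> 400: the observable failure half of the done state
  if (!input.email || !input.query) {
    return { httpStatus: 400, body: { error: "missing fields" } };
  }
  // Valid input -> { status: "ok" }: the observable success half
  return { httpStatus: 200, body: { status: "ok" } };
}
```

Because the condition is observable, the STOP WHEN line doubles as a test case: you verify it once and stop iterating.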

Tool-Specific Syntax: Claude Code, Cursor, and Windsurf

The constraint sandwich structure is tool-agnostic, but each tool has syntax that delivers context more efficiently.

Claude Code

Use @filename references for context; Claude Code reads each referenced file and inlines it automatically.

CONTEXT: @app/dashboard/page.tsx @lib/storage.ts
TASK: Add CSV upload button using existing uploadFile()
CONSTRAINTS: Don't modify storage.ts. Use existing Button component.
STOP WHEN: Button uploads file and shows toast.

Cursor

Use @-mentions for files and # for symbols. Cursor's composer accepts multi-file context natively.

CONTEXT: @dashboard/page.tsx uses #uploadFile from @lib/storage.ts
TASK: Add CSV upload button
CONSTRAINTS: No new packages. Use #Button from shadcn.
STOP WHEN: .csv accepted, uploadFile called, toast shown.

Windsurf

Windsurf's Cascade responds well to inline comment anchors—drop a comment in the target file before prompting, then reference it.

// In dashboard/page.tsx, add this comment where the button should appear:
// TODO: CSV_UPLOAD_BUTTON_HERE

Then prompt:
Add a CSV upload button at the TODO comment in dashboard/page.tsx.
Call uploadFile() from lib/storage.ts. Accept only .csv.
Don't touch storage.ts. Use existing Button component.
STOP WHEN: Button at TODO location, csv-only, uploadFile called, toast shown.

When to Use Micro-Prompts vs. One Big Prompt

Not every feature needs to be broken into micro-prompts. The wrong split adds coordination overhead; the wrong consolidation produces context overflow and drift.

A 2024 study by Sourcegraph found that prompts referencing more than 3 files simultaneously produce 40% more hallucinated API calls than prompts scoped to 1-2 files.

Use a single prompt when:

  • The feature touches only 1-2 files
  • There are no conditional branches in the implementation
  • The output is a self-contained component or utility function
  • You can write the full success condition in one sentence

Break into micro-prompts when:

  • The feature spans 3+ files or layers (API route + DB schema + UI)
  • Step 2 depends on the output of step 1 (sequential decisions)
  • You can't describe done in one observable condition
  • The feature includes both schema changes and UI changes

Micro-Prompt Sequence Example

Building a "saved searches" feature in 3 focused prompts:

  1. Prompt 1: Add saved_searches table migration. STOP WHEN: migration file exists and runs without error.
  2. Prompt 2: Add /api/saved-searches POST route using the new table. STOP WHEN: returns 201 with saved search ID.
  3. Prompt 3: Add "Save Search" button to the search UI. STOP WHEN: button calls the API and shows confirmation.
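
Each STOP WHEN in the chain is independently checkable before you move on. For prompt 2, the done condition might look like this sketch, with an in-memory array standing in for the saved_searches table from prompt 1 (route shape and names are hypothetical):

```typescript
// Hypothetical POST handler for /api/saved-searches.
// STOP WHEN: returns 201 with the saved search ID.
type SavedSearch = { id: number; query: string };

const savedSearches: SavedSearch[] = []; // stand-in for the saved_searches table

function postSavedSearch(body: { query?: string }): { status: number; body: { id?: number; error?: string } } {
  if (!body.query) {
    return { status: 400, body: { error: "query is required" } };
  }
  const record: SavedSearch = { id: savedSearches.length + 1, query: body.query };
  savedSearches.push(record);
  return { status: 201, body: { id: record.id } };
}
```

Verify the 201-with-ID behavior after prompt 2, and only then write prompt 3; a failed check means a layer is missing, not that you need another vague iteration.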

The Iteration Audit: Diagnose Your Prompt Problems

When you're stuck in a loop, run a quick audit before writing the next iteration. Most loops have one of three root causes:

  • Missing context — The AI doesn't know what already exists. Fix: add file references and describe current behavior.
  • Absent constraints — The AI keeps introducing new packages or touching files you didn't want changed. Fix: add explicit "do NOT" lines.
  • Fuzzy success condition — The AI keeps adding features because it doesn't know what done looks like. Fix: write a specific STOP WHEN clause.

If you're past iteration 4 and not shipping, stop iterating. Restart with a full constraint sandwich from scratch. Continuing to patch a drifted context compounds the problem.

Key Takeaways

  • The constraint sandwich — every effective prompt has three layers: context, boundaries, and a success condition.
  • Vague and prescriptive both fail — vague prompts invite invention; prescriptive prompts block synthesis. The sandwich finds the sweet spot.
  • STOP WHEN is mandatory — without an observable done state, AI tools generate past the finish line into scope creep.
  • Tool syntax matters — Claude Code uses file references, Cursor uses @-mentions, Windsurf responds to inline comment anchors.
  • Micro-prompt when crossing 3+ files — sequential features ship more reliably as a chain of focused prompts than one large request.
  • Audit before re-iterating — identify the missing layer (context, constraint, or success condition) before writing another prompt.

Ready to level up your development workflow?

Desplega.ai helps solo developers and small teams ship faster with professional-grade tooling. From vibe coding to production deployments, we bridge the gap between rapid prototyping and scalable software.

Get Expert Guidance

Frequently Asked Questions

What is the constraint sandwich technique for AI prompts?

The constraint sandwich is a 3-part prompt structure: context (what exists), boundaries (what not to do), and a success condition (when to stop). In the worked example above, it cut shipping iterations from 11 to 2.

Why do vague prompts cause more AI revision loops than specific ones?

Vague prompts force the AI to guess intent, producing outputs that need correction. Each correction shifts context, compounding drift. Specific constraints anchor the AI to your actual requirements.

When should you break a feature into micro-prompts vs. one large prompt?

Break into micro-prompts when a feature touches more than 2 files, requires sequential decisions, or has conditional logic. Single prompts work for isolated UI components or utility functions.

How do you write a definition of done inside an AI prompt?

End every prompt with 'Stop when: [specific observable condition]'—e.g., 'Stop when the form submits without console errors and the success toast appears.' This prevents over-generation.

Do constraint sandwich prompts work the same in Claude Code, Cursor, and Windsurf?

The structure works across all three, but context delivery differs. Claude Code benefits from file references; Cursor uses @-mentions; Windsurf responds well to inline comment anchors.