Intent Engineering is the practice of translating user problems into structured specifications that AI agents can execute without guessing. It sits between research and code — the discipline of defining what gets built and why, precisely enough that a human can review it and a machine can act on it.
It is not prompt engineering with more words. It is not a new name for requirements gathering. It is a fundamentally different activity: structuring the problem, not the instruction.
Why a New Discipline?
Something shifted in software development in 2025. AI coding agents — Claude Code, Cursor, Windsurf, GitHub Copilot — crossed a threshold. They went from autocompleting lines to implementing entire features autonomously. An engineer can describe what they need, point the agent at a codebase, and return to a working pull request with green tests.
This is extraordinary. It also exposed a problem that was always there but never mattered this much: most teams are terrible at specifying what they want.
When a human developer receives a vague ticket, they compensate. They walk to the PM's desk, check Slack history, look at the design file, make reasonable assumptions. AI agents cannot do any of this. Give an agent an ambiguous instruction and it will confidently build exactly the wrong thing.
The bottleneck in software development has moved. It's no longer writing code. It's deciding what code to write — and specifying it clearly enough that an autonomous system can execute it without a hallway conversation.
That specification discipline is intent engineering.
The industry is converging on this. Developer tools are being rebuilt around structured intent — agent orchestration platforms that execute against specs, coding assistants that consume structured context instead of freeform prompts. The word "intent" keeps showing up because everyone has hit the same wall: AI agents are only as good as the specifications they receive.
Intent vs. Requirements vs. User Stories
The distinction becomes concrete when you see the same problem expressed in three different formats.
A requirement tells the agent what to build:
"Build a REST endpoint that validates claim submissions against a JSON schema."
The solution is already decided. The agent becomes a typist. If the chosen design is suboptimal, the agent faithfully implements the suboptimal version. It can't push back because it doesn't know why this endpoint exists.
A user story tells the agent who wants something:
"As a claims adjuster, I want to see missing fields so I can request them from the customer."
Better — there's a user and a goal. But user stories are deliberately thin. They assume a conversation will fill in the gaps. A human developer can ask "what counts as complete in property insurance versus auto insurance?" An AI agent can't have that hallway conversation.
An intent spec tells the agent why it matters and gives it enough context to figure out the rest:
Incomplete claims cause a 2.4-day delay per follow-up cycle, 1.3 rounds of back-and-forth, and 18 minutes of adjuster time per round. A complete claim in auto insurance requires police report number and vehicle registration; in property insurance it requires damage photos and building year. Incoming claims must be checked for completeness before coverage review can begin.
The agent now understands the business pain, the scale of the problem, the domain rules, and the success condition. It has enough context to make intelligent implementation decisions — choosing the right data model, designing the validation logic, even suggesting optimizations the human hadn't considered.
The core insight: AI agents don't need more precise instructions. They need more context and more freedom. Requirements over-constrain the solution. User stories under-specify the problem. Intent hits the sweet spot — precise about the what and why, deliberately open about the how.
| | Requirement | User Story | Prompt | Intent Spec |
|---|---|---|---|---|
| Focus | Solution | User desire | Single AI interaction | Structured problem |
| Context | Technical only | Minimal | Whatever fits the window | Business pain + domain rules |
| Defines success | Implicitly | Acceptance criteria | Not at all | Measurable outcomes |
| Handles edge cases | Rarely | Sometimes | Never | Explicitly |
| Machine-executable | No | No | Partially | Yes |
| Durability | Document that drifts | Ticket that closes | Ephemeral conversation | Versioned artifact |
The Four Schools of Intent Engineering
The term "intent engineering" is emerging simultaneously from different directions. Understanding the different perspectives helps clarify what it actually is — and isn't.
Intent Engineering as a Role
Some organizations are creating dedicated roles around intent. The idea: if code generation is automated, someone needs to own the upstream work of extracting intent from stakeholders, enriching it with domain context, and validating that outcomes match the original business need.
This isn't a rebranded Requirements Engineer or a Product Owner with a new title. The difference shows up the moment you hand traditional artifacts to an AI agent. Requirements over-specify the solution. User stories under-specify the problem. The intent role sits precisely at the interface between business need and technical execution — the exact spot where AI agents need the most help.
Discovery methods in this model include structured interviews, process shadowing, domain mapping, and data-driven analysis of error rates, throughput times, and support patterns. The output is a structured intent document that feeds directly into agentic workflows.
Intent Engineering as a Practice
Others frame intent engineering as a personal discipline — how individual developers and builders can be more intentional when working with AI tools.
The practice: alternate between rapid AI-powered iteration and structured reflection. Instead of diving into vibe coding without guardrails, you clarify your intent first, burst with AI to explore solutions, pause to evaluate, and iterate. It's vibe coding with a feedback loop.
This is valuable but incomplete. A personal practice helps a solo developer ship better code. It doesn't solve the team problem: how does the second developer understand why something was built? How does the PM trace a feature back to the user friction that justified it? How does the next sprint inherit the context from the last one?
Intent Engineering as Agent Configuration
A third angle is emerging from the AI product management community: intent engineering as a way to configure autonomous AI agents — customer support bots, workflow agents, research assistants. In this framing, intent defines the agent's ongoing behavior: objectives, outcomes, health metrics, constraints, escalation rules, and stop conditions.
This is adjacent but distinct from specifying what to build. Configuring a support agent's behavior ("resolve Tier-1 issues without frustration, keep CSAT above 4.2") is a runtime problem. Specifying a feature ("38% of claims arrive incomplete, causing 2.4-day delays — build completeness validation") is a build-time problem. Both need structured intent. Both suffer when intent is vague. But the artifacts and workflows are different.
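The contrast can be sketched as two structured objects, one governing runtime behavior and one governing a single build. This is a hypothetical illustration: the field names and values below are not a standard, just one plausible shape for each artifact.

```python
# Hypothetical contrast between runtime and build-time intent.
# All field names and values are illustrative only.

runtime_agent_config = {
    # Governs a support agent's ongoing behavior; never "done".
    "objective": "Resolve Tier-1 issues without frustration",
    "health_metrics": {"csat": 4.2},  # must stay at or above this
    "escalation": "hand off to a human after two failed resolutions",
    "stop_conditions": ["user requests a human", "legal topic detected"],
}

build_time_intent_spec = {
    # Governs one piece of software to be built, then closes.
    "objective": "38% of claims arrive incomplete, causing 2.4-day delays",
    "outcomes": ["claims are checked for completeness before coverage review"],
    "verification": ["integration test: incomplete claim is flagged"],
}
```

The runtime artifact has no "done" state — it describes steady-state behavior and guardrails — while the build-time artifact is satisfied once its outcomes are verified.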
Intent Engineering as a System
The fourth perspective — and the one we believe matters most — is intent engineering as a system: a structured layer in the product development workflow that connects user evidence to shipped software.
In this model, intent engineering isn't something one person does. It's something the team practices. Friction signals flow in from support tickets, analytics, research. They get structured into versioned specs with explicit objectives, outcomes, constraints, and verification criteria. Those specs feed AI agents. The agents' output gets verified against the spec. Results feed back into the evidence that informs the next spec.
This is what we call the Intent Layer — the missing middle between user signal and agentic execution.
The Anatomy of an Intent Spec
An intent spec is not a document. It's a structured artifact — something a human can review and a machine can execute against. Every spec contains five parts:
1. Objective — What user problem are you solving and why does it matter? Not "we think users want X" but "we observed users struggling with Y." Grounded in evidence: a support ticket, a metric, a user quote.
2. Outcomes — Observable, measurable state changes. Not "improve performance" but "p95 response time under 200ms." Every outcome should be verifiable by a test or a metric.
3. Constraints — Hard limits the implementation must respect. Existing APIs it can't break, performance budgets, regulatory requirements, technology boundaries.
4. Edge Cases — Boundary conditions and failure modes. What happens when the network is down? What about empty states? What if the user has 10,000 items instead of 10?
5. Verification — How do you confirm it works? Unit tests, integration tests, manual checks. The agent knows exactly what "done" looks like and can verify its own work.
Some teams add a sixth element: Health Metrics — what must not degrade while pursuing the outcomes. "Reduce cart abandonment" is a good objective, but if the solution tanks page load time or breaks accessibility, you've traded one problem for another. Health metrics make the Goodhart trap explicit: optimize for X, but not at the expense of Y.
Here's a real example:
```yaml
objective: >
  Users abandon onboarding at step 3 (address entry) because the form
  requires 6 fields when auto-complete could reduce it to 1.
  This costs ~200 signups/week.

outcomes:
  - Address entry uses a type-ahead geocoding field
  - Drop-off at step 3 decreases by at least 40%
  - Users who select a suggested address proceed in under 10 seconds

constraints:
  - Must not break existing address validation downstream
  - "Geocoding API budget: max $200/month at current volume"
  - "GDPR: address suggestions must not be stored before user confirms"

edge_cases:
  - Geocoding API unavailable → fall back to manual form
  - Address outside supported regions → show "not yet available" message
  - PO Box addresses → allow manual entry

verification:
  - "Integration test: geocoding returns results for 5 sample addresses"
  - "Fallback test: manual form renders when API returns 500"
  - "Load test: p95 response time under 300ms"
```
Notice what this spec does not contain: implementation instructions. It doesn't say which geocoding API to use, which framework to build with, or how to structure the code. It defines the problem, the success criteria, and the constraints. The agent — or the developer — figures out the rest.
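Because the structure is fixed, a spec can be linted before it ever reaches an agent. The sketch below assumes the spec has already been parsed into a Python dict; the section names follow the anatomy above, but the measurability heuristic is a rough illustration, not a standard check.

```python
# A minimal lint for intent specs: checks that every required section is
# present and non-empty before the spec is handed to an agent.
# Section names follow the five-part anatomy; "health_metrics" is optional.

REQUIRED = ["objective", "outcomes", "constraints", "edge_cases", "verification"]
OPTIONAL = ["health_metrics"]

def lint_spec(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec passes."""
    problems = []
    for section in REQUIRED:
        if not spec.get(section):
            problems.append(f"missing or empty section: {section}")
    for section in spec:
        if section not in REQUIRED + OPTIONAL:
            problems.append(f"unknown section: {section}")
    # Heuristic: an outcome with no number and no test reference is
    # probably not verifiable by a metric or a test.
    for outcome in spec.get("outcomes", []):
        if not any(ch.isdigit() for ch in outcome) and "test" not in outcome.lower():
            problems.append(f"outcome may not be measurable: {outcome!r}")
    return problems

spec = {
    "objective": "Drop-off at address entry costs ~200 signups/week.",
    "outcomes": ["Drop-off at step 3 decreases by at least 40%"],
    "constraints": ["Geocoding API budget: max $200/month"],
    "edge_cases": ["API unavailable -> fall back to manual form"],
    "verification": ["Fallback test: manual form renders on API 500"],
}
print(lint_spec(spec))  # → []
```

A lint like this catches the most common failure mode early: a spec that looks complete to a human skimmer but leaves the agent guessing at edge cases or success criteria.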
How to Practice Intent Engineering
Intent engineering is not a one-time documentation exercise. It's a workflow:
1. Start from friction, not features. The input isn't "what should we build?" It's "where does the user's experience break?" Support tickets, drop-off metrics, research transcripts, NPS comments — these are raw material. Features are conclusions. Conclusions without evidence are just vibes.
2. Structure the problem before the solution. Write the objective, outcomes, and edge cases before anyone opens an IDE. This forces clarity. If you can't articulate what success looks like, you're not ready to build.
3. Make context explicit. Everything an agent — or a new team member — would need to understand the decision. Why this problem? Why now? What did we try before? What can't we change? Context that lives in someone's head is context that dies when they go on vacation.
4. Execute with agents. Hand the spec to an AI coding agent. A well-written spec should produce a working implementation on the first pass — or surface specific questions where the spec was ambiguous. If the agent guesses, the spec has a gap.
5. Verify against the spec, not intuition. After shipping, the question isn't "does this feel right?" It's "did the outcomes we specified actually happen?" Did the drop-off decrease? Did the response time improve? Measurable verification closes the loop.
6. Feed results back. What you learn from shipping informs the next round of specs. Intent engineering is a cycle, not a phase.
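Step 5 can be sketched as a direct comparison of specified targets against measured metrics. The targets below echo the onboarding example above, but the function and metric names are hypothetical, not part of any standard tooling.

```python
# Verify against the spec, not intuition: each target from the spec is an
# (operator, threshold) pair compared against what was actually measured.
# Metric names and thresholds are illustrative.

def verify_outcomes(targets: dict, measured: dict) -> dict:
    """Compare measured metrics against the targets specified in the spec."""
    results = {}
    for metric, (op, threshold) in targets.items():
        value = measured.get(metric)
        if value is None:
            results[metric] = "not measured"
        elif (op == ">=" and value >= threshold) or (op == "<=" and value <= threshold):
            results[metric] = "pass"
        else:
            results[metric] = f"fail ({value} vs {op} {threshold})"
    return results

# Targets from the example spec: 40% drop-off reduction, p95 under 300 ms.
targets = {
    "step3_dropoff_reduction_pct": (">=", 40),
    "p95_response_ms": ("<=", 300),
}
measured = {"step3_dropoff_reduction_pct": 46, "p95_response_ms": 310}
print(verify_outcomes(targets, measured))
# → {'step3_dropoff_reduction_pct': 'pass', 'p95_response_ms': 'fail (310 vs <= 300)'}
```

A partial failure like the one above is exactly the signal that feeds step 6: the next spec inherits the unmet target along with what was learned about why it was missed.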
What Intent Engineering Is Not
It's not prompt engineering. Prompt engineering optimizes a single AI interaction — the art of crafting the right words to get a better response. Intent engineering structures the entire problem upstream of any prompt. A prompt says "write me a login form." An intent spec says "40% of users who forget their password abandon signup — add a reset flow that recovers 60% of them, verified by automated test."
It's not requirements gathering. Requirements describe solutions. Intent describes problems. A requirement says "build a REST endpoint." An intent says "claims arrive incomplete 38% of the time, causing 2.4-day delays." The solution space stays open.
It's not just for AI. Intent specs make human developers faster too. A clear spec with explicit edge cases and verification criteria is useful whether the person reading it is a junior engineer or a language model. The discipline of structured intent improves every workflow it touches.
It's not bureaucracy. Writing an intent spec takes less time than the third round of PR review on code that missed the point. The overhead of precision is lower than the overhead of rework.
Who Needs Intent Engineering?
Intent engineering is most valuable for teams that work with AI coding agents — but it benefits anyone who specifies what software should do:
- Product managers who hand specs to AI agents or engineering teams. The spec becomes the product — not the code.
- Engineering leads who want consistent, reviewable specifications across their team. Intent specs are auditable in a way that verbal handoffs and Slack threads are not.
- Founders and solo builders who use AI to move fast. Intent prevents the vibe coding hangover — the moment your codebase outgrows the context in your head.
- Design teams who need to translate user research into development specs. Intent engineering connects research directly to what gets built.
If you're handing work to an AI agent and the output is unpredictable, the problem is almost certainly in the specification.
Getting Started
You don't need a platform, a methodology overhaul, or a new role. You need one spec.
Pick a real piece of user friction — a support ticket, a drop-off metric, a user complaint. Write a five-part intent spec: objective, outcomes, constraints, edge cases, verification. Hand it to your AI coding agent. Compare the output to what you usually get from a Jira ticket or a prompt.
The difference is the argument.
Don't Just Write Code. Define Intent.
Turn user friction into structured Intent Specs that drive your AI agents.
Get Started for Free