Everyone is talking about context engineering.
How to structure your prompts. How to pack the right documents into the context window. How to wire up RAG pipelines, tool schemas, memory systems, and system instructions so the model performs reliably.
This matters. Context quality directly affects AI output quality. No argument there.
But most teams don't have a context problem first. They have an intent problem.
They are optimizing how they brief the machine before they have figured out what they are asking it to do. Better context just helps the AI fail more efficiently.
Two Disciplines, One Stack
Context engineering is the practice of structuring prompts, retrieval, memory, tools, and system instructions so an AI can perform a task reliably. It answers: how do we package information so the model interprets it correctly?
Intent engineering is the practice of defining the objective, outcomes, constraints, edge cases, and verification criteria so a team — or an agent — solves the right problem. It answers: what exactly are we asking the system to accomplish, and how will we know it worked?
Context engineering improves transmission. Intent engineering defines direction.
One reduces misunderstanding. The other reduces misdirection.
These are not competing ideas. They are layers in a stack. Intent is upstream. Context is downstream. You define what success looks like, then you engineer how to communicate that to the system.
The order matters. Reverse it, and you get teams with pristine RAG pipelines delivering features nobody asked for.
The Decision Problem
Context engineering became popular because AI exposed a communication problem. Models are powerful but literal. They need the right information in the right format at the right time. Engineers who figured this out got dramatically better outputs.
Intent engineering matters more because AI exposed a decision problem.
When a human developer receives a vague ticket, they compensate. They walk over to the PM's desk, check Slack, read between the lines, make reasonable assumptions. AI agents don't compensate. They execute. Give an agent an ambiguous objective and it will confidently build exactly the wrong thing — with excellent code quality.
The bottleneck isn't how the model receives information. It's whether the team has decided — clearly, structurally, verifiably — what problem they are solving, what success looks like, and what must not break.
No amount of retrieval-augmented generation fixes a team that hasn't made those decisions.
Why Developers Should Care
If you're an engineer working with AI coding agents, you've felt this pain even if you haven't named it:
- Vague tickets that generate technically correct but product-wrong code
- "Improve the checkout flow" tasks where the agent optimizes the wrong thing
- Happy-path implementations that miss constraints, edge cases, and integration points
- Pull requests that pass all tests but trigger support tickets within 48 hours
These often look like context failures — missing architecture docs, omitted constraints in retrieval, incomplete system prompts. But the deeper failure is usually intent. The instruction itself was incomplete. No amount of better packaging fixes a specification that never defined success.
A stronger context window cannot rescue a weak product brief.
When you have a structured intent spec — objective, success criteria, constraints, edge cases, verification steps — the agent doesn't just code better. It decides better. It knows why the feature exists, what counts as done, and where the boundaries are. That's fundamentally different from having more documents in the context window.
Why Product Managers Should Care More
Product management is being redefined. Not in the vague "PMs need to learn AI" sense that's been recycled in every newsletter this year. Something more structural is happening.
PMs used to write artifacts optimized for human interpretation. PRDs, one-pagers, Notion docs — their purpose was to create shared understanding through narrative. The document was a starting point for conversation. Ambiguity was acceptable because humans would resolve it through discussion, whiteboarding, and iteration.
AI agents don't do any of that.
An agent reads the spec, interprets it literally, and executes. There is no hallway conversation. There is no "I assumed you meant..." follow-up. There is no body language cue when the PM describes the feature and the engineer's face says that won't work.
The difference is visible in the artifacts themselves:
PRD-style: "Improve onboarding completion rates."
Intent-style: "Reduce drop-off at address entry by 30% without increasing fraud review failures. Completion is measured at the identity-verified state, not form submission."
The first is a direction. The second is a specification an agent — or a team — can execute against and verify. It defines the metric, the constraint, and the boundary.
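One way to make the intent-style artifact concrete is to express it as structured data an agent or a reviewer can check field by field. A minimal sketch in Python — the field names, edge cases, and schema here are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class IntentSpec:
    """Hypothetical intent spec: outcomes, constraints, and verification,
    deliberately leaving implementation open."""
    objective: str
    success_criteria: list[str]
    constraints: list[str]
    edge_cases: list[str]
    verification: list[str]

onboarding = IntentSpec(
    objective="Reduce drop-off at address entry by 30%",
    success_criteria=[
        "Completion measured at the identity-verified state, not form submission",
    ],
    constraints=["No increase in fraud review failures"],
    edge_cases=["International addresses", "PO boxes"],  # illustrative only
    verification=["Funnel metric: address entry -> identity verified"],
)

# Before any code is generated, every field must be filled in.
assert all([onboarding.objective, onboarding.success_criteria,
            onboarding.constraints, onboarding.verification])
```

The point is not the class itself but the forcing function: an empty `constraints` or `verification` field is visible before the agent runs, whereas a prose PRD hides the omission.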
This changes the core PM artifact from an explanatory document to an executable specification.
The job shifts from describing features to specifying success conditions. From telling the team what to build to defining — precisely, structurally — what it means for the build to be correct.
That's intent engineering. And it is now a core PM skill, not a developer tool.
This isn't better specs with a new label. Traditional specs describe what to build. Intent specs define what success looks like — the outcomes, the constraints, the edge cases, and the verification criteria — while deliberately leaving the implementation open. It's the difference between a blueprint and a mission brief.
The Verification Gap
This is the sharpest distinction between the two disciplines.
Context engineering alone does not define product correctness. You can add evals, test harnesses, and structured output validation — and those help — but they verify that the model followed instructions well. They don't verify that the instructions were worth following. Without upstream intent, you're testing execution quality, not product quality.
Intent engineering bakes verification into the process. Because you defined outcomes, constraints, and edge cases upfront, you have something to check against. Did the feature meet the success criteria? Did it respect the constraints? Did it handle the specified edge cases?
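That check can be mechanical. A sketch of a verification pass, assuming the outcome metrics are already collected somewhere — the metric names and thresholds are hypothetical, carried over from the onboarding example above:

```python
# Each entry pairs a named outcome with the pass condition defined
# upfront in the intent spec (success criterion or constraint).
criteria = {
    "drop_off_reduction_pct": lambda v: v >= 30,    # success criterion
    "fraud_review_failure_delta": lambda v: v <= 0,  # constraint: must not increase
}

# Measured values would come from analytics; hardcoded here for illustration.
measured = {"drop_off_reduction_pct": 34, "fraud_review_failure_delta": -1}

results = {name: check(measured[name]) for name, check in criteria.items()}

# The feature counts as done only when every criterion passes.
done = all(results.values())
```

"Did the AI do the right thing?" becomes a dictionary of booleans instead of an eyeball judgment.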
Context engineering improves the transmission of a task. Intent engineering defines the correctness of the result.
Without intent, "did the AI do the right thing?" has no structured answer. You're left eyeballing outputs and hoping. That works for a solo developer on a weekend project. It does not work for a product team shipping to customers.
The Trap
Here's the failure mode to watch for.
A team adopts better context engineering. Prompts are cleaner, retrieval is sharper, tool use is well-structured. Outputs improve immediately. Everything looks better. Everyone feels more productive.
But the team never clarified what problem they were solving, who it was for, or how they'd know it worked.
They are now building the wrong thing faster, with more confidence, and with better-formatted code. The AI didn't fail. The specification was never written.
This is the danger of optimizing the downstream layer first. Better execution creates false confidence. The outputs look polished, so the team assumes the direction is right. They skip the hard work of defining intent because the easy work of improving context produces visible results immediately.
The Ordering Principle
For product teams working with AI, the stack looks like this:
First, engineer intent. Define the problem, the outcomes, the constraints, the edge cases, and the verification criteria. Get alignment on what success looks like before a single line of code is generated.
Then, engineer context. Structure how that intent reaches the AI — through system prompts, retrieved documents, tool schemas, memory, or whatever your stack requires.
Then execute. Let the agent build.
Then verify against intent. Not against whether the output "looks good." Against the success criteria you defined in step one.
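The four steps above can be sketched as a pipeline. Everything here is a placeholder for whatever your team's process or tooling provides — the toy `execute` stands in for an actual agent run:

```python
def engineer_intent(problem: str) -> dict:
    # 1. Decide outcomes, constraints, and verification before anything else.
    return {"objective": problem,
            "success": lambda out: "criteria" in out}  # toy success check

def engineer_context(intent: dict) -> str:
    # 2. Package the intent for the model: prompts, retrieval, tools, memory.
    return f"Task: {intent['objective']}. Verify against criteria."

def execute(context: str) -> str:
    # 3. Stand-in for the agent build step.
    return context.lower()

def verify(output: str, intent: dict) -> bool:
    # 4. Check against the step-one criteria, not whether it "looks good".
    return intent["success"](output)

intent = engineer_intent("Reduce drop-off at address entry by 30%")
done = verify(execute(engineer_context(intent)), intent)
```

The signature of `verify` is the whole argument in miniature: it takes the intent as a parameter. Skip step one and there is nothing to pass in.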
Context engineering is not overhyped. It's essential infrastructure. But it is downstream of intent. It is the implementation layer, not the decision layer.
You're not wrong to care about context. You're just starting one layer too low.
The Shift
In the AI era, the scarce skill is no longer producing software. It is specifying what software should accomplish — clearly enough that autonomous systems can build it without guesswork.
That specification discipline has a name. It's intent engineering.
Context engineering will remain important — the same way database optimization and API design remain important. Essential, skilled work that every team needs.
But the higher-order discipline — the one that determines whether all that well-engineered context points in the right direction — is intent.
Product teams that get this ordering right will ship faster and ship correctly. Teams that skip intent and jump straight to context will ship fast and wonder why nothing they build seems to land.
The operational shift is straightforward: product reviews should start with outcomes and constraints, not feature descriptions. Agent workflows should be evaluated against defined success criteria, not output quality. And the question "is this done?" should have a structured answer before the first line of code is written.
Intent first. Context second. That's the stack.
Don't Just Write Code. Define Intent.
Turn user friction into structured Intent Specs that drive your AI agents.