I studied rock formations before I studied organizations. Geology teaches you to read layers — what came first, what pressure shaped what, where the fault lines run. You learn that the visible surface rarely tells the full story.
After that came consulting, data strategy, co-founding a company, project management, agile transformation. Different industries, different roles, but the same recurring question: why can organizations say what they want, yet never write it down in a form anyone can act on?
I returned to ORAYLIS in 2025, this time advising the COO and Board on how the organization itself should work. That work gave me a new vocabulary. Then AI gave me a new lens. When I watched AI agents fail at organizational tasks, I recognized the pattern I'd been seeing for fifteen years: the problem was never the tool. It was the missing intent underneath: the thing that, in code, you'd call a spec.
This essay is about that gap.
In February 2024, a Canadian tribunal ruled that Air Canada was liable for what its chatbot had promised a grieving customer.1 The airline's defense: the chatbot was a separate entity. The tribunal's response: no, it speaks for you.
The Agent That Spoke for the Company
Jake Moffatt needed a last-minute flight after a family death. Air Canada's chatbot told him he could book now and apply for the bereavement fare within 90 days. He did. The airline denied his claim. The policy didn't work that way. The chatbot had invented a promise the organization never made.
The agent had access to Air Canada's policies. It had the right context. What it lacked was an understanding of what it was authorized to promise, and what the consequences of a wrong promise would be. The chatbot didn't malfunction. It operated without boundaries.
Months later, Klarna demonstrated the same problem at scale.2 Their AI agent handled 2.3 million customer conversations, cut resolution time from eleven minutes to two, and projected $40 million in annual savings. Then the complaints arrived. Generic answers. Robotic tone. Customers felt processed, not helped. CEO Sebastian Siemiatkowski admitted what went wrong: optimizing for cost reduction led to lower quality. The company reversed course and started hiring human agents again.
Two companies. Two well-built agents. Both excellent — at the wrong objective.
Three Disciplines
Nate B Jones calls this the shift from prompts to intent,3 and the framing is useful. Three disciplines shape how AI becomes useful. They build on each other, but most organizations only know the first.
Prompt Engineering is about asking the right question. Individual, synchronous, immediate. Better prompt, better answer. This is where most people start and stop.
Context Engineering is about what the agent needs to know. RAG pipelines, MCP servers, knowledge bases, tool access — the architecture that puts the right information in front of the model at the right time. This is where serious AI teams spend most of their effort today.
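To make the distinction concrete, here is a minimal sketch of the context step: select the material the agent needs and put it in front of the model before it answers. Everything in it is hypothetical (the POLICIES dictionary, the keyword scoring, the function names); production pipelines use embedding search, RAG, or MCP servers rather than word overlap, but the shape is the same.

```python
# A deliberately minimal sketch of context engineering: before the model is
# called, the pipeline selects the documents the agent needs and places them
# in the prompt. The keyword scoring below only stands in for real retrieval
# (embeddings, RAG, MCP tools). All names here are illustrative.

POLICIES = {
    "bereavement_fares": "Bereavement fares must be requested before travel. "
                         "They cannot be applied retroactively after booking.",
    "refunds": "Refunds for refundable fares are processed within 7 business days.",
    "baggage": "Two checked bags are included on international flights.",
}

def build_context(question: str, top_k: int = 2) -> str:
    """Pick the policy snippets most relevant to the question (toy retrieval)."""
    words = set(question.lower().split())
    scored = sorted(
        POLICIES.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return "\n".join(f"[{name}] {text}" for name, text in scored[:top_k])

def build_prompt(question: str) -> str:
    """Assemble what the model actually sees: retrieved context plus the question."""
    return (
        "Answer using only the policies below.\n\n"
        f"{build_context(question)}\n\n"
        f"Customer question: {question}"
    )

print(build_prompt("Can I get the bereavement fare after I already booked?"))
```

Note what this pipeline does and does not do: it decides what the agent knows, not what the agent is allowed to do with that knowledge.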
Intent Engineering is about what the agent should want. Not what it knows, but what it values. Goals, trade-offs, boundaries, priorities — the things that guide autonomous behavior. Context gives an agent knowledge. Intent gives it judgment.
Air Canada's chatbot had context — it could read the bereavement policy. What it lacked was intent: the boundary that says "you may inform, but you may not promise." Klarna's agent had context too: customer data, order histories, policies. What it lacked was the understanding that customer trust matters more than resolution speed, that brand perception is worth protecting even at the cost of short-term metrics.
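What such a boundary could look like in practice is easiest to show as a sketch. The structure and every name in it are hypothetical; the point is only that goals, trade-offs, and limits of authority become explicit artifacts an agent is checked against, instead of assumptions living in someone's head.

```python
# A sketch of intent as an explicit spec: not what the agent knows, but what
# it should optimize for and what it may commit to. Structure and action
# names are hypothetical, chosen to mirror the two cases in the text.

from dataclasses import dataclass, field

@dataclass
class IntentSpec:
    goal: str
    priorities: list[str]                      # ordered: earlier entries win trade-offs
    forbidden_actions: set[str] = field(default_factory=set)
    escalate_actions: set[str] = field(default_factory=set)

SUPPORT_AGENT_INTENT = IntentSpec(
    goal="Resolve the customer's problem",
    priorities=["customer trust", "policy accuracy", "resolution speed", "cost"],
    forbidden_actions={"promise_refund", "promise_fare_exception"},
    escalate_actions={"bereavement_request", "legal_complaint"},
)

def check_action(spec: IntentSpec, action: str) -> str:
    """Gate a proposed action against the agent's intent spec before it commits."""
    if action in spec.forbidden_actions:
        return "refuse: outside the agent's authority; inform only"
    if action in spec.escalate_actions:
        return "escalate: hand off to a human before committing"
    return "allow"

# The Air Canada failure, restated: the chatbot proposed a promise it was
# never authorized to make. With an explicit boundary, the action is blocked.
print(check_action(SUPPORT_AGENT_INTENT, "promise_fare_exception"))
print(check_action(SUPPORT_AGENT_INTENT, "bereavement_request"))
```

In this framing, Klarna's reversal is a re-ordering of the priorities list: trust moved back above speed and cost. The mechanics stay trivial; what matters is that the trade-off is written down at all.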
The Intent Gap
This isn't a technology problem. It's an alignment problem.
Organizations encode their intent in PDFs, PowerPoint decks, and the heads of experienced employees. Who is allowed to decide what never gets written down. What takes priority, people know intuitively. Escalation paths are learned through osmosis, over months or years of being inside the system.
That worked when humans were the only decision-makers. Humans are slow, but they handle ambiguity well. They read context clues, ask clarifying questions, exercise judgment built on years of pattern recognition.
Autonomous agents don't have years. Some run for weeks without human oversight. An agent deciding who gets a loan, which ticket to escalate, what to promise a customer — that agent needs explicit alignment before it makes its first decision. Without it, every optimization is a gamble.