The Intent Gap

On the distance between what organizations intend and what their systems can act on.

Illustration: a hand-drawn sketch of a city street breaking open to reveal geological layers beneath, the visible surface hiding deeper structure. (AI-generated with Flux by Black Forest Labs.)

I studied rock formations before I studied organizations. Geology teaches you to read layers — what came first, what pressure shaped what, where the fault lines run. You learn that the visible surface rarely tells the full story.

After that came consulting, data strategy, co-founding a company, project management, agile transformation. Different industries, different roles, but the same recurring question: why can organizations say what they want, yet never write it down in a form anyone can act on?

I returned to ORAYLIS in 2025, this time advising the COO and Board on how the organization itself should work. That work gave me a new vocabulary. Then AI gave me a new lens. When I watched AI agents fail at organizational tasks, I recognized the pattern I'd been seeing for fifteen years: the problem was never the tool. It was the missing intent underneath: the thing that, in code, you'd call a spec.

This essay is about that gap.


In February 2024, a Canadian tribunal ruled that Air Canada was liable for what its chatbot had promised a grieving customer.1 The airline's defense: the chatbot was a separate entity. The tribunal's response: no, it speaks for you.

The Agent That Spoke for the Company

Jake Moffatt needed a last-minute flight after a family death. Air Canada's chatbot told him he could book now and apply for the bereavement fare within 90 days. He did. The airline denied his claim. The policy didn't work that way. The chatbot had invented a promise the organization never made.

The agent had access to Air Canada's policies. It had the right context. What it lacked was an understanding of what it was authorized to promise, and what the consequences of a wrong promise would be. The chatbot didn't malfunction. It operated without boundaries.

Months later, Klarna demonstrated the same problem at scale.2 Their AI agent handled 2.3 million customer conversations, cut resolution time from eleven minutes to two, and projected $40 million in annual savings. Then the complaints arrived. Generic answers. Robotic tone. Customers felt processed, not helped. CEO Sebastian Siemiatkowski admitted what went wrong: optimizing for cost reduction led to lower quality. The company reversed course and started hiring human agents again.

Two companies. Two well-built agents. Both excellent — at the wrong objective.

Three Disciplines

Nate B Jones calls this the shift from prompts to intent3, and the framing is useful. Three disciplines shape how AI becomes useful. They build on each other, but most organizations only know the first.

Prompt Engineering is about asking the right question. Individual, synchronous, immediate. Better prompt, better answer. This is where most people start and stop.

Context Engineering is about what the agent needs to know. RAG pipelines, MCP servers, knowledge bases, tool access — the architecture that puts the right information in front of the model at the right time. This is where serious AI teams spend most of their effort today.

Intent Engineering is about what the agent should want. Not what it knows, but what it values. Goals, trade-offs, boundaries, priorities — the things that guide autonomous behavior. Context gives an agent knowledge. Intent gives it judgment.

Air Canada's chatbot had context — it could read the bereavement policy. What it lacked was intent: the boundary that says "you may inform, but you may not promise." Klarna's agent had context too: customer data, order histories, policies. What it lacked was the understanding that customer trust matters more than resolution speed, that brand perception is worth protecting even at the cost of short-term metrics.

The Intent Gap

This isn't a technology problem. It's an alignment problem.

Organizations encode their intent in PDFs, PowerPoint decks, and the heads of experienced employees. Who holds which decision rights goes unwritten. What takes priority, people know only intuitively. Escalation paths are learned through osmosis, over months or years of being inside the system.

That worked when humans were the only decision-makers. Humans are slow, but they handle ambiguity well. They read context clues, ask clarifying questions, exercise judgment built on years of pattern recognition.

Autonomous agents don't have years. Some run for weeks without human oversight. An agent deciding who gets a loan, which ticket to escalate, what to promise a customer — that agent needs explicit alignment before it makes its first decision. Without it, every optimization is a gamble.

An agent without intent alignment is like a new hire who knows the org chart but has never been told what the company actually cares about. Now multiply that by ten thousand.

Making Intent Legible

What if organizational purpose were machine-readable?

Not a mission statement in a system prompt. Not prose in a company wiki. Structured definitions: who decides what, under which conditions, with what authority, within what boundaries, optimizing for what.

This is the idea behind Org as Code: define organizational structure, governance, and decision logic in version-controlled files. Roles, teams, committees, escalation paths, decision frameworks, all in YAML that humans can read and machines can parse.

At the heart of it are OPIs — Organizational Programming Interfaces. Where APIs define how software components communicate, OPIs define how organizational units interact. Each unit declares its mandate, its interfaces, its governance logic, its decision boundaries.

A committee, for example, becomes more than a calendar invite:

committee:
  name: Steering Committee
  type: governance
  cadence: bi-weekly
  governance:
    framework: DACI
    quorum: 3
  decisions:
    - type: budget
      authority: approve
      condition: "amount > 10k EUR"
      escalation: board
    - type: strategic
      authority: recommend
      escalation: board
  interfaces:
    receives_from:
      - source: delivery-leads
        artifact: budget-requests
    sends_to:
      - target: board
        artifact: recommendations

It's an alignment layer. When an agent needs to make a budget decision, it doesn't guess. It reads the spec. When a decision exceeds its authority, it knows where to escalate. When trade-offs arise, the boundaries are explicit.
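The lookup an agent would perform against a spec like the committee definition above can be sketched in a few lines. This is an illustrative sketch, not part of Org as Code: the spec is mirrored as a plain Python dict, and the rule structure, field names, and `resolve` function are my assumptions.

```python
# Sketch: how an agent might resolve its authority from an OPI-style spec.
# The spec mirrors the Steering Committee example above; all names are illustrative.

COMMITTEE_SPEC = {
    "name": "Steering Committee",
    "decisions": [
        {
            "type": "budget",
            "authority": "approve",
            "condition": {"field": "amount", "op": ">", "value": 10_000},  # EUR
            "escalation": "board",
        },
        {
            "type": "strategic",
            "authority": "recommend",
            "escalation": "board",
        },
    ],
}

def resolve(spec: dict, decision_type: str, **facts) -> dict:
    """Return what this unit may do with a decision, and where it escalates."""
    for rule in spec["decisions"]:
        if rule["type"] != decision_type:
            continue
        cond = rule.get("condition")
        if cond is not None:
            value = facts.get(cond["field"])
            # Condition not met: this rule grants no authority here.
            if value is None or not (cond["op"] == ">" and value > cond["value"]):
                continue
        return {"authority": rule["authority"], "escalation": rule["escalation"]}
    # No matching rule: the only safe move is to stop and ask.
    return {"authority": "none", "escalation": "ask_human"}

print(resolve(COMMITTEE_SPEC, "budget", amount=25_000))
# {'authority': 'approve', 'escalation': 'board'}
```

The point is the fallthrough at the bottom: a decision the spec doesn't cover doesn't get guessed at, it gets routed to a human.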

At least, that's the direction. Can you fully codify organizational intent? Probably not. Organizations are messy, political, alive. You won't replace human judgment with configuration files. But you can make enough of the boundaries explicit that an agent knows when to act and when to stop and ask. I don't have all the answers here. But the question feels right.

Air Canada's chatbot could have checked: "Am I authorized to make refund promises?" Klarna's agent could have weighed: "Does resolving this faster serve the customer, or just the metric?" With explicit intent, these questions at least have a foundation — not replacing judgment, but giving it guardrails.
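A guardrail like "you may inform, but you may not promise" can be made as literal as an allowlist. A minimal sketch, assuming invented action names — neither airline publishes such a list:

```python
# Sketch of a pre-action guard: before an agent commits to an action,
# it checks the action against explicit boundaries. All names are invented.

ALLOWED_ACTIONS = {"inform_policy", "quote_fare", "escalate"}
FORBIDDEN_ACTIONS = {"promise_refund", "change_policy"}

def guard(action: str) -> str:
    if action in FORBIDDEN_ACTIONS:
        return "refuse_and_escalate"  # hard boundary: never do this
    if action in ALLOWED_ACTIONS:
        return "proceed"
    return "ask_human"                # unknown territory: stop and ask

print(guard("promise_refund"))  # refuse_and_escalate
```

Trivial on purpose: the hard part isn't the check, it's that someone wrote the boundary down at all.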

Define.
Version.
Evolve.

Every organization already has intent. It's just locked in places machines can't reach.

When a new employee joins, they absorb culture by osmosis. They sit in meetings, watch how decisions get made, learn who to ask for what. It takes months, often years, before someone understands how the organization works beneath its org chart.

An agent has no months. No hallway conversations, no lunch with a mentor, no gut feeling built over years. It needs explicit alignment before day one. And that alignment needs to evolve — versioned, reviewed, tested against reality — just like code.
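"Tested against reality, just like code" can be taken literally: invariant checks run in CI against the spec files. A minimal sketch under assumptions — the spec structure is the illustrative committee example, and the invariants (every rule escalates somewhere, authority levels come from a known set, quorum is positive) are mine, not a real schema:

```python
# Sketch: invariant checks an organization might run in CI over its
# governance specs. Spec structure follows the illustrative example above.

SPEC = {
    "name": "Steering Committee",
    "governance": {"framework": "DACI", "quorum": 3},
    "decisions": [
        {"type": "budget", "authority": "approve", "escalation": "board"},
        {"type": "strategic", "authority": "recommend", "escalation": "board"},
    ],
}

def check_spec(spec: dict) -> list[str]:
    """Return a list of violations; an empty list means the spec is sound."""
    errors = []
    for rule in spec.get("decisions", []):
        # Every decision an agent can take must name a place to escalate to.
        if not rule.get("escalation"):
            errors.append(f"{rule.get('type')}: no escalation path")
        if rule.get("authority") not in {"approve", "recommend", "veto"}:
            errors.append(f"{rule.get('type')}: unknown authority level")
    if spec.get("governance", {}).get("quorum", 0) < 1:
        errors.append("quorum must be at least 1")
    return errors

print(check_spec(SPEC))  # prints [] when the spec passes its invariants
```

A pull request that removes an escalation path then fails the build, the same way a broken unit test would.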

The intent gap will define which organizations succeed with AI. The ones that write down what they want, in a format both humans and machines can act on, will be the ones whose agents actually work.

The question isn't whether to close the gap. It's who writes the spec and encodes the intent.

That's what I'm working on. Not because configuration files will save the world, but because organizations deserve the same clarity we give our code. Org as Code is my attempt to close that gap: open source, version-controlled, evolving. If you're thinking about these things too, I'd like to hear from you.