
Doctrine Layer — Give AI Constraints and Objectives, Not Instructions

Defining what AI should uphold and achieve, rather than dictating how it should act step by step.

About This Document

This document addresses two open issues simultaneously:

The connecting insight is a single design principle:

Don't tell AI "do it this way."
Tell AI "satisfy these conditions, within these resources, at this reliability level."

This shifts AI input from imperative instructions to declarative intent — constraints and objectives that remain valid regardless of how the AI chooses to implement them.

Audience: Engineers building AI agent systems who want to move beyond step-by-step prompting toward principled, constraint-based AI governance. Also useful for team leads establishing shared guidelines across multiple agents.

Position of This Page

01-vision (WHY — Why we need authoritative references)
02-reference-sources (WHAT — What to use as references)
03-architecture (HOW — How to structure the system)
04-ai-design-patterns (WHICH — Which pattern to choose and when)
05-solving-ai-limitations (REALITY — How to face real-world constraints)
06-physical-ai (EXTENSION — Extending the three-layer model to the physical world)
This page (DOCTRINE — On what basis should AI judge and act?)

Meta Information
| Item | Description |
|---|---|
| What this chapter establishes | Doctrine's three components (Objectives, Constraints, Judgment Criteria), autonomy levels, Agent execution loop |
| What this chapter does NOT cover | Detailed Evaluation/Evals design (future extension candidate), specific agent implementation |
| Dependencies | 03-architecture (the three layers that doctrine governs), 06-physical-ai (doctrine application in the physical world) |
| Common misuse | Viewing doctrine as a "strict rulebook." Doctrine is both a constraint and a "definition of posture" that enables autonomous judgment |

Position in Document Series

| Document | Central Question |
|---|---|
| 01-vision | Why does AI need guidance? |
| 02-reference-sources | What should AI know? |
| 03-architecture | Where do components live? |
| 04-design-patterns | Which pattern to use? |
| 05-solving-limitations | How to mitigate AI constraints? |
| 07-doctrine-and-intent | On what basis should AI judge and act? |

The Missing Layer

The existing three-layer architecture (03-architecture) defines Agent, Skills, and MCP layers — covering what AI knows and what AI can do. But it leaves a critical gap: on what basis does AI judge and decide?

Current Three-Layer Model

The existing architecture covers what AI knows and what AI can do:

What's Missing: The Basis for Judgment

Mapping to the OODA loop (as identified in #28):

| OODA Phase | Role | Current Coverage |
|---|---|---|
| Observe | Gather context, sensor data | MCP layer (✅ covered) |
| Orient | Apply judgment criteria, policies | Not explicitly defined |
| Decide | Prioritize, resolve trade-offs | Not explicitly defined |
| Act | Execute via tools | MCP layer (✅ covered) |

The Orient and Decide phases — where judgment criteria and decision-making principles live — have no dedicated home in the current architecture. This is where the Doctrine Layer belongs.
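The Orient and Decide hooks can be sketched in code. This is an illustrative sketch, not an existing API: `Doctrine`, `ooda_step`, and the option dictionaries are all assumptions made for this example.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch only: Doctrine and ooda_step are assumptions for
# this document, not an existing API.

@dataclass
class Doctrine:
    constraints: list[Callable]  # each returns True if an option is allowed
    criteria: list[Callable]     # scoring functions, highest priority first

def ooda_step(doctrine: Doctrine, observe, propose, act):
    context = observe()                                    # Observe (MCP layer)
    options = propose(context)
    # Orient: filter candidate actions through doctrine's constraints
    allowed = [o for o in options
               if all(check(o) for check in doctrine.constraints)]
    # Decide: rank the survivors lexicographically by judgment criteria
    best = max(allowed, key=lambda o: tuple(f(o) for f in doctrine.criteria))
    return act(best)                                       # Act (MCP layer)

# Toy run: choose a deployment option; doctrine filters out unreviewed
# options and prefers safety over speed.
doctrine = Doctrine(
    constraints=[lambda o: o["reviewed"]],
    criteria=[lambda o: o["safety"], lambda o: o["speed"]],
)
result = ooda_step(
    doctrine,
    observe=lambda: None,
    propose=lambda ctx: [
        {"name": "fast", "reviewed": True, "safety": 1, "speed": 9},
        {"name": "safe", "reviewed": True, "safety": 5, "speed": 3},
        {"name": "rogue", "reviewed": False, "safety": 9, "speed": 9},
    ],
    act=lambda o: o["name"],
)
print(result)  # safe
```

The point of the sketch: the agent's freedom lives entirely in `propose`; doctrine never dictates the candidate set, only filters and ranks it.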

Four-Layer Model with Doctrine

Theoretical Foundation: Correspondence with the BDI Model

The Doctrine Layer's design structurally corresponds to the BDI (Belief-Desire-Intention) Model, a classical framework in agent research.

| BDI Element | Meaning | Correspondence in This Architecture |
|---|---|---|
| Belief | What the agent knows about the world | Skills layer domain knowledge + MCP layer context |
| Desire | What the agent wants to achieve | Doctrine's Objectives |
| Intention | The action plan the agent has adopted | Agent layer's task planning (selected within doctrine constraints) |

In the BDI model, Desire defines "what to achieve" and Intention defines "how to act toward it." The Doctrine Layer explicitly declares Desire and surrounds it with constraints and judgment criteria, thereby ensuring the quality of Intentions.

Intent = Goal/Objective

In this document, "Intent" is synonymous with "Goal" and "Objective" in AI research literature. It defines the direction — "where to head" — without specifying the concrete steps to get there.

Core Principle: Constraints and Objectives Over Instructions

With the four-layer model established, we can now articulate the core design principle that drives the Doctrine Layer: give AI constraints and objectives, not step-by-step instructions. This principle mirrors the broader trajectory of software abstraction.

The Abstraction History of Software

Every generation of software development has raised the level of abstraction for what humans provide:

| Era | What Humans Provide | What Machines Handle |
|---|---|---|
| Assembly | Register operations | Instruction encoding |
| C | Logic description | Memory management |
| Python | Intent in code | Type handling, GC |
| AI (current) | Intent in natural language | Code generation |
| AI (next) | Constraints + Objectives | Implementation decisions |

The key insight: as abstraction rises, the human input shifts from "how" to "what" and ultimately to "why" and "within what bounds."

Instructions vs. Intent

| Approach | Example | Outcome |
|---|---|---|
| Imperative (instructions) | "Search for 'digital signature' in the spec, then extract Section 12.8, then list all shall requirements" | Brittle: breaks if the spec structure changes |
| Declarative (intent) | "Verify that our implementation satisfies all normative requirements for digital signatures in PDF 2.0" | Resilient: AI chooses the right tools and path |

The declarative approach requires the AI to have:

  1. Objectives — what success looks like
  2. Constraints — what must be respected
  3. Judgment criteria — how to evaluate trade-offs

These three elements form the Doctrine.
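As a minimal sketch of how the three components might be assembled into an agent's system prompt (the helper name and output format are assumptions for this example, not part of any real API):

```python
# Hypothetical helper: render the three doctrine components into one
# prompt block. Section titles follow the structure described above.

def render_doctrine(objectives, constraints, criteria):
    sections = [
        ("Objectives", objectives),
        ("Constraints", constraints),
        ("Judgment Criteria", criteria),
    ]
    lines = []
    for title, items in sections:
        lines.append(f"## {title}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

prompt = render_doctrine(
    ["Code coverage MUST NOT fall below 80%"],
    ["MUST NOT commit code without passing tests"],
    ["Security > Performance > Convenience"],
)
print(prompt)
```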

Why This Matters for Development

Whether AI-driven or human-driven, the essential development process remains the same:

This cycle does not change whether code is written by hand, generated by AI, or produced by future compilation technologies. What changes is the abstraction level — but "Did you define requirements? Did you design? Did you test? Did you verify?" remains universal.

The Doctrine Layer formalizes the governance of this process for AI agents.

Doctrine Layer Structure

The Doctrine Layer draws its structural inspiration from military doctrine — a well-established framework for enabling autonomous decision-making under uncertainty. The key insight is that military organizations have long solved the problem we now face with AI agents: how do you ensure consistent judgment when direct communication is impossible?

Military Doctrine Mapping

As analyzed in #30, the military doctrine hierarchy maps directly to AI agent configuration:

| Military Concept | Role | Claude Code Location | AI Architecture Layer |
|---|---|---|---|
| Doctrine | Shared principles for all | CLAUDE.md (root) | Doctrine Layer |
| ROE | Allowed / prohibited actions | .claude/rules/ | Doctrine Layer |
| SOP | Standardized procedures | .claude/skills/ | Skills Layer |
| Operations Order | Specific task directives | .claude/commands/ | Agent Layer |
| Unit Organization | Specialized capabilities | .claude/agents/ | Agent Layer |
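Assuming a conventional Claude Code project layout, the table's locations can be pictured as a directory tree (a sketch, not a prescribed structure):

```
project/
├── CLAUDE.md          # Doctrine: shared principles for all agents
└── .claude/
    ├── rules/         # ROE: allowed / prohibited actions
    ├── skills/        # SOP: standardized procedures
    ├── commands/      # Operations orders: specific task directives
    └── agents/        # Unit organization: specialized capabilities
```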

Doctrine in the Physical World: Foundation for Distributed Consensus

The importance of doctrine is most pronounced in the context of Physical AI. In multi-agent robotics, communication blackouts can make direct coordination between agents impossible. In such cases, shared doctrine functions as the foundation for distributed consensus.

```
Communication available: Commander (cloud) → Unit (edge) "Avoid Shelf 3"
Communication lost:      Each unit decides independently based on shared doctrine
                         → "Collision avoidance is top priority" "Stop for unknown obstacles"
```

This is the very archetype of military doctrine — the scenario where doctrine elevates from "constraints" to "a mechanism that guarantees consistent behavior even without communication." In the physical world, where decision errors can produce irreversible consequences, doctrine also functions as a safeguard.
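The fallback above can be sketched as follows; all identifiers (`SHARED_DOCTRINE`, `decide`, the observation strings) are illustrative assumptions, not a real robotics API:

```python
# Sketch of the communication-loss fallback. Every unit carries the same
# doctrine, so independent decisions stay consistent across the fleet.

SHARED_DOCTRINE = {
    "collision_risk": "avoid",        # collision avoidance is top priority
    "unknown_obstacle": "stop",       # stop for unknown obstacles
}

def decide(observation, commander_order=None):
    # With a link, the commander's order wins (e.g. "Avoid Shelf 3").
    if commander_order is not None:
        return commander_order
    # Without a link, fall back to the shared doctrine.
    return SHARED_DOCTRINE.get(observation, "proceed")

print(decide("unknown_obstacle"))                   # stop
print(decide("unknown_obstacle", "avoid_shelf_3"))  # avoid_shelf_3
print(decide("clear_aisle"))                        # proceed
```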

Three Components of Doctrine

1. Objectives — What Success Looks Like

Define the "why" — not the "how":

```markdown
## Objectives

- All public API endpoints MUST comply with RFC 7231 (HTTP Semantics)
- Translation output MUST achieve xCOMET score ≥ 0.85
- Code coverage MUST NOT fall below 80%
```

2. Constraints — What Must Be Respected

Define boundaries that no agent may cross:

```markdown
## Constraints

- MUST NOT commit code without passing tests
- MUST NOT call external APIs without security review
- MUST verify against authoritative specification before claiming compliance
- MUST request human approval for destructive operations
```

3. Judgment Criteria — How to Evaluate Trade-offs

Define how to prioritize when objectives conflict:

```markdown
## Judgment Criteria

- Security > Performance > Convenience
- Specification compliance > Implementation speed
- When uncertain, ask the user rather than assume
- Prefer structured access over similarity search for specification documents
```

The last criterion — "prefer structured access over similarity search for specification documents" — is an example of how architectural decisions (like choosing MCP over RAG for ISO specifications) become doctrine.

Autonomy Levels

A critical function of the Doctrine Layer is defining how much freedom each agent has. Not all agents need the same level of autonomy — a formatting agent requires far less oversight than a production deployment agent. The Doctrine Layer should define where each agent sits on this spectrum:

| Level | When to Use | Example |
|---|---|---|
| Level 1 | High-risk operations (production deployment) | Database migration agent |
| Level 2 | Moderate risk (code changes) | Code review agent |
| Level 3 | Low risk, high volume (translations) | Translation workflow agent |
| Level 4 | Routine tasks with clear constraints | Formatting, linting agent |

The doctrine defines the default autonomy level and the conditions for escalation.
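One way to operationalize this is a lookup that gates actions on autonomy level. The agent names, level assignments, and escalation rules below are assumptions made for the sketch:

```python
# Illustrative autonomy registry: Level 1 = fully supervised,
# Level 4 = full autonomy within constraints.

AUTONOMY = {
    "db-migration": 1,   # high risk: human approves every action
    "code-review": 2,
    "translation": 3,    # supervised autonomy: human reviews final output
    "linting": 4,        # routine task with clear constraints
}

def needs_human_approval(agent: str, destructive: bool) -> bool:
    level = AUTONOMY.get(agent, 1)  # unknown agents default to the strictest level
    if destructive:
        return True                  # doctrine: always escalate destructive ops
    return level <= 2                # low-autonomy agents escalate every action

print(needs_human_approval("linting", destructive=False))       # False
print(needs_human_approval("db-migration", destructive=False))  # True
print(needs_human_approval("linting", destructive=True))        # True
```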

Integration with Existing Architecture

The Doctrine Layer does not replace the existing three layers — it governs them. Each existing layer continues to operate as before, but now with explicit principles guiding its behavior. The following diagram shows how the three components of doctrine (Objectives, Constraints, Judgment Criteria) flow into each layer.

How Doctrine Feeds Each Layer

Note how specification MCPs feed the Doctrine Layer: pdf-spec-mcp's get_requirements extracts normative requirements (shall/must/may) that become constraints in the doctrine. This is the architectural reason why structured access to specifications matters more than RAG-based similarity search — doctrine needs precise, authoritative constraints, not "similar-sounding passages."

Agent Execution Loop and Doctrine

The following shows how doctrine operates within an agent's execution cycle.

| Phase | Role | Relationship with Doctrine |
|---|---|---|
| Intent | Define the objective to achieve | Supplied by doctrine's "Objectives" |
| Doctrine | Apply constraints and judgment criteria | This is the Doctrine Layer's direct point of action |
| Reasoning | Analyze the situation and evaluate options | Resolve trade-offs based on judgment criteria |
| Planning | Formulate an action plan | Select optimal steps within constraint boundaries |
| Action | Execute via tools | Guarantee no constraint violations |
| Evaluation | Assess results against objectives and constraints | Verify objective achievement and constraint compliance |

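The cycle can be sketched as a pipeline in which doctrine constraints prune the plan before any action runs. All function names here are illustrative, not a real framework:

```python
# Sketch of the Intent → Doctrine → Reasoning/Planning → Action →
# Evaluation cycle. Doctrine acts between planning and execution.

def run_cycle(objective, constraints, plan_fn, act_fn, evaluate_fn):
    # Reasoning + Planning: draft steps, then drop any that violate doctrine
    plan = [step for step in plan_fn(objective)
            if all(ok(step) for ok in constraints)]
    results = [act_fn(step) for step in plan]   # Action: only compliant steps run
    return evaluate_fn(results, objective)      # Evaluation: check against objective

outcome = run_cycle(
    objective="translate section 12",
    constraints=[lambda step: step != "delete_source"],   # forbidden step
    plan_fn=lambda obj: ["load_glossary", "translate", "delete_source"],
    act_fn=lambda step: f"{step}: done",
    evaluate_fn=lambda results, obj: len(results),
)
print(outcome)  # 2 steps executed; the violating step was filtered out
```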
Dynamic Constraint Injection

Normative requirements (shall/must/may) extracted from specification MCPs function not as static doctrine but as dynamic constraints injected into doctrine at runtime. For example, requirements returned by pdf-spec-mcp's get_requirements dynamically expand the doctrine's constraint set during translation, implementation, and verification phases.

This means doctrine is not limited to "pre-written fixed rules" — it becomes a living constraint system that automatically evolves as specifications are updated.
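A sketch of the injection step follows. The `get_requirements` stub stands in for the pdf-spec-mcp tool of that name; its return shape and the sample requirement text are assumptions for this example:

```python
# Dynamic constraint injection: static doctrine plus spec-derived
# constraints fetched at runtime, so the doctrine tracks the current
# specification automatically.

STATIC_DOCTRINE = ["MUST NOT commit code without passing tests"]

def get_requirements(section: str) -> list[str]:
    # Stub simulating an MCP call that extracts normative (shall/must/may)
    # requirements for the given spec section.
    return ["The file trailer shall contain the startxref keyword"]

def doctrine_for_task(section: str) -> list[str]:
    # Merge the pre-written rules with runtime-extracted constraints.
    return STATIC_DOCTRINE + get_requirements(section)

active = doctrine_for_task("7.5.5")
print(len(active))  # 2: one static rule plus one injected requirement
```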

Connection to Development Phases

Each development phase (development-phases) maps to doctrine elements:

| Development Phase | Doctrine Element | MCP Support |
|---|---|---|
| Requirements Definition | Objectives + Constraints | rfcxml-mcp, w3c-mcp, hourei-mcp |
| Design | Judgment Criteria | Skills (design patterns) |
| Implementation | Constraints (coding standards) | rxjs-mcp, linting tools |
| Testing | Objectives (coverage targets) | xcomet-mcp, test frameworks |
| Verification | All three elements | pdf-spec-mcp (get_requirements) |

Practical Example: Doctrine for a Translation Workflow

To make the concept concrete, here is a complete doctrine definition for a translation workflow. This example shows how the three doctrine components (Objectives, Constraints, Judgment Criteria) plus autonomy level work together to enable an agent to operate independently while maintaining quality and consistency.

```markdown
# Translation Workflow Doctrine

## Objectives

- Produce Japanese translations of technical specifications
  that maintain terminology consistency and technical accuracy
- Achieve xCOMET quality score ≥ 0.85

## Constraints

- MUST use registered glossary for domain terminology
- MUST NOT translate PDF specification keywords (shall, object, stream)
  differently across sections
- MUST preserve section numbering and cross-references
- Translation of a single specification SHOULD complete within one session

## Judgment Criteria

- Terminology consistency > Natural-sounding Japanese
- When a term has multiple valid translations, prefer the glossary entry
- If xCOMET score < 0.80, re-translate the segment before proceeding
- If quality cannot be improved after 2 attempts, flag for human review

## Autonomy Level

- Level 3 (Supervised Autonomy): Agent translates and evaluates,
  human reviews final output
```

This doctrine enables the translation agent to make decisions without explicit instructions for every situation — exactly what military doctrine was designed to achieve: "when communication is cut, every unit still makes the same judgment."
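The re-translation and escalation criteria translate directly into control flow. This sketch assumes hypothetical `translate` and `xcomet_score` callables; the 0.80 threshold and two-attempt limit come from the doctrine text above:

```python
# Sketch of the doctrine's quality loop: re-translate below 0.80,
# flag for human review after two failed attempts.

def translate_segment(segment, translate, xcomet_score, max_attempts=2):
    for attempt in range(1, max_attempts + 1):
        draft = translate(segment, attempt)
        if xcomet_score(draft) >= 0.80:
            return draft, "accepted"
    return draft, "flag_for_human_review"   # doctrine: escalate after 2 tries

# Toy scoring: the first draft misses the threshold, the second clears it.
scores = {"draft-1": 0.72, "draft-2": 0.88}
result, status = translate_segment(
    "shall be encoded",
    translate=lambda seg, n: f"draft-{n}",
    xcomet_score=lambda d: scores[d],
)
print(status)  # accepted
```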

What to Give AI: A Summary

The following diagram distills the entire Doctrine Layer philosophy into a single visual: what you should provide to AI agents versus what you should avoid providing. This serves as a quick reference for teams adopting the doctrine-based approach.

The Doctrine Layer is where this principle becomes operational — encoded into CLAUDE.md, .claude/rules/, and the governance structure that surrounds all agent activity.

Core Messages

| Principle | Summary |
|---|---|
| Give constraints, not instructions | Shift AI input from imperative procedures to declarative intent |
| Doctrine has three components | Objectives (what to achieve) + Constraints (what to respect) + Judgment Criteria (how to prioritize) |
| Autonomy is a spectrum | Explicitly define Level 1 (fully supervised) through Level 4 (full autonomy) |
| Foundation for distributed consensus | Guarantees consistent judgment even during communication blackouts (the archetype of military doctrine) |
| Dynamic constraint injection | Normative requirements from specification MCPs expand doctrine at runtime |

Released under the MIT License.