
Solving AI Limitations — Practical Approaches Available Today

Organizing what we can actually do now against AI's four fundamental limitations.

Positioning of This Page

01-vision (WHY — Why unshakeable references are needed)
02-reference-sources (WHAT — What qualifies as a reference source)
03-architecture (HOW — How to structure the system)
04-ai-design-patterns (WHICH — Which pattern to choose and when)
This page (REALITY — How far can we address each constraint?)

This page treats AI constraints not as "defects" but as "properties", and draws the boundary between what is solvable and what is not.

Meta Information

| Item | Content |
| --- | --- |
| What this chapter establishes | Distinction between knowledge constraints and institutional constraints, verification loop structure, bounded autonomy principle |
| What this chapter does NOT cover | "Complete solutions" to constraints. This chapter draws boundaries — it does not promise universal fixes |
| Dependencies | 01-vision (definition of the 4 constraints), 04-ai-design-patterns (pattern inventory) |
| Common misuse | Expecting "MCP adoption resolves all constraints." Institutional constraints (authority, accountability) cannot be solved by technology alone |

About This Document

In 01-vision and 02-reference-sources, we defined the problem of AI's four fundamental limitations (accuracy, currency, authority, and accountability). In 04-ai-design-patterns, we organized design patterns such as RAG and MCP.

This document organizes practical solution approaches available today to address these limitations. While perfect solutions don't exist, by understanding the nature of each constraint and combining appropriate approaches, we can reduce risks to a practical level.

What This Document Does Not Solve

To avoid misunderstanding, this document explicitly states what it does not promise.

| What this document shows | What this document does NOT guarantee | What it provides instead |
| --- | --- | --- |
| Boundary of solvability for each constraint | That all AI constraints can be resolved by technology | Realistic coping strategies per constraint |
| Mitigation approaches via MCP/RAG etc. | Complete elimination of hallucinations | Verification workflows + explicit uncertainty |
| Technical foundation for accountability | Resolution of legal/ethical responsibility issues | Design guidelines for audit trails and traceability |

Four Constraints and Solution Feasibility Overview

First, let's look at the full picture. The four constraints differ greatly in how far they can be solved, depending on their nature.

| Constraint | Solvability | Reason |
| --- | --- | --- |
| Currency | ◎ Largely solvable | Can be addressed through web search and real-time access via MCP |
| Accuracy | △ Mitigable, not completely preventable | Because LLM generation is fundamentally probabilistic, complete elimination is impossible in principle |
| Authority | △ Mitigable, not completely solvable | AI output is ultimately "one interpretation" and cannot become official authority |
| Accountability | ✗ Not solvable by technology alone | A matter of legal and ethical institutional design; technology only provides the foundation |

Constraint Categories and Coping Strategies

The four constraints fall into two broad categories.

| Category | Constraints | Nature | Coping Strategy |
| --- | --- | --- | --- |
| Knowledge constraints | Currency, Accuracy | Arise from the LLM's training and generation process | External knowledge injection + verification workflows |
| Institutional constraints | Authority, Accountability | Arise from the social positioning of information | Technical foundation + human judgment |

Knowledge constraints can be mitigated by technology, but institutional constraints cannot be solved by technology alone. This distinction is the starting point for design decisions.

Architecture Three-Layer Model and Four Constraints Mapping

How the three-layer model (Agent / Skills / MCP) from 03-architecture contributes to each constraint.

The MCP Layer contributes partially to all constraints. For currency, accuracy, and authority in particular, structured external access provides direct mitigation.

Verification Loop — Separating Probabilistic Reasoning from Deterministic Verification

The core design for mitigating AI constraints is separating probabilistic reasoning (LLM generation) from deterministic verification (tool-based validation).

Bounded Autonomy

The agents in this project do not aim to "make autonomous judgments on everything." Agent autonomy is bounded to what can be verified. For decisions that cannot be verified, the design escalates to human judgment (see Responsibility Shift Model).
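
As a sketch, bounded autonomy reduces to a single gate: only verified claims are acted on autonomously, and everything else is handed to a human. The function and status strings below are illustrative, not part of any real agent framework.

```python
# Bounded autonomy as a gate: autonomous action is allowed only inside the
# verified boundary; unverifiable decisions escalate to a human.

def decide(status: str) -> str:
    """Map a verification status to an action policy."""
    if status == "verified":
        return "proceed"            # within the verified boundary: act autonomously
    return "escalate_to_human"      # estimated/uncertain: hand off the decision

print(decide("verified"))   # proceed
print(decide("uncertain"))  # escalate_to_human
```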

Solving Currency — The Constraint with the Clearest Solutions

1.1 The Nature of the Problem

LLM knowledge is fixed at the point of training data (details: 02-reference-sources 1.2.2). However, for this constraint, there is a clear solution: connecting to external information sources.

1.2 Solution Approaches

Web Search

The most convenient approach: AI assistants such as Claude can retrieve real-time information through built-in web search.

| Advantages | Constraints |
| --- | --- |
| No additional cost | Search result reliability not guaranteed |
| Available immediately | Results are insufficiently structured |
| Covers a wide range of sources | High noise |

Real-time Access via MCP

Directly accessing authoritative information sources in a structured manner. This is the approach this project promotes.

rfcxml-mcp   → Get latest RFCs directly from IETF
hourei-mcp   → Get latest laws from e-Gov API
w3c-mcp      → Get latest specs from W3C/WHATWG

| Advantages | Constraints |
| --- | --- |
| Get directly from authoritative sources | MCP server development and maintenance required |
| Structured data | A separate MCP is needed for each target domain |
| Verifiable sources | Cannot cover all information sources |
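
The "one MCP per target domain" constraint can be made concrete with a small router that maps a domain to its server and reports domains no server covers. The server names come from the list in this section; the router itself is a hypothetical sketch.

```python
# Illustrative domain-to-server routing: each authoritative domain needs its
# own MCP server, and uncovered domains must fall back to other means.
SERVERS = {
    "rfc": "rfcxml-mcp",      # IETF RFCs
    "law": "hourei-mcp",      # e-Gov statutes
    "web-spec": "w3c-mcp",    # W3C/WHATWG specs
}

def route(domain: str) -> str:
    """Pick the MCP server for a domain, or flag the gap explicitly."""
    server = SERVERS.get(domain)
    if server is None:
        # Uncovered domain: fall back to web search or human lookup.
        return "uncovered"
    return server

print(route("rfc"))        # rfcxml-mcp
print(route("medical"))    # uncovered
```

Making the gap explicit (rather than silently answering from training data) is what keeps the coverage limitation manageable.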

Regular RAG Index Updates

For organizational documents, ensuring currency is possible by regularly updating RAG indexes.
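
One way to sketch this: re-embed only documents whose modification time is newer than the last index build. The `embed()` stub and timestamp handling below are illustrative; a real pipeline would call an embedding model and use file mtimes or CMS metadata.

```python
# Incremental RAG index refresh: only documents changed since the last run
# (or never indexed) are re-embedded. embed() is a stub, not a real model.

def embed(text: str) -> list[float]:
    return [float(len(text))]          # stub embedding

def refresh_index(docs: dict[str, tuple[float, str]],
                  index: dict[str, list[float]],
                  last_run: float) -> int:
    """Re-embed docs whose modified-time is newer than the last run."""
    updated = 0
    for doc_id, (mtime, text) in docs.items():
        if mtime > last_run or doc_id not in index:
            index[doc_id] = embed(text)
            updated += 1
    return updated

docs = {"policy.md": (100.0, "old policy"), "faq.md": (200.0, "new faq")}
index: dict[str, list[float]] = {}
print(refresh_index(docs, index, last_run=150.0))  # first run indexes both: 2
```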

1.3 Implementation Evaluation

Currency is the constraint with the clearest solution among the four, and can be practically solved by combining appropriate tools. The remaining challenge is "the practical difficulty of covering all information sources in real-time," which is manageable as an operational issue.

Mitigating Accuracy — Complete Elimination is Impossible in Principle

2.1 The Nature of the Problem

Hallucination (generating information that contradicts facts) stems from the fundamental nature that LLMs generate probabilistically "plausible" rather than "correct" output (details: 02-reference-sources 1.2.1).

This is not a "bug" but a "feature," and it is impossible to eliminate it completely in principle.

2.2 Mitigation Approaches

External Knowledge Injection (MCP / RAG)

Reduce the frequency of hallucinations by providing AI with accurate information sources.

MCP Effect:
  Question: "What does status code 1006 in RFC 6455 mean?"

  Without MCP → AI generates "plausible" answer from training data (may be wrong)
  With MCP → rfcxml-mcp retrieves original text → accurate answer possible

  → Hallucination frequency drops substantially (though not to zero)

However, hallucination may still occur in the interpretation of data retrieved by MCP.

Output Verification (Multi-stage Checks)

A workflow to verify AI output through alternative means.

This project's validate_statement() in rfcxml-mcp is designed for exactly this purpose.

Explicit Uncertainty

A mechanism for AI to make explicit when it cannot be confident in its answer.

Verification Status Levels:
  ✅ Verified: Information verified through MCP/original source
  ⚠️ Estimated: Based on training data but not verified from source
  ❓ Uncertain: Information not found or contradictory

Human Review

Establish a human review process as final quality assurance. Especially critical for important judgments (legal decisions, security requirements, etc.).

2.3 Implementation Evaluation

Complete accuracy assurance is impossible in principle, but by combining the above approaches, we can significantly reduce practical risks. The key is building a culture that "doesn't simply trust AI output" and creating verifiable workflows.

Ensuring Authority — Risk Management Rather Than Complete Solution

3.1 The Nature of the Problem

AI output is "one interpretation" and not an official view (details: 02-reference-sources 1.2.3). Only the creators (IETF, legislative bodies, etc.) can provide the "official interpretation" of RFCs, laws, and standard specifications.

This constraint arises from the nature of information authority itself rather than AI characteristics, making it impossible to completely solve with technology alone.

3.2 Response Approaches

Direct Access to Original Sources

Access primary information sources directly through MCP and present the original text rather than AI interpretation.

Rather than authorizing the AI's output,
present the original source itself so humans can decide.

Example:
  AI: "The original text of RFC 6455 Section 7.4.1 is as follows:"
      ↓ Present original text
  Human: "I see, this case should be interpreted this way"
      ↓ Human decides

Explicit Citation

Explicitly indicate section numbers of information sources in all AI output. This allows humans to independently verify.

For details on output templates, see 02-reference-sources Chapter 4.

Human Review Workflow

For judgments where authority is required (legal decisions, spec interpretation, etc.), build a workflow that treats AI output as draft/reference information and has humans make the final decision.

AI's Role: Information collection, organization, candidate presentation
Human's Role: Final decision and approval

→ AI is "an excellent research assistant" not "an omniscient answerer"

Multi-angle Verification

Verify information from different angles using multiple MCPs.

3.3 Implementation Evaluation

Rather than "completely solving" authority, the mindset of managing it as a risk is important. AI improves efficiency of access to authoritative information sources but cannot replace the authority itself.

Ensuring Accountability — The Intersection of Technology and Institutional Design

4.1 The Nature of the Problem

AI output has no clear subject of accountability (details: 02-reference-sources 1.2.4). This is more a legal and ethical issue than a technical one.

4.2 Foundation Technology Can Provide

While technology alone cannot completely solve accountability, it can provide the foundation supporting it.

Ensuring Audit Trail

Recording what data AI references, how it processes it, and how it generates output.

Information to Record:
  - When: Timestamp
  - What: Referenced data sources (MCP call logs)
  - How: Process steps (prompts, tool call chains)
  - What Output: Final output
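
The four fields above map naturally onto one JSON line per AI interaction, appended to an append-only log. The field names and helper below are an illustrative sketch.

```python
# Audit-trail record covering when / what / how / what-output, serialized as
# one JSON line so it can be appended to a log file. Names are illustrative.
import json
import time

def audit_record(sources: list[str], steps: list[str], output: str) -> str:
    entry = {
        "when": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "what": sources,     # referenced data sources (e.g. MCP call ids)
        "how": steps,        # prompt and tool-call chain
        "output": output,    # final output shown to the user
    }
    return json.dumps(entry, ensure_ascii=False)

line = audit_record(
    sources=["rfcxml-mcp: get_requirements(6455, section='7.4.1')"],
    steps=["user prompt", "tool call", "answer synthesis"],
    output="1006 must not appear in a Close frame",
)
print(line)
```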

Traceability

The ability to trace back from AI output to the original sources referenced. Structured source information provided by MCP makes this possible.

AI Output: "Status code 1006 must not be included in Close frame"
    ↓ Trace
Reference: rfcxml-mcp → get_requirements(6455, section="7.4.1")
    ↓ Trace
Original: RFC 6455, Section 7.4.1, MUST NOT requirement
    ↓ Verify
Humans can verify the original source directly
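
This chain becomes machine-checkable if each artifact records its immediate source; a reviewer (or a script) can then walk the links from the answer back to the original text. The data layout below is an illustrative sketch.

```python
# Traceability as a linked chain: each artifact points to what it was derived
# from, terminating at the original source. Identifiers are illustrative.
CHAIN = {
    "answer": "mcp_call",
    "mcp_call": "rfc_6455_7_4_1",
    "rfc_6455_7_4_1": None,   # root: the original source itself
}

def trace(artifact: str) -> list[str]:
    """Walk derivation links from an artifact back to its root source."""
    path = [artifact]
    while CHAIN.get(path[-1]) is not None:
        path.append(CHAIN[path[-1]])
    return path

print(trace("answer"))  # ['answer', 'mcp_call', 'rfc_6455_7_4_1']
```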

Here is where transparency of open-source MCP has value. If MCP server source code is publicly available, we can verify "how data is obtained and processed" at the code level. Proprietary pipelines make this verification impossible, breaking the chain of traceability.

Data Integrity Assurance

A mechanism ensuring that data retrieved by MCP from external services has not been tampered with. The following technologies will be necessary in the future.

| Technology | Purpose | Standard |
| --- | --- | --- |
| Timestamp | Prove the time of data existence | RFC 3161 |
| HTTP Message Signatures | Ensure communication authenticity | RFC 9421 |
| Digital signature | Prove data has not been tampered with | Digital Signature Law / eIDAS |

Currently, such mechanisms are not built into the MCP protocol, but since MCP handles connections to external services, ensuring integrity of connection paths and retrieved data is an unavoidable future challenge.
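
As a minimal illustration of the non-tampering idea, a SHA-256 digest recorded at retrieval time detects later modification. This sketch does not implement RFC 3161 timestamps or RFC 9421 signatures; it only shows the digest-pinning primitive such mechanisms build on.

```python
# Integrity pinning: record a SHA-256 digest when data is first retrieved,
# and reject the data later if the digest no longer matches.
import hashlib

def digest(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

original = b"RFC 6455, Section 7.4.1 ..."
pinned = digest(original)

assert digest(original) == pinned          # unchanged payload passes
assert digest(b"tampered text") != pinned  # modification is detected
print("integrity check ok")
```

A signature scheme additionally binds the digest to the publisher's identity; the digest alone only proves the data has not changed since it was pinned.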

Building Governance Framework

Beyond technology scope, organizations need to systematically establish responsibility for AI output.

Who: who gives final approval to AI output
Based on what: which guidelines the decision follows
How to record: how evidence of the approval process is kept
When issues arise: who responds, and how

4.3 Implementation Evaluation

Accountability is an area of institutional design beyond technology. Technology (audit trail, traceability, data integrity) provides its foundation, but ultimately requires legal frameworks, organizational structures, and industry standard improvements.

What individuals and small teams can do now is establish technological foundations as much as possible and prepare for future institutional design. This project's MCP approach (open-source, explicit sourcing, structured access) is precisely building this foundation.

Overall Picture of Four Approaches

5.1 Summary Table

| Constraint | Solvability | Primary Means | This Project's Response |
| --- | --- | --- | --- |
| Currency | ◎ Largely solvable | Web search, MCP, RAG updates | Real-time retrieval via rfcxml-mcp etc. |
| Accuracy | △ Mitigable | MCP/RAG injection, verification, uncertainty declaration | validate_statement(), explicit sourcing |
| Authority | △ Risk management | Original source access, explicit citation, human review | Access to primary sources via MCP |
| Accountability | ✗ Institutional design required | Audit trail, traceability, signatures | Transparency through open-source MCP |

5.2 Mapping Approaches and Solutions

5.3 Key Understanding

We cannot solve all AI constraints with technology alone. These approaches do not completely eliminate the constraints. The premise is selecting strategies appropriate to each constraint's nature.

| Constraint | Strategy | Stance |
| --- | --- | --- |
| Currency | Largely solvable with technology | Solve |
| Accuracy | Mitigate with technology, accept residual risk | Mitigate + Accept |
| Authority | Manage risk with technology, humans make final decisions | Risk management |
| Accountability | Technology provides the foundation, defer to institutional design | Foundation building |

The important thing is not aiming for "complete solution" but understanding each constraint's nature and selecting appropriate approaches.

This project's MCP approach contributes at least partially to all four constraints. Improving accuracy, currency, and authority through "unshakeable reference sources," while building the foundation for accountability through open-source transparency, is the most practical approach achievable today.

Released under the MIT License.