The Kendall Framework

The operating system
for enterprise AI context

Most AI programs stall because the organizational context feeding them is broken. The Kendall Framework is the methodology that fixes it: four phases, structured context types, quality gates, and governance built in from the start.

01
Diagnose
Context 360 workshops, accuracy gap mapping
02
Build capability
Train Context Curators and Controllers internally
03
Operationalize
Context Supply Chain, AI BoM, Context Warehouse
04
Govern and scale
ARPO gates, KCBS compliance, CoE operations
The four phases

What actually happens inside each phase

Each phase has a defined set of inputs, activities, tools, and deliverables. Together they form a repeatable operating loop, not a one-time project.

01
Phase 1

Diagnose

Before anything can be fixed, the specific failure points in your AI context pipeline need to be mapped. Diagnosis is not a generic assessment; it produces a structured, prioritized evidence base that drives every decision that follows.

Context 360 Workshop

A structured facilitated session that maps the full context landscape of a specific AI use case. Participants document the organizational knowledge the AI is expected to use, where it currently comes from, and where the gaps and inconsistencies are. The output is a Context Gap Map: the evidence base for every fix that follows.

ARPO Root Cause Analysis

Every AI accuracy failure traces back to one or more of four root causes: Access (the AI cannot reach the information), Retrieval (the AI retrieves the wrong information), Provenance (the AI cannot verify where the information came from), or Oversight (no human checkpoint exists). The ARPO analysis categorizes each identified gap and produces a prioritized remediation roadmap.

Use case prioritization

Not every AI use case is worth fixing first. Kendall's prioritization scoring evaluates each use case against business impact, context complexity, and remediation effort, producing a ranked pipeline that focuses energy where return is highest.
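The framework does not publish its scoring formula here, but the idea (score each use case on business impact, context complexity, and remediation effort, then rank) can be sketched. The weights, the 1-to-5 scales, and the `priority_score` formula below are illustrative assumptions, not Kendall's actual model.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_impact: int      # 1 (low) .. 5 (high)
    context_complexity: int   # 1 (simple) .. 5 (complex)
    remediation_effort: int   # 1 (cheap) .. 5 (expensive)

def priority_score(uc: UseCase) -> float:
    # Hypothetical weighting: reward impact, discount complexity and effort.
    return uc.business_impact * 2 - (uc.context_complexity + uc.remediation_effort) / 2

def ranked_pipeline(use_cases: list[UseCase]) -> list[UseCase]:
    # Highest score first: the use cases worth fixing soonest.
    return sorted(use_cases, key=priority_score, reverse=True)

pipeline = ranked_pipeline([
    UseCase("Customer support assistant", 5, 3, 2),
    UseCase("Contract summarization", 3, 4, 4),
    UseCase("Internal FAQ bot", 2, 1, 1),
])
print([uc.name for uc in pipeline])
```

Whatever the real weights are, the output is the same artifact the phase promises: a ranked pipeline that focuses energy where return is highest.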

Phase 1 deliverables
Context Gap Map: visual map of current versus required context for each priority use case
ARPO failure report: categorized breakdown of root causes with severity ratings
Prioritized use case pipeline: ranked list of opportunities with impact and effort scores
02
Phase 2

Build capability

External consultants cannot run your Context Supply Chain. The goal of this phase is to develop certified internal practitioners who own context management as a permanent organizational function, not a project that ends when the engagement does.

Context Curator training (KCCC)

Context Curators are the practitioners who build and maintain Context Blocks day-to-day. They run Context Sprints, apply the Kendall Prompt Format, identify ARPO failure points in live use cases, and contribute to the AI Bill of Materials. KCCC certification is the foundation credential that every context-management function is built on.

Context Controller training (KCC)

Context Controllers govern the context architecture across multiple use cases and business units. They design the Context Warehouse structure, run ARPO quality audits, establish governance policies, and own the organization's AI Bill of Materials at the program level. KCC is the governance credential for CDAOs, AI risk leaders, and program architects.

Context Sprints

Context Sprints are short, structured work cycles (typically one to two weeks) in which a team of Curators produces a defined set of Context Blocks for a specific use case. Sprint methodology applies Agile discipline to context development: scoping, building, review, and validation in a tight loop that produces usable outputs fast.

Phase 2 deliverables
Certified Context Curators: KCCC-credentialed practitioners ready to operate independently
First Sprint output: initial Context Block set for the highest-priority use case
Sprint cadence design: recurring sprint schedule, team roles, and review process
03
Phase 3

Operationalize

Context Blocks in a folder are not a supply chain. This phase builds the operational infrastructure that makes context production repeatable, auditable, and scalable across the organization: the Context Supply Chain.

Context Warehouse

The Context Warehouse is the governed repository where all Context Blocks live. It is version-controlled, access-controlled, and structured so AI systems can retrieve the right context reliably. It is not a general document library; every block has a defined type, owner, validation status, and provenance record. This is what makes AI inputs auditable.

AI Bill of Materials (AI BoM)

The AI BoM is a structured inventory of every context input, data source, decision point, and constraint that feeds a specific AI use case. It is the documentation artifact required by EU AI Act Articles 9, 11, and 13, and maps directly to ISO/IEC 42001 Clause 8.4. When an AI output fails, the AI BoM is the audit trail that traces the failure to its context root cause.
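A BoM is, at bottom, a typed inventory that can be queried during an audit. The sketch below models that idea in Python; the field names (`kind`, `owner`, `source`, `validated`) and the `unvalidated()` audit query are hypothetical illustrations, not the framework's published schema.

```python
from dataclasses import dataclass, field

@dataclass
class BoMEntry:
    # One line item in the AI Bill of Materials (field names are illustrative).
    kind: str          # e.g. "context_block", "data_source", "decision_point", "constraint"
    name: str
    owner: str
    source: str        # provenance: where this input comes from
    validated: bool

@dataclass
class AIBillOfMaterials:
    use_case: str
    entries: list[BoMEntry] = field(default_factory=list)

    def unvalidated(self) -> list[BoMEntry]:
        # The audit question after a failure: which inputs lacked validation?
        return [e for e in self.entries if not e.validated]

bom = AIBillOfMaterials("Claims triage assistant", [
    BoMEntry("context_block", "Escalation path", "J. Chen", "Ops Manual §4.2", True),
    BoMEntry("data_source", "Claims DB extract", "IT", "claims_db nightly export", False),
])
print([e.name for e in bom.unvalidated()])
```

The point of the structure is the trace: when an output fails, the inventory answers which inputs fed it and which of those were never validated.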

Context Supply Chain design

The Context Supply Chain defines how organizational knowledge flows from source to AI input: who produces each context type, how it is validated, where it is stored, how it is retrieved, and how it is updated. Designing this chain removes the ad hoc, person-dependent context gathering that causes AI accuracy to vary by user and by day.

Phase 3 deliverables
Context Warehouse (v1): governed repository structure with initial block population
AI Bill of Materials: compliance-ready documentation for priority use cases
Context Supply Chain map: end-to-end flow from knowledge source to AI input, with owners
04
Phase 4

Govern and scale

Governance is not a compliance wrapper applied at the end. It is the operating model that allows everything built in phases 1 through 3 to run reliably, improve continuously, and withstand regulatory scrutiny at scale, across every use case in the organization.

Context Center of Excellence (CoE)

The CoE is the organizational function that owns context management permanently. It staffs certified Curators and Controllers, runs the Sprint cadence, maintains the Warehouse, manages the AI BoM portfolio, and enforces ARPO quality standards across all AI use cases. This is what transforms AI context management from a project into a capability.

KCBS compliance alignment

The Kendall Context Block Specification (KCBS) is the open standard that governs how Context Blocks are structured, versioned, and transferred. KCBS alignment enables interoperability with regulatory frameworks including EU AI Act Articles 9, 11, and 13, and provides the structured schema that ISO/IEC 42001 management system controls require.

Continuous improvement loop

AI accuracy is not a fixed target; context drifts as the organization evolves. The CoE runs quarterly ARPO audits, refreshes the AI BoM when use cases change, and maintains a live accuracy baseline for every production AI system. The framework is designed for continuous improvement, not set-and-forget deployment.

Phase 4 deliverables
Context CoE operating model: roles, governance policies, sprint cadence, and accountability structure
KCBS alignment documentation: compliance mapping for EU AI Act and ISO/IEC 42001
AI accuracy baseline: measured accuracy benchmarks and quarterly improvement targets
Context Blocks

32 block types. 6 categories. One standard.

Context Blocks are the atomic unit of the Kendall Framework. Each block is a modular, standardized, version-controlled piece of organizational knowledge, structured so AI systems can retrieve, validate, and use it reliably.

A Context Block is not a document or a prompt snippet. It has a defined type, owner, validation status, provenance record, and expiry policy. That structure is what makes AI inputs auditable and AI outputs traceable.

The 32 block types span 6 categories, covering every form of organizational context an AI system needs to perform accurately in an enterprise environment.

Example: Process Context Block
Type: Process Block
Owner: J. Chen, KCCC
Status: Validated
Version: v2.1 / Mar 2026
Provenance: Operations Manual §4.2
Expiry: Review in 90 days
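The example card above can be expressed as a minimal typed schema. This is an illustrative Python sketch, not the KCBS specification: the field names, the `is_current` check, and the 90-day review window are assumptions drawn from the example.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ContextBlock:
    block_type: str         # e.g. "Process Block"
    owner: str              # e.g. "J. Chen, KCCC"
    status: str             # "draft" | "validated" | "expired"
    version: str            # e.g. "v2.1"
    provenance: str         # e.g. "Operations Manual §4.2"
    validated_on: date
    review_after_days: int  # expiry policy: review window in days
    body: str               # the knowledge payload itself

    def is_current(self, today: date) -> bool:
        # A block past its review window should not reach an AI system.
        return (self.status == "validated"
                and today <= self.validated_on + timedelta(days=self.review_after_days))

block = ContextBlock("Process Block", "J. Chen, KCCC", "validated", "v2.1",
                     "Operations Manual §4.2", date(2026, 3, 1), 90,
                     "Escalations above the approval threshold route to the regional approver.")
print(block.is_current(date(2026, 4, 1)))
```

The design point is that currency is computable: because expiry is a structured field rather than a note in a document, a warehouse can filter out stale blocks automatically.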
Process
7 block types
Standard operating procedure
Decision workflow
Approval chain
Escalation path
Exception handling
Quality checkpoint
Process constraint
People
5 block types
Role definition
Responsibility matrix
Expertise profile
Authority level
Team structure
Problems
5 block types
Problem statement
Root cause record
Known failure pattern
Risk register entry
Incident history
Goals
4 block types
Strategic objective
Success metric
Constraint boundary
Outcome definition
Governance
6 block types
Policy rule
Compliance requirement
Audit requirement
Data classification
Regulatory boundary
Ethical constraint
Specifications
5 block types
Technical requirement
Data schema
Output format
Integration spec
Quality standard
ARPO quality gates

The four root causes of AI accuracy failure

Every enterprise AI failure (inconsistent outputs, hallucinations, wrong answers, compliance gaps) traces back to one or more of four context problems. ARPO names them, defines what failure looks like, and specifies the gate that catches each one before it reaches production.

A

Access

The AI system cannot reach the organizational knowledge it needs because it has not been given access, the knowledge exists in an incompatible format, or it has never been structured as retrievable context at all.

Failure mode

AI produces generic or outdated answers because it is working from public training data rather than current organizational knowledge. The information exists inside the organization but is locked in documents, emails, or tacit knowledge.

Access gate

Verify that every required Context Block type for this use case exists in the Context Warehouse with current status. Map gaps to the Context Sprint backlog before deployment.
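The access gate is a coverage check: required block types versus what the warehouse actually holds in validated status. A minimal sketch, assuming a simple type-to-status mapping rather than the real Warehouse interface:

```python
def access_gate(required_types: set[str], warehouse: dict[str, str]) -> list[str]:
    """Return the required Context Block types that are missing or not current.

    `warehouse` maps block type -> status ("validated", "draft", "expired");
    this shape is illustrative, not a Kendall Warehouse API.
    """
    return sorted(t for t in required_types
                  if warehouse.get(t) != "validated")

# Anything returned here goes to the Context Sprint backlog before deployment.
gaps = access_gate(
    {"Standard operating procedure", "Escalation path", "Policy rule"},
    {"Standard operating procedure": "validated", "Escalation path": "draft"},
)
print(gaps)
```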

R

Retrieval

Context Blocks exist and are accessible, but the wrong ones are being retrieved, or the right ones are being retrieved in the wrong order, at the wrong time, or without adequate specificity for the task.

Failure mode

AI retrieves broadly relevant but not precisely correct context, producing answers that are plausible but wrong for the specific situation, role, or constraint at hand. This failure is especially common in RAG implementations with poor chunking or metadata.

Retrieval gate

Validate retrieval accuracy against a test set of known-correct context pairings before production. Audit Context Block metadata, tagging, and chunking structure for retrieval precision.
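A retrieval gate of this kind can be sketched as a test harness over known-correct (query, block) pairings. Everything here is an assumption for illustration: the top-1 criterion, the 0.9 threshold, and the toy keyword retriever standing in for a real RAG stack.

```python
def retrieval_gate(retriever, test_set: list[tuple[str, str]], threshold: float = 0.9) -> bool:
    """Check retrieval accuracy against known-correct (query, block_id) pairs.

    `retriever` is any callable mapping a query to a ranked list of block ids;
    the top-1 match criterion and threshold are illustrative choices.
    """
    hits = sum(1 for query, expected in test_set
               if retriever(query)[:1] == [expected])
    return hits / len(test_set) >= threshold

# A toy keyword index standing in for a production retriever.
index = {"refund": "proc-refund-v3", "escalate": "proc-escalation-v2"}
def toy_retriever(query: str) -> list[str]:
    return [block_id for keyword, block_id in index.items() if keyword in query.lower()]

ok = retrieval_gate(toy_retriever,
                    [("How do I refund an order?", "proc-refund-v3"),
                     ("When do I escalate a claim?", "proc-escalation-v2")])
print(ok)
```

A gate like this only catches what the test set covers, which is why the text pairs it with an audit of metadata, tagging, and chunking structure.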

P

Provenance

The AI system cannot determine where a piece of context came from, who validated it, when it was last reviewed, or whether it is still current, making it impossible to audit outputs or trace failures back to their source.

Failure mode

AI uses outdated, contradictory, or unvalidated context and the organization has no way to identify this after the fact. Regulatory auditors cannot trace AI outputs to their source inputs. Compliance programs cannot be demonstrated.

Provenance gate

Every Context Block must carry complete provenance metadata: source document, author, validation date, validator identity, version number, and expiry policy. Blocks without complete provenance fail this gate and cannot enter production.
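Because the gate is a completeness rule over named metadata fields, it is mechanically checkable. A minimal sketch, assuming blocks expose their provenance as a flat mapping (the field names mirror the list above; the representation is an assumption):

```python
# The six provenance fields the gate requires, per the text above.
REQUIRED_PROVENANCE = ("source_document", "author", "validation_date",
                       "validator", "version", "expiry_policy")

def provenance_gate(block_metadata: dict[str, str]) -> list[str]:
    # Return the missing fields; an empty list means the block may enter production.
    return [f for f in REQUIRED_PROVENANCE
            if not block_metadata.get(f)]

missing = provenance_gate({
    "source_document": "Operations Manual §4.2",
    "author": "J. Chen",
    "validation_date": "2026-03-01",
    "version": "v2.1",
})
print(missing)  # the block fails the gate until these fields are filled in
```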

O

Oversight

No human checkpoint exists between context input and AI output, or between AI output and consequential action, meaning errors propagate without detection and accountability is unclear when something goes wrong.

Failure mode

AI errors reach customers, decisions, or records without human review. When failures occur, no audit trail exists to identify where oversight broke down. EU AI Act Article 13 transparency requirements cannot be met.

Oversight gate

Every production AI use case must have a defined human review point, a named responsible party, an escalation path, and a documented override procedure. These are captured in the AI BoM and reviewed quarterly by the Context Controller.

Methodological foundations

Built on the world's most proven operating disciplines

The Kendall Framework does not reinvent the wheel. It applies six proven operating disciplines, refined over decades, to the specific problem of enterprise AI context management. Each discipline contributes a specific, non-duplicated capability.

Lean Manufacturing

Waste elimination in the context pipeline

Lean's core insight, that value is defined by the end user and everything else is waste, applies directly to context management. Most AI context pipelines are full of waste: redundant documents, outdated information, inconsistent formats, manual re-entry. Kendall applies Lean to remove everything that does not add value to AI accuracy.

Contributes: Context Supply Chain efficiency, waste identification, value stream mapping
Total Quality Management

Quality as a system, not a checkpoint

TQM established that quality cannot be inspected in after the fact; it must be designed into the process. ARPO quality gates are Kendall's application of TQM: quality control embedded at each stage of the context pipeline rather than applied as a final review. Every Context Block is a quality-managed unit with defined standards.

Contributes: ARPO gate structure, continuous improvement loop, measurable accuracy standards
Agile and Scrum

Short cycles that produce usable outputs fast

Context development cannot be a waterfall project with an eighteen-month delivery timeline. Context Sprints apply Agile discipline: two-week cycles, defined scope, daily standups, sprint reviews, and retrospectives. Teams produce real Context Blocks in real use cases from the first sprint, learning and improving continuously rather than waiting for a big-bang delivery.

Contributes: Context Sprint methodology, iterative delivery, team velocity measurement
Design Thinking

Context mapped from human reality, not org charts

Design Thinking insists on understanding the user before designing the solution. Kendall applies this to context mapping: Context Blocks are built from how work actually happens, not from how the org chart says it should happen. Role Blocks, for example, are built from what the role actually does in practice, including the informal knowledge and judgment calls that formal documentation misses.

Contributes: Role Block construction, context mapping workshops, human-centered problem framing
ISO/IEC 42001

The international standard for AI management systems

ISO/IEC 42001 is the first and only certifiable international standard for AI management systems. The Kendall Framework is designed from the ground up to produce the evidence and documentation that 42001 requires: risk management records (Clause 6.1), lifecycle documentation (Clause 8.4), monitoring evidence (Clause 9.1), and corrective action trails (Clause 10.2). The AI BoM is the primary compliance artifact.

Contributes: Compliance architecture, audit trail design, governance documentation standards
Governance by design

Compliance built in, not bolted on

Most governance programs are built after deployment, when problems have already occurred and regulators are asking questions. Kendall's governance-by-design principle embeds compliance requirements, including EU AI Act, ISO/IEC 42001, and GDPR data provenance, into the Context Block structure itself. Governance is not an additional layer; it is a property of every block in the Warehouse.

Contributes: KCBS compliance schema, EU AI Act alignment, provenance-by-default block design
Seven principles

The operating principles of the Kendall Framework

These are not values statements. They are the structural principles that explain why the framework is designed the way it is, and why departing from them produces the inconsistent, unscalable AI results most organizations are experiencing.

1

Context is king

AI only becomes reliably useful when it understands the specific organizational context it is operating in. Generic intelligence without organizational context produces generic outputs. Structured context turns a general-purpose model into a purpose-built organizational tool. Every other principle flows from this one.

2

Language is the raw material of AI

In AI systems, language is not the interface; it is the material. The precision, consistency, and structure of the language feeding an AI system directly determines the precision of its outputs. Ambiguous inputs produce ambiguous outputs, every time. Kendall builds language precision into Context Blocks as a structural property, not a style guideline.

3

Problems fuel AI

AI initiatives that start with tools or capabilities drift. AI initiatives that start with specific, well-defined organizational problems produce measurable results. Problem Blocks are a core Context Block type because a clearly stated problem is the most important input an AI system receives; it defines what good output looks like before any generation occurs.

4

"Who" anchors AI

AI outputs are not context-free. The same question means different things depending on who is asking, what their role is, what they are authorized to act on, and what constraints they operate under. Role Blocks are what anchor AI to the specific "who", enabling outputs that are calibrated to the actual person and their actual situation, not a generic user.

5

AI needs to know your rules to play your game

Every organization has policies, values, legal boundaries, regulatory constraints, and operational rules that AI must respect. These do not exist in any training dataset. They must be explicitly structured as Governance Blocks and delivered as context. An AI system operating without your rules is not your AI; it is a generic assistant that happens to be running inside your organization.

6

Assemble AI like a truck

A truck is not one component; it is hundreds of precisely engineered parts assembled to a standard. AI systems that perform reliably at scale are built the same way: from modular, standardized, interchangeable context components that can be assembled, validated, updated, and swapped independently. This is why Context Blocks are modular by design. Monolithic context cannot be maintained at enterprise scale.

7

AI is a team sport

No individual has all the context an enterprise AI system needs. Process knowledge lives with operators. Policy knowledge lives with legal and compliance. Customer knowledge lives with frontline teams. Role knowledge lives with managers. Building a complete Context Supply Chain requires structured collaboration across all of these, and certification ensures that everyone contributing to that supply chain is doing so with the same methodology and standards.

The habit

Continuous improvement is non-negotiable

AI accuracy is not a one-time achievement. Organizations change. Processes evolve. Regulations update. The Context Supply Chain must be maintained with the same discipline as any other operational system: regular audits, version updates, Sprint cadences, and accuracy measurement. The CoE exists to make this a habit, not an emergency response.

Download our latest whitepaper: "The Strategic Governance Manifesto"