Most AI programs stall because the organizational context feeding them is broken. The Kendall Framework is the methodology that fixes it: four phases, structured context types, quality gates, and governance built in from the start.
Each phase has a defined set of inputs, activities, tools, and deliverables. Together they form a repeatable operating loop, not a one-time project.
Before anything can be fixed, the specific failure points in your AI context pipeline need to be mapped. Diagnosis is not a generic assessment; it produces a structured, prioritized evidence base that drives every decision that follows.
A structured facilitated session that maps the full context landscape of a specific AI use case. Participants document the organizational knowledge the AI is expected to use, where it currently comes from, and where the gaps and inconsistencies are. The output is a Context Gap Map: the evidence base for every fix that follows.
Every AI accuracy failure traces back to one or more of four root causes: Access (the AI cannot reach the information), Retrieval (the AI retrieves the wrong information), Provenance (the AI cannot verify where the information came from), or Oversight (no human checkpoint exists). The ARPO analysis categorizes each identified gap and produces a prioritized remediation roadmap.
Not every AI use case is worth fixing first. Kendall's prioritization scoring evaluates each use case against business impact, context complexity, and remediation effort, producing a ranked pipeline that focuses energy where return is highest.
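The scoring described above can be sketched as a simple ranking function. This is a minimal illustration, not the published Kendall scoring model: the field names, 1-to-5 scales, and the 2x weight on business impact are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_impact: int      # 1-5, higher = more value at stake
    context_complexity: int   # 1-5, higher = messier context landscape
    remediation_effort: int   # 1-5, higher = more work to fix

def priority_score(uc: UseCase) -> float:
    # Reward impact, penalize complexity and effort.
    # The 2x impact weight is illustrative, not a Kendall-specified value.
    return (2 * uc.business_impact) / (uc.context_complexity + uc.remediation_effort)

def ranked_pipeline(use_cases: list[UseCase]) -> list[UseCase]:
    # Highest-return use cases first.
    return sorted(use_cases, key=priority_score, reverse=True)
```

Whatever the actual weights, the point is the output: a single ranked list that tells the team which use case to fix first.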
External consultants cannot run your Context Supply Chain. The goal of this phase is to develop certified internal practitioners who own context management as a permanent organizational function, not a project that ends when the engagement does.
Context Curators are the practitioners who build and maintain Context Blocks day-to-day. They run Context Sprints, apply the Kendall Prompt Format, identify ARPO failure points in live use cases, and contribute to the AI Bill of Materials. KCCC certification is the foundation credential that every context-management function is built on.
Context Controllers govern the context architecture across multiple use cases and business units. They design the Context Warehouse structure, run ARPO quality audits, establish governance policies, and own the organization's AI Bill of Materials at the program level. KCC is the governance credential for CDAOs, AI risk leaders, and program architects.
Context Sprints are short, structured work cycles (typically one to two weeks) in which a team of Curators produces a defined set of Context Blocks for a specific use case. Sprint methodology applies Agile discipline to context development: scoping, building, review, and validation in a tight loop that produces usable outputs fast.
Context Blocks in a folder are not a supply chain. This phase builds the operational infrastructure that makes context production repeatable, auditable, and scalable across the organization: the Context Supply Chain.
The Context Warehouse is the governed repository where all Context Blocks live. It is version-controlled, access-controlled, and structured so AI systems can retrieve the right context reliably. It is not a general document library; every block has a defined type, owner, validation status, and provenance record. This is what makes AI inputs auditable.
The AI BoM is a structured inventory of every context input, data source, decision point, and constraint that feeds a specific AI use case. It is the documentation artifact required by EU AI Act Articles 9, 11, and 13, and maps directly to ISO/IEC 42001 Clause 8.4. When an AI output fails, the AI BoM is the audit trail that traces the failure to its context root cause.
The Context Supply Chain defines how organizational knowledge flows from source to AI input: who produces each context type, how it is validated, where it is stored, how it is retrieved, and how it is updated. Designing this chain removes the ad hoc, person-dependent context gathering that causes AI accuracy to vary by user and by day.
Governance is not a compliance wrapper applied at the end. It is the operating model that allows everything built in phases 1 through 3 to run reliably, improve continuously, and withstand regulatory scrutiny at scale, across every use case in the organization.
The CoE is the organizational function that owns context management permanently. It staffs certified Curators and Controllers, runs the Sprint cadence, maintains the Warehouse, manages the AI BoM portfolio, and enforces ARPO quality standards across all AI use cases. This is what transforms AI context management from a project into a capability.
The Kendall Context Block Specification (KCBS) is the open standard that governs how Context Blocks are structured, versioned, and transferred. KCBS alignment enables interoperability with regulatory frameworks including EU AI Act Articles 9, 11, and 13, and provides the structured schema that ISO/IEC 42001 management system controls require.
AI accuracy is not a fixed target; context drifts as the organization evolves. The CoE runs quarterly ARPO audits, refreshes the AI BoM when use cases change, and maintains a live accuracy baseline for every production AI system. The framework is designed for continuous improvement, not set-and-forget deployment.
Context Blocks are the atomic unit of the Kendall Framework. Each block is a modular, standardized, version-controlled piece of organizational knowledge, structured so AI systems can retrieve, validate, and use it reliably.
A Context Block is not a document or a prompt snippet. It has a defined type, owner, validation status, provenance record, and expiry policy. That structure is what makes AI inputs auditable and AI outputs traceable.
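The required structure above can be sketched as a data type. This is a minimal illustration of the listed fields (type, owner, validation status, provenance, expiry), assuming simple string statuses and date-based expiry; the real KCBS schema may differ.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Provenance:
    source_document: str
    author: str
    validator: str
    validated_on: date
    version: str

@dataclass
class ContextBlock:
    block_id: str
    block_type: str         # one of the defined block types
    owner: str
    validation_status: str  # e.g. "draft" | "validated" | "expired" (assumed values)
    provenance: Provenance
    expires_on: date

    def is_current(self, today: date) -> bool:
        # A block is usable only if validated and not past its expiry.
        return self.validation_status == "validated" and today < self.expires_on
```

Because every block carries these fields, "is this input auditable?" becomes a mechanical check rather than a judgment call.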
The 32 block types span six categories, covering every form of organizational context an AI system needs to perform accurately in an enterprise environment.
Every enterprise AI failure (inconsistent outputs, hallucinations, wrong answers, compliance gaps) traces back to one of four context problems. ARPO names them, defines what failure looks like, and specifies the gate that catches each one before it reaches production.
The AI system cannot reach the organizational knowledge it needs because it has not been given access, the knowledge exists in an incompatible format, or it has never been structured as retrievable context at all.
AI produces generic or outdated answers because it is working from public training data rather than current organizational knowledge. The information exists inside the organization but is locked in documents, emails, or tacit knowledge.
Verify that every required Context Block type for this use case exists in the Context Warehouse with current status. Map gaps to the Context Sprint backlog before deployment.
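The Access gate check described above can be sketched as a coverage test against the Warehouse. The mapping shape and the `"current"` status value are assumptions for illustration.

```python
def access_gate(required_types: list[str], warehouse: dict[str, str]) -> list[str]:
    """Return the required block types missing or stale in the warehouse.

    warehouse: mapping of block_type -> status string (assumed shape).
    An empty result means the gate passes; anything returned goes
    to the Context Sprint backlog.
    """
    return [t for t in required_types if warehouse.get(t) != "current"]
```

The gate's output doubles as its remediation list: each returned type is a Sprint backlog item.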
Context Blocks exist and are accessible, but the wrong ones are being retrieved, or the right ones are being retrieved in the wrong order, at the wrong time, or without adequate specificity for the task.
AI retrieves broadly relevant but not precisely correct context, producing answers that are plausible but wrong for the specific situation, role, or constraint at hand. This failure is especially common in RAG implementations with poor chunking or metadata.
Validate retrieval accuracy against a test set of known-correct context pairings before production. Audit Context Block metadata, tagging, and chunking structure for retrieval precision.
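The test-set validation described above can be sketched as a top-1 accuracy measurement. The retriever signature (query in, ranked block IDs out) is an assumption; real implementations might score top-k hits or weight by rank instead.

```python
def retrieval_accuracy(retrieve, test_set) -> float:
    """Fraction of test queries whose top-ranked result is the known-correct block.

    retrieve: callable mapping a query string to a ranked list of block IDs.
    test_set: list of (query, expected_block_id) pairs.
    """
    hits = 0
    for query, expected in test_set:
        results = retrieve(query)
        if results and results[0] == expected:
            hits += 1
    return hits / len(test_set)
```

Run against a curated set of known-correct pairings, a score below the team's chosen threshold blocks the use case from production until chunking or metadata is fixed.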
The AI system cannot determine where a piece of context came from, who validated it, when it was last reviewed, or whether it is still current, making it impossible to audit outputs or trace failures back to their source.
AI uses outdated, contradictory, or unvalidated context and the organization has no way to identify this after the fact. Regulatory auditors cannot trace AI outputs to their source inputs. Compliance programs cannot be demonstrated.
Every Context Block must carry complete provenance metadata: source document, author, validation date, validator identity, version number, and expiry policy. Blocks without complete provenance fail this gate and cannot enter production.
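The Provenance gate lists exactly which metadata fields are required, so it reduces to a completeness check. A minimal sketch, assuming blocks are represented as dictionaries; the key names mirror the fields listed above.

```python
# Required fields per the Provenance gate: source document, author,
# validation date, validator identity, version number, expiry policy.
REQUIRED_PROVENANCE = ("source_document", "author", "validation_date",
                       "validator", "version", "expiry_policy")

def passes_provenance_gate(block: dict) -> bool:
    """A block fails the gate if any provenance field is missing or empty."""
    meta = block.get("provenance", {})
    return all(meta.get(f) for f in REQUIRED_PROVENANCE)
```

Blocks failing this check never enter production, which is what makes every downstream output traceable to a validated source.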
No human checkpoint exists between context input and AI output, or between AI output and consequential action, meaning errors propagate without detection and accountability is unclear when something goes wrong.
AI errors reach customers, decisions, or records without human review. When failures occur, no audit trail exists to identify where oversight broke down. EU AI Act Article 13 transparency requirements cannot be met.
Every production AI use case must have a defined human review point, a named responsible party, an escalation path, and a documented override procedure. These are captured in the AI BoM and reviewed quarterly by the Context Controller.
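The Oversight gate requirements above can be sketched as a record that lives in the AI BoM, plus a completeness check. The record shape is an illustrative assumption, not a Kendall-specified schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OversightRecord:
    review_point: str       # where in the workflow a human reviews the output
    responsible_party: str  # named individual, not a team alias
    escalation_path: str
    override_procedure: str

def passes_oversight_gate(rec: Optional[OversightRecord]) -> bool:
    # A use case with no record, or with any requirement left blank, fails.
    if rec is None:
        return False
    return all([rec.review_point, rec.responsible_party,
                rec.escalation_path, rec.override_procedure])
```

Storing this record alongside the rest of the AI BoM is what gives the quarterly Controller review something concrete to audit.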
The Kendall Framework does not reinvent the wheel. It applies six decades of proven operational methodology to the specific problem of enterprise AI context management. Each discipline contributes a specific, non-duplicated capability.
Lean's core insight, that value is defined by the end user and everything else is waste, applies directly to context management. Most AI context pipelines are full of waste: redundant documents, outdated information, inconsistent formats, manual re-entry. Kendall applies Lean to remove everything that does not add value to AI accuracy.
TQM established that quality cannot be inspected in after the fact; it must be designed into the process. ARPO quality gates are Kendall's application of TQM: quality control embedded at each stage of the context pipeline rather than applied as a final review. Every Context Block is a quality-managed unit with defined standards.
Context development cannot be a waterfall project with an eighteen-month delivery timeline. Context Sprints apply Agile discipline: two-week cycles, defined scope, daily standups, sprint reviews, and retrospectives. Teams produce real Context Blocks in real use cases from the first sprint, learning and improving continuously rather than waiting for a big-bang delivery.
Design Thinking insists on understanding the user before designing the solution. Kendall applies this to context mapping: Context Blocks are built from how work actually happens, not from how the org chart says it should happen. Role Blocks, for example, are built from what the role actually does in practice, including the informal knowledge and judgment calls that formal documentation misses.
ISO/IEC 42001 is the first and only certifiable international standard for AI management systems. The Kendall Framework is designed from the ground up to produce the evidence and documentation that 42001 requires: risk management records (Clause 6.1), lifecycle documentation (Clause 8.4), monitoring evidence (Clause 9.1), and corrective action trails (Clause 10.2). The AI BoM is the primary compliance artifact.
Most governance programs are built after deployment, when problems have already occurred and regulators are asking questions. Kendall's governance-by-design principle embeds compliance requirements, including EU AI Act, ISO/IEC 42001, and GDPR data provenance, into the Context Block structure itself. Governance is not an additional layer; it is a property of every block in the Warehouse.
These are not values statements. They are the structural principles that explain why the framework is designed the way it is, and why departing from them produces the inconsistent, unscalable AI results most organizations are experiencing.
AI only becomes reliably useful when it understands the specific organizational context it is operating in. Generic intelligence without organizational context produces generic outputs. Structured context turns a general-purpose model into a purpose-built organizational tool. Every other principle flows from this one.
In AI systems, language is not the interface; it is the material. The precision, consistency, and structure of the language feeding an AI system directly determines the precision of its outputs. Ambiguous inputs produce ambiguous outputs, every time. Kendall builds language precision into Context Blocks as a structural property, not a style guideline.
AI initiatives that start with tools or capabilities drift. AI initiatives that start with specific, well-defined organizational problems produce measurable results. Problem Blocks are a core Context Block type because a clearly stated problem is the most important input an AI system receives; it defines what good output looks like before any generation occurs.
AI outputs are not context-free. The same question means different things depending on who is asking, what their role is, what they are authorized to act on, and what constraints they operate under. Role Blocks are what anchor AI to the specific "who", enabling outputs that are calibrated to the actual person and their actual situation, not a generic user.
Every organization has policies, values, legal boundaries, regulatory constraints, and operational rules that AI must respect. These do not exist in any training dataset. They must be explicitly structured as Governance Blocks and delivered as context. An AI system operating without your rules is not your AI; it is a generic assistant that happens to be running inside your organization.
A truck is not one component; it is hundreds of precisely engineered parts assembled to a standard. AI systems that perform reliably at scale are built the same way: from modular, standardized, interchangeable context components that can be assembled, validated, updated, and swapped independently. This is why Context Blocks are modular by design. Monolithic context cannot be maintained at enterprise scale.
No individual has all the context an enterprise AI system needs. Process knowledge lives with operators. Policy knowledge lives with legal and compliance. Customer knowledge lives with frontline teams. Role knowledge lives with managers. Building a complete Context Supply Chain requires structured collaboration across all of these, and certification ensures that everyone contributing to that supply chain is doing so with the same methodology and standards.
AI accuracy is not a one-time achievement. Organizations change. Processes evolve. Regulations update. The Context Supply Chain must be maintained with the same discipline as any other operational system: regular audits, version updates, Sprint cadences, and accuracy measurement. The CoE exists to make this a habit, not an emergency response.