The organization's rules, standards, and governance frameworks are documented, accessible, and current. The right people know what the rules are and what AI can and cannot touch, without having to ask.
Most organizations have policies. The problem is that they are not findable by the people they govern. An employment policy lives in a PDF sent during onboarding and never referenced again. A data handling policy lives in a legal document that the engineering team has never read. An expense approval policy exists in three different versions across three different documents, and nobody knows which is current. The Policy Index does not create new policies. It creates the structured, accessible record of policies that already exist. When the index is functioning, a person who needs to know what the rule is can find it without asking.

For AI, the Policy Index is the control layer. As AI operates inside organizational systems, it needs explicit guidance on what it can access, what it can act on, and where human authorization is required. Without a Policy Index, AI governance is informal, inconsistent, and effectively ungovernable.
The Policy Index is a core dimension of the Awareness phase in the Ragsdale Framework for Autonomization, developed by Marc Ragsdale. Prospus implements it as part of structured AI transformation engagements. Kaamfu operationalizes it as a native layer inside the Autonomous Operating Environment, providing the explicit governance framework that makes autonomous AI operation safe.
Policies exist in documents scattered across shared drives, legal folders, and email attachments. Employees do not know where to find them. AI systems have no explicit governance framework. Policy violations happen because people did not know the rule existed, not because they chose to break it.
All policies are documented in a structured, accessible index. The people they govern can find them without asking. AI governance is explicit: what AI can access, what it can act on, and where human authorization is required. Policy violations from ignorance are structurally prevented.
| Metric | Before | After |
|---|---|---|
| Findability | Finding a policy requires knowing who to ask and hoping the version they surface is current. An employee needs to know the approval threshold for a vendor contract. Three people give three different answers. The actual policy is in a legal folder nobody knew existed. | The Policy Index is queryable by the people it governs. The current version is always surfaced. |
| AI Governance | AI systems have no explicit governance framework. What AI can access and act on is determined informally and inconsistently. An AI agent is given access to a data system without explicit governance defining what it can do with that data. The scope of access is assumed rather than defined. | AI governance is explicit in the Policy Index. What AI can access, what it can act on, and where human authorization is required is documented and enforced. |
| Version Control | Multiple versions of the same policy coexist across shared drives. Employees follow whichever version they found first. An expense policy has three versions across three shared drives. Different teams follow different versions. Inconsistency is discovered during an audit. | The Policy Index maintains one authoritative version of every policy. Superseded versions are archived with a link to the current version. |
| Policy Violations | Policy violations happen because people did not know the rule existed, not because they chose to break it. | Policies are accessible to the people they govern. Violations from ignorance are structurally prevented. |
| Compliance Coverage | Compliance policies are accurate but accessible only to legal and compliance teams. The operational teams whose work they govern have not read them. | Compliance policies are summarized in plain language in the Policy Index. The people they govern can find and read them without legal support. |
A score of 10 on the Policy Index means this dimension is fully resolved and no longer a constraint on the phases that follow. Here is what that requires in practice.
Run a discovery pass to find every policy that currently governs organizational behavior: HR policies, operational standards, data handling guidelines, security policies, compliance requirements, and any informal rules that have calcified into practice.
Assess each policy for currency and accessibility. Is it current? Is it findable by the people it governs? Is the authoritative version clear? Document what exists before attempting to consolidate.
Build the Policy Index as a structured, accessible record. Each entry includes the policy in plain language, who owns it, who it applies to, and when it was last reviewed.
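One way to structure such an entry is as a typed record carrying the fields the text names: the plain-language policy, its owner, who it applies to, and its last review date. This is a minimal sketch; the field names are illustrative, not a Kaamfu schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PolicyRecord:
    """One entry in the Policy Index. Field names are illustrative."""
    policy_id: str                 # stable identifier, e.g. "EXP-001"
    title: str
    summary: str                   # the policy stated in plain language
    owner: str                     # the function responsible for the policy
    applies_to: list               # roles or teams the policy governs
    last_reviewed: date
    superseded_by: Optional[str] = None  # points to the current version if archived

# Example entry (hypothetical values):
expense_policy = PolicyRecord(
    policy_id="EXP-001",
    title="Expense Approval Thresholds",
    summary="Expenses over $500 require manager approval.",
    owner="Finance",
    applies_to=["all-employees"],
    last_reviewed=date(2025, 1, 15),
)
```

A record whose `superseded_by` field is set is an archived version; the index surfaces only records where it is empty.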
Build AI Data Exposure Governance explicitly. Define what data AI can access, what actions AI can take, what decisions AI can influence, and where human authorization is required.
Make the index queryable: inside Kaamfu for organizations on the platform, or through a standalone natural language interface for those that are not.
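A minimal sketch of what queryable means in practice: a lookup over indexed records that returns only current versions. A production index would use full-text search or natural language retrieval; all record contents here are hypothetical.

```python
# Minimal keyword lookup over policy records. Archived versions are
# excluded so the authoritative current version is always what surfaces.
policies = [
    {"id": "EXP-001", "topic": "expenses",
     "summary": "Expenses over $500 require manager approval.", "current": True},
    {"id": "EXP-000", "topic": "expenses",
     "summary": "Old threshold: $250.", "current": False},
    {"id": "DATA-003", "topic": "data handling",
     "summary": "Customer data may not leave approved systems.", "current": True},
]

def find_policy(query: str) -> list:
    """Return current policies matching the query text."""
    q = query.lower()
    return [p for p in policies
            if p["current"] and (q in p["topic"] or q in p["summary"].lower())]

results = find_policy("expenses")   # surfaces EXP-001 only; EXP-000 is archived
```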
Policies get documented but AI Data Exposure Governance never gets built. Documenting existing HR and operational policies is straightforward. Defining what AI can and cannot touch requires decisions that most organizations have not made yet. That governance gap is the reason the Policy Index matters most as AI capability expands. Without it, every AI deployment is operating without explicit boundaries.
The Autonomy Diagnostic scores the Policy Index on a 0 to 10 scale. Each point reflects a specific observable state in your organization.
Policies exist in scattered documents. Most employees do not know where to find them. No AI governance framework exists.
Major HR and operational policies are documented but stored in locations not accessible to most employees.
Policies are documented and stored in a shared location. No structured index exists. Finding a specific policy requires searching.
A policy document library exists with basic organization. Multiple versions of some policies coexist. AI governance is absent.
Policies are organized by category and mostly current. Ownership is defined for most policies. AI governance is informal.
A structured Policy Index exists covering all major policy areas. AI governance is partially defined.
The Policy Index is complete, current, and accessible. AI Data Exposure Governance is explicitly defined.
All policies are in the index with ownership, review cadence, and version control. AI governance covers all deployed AI systems.
The Policy Index is queryable by employees and AI agents. Policies are surfaced proactively when relevant to a decision or action.
All organizational policies are documented, current, accessible, and owned. AI governance is explicit and covers what AI can access, act on, and where human authorization is required. Policy violations from ignorance are structurally prevented.
No. The Policy Index creates the structured, accessible record of policies that already exist. Most organizations have the policies they need. They do not have a findable, current version that the people they govern can access without asking HR or legal.
The explicit record of what data AI can access, what actions AI can take, what decisions AI can influence, and where human authorization is required before AI acts. It is the control layer that makes autonomous AI operation safe. Without it, every AI deployment operates without explicit boundaries.
A policy handbook is a document. The Policy Index is a structured, queryable record where policies are organized by scope, applicability, and topic so that a person or AI agent can find the relevant policy for their situation without reading the entire handbook.
Outdated policies are archived with a link to the current version. The Policy Index always surfaces the authoritative current version first. Archived versions are preserved for historical and audit purposes but clearly marked as superseded.
AI agents operating without explicit policy boundaries are operating without governance. The Policy Index provides the control layer that AI systems read before taking actions. What they can access, what they can do, and where they must stop and ask a human is defined explicitly rather than assumed.
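The control-layer idea can be sketched as a pre-action check: before taking an action, the agent looks up the governing rule and either proceeds, stops and asks a human, or is denied. The rule table and action names below are illustrative assumptions, not a Kaamfu API.

```python
# Illustrative pre-action governance check for an AI agent.
# The rule table and action names are assumptions for the sketch.
GOVERNANCE = {
    "read:crm":            "allow",          # AI may access this data
    "send:customer_email": "require_human",  # human authorization required
    "delete:records":      "deny",           # AI may never take this action
}

def authorize(action: str) -> str:
    """Look up an action in the governance table. Anything not explicitly
    listed is denied: scope is defined, never assumed."""
    return GOVERNANCE.get(action, "deny")
```

The default-deny lookup is the point: an action absent from the table is outside the defined scope, so the agent stops rather than assuming permission.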
Ownership varies by policy type. HR owns employment policies. Legal owns compliance policies. Engineering owns technical standards. The Policy Index has an overall owner responsible for the index architecture, but individual policies are owned by the function responsible for them.
Each policy has its own review cadence defined in its record. Employment policies may be reviewed annually; security policies quarterly. AI governance requires review whenever AI capability or organizational scope changes.
The discovery pass surfaces these. Informal rules that govern organizational behavior are documented in the Policy Index and given an owner. An unwritten rule is a governance risk because it cannot be enforced consistently, updated deliberately, or accessed by AI systems.
Policy records should be written in plain, declarative language: what is permitted, what is not permitted, and what requires human authorization. Avoid legal language and exceptions-laden prose. AI systems reason from clear rules better than from hedged guidance.
A score of 7 or above means all significant policies are documented and findable, and AI Data Exposure Governance is explicitly defined. The Autonomy Diagnostic will tell you which dimensions need the most attention first.
The Autonomy Diagnostic scores every dimension of the Ragsdale Framework and tells you exactly where to focus first.
Take the Diagnostic