STRATEGIC CONTEXT

What is an Organisational Intelligence Platform?

An Organisational Intelligence Platform is a structured approach to capturing, organising, and progressively automating the operational knowledge and processes of an organisation using artificial intelligence. It is not a chatbot. It is not a single AI tool bolted onto existing systems. It is the operating system through which an organisation understands, manages, and continuously improves every process it performs.

Why This Matters
Most AI implementations in government fail because they start with technology and work backwards to find problems to solve. The OIP inverts this: it starts with a complete understanding of what the organisation actually does, classifies each process by its readiness and risk profile, and only then applies AI at the appropriate level of autonomy. The result is an AI capability that is safe, governable, measurable, and genuinely useful.

The Problem

Public sector and critical infrastructure organisations face a convergence of pressures that create both urgency and opportunity. Ageing workforces carry irreplaceable institutional knowledge that walks out the door with departing staff. Increasing regulatory complexity demands detailed, auditable documentation of how decisions are made. Rising community expectations and environmental accountability require transparent, evidence-based operations. Constrained budgets demand efficiency. And the rapid emergence of AI capabilities promises transformation but carries real risks if deployed without careful governance.

The Silver Report amplifies every one of these pressures. When water corporations merge, institutional knowledge gets lost because it was never documented. Corporate services get consolidated first, operational functions get reviewed for duplication, and frontline service delivery gets restructured. At every stage, the people who know how things work may not be there to explain it to their replacements or to demonstrate why their approach is better than alternatives.

Current approaches to AI adoption in the sector are fragmented and ungoverned. Different teams adopt different tools for different purposes. There is no central inventory of what the organisation does, what data it owns, or what processes are ready for AI augmentation. Governance is reactive rather than proactive. The result is a landscape of disconnected AI experiments with no coherent operational value.

The Solution: Four Integrated Layers

The OIP addresses these challenges through four integrated layers that work together as a complete system:

Process Register
A definitive catalogue of every process the organisation performs, structured by division, department, and function. This has standalone value regardless of AI. In a consolidation scenario, this becomes the organisation's negotiating asset: a definitive map of what it does and why.
Knowledge System
Documented context, standards, templates, decision rules, and institutional knowledge for each process. This captures the expertise that currently exists only in the heads of experienced staff. This is the layer that protects the organisation during structural reform.
AI Execution Layer
Intelligent agents at appropriate autonomy levels that assist, augment, or automate processes where it is safe, ready, and valuable to do so. Agents are assigned to processes only after process standardisation and knowledge capture are complete.
Governance Framework
Classification, oversight, and accountability structures that ensure AI is deployed responsibly, with consequence-appropriate controls at every level. Every agent is owned, every output is auditable, and every process has clear escalation paths.

Even if no agents are ever deployed, an organisation that has documented its processes, decision rules, and contextual knowledge is materially more resilient than one that has not.

Intended Audience

This document is written for three audiences simultaneously, and the reading path depends on your role:

Executive Sponsors and Board Members
You need to understand the strategic intent, risk posture, and governance principles. Focus on Chapters 1-3, then Chapter 9 for implementation approach. This reading takes approximately 45 minutes and provides sufficient understanding to make funding and organisational support decisions.
Technical Leaders and Architects
You need to assess architecture, integration requirements, and platform design implications. Focus on Chapters 4-8 for process and technology positioning. This reading takes approximately 90 minutes and provides the technical foundation for detailed implementation planning.
Implementation Teams
You need operational guidance for delivery, process documentation, and agent development. Focus on Chapters 5-10 with references to Chapters 12-15. This is your complete operational playbook for building the OIP phase by phase.

Key Definitions and Terminology

The following terms have specific meanings within this framework. Consistent use is essential to avoid the ambiguity that undermines most AI governance efforts:

Organisational Intelligence Platform (OIP)
The complete system comprising Process Register, Knowledge System, AI Execution Layer, and Governance Framework. Not a product name but a category of capability.

Process
A discrete, repeatable unit of work that produces a defined output. Processes exist at varying levels of granularity. Within the OIP, a process is the atomic unit to which classification, documentation, and agents are applied.

Agent
An AI-powered capability assigned to a specific process or group of processes. Agents operate at defined autonomy levels (0 to 5) and are subject to governance controls appropriate to their consequence rating.

Autonomy Level
A classification (Level 0 to 5) that defines the degree of independent action an agent can take. Higher levels mean more independent decision-making and action without human review.

Consequence Rating
An assessment of the potential impact if an agent produces an incorrect or unintended output. Rated as Low, Medium, High, or Critical. This rating sets the ceiling on maximum autonomy.

Readiness Score
An assessment (1-5) of how prepared a process is for AI augmentation today, considering data availability, system APIs, documentation quality, and process standardisation.

Process Register
The structured catalogue of all organisational processes, organised by division, department, and function. The taxonomic backbone of the OIP.

Knowledge System
The documented context, rules, standards, templates, and institutional knowledge associated with each process. The intelligence layer that gives agents their domain expertise.

System Prompt
The structured instruction set that defines an agent's role, context, constraints, and expected behaviour for a specific process. The primary mechanism through which agents are configured.

Human-in-the-Loop (HITL)
A governance control requiring human review and approval before an agent's output is actioned or an agent takes a consequential action. The default for all agents at Level 2 and above.

Tool Use
The capability of an AI agent to interact with external systems (databases, APIs, applications) through defined interfaces. Distinct from text generation alone.
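The relationships among these terms can be sketched as a data structure. This is a minimal illustration, not a prescribed schema: the field names and enum labels are assumptions, and the point is only that every process record carries its classification, readiness, and a named accountable officer together.

```python
from dataclasses import dataclass
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """The six agent autonomy levels defined by the framework."""
    STATIC_AUTOMATION = 0
    AI_ASSISTANT = 1
    TOOL_USING_AGENT = 2
    WORKFLOW_AGENT = 3
    SUPERVISORY_AGENT = 4
    AUTONOMOUS_SYSTEM = 5


@dataclass
class ProcessRecord:
    """One entry in the Process Register (illustrative field names)."""
    name: str
    division: str
    consequence_rating: str        # "Low" | "Medium" | "High" | "Critical"
    readiness_score: int           # 1-5
    accountable_officer: str       # Principle 02: a named human, always
    autonomy_level: AutonomyLevel = AutonomyLevel.AI_ASSISTANT
```

A record might then read: `ProcessRecord("Respond to Water Quality Complaint", "Customer Service", "High", 3, "J. Citizen", AutonomyLevel.TOOL_USING_AGENT)`.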

How the OIP Works in Practice

Understanding the OIP in abstract is useful, but the real value emerges when you see how it works with actual processes. Consider a water utility processing a customer complaint about water quality. Here is how the OIP manages this process:

Phase 1: Process Documentation

The customer service team documents the "Respond to Water Quality Complaint" process. They identify five key decision points: Is the complaint about taste/odour or about safety? Has this complaint been received before? Is the issue in the customer's internal plumbing? Is water quality data available from network monitoring? What response is required by regulation?

For each decision point, they document the decision rule. For taste/odour complaints, the response differs from safety complaints. Safety complaints must be escalated to operations within 1 hour. Taste/odour complaints can sometimes be resolved by customer education about internal plumbing issues.

They document existing workarounds and exceptions. Sometimes when system data is unavailable, staff call a particular experienced technician for informal advice. They document the informal criteria this technician uses, so that knowledge is no longer trapped in one person's head.

Phase 2: AI Agent Capability

Once the process is fully documented, it becomes a candidate for a Level 2 AI agent. The agent is trained on the documented decision rules. When a new complaint arrives, the Level 2 agent: reads the complaint details, queries water quality monitoring data, checks complaint history, retrieves relevant regulations, proposes a response, and asks a human to review and approve before sending.

The Level 2 agent cannot send the response on its own (that would require Level 3). But it can prepare the response, do the information gathering, and propose the escalation decision. The human spends 2 minutes reviewing instead of 20 minutes researching and drafting.

Phase 3: Organisational Resilience

Six months later, the experienced technician retires. Without documentation, that person's informal decision rules would be lost. With the OIP, those rules are embedded in the documented process and in the agent's system prompt. The organisation is not dependent on that one person. New staff can learn from the documented process. The agent maintains consistency.

Phase 4: Consolidation Advantage

When the water corporation consolidates with four others into a Western Regional partnership, the partnership must decide how to handle customer complaints. Barwon Water can demonstrate its process, its decision rules, the rationale behind them, and show that the AI agent processes complaints consistently. If the partnership's process is less rigorous, Barwon Water's approach becomes the model. If the partnership's process is better, Barwon Water has the documented foundation to adopt and improve it.

Why This Approach Works for Government and Critical Infrastructure

The OIP is deliberately designed for environments with high accountability, complex regulation, and public trust requirements. This is why it succeeds where other AI implementations fail in the public sector:

Accountability is Built In: In the private sector, an incorrect AI recommendation can be costly. In government, it can be a scandal. In critical infrastructure, it can be dangerous. The OIP makes clear who is accountable for each agent and each decision. Every output is auditable. The government can explain why a decision was made, who approved it, and what rules governed it.

Regulation is a Feature, Not a Constraint: Regulated environments need documentation. Privacy, safety, environmental, and financial regulations all demand that organisations explain their decisions. The OIP requires this documentation anyway. Compliance becomes easier, not harder.

Institutional Knowledge is Preserved: Government and critical infrastructure organisations lose people. Staff retire, move between agencies, or leave for the private sector. Knowledge walks out the door. The OIP captures that knowledge in a form that survives staff turnover. The organisation becomes less dependent on individuals and more dependent on systems and processes.

Change Management is Smoother: When organisations restructure (like the water corporation consolidation), people resist change because they fear losing control or identity. But if the new organisation is built on the documented processes of the existing organisation, people see continuity rather than disruption. The documented process is the bridge between old and new.

GOVERNING PRINCIPLES

Eight Governing Principles

These principles are non-negotiable. They govern every decision about what to build, how to build it, and how it operates in production. They are ordered by priority: where principles conflict, lower-numbered principles take precedence over higher-numbered ones.

PRINCIPLE 01
Institutional Knowledge Capture is the Primary Objective
The Knowledge System exists to preserve and operationalise institutional knowledge that currently resides in the heads of experienced staff. This objective has value independent of AI. Even if no agents are ever deployed, an organisation that has documented its processes, decision rules, and contextual knowledge is materially more resilient than one that has not. In the context of the Silver Report's recommendations for water corporation consolidation, this principle takes on urgent strategic significance. The Process Register becomes the definitive map of what the organisation does. The Knowledge System preserves why it does it that way. Together, they ensure that institutional intelligence survives structural change.
Application
Every process capture activity prioritises extracting decision rules, exception handling, context, and rationale. Documentation is written for both AI execution and human understanding. When five water corporations consolidate into a Western Regional partnership, Barwon Water's documented processes and knowledge system become its negotiating asset: proof of what it does, why it does it that way, and what the consequence would be of changing it.
PRINCIPLE 02
Human Accountability is Non-Delegable
A named human being is accountable for every output and action of every agent, at every autonomy level, at all times. AI does not hold accountability. Accountability cannot be diffused across a team or assigned to the platform itself. The governance framework must make clear who is responsible for each agent and each process.
Application
When an agent at Level 3 executes a multi-step workflow that produces an incorrect regulatory submission, the accountable officer for that process bears responsibility, not the AI, not the platform, and not the technology team. Every agent in the system is assigned to a named human. Audit trails clearly show who was responsible for setting up, monitoring, and maintaining the agent.
PRINCIPLE 03
Consequence Determines Autonomy
The maximum autonomy level of any agent is constrained by the consequence rating of its process, not by technical capability. The fact that an agent could operate at Level 4 does not mean it should. Public health, safety, environmental, and regulatory processes carry consequence ceilings that technology cannot override.
Application
A process rated as "Critical" (such as drinking water treatment decisions or SCADA control) cannot operate above Level 1 regardless of how capable the AI is. The Consequence Rating table in Chapter 5 defines maximum autonomy levels. This constraint is non-negotiable and protects the organisation from the temptation to over-automate high-risk processes.
PRINCIPLE 04
Standardise Before You Automate
AI amplifies what already exists. If a process is inconsistent, undocumented, or dependent on tribal knowledge, automating it produces inconsistent, undocumented automated failures at scale. The OIP methodology requires processes to reach a minimum documentation and standardisation threshold before AI agents are assigned. This is a feature, not a barrier: the documentation effort itself is valuable.
Application
Before a process moves to Level 2 or higher automation, it must have complete documentation per the standards in Chapter 7. Process owners must have standardised the process and removed variation, workarounds, and exceptions. The Silver Report's Chapter 7 explicitly finds that digital reforms fail when centralisation comes before standardisation. This principle is the lesson the Victorian Government has identified from decades of failed reform efforts.
PRINCIPLE 05
Transparency Over Sophistication
An agent that produces an explainable, auditable, traceable output at Level 2 is preferred over an agent that produces a superior output at Level 4 but cannot explain its reasoning. In regulated environments, the ability to demonstrate why a decision was made is often more important than the quality of the decision itself. The OIP prioritises auditability at every level.
Application
Agents must be able to provide clear reasoning for every output. If a process cannot be documented simply enough for a human to understand, it is not ready for automation. All agent interactions are logged with full input and output records for Freedom of Information compliance and audit purposes.
PRINCIPLE 06
Regulatory Alignment by Design
The OIP is designed for deployment in Victorian Government and regulated critical infrastructure environments. Compliance with the Victorian Protective Data Security Standards (VPDSS), the Privacy and Data Protection Act 2014 (Vic), the Australian Energy Sector Cyber Security Framework (AESCSF), and sector-specific legislation is not an afterthought but a design constraint that shapes architecture, data handling, access controls, and operational procedures from the outset.
Application
Process documentation references relevant regulatory requirements. Privacy, data sovereignty, transparency, and fairness requirements shape process design before implementation begins. Every design choice, every autonomy level, every data flow is evaluated against the regulatory frameworks detailed in Chapter 11.
PRINCIPLE 07
Incremental Value Over Transformational Promise
The OIP delivers value incrementally. A completed Process Register is valuable. A documented Knowledge System is valuable. A single well-governed Level 1 agent that saves 30 minutes per day is valuable. The framework resists the temptation to defer all value to a future state where everything is connected and autonomous. Each phase delivers measurable, standalone benefits.
Application
Implementation follows phases 0-3 sequentially. Organisations do not skip to Level 4 agents or attempt enterprise-wide transformation in Phase 1. Each phase delivers standalone value that justifies the investment regardless of whether subsequent phases proceed. By the end of Phase 1, the organisation has a complete Process Register and the beginnings of a Knowledge System, with initial measurement capability.
PRINCIPLE 08
Technology Serves the Organisation, Not the Other Way Around
The platform architecture abstracts the AI model layer so that the underlying technology can change without rebuilding the platform. What the organisation owns is its process knowledge, its governance framework, and its readiness assessment. Technology decisions, including which AI models to use and how infrastructure is shared, will evolve as the sector matures and as the Silver Report's recommendations for shared digital platforms take shape. The OIP is designed to work with whatever technology environment the organisation operates in.
Application
Agents are defined by their System Prompt and capability requirements, not by a specific LLM provider or infrastructure choice. Procurement decisions are separated from agent design. The Process Register, Knowledge System, and governance framework are organisational assets that survive any technology platform change. When Digital Government Services (DGS) leads shared platform decisions across the Victorian public sector, Barwon Water's operational intelligence transfers seamlessly to whatever platform is chosen.
AGENT MATURITY

The Agent Maturity Scale

The Agent Maturity Scale defines six levels of AI agent autonomy (0 to 5). This scale is the primary mechanism for governing what agents can and cannot do within the OIP. Each level is defined by its capabilities, its constraints, the governance controls required, and the systems access it entails.

Critical Distinction: Read vs Write
The most consequential boundary in agent autonomy is not between simple and complex, but between read and write. An agent that reads data from a system to inform its output is fundamentally different from one that writes data back to that system. This framework treats the read/write boundary as a primary governance control throughout all six levels.
Level 0: Static Automation
Capabilities: Hardcoded rules, scheduled triggers, deterministic output
System Access: Read and write hardcoded to specific systems. No AI decision-making.
Data Flow: Trigger condition met → hardcoded workflow → system update
Human Role: Designs the rules, monitors outcomes, adjusts hardcoded logic
Governance: Version control, audit logging, basic approval for rule changes
Example: Invoice processing rule: "If invoice from vendor X and amount less than 1000, approve automatically." Deterministic, no AI.
Consequence Ceiling: Any (consequence is determined by what the rules do, not by AI)
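A Level 0 rule like the invoice example is just deterministic code. This hypothetical sketch (the vendor set and the 1000 threshold are assumptions taken from the example above) shows how little is involved: no model, no inference, only a rule a human wrote and can version:

```python
def auto_approve_invoice(vendor: str, amount: float,
                         trusted_vendors: set[str]) -> bool:
    """Level 0 static automation: a hardcoded, deterministic rule.

    No AI is involved. The rule is authored, versioned, and audited
    by a human; identical inputs always produce identical outputs.
    """
    return vendor in trusted_vendors and amount < 1000
```

Because the logic is deterministic, governance reduces to version control and audit logging of rule changes, exactly as the Governance row above states.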
Level 1: AI Assistant
Capabilities: Text generation, analysis, summarisation, document review, recommendation
System Access: None. Read access to reference documents only. No API access.
Data Flow: Human provides context and question → AI generates response → human uses or discards
Human Role: Reviews and validates every output before use. Makes the actual decision.
Governance: Logging of interactions, clear terms of reference, regular output audits
LLM Requirements: Standard LLM. No special safety constraints needed beyond system prompt.
Example: Policy advisor to executive: "Summarise the three main strategic implications of the proposed water pricing framework." Executive reviews and uses.
Consequence Ceiling: Any (human review before use)
Level 2: Tool-Using Agent
Capabilities: Read data from multiple systems, synthesise information, make recommendations, call read-only APIs
System Access: Broad read-only. Can query databases, APIs, and reference systems. Cannot write.
Data Flow: Human trigger → agent reads multiple systems → agent synthesises and recommends → human acts
Human Role: Initiates and reviews. Makes final decisions and owns the action.
Governance: System access controls, audit logging, exception reporting, regular output validation
LLM Requirements: Standard LLM with tool use capability. Chain-of-thought reasoning recommended.
Example: Maintenance scheduling assistant: "Identify all assets due for maintenance in the next 30 days, cross-reference with weather forecasts and crew availability, recommend optimal scheduling." Recommends, human approves.
Consequence Ceiling: Any (human approval required before action)
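A minimal sketch of the Level 2 pattern, with the LLM synthesis step stubbed out. The tool names and return shape are hypothetical; what the sketch shows is the structural property that makes Level 2 governable: the agent's tools expose only query functions, so no write path exists anywhere in the loop.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ReadOnlyTool:
    """A Level 2 tool: the agent may query it, never mutate through it."""
    name: str
    query: Callable[[str], str]


def recommend(tools: list[ReadOnlyTool], question: str) -> dict:
    """Gather read-only context, then return a recommendation for a human.

    Shape of the Level 2 loop: read → synthesise → recommend. In a real
    agent an LLM would synthesise `context`; here that step is a stub.
    """
    context = {tool.name: tool.query(question) for tool in tools}
    return {
        "question": question,
        "context": context,
        "recommendation": "draft recommendation for human review",
        "actioned": False,   # Level 2: the human acts, never the agent
    }
```

Because `actioned` can never be true inside the agent, a wrong recommendation is caught at review, before any system changes, which is exactly why the consequence ceiling at this level remains "Any".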
Level 3: Workflow Agent
Capabilities: Execute multi-step workflows, read data, write to systems with human approval per step
System Access: Read + approved write. Writes only to pre-approved systems and fields. All writes subject to HITL approval.
Data Flow: Trigger → agent executes steps → agent requests approval for each write → human approves → system updated
Human Role: Reviews and approves each consequential action. Maintains veto authority throughout execution.
Governance: All Level 2 governance plus: approval workflow for each write, step-by-step audit trail, rollback capability
LLM Requirements: Advanced LLM with tool use, memory, and explicit approval-seeking behaviour.
Example: Customer account creation: Agent collects information, validates compliance, prepares account, requests human approval before creating account in billing system. Human reviews and approves/rejects each step.
Consequence Ceiling: High (all writes require human approval)
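The per-step approval gate that defines Level 3 can be sketched as a wrapper around every write. The names here are hypothetical: `approve` stands in for a human review step (a UI in practice, injected here so the control is testable), and the audit list stands in for the step-by-step audit trail required above.

```python
from typing import Callable


def hitl_write(description: str,
               write_fn: Callable[[], None],
               approve: Callable[[str], bool],
               audit_log: list[str]) -> bool:
    """Level 3 pattern: every write is gated by a human approval callback.

    The write only executes if the human approves; either way, the
    decision is appended to the audit trail.
    """
    if approve(description):
        write_fn()
        audit_log.append(f"APPROVED: {description}")
        return True
    audit_log.append(f"REJECTED: {description}")
    return False
```

The design point is that the gate sits between the agent and the system of record: the agent can prepare a write but cannot reach the system except through this wrapper, which is how veto authority is preserved at every step.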
Level 4: Supervisory Agent
Capabilities: Autonomous decision-making within bounded domains, continuous process monitoring, self-initiated corrective actions
System Access: Read + bounded autonomous write. Writes limited to specific fields/processes. Hard constraints at infrastructure level.
Data Flow: Continuous monitoring → anomaly detected → agent decides and acts within bounds → human reviews retroactively
Human Role: Defines operating parameters and decision boundaries. Reviews agent actions in batches. Maintains authority to override.
Governance: All Level 3 governance plus: formal decision rules, hard constraints at infrastructure level, retroactive exception reporting, monthly effectiveness reviews
LLM Requirements: Advanced LLM with explicit constraint adherence, memory of decisions, anomaly detection capability.
Example: Network pressure monitoring: Agent continuously monitors SCADA feeds, detects anomalies, initiates alerts and manual isolation protocols. Cannot act on critical systems without human intervention. Human reviews all alerts weekly.
Consequence Ceiling: Medium (bounded autonomy within hard constraints)
Level 5: Autonomous System
Capabilities: Goal decomposition, multi-agent coordination, cross-system optimisation, long-term planning
System Access: Broad read and write. System-enforced boundaries at infrastructure level.
Data Flow: Strategic objectives set → agent decomposes into tactical goals → agent executes and adapts → human monitors system health
Human Role: Sets objectives and boundaries. Reviews outcomes and system health. Monitors the systems that monitor the agents. The human role changes shape at this level but does not diminish. Oversight becomes more sophisticated, not less.
Governance: All Level 4 governance plus: formal safety case, independent review, regulatory approval where applicable, real-time behavioural monitoring, automatic shutdown triggers, continuous learning audits
Recommendation: NOT RECOMMENDED for Victorian Government or critical infrastructure in the near term. Will be reviewed annually as capability matures.
A Note on Level 5 Oversight
It is a common misconception that higher autonomy means less oversight. The opposite is true. At Level 3, a human reviews each consequential action. At Level 5, the human monitors the system that monitors the agent, reviews behavioural patterns across thousands of transactions, and maintains authority to intervene at any point. The monitoring infrastructure at Level 5 is more sophisticated than at Level 3, not less. The investment in oversight is greater, not smaller.

Understanding the Boundaries Between Levels

The distinctions between levels are more than academic. They determine what governance structures are needed, what training is required, and what risks must be managed. Understanding the key boundaries helps you classify processes correctly:

The Level 0-1 Boundary: AI vs. Automation

Level 0 is not AI at all. It is traditional hardcoded automation. If you have a rule like "If invoice is less than $1000 from a trusted vendor, approve automatically", that is Level 0. No AI is involved. The rule was written by a human and applies deterministically. Level 1 is different. A human asks a question, the AI generates an answer, the human uses or discards it. AI is involved but in a purely advisory capacity.

The Level 1-2 Boundary: System Access

This is a critical boundary. Level 1 agents cannot access external systems. They are chatbots. They can only read information provided by a human in the conversation. Level 2 agents can read from external systems (databases, APIs, reference documents). This allows them to gather information, synthesise it, and make recommendations. But they still cannot write. They cannot change anything. This is a major governance difference because a Level 2 agent that makes a wrong recommendation can be caught and corrected before any system is updated.

The Level 2-3 Boundary: Write Authority

Level 2 agents cannot write to systems. Level 3 agents can write, but only with human approval per step. This is a major escalation. You are now allowing the AI to make changes to organisational systems, albeit with human approval before each change. The consequence ceiling drops from "any" to "high" because write failures are harder to undo than recommendation failures.

The Level 3-4 Boundary: Autonomous Action

Level 3 agents require human approval before taking consequential action. Level 4 agents can take bounded autonomous action without per-transaction approval. A human has set the boundaries, but the agent operates within them without asking. This requires confidence in the process model, the agent's understanding of the boundaries, and the infrastructure to monitor that the agent stays within bounds. Consequence ceiling drops to "medium" because autonomous failures can have broader impact before they are detected.

The Level 4-5 Boundary: Strategic Direction

This is the boundary between bounded autonomy and open-ended autonomy. At Level 4, the boundaries are set by humans and are narrow. At Level 5, the agent decomposes its own goals and adapts its strategy within very broad parameters. This level is not recommended for government or critical infrastructure and is unlikely to be deployed in this framework for years to come.

Practical Examples of Level Progression

Consider a process like "Approve Small Capital Expenditure Requests". Here is how it might progress through levels as the organisation gains confidence:

  • Today (Pre-OIP): Request arrives, manager reads it, checks budget, makes a decision, approves or rejects. Takes 30 minutes.
  • Phase 1 (Level 1): Request arrives, agent summarises the request, lists relevant policy, highlights financial impact, manager reads and decides. Takes 10 minutes. Agent does research work.
  • Phase 2 (Level 2): Request arrives, agent reads from finance system, checks available budget, reads policy database, queries historical approvals for similar items, recommends approval or rejection with reasoning, manager reviews and decides. Takes 5 minutes.
  • Phase 3 (Level 3): Request arrives, agent validates completeness, queries systems, prepares approval document, asks manager to review and approve the prepared document. If manager approves, agent submits to finance system. Takes 3 minutes.
  • Never (Level 4+): Not recommended for approval processes. Human authority over spending is too important.

Notice that the process does not automatically graduate to higher levels. Some processes should stay at Level 1 or 2 because human judgment is essential or consequence is high. The OIP is about finding the right level for each process, not about maximising autonomy.

Agent Maturity Summary Table

Level 0, Static Automation: System access hardcoded read/write. Initiated by schedule or trigger. Autonomous write: yes (deterministic). Maximum consequence: Any.
Level 1, AI Assistant: No system access. Initiated by human only. Autonomous write: none. Maximum consequence: Any.
Level 2, Tool-Using Agent: Read-only API access. Initiated by human only. Autonomous write: none. Maximum consequence: Any.
Level 3, Workflow Agent: Read plus approved write. Initiated by human or trigger. Autonomous write: no (human-approved per step). Maximum consequence: High.
Level 4, Supervisory Agent: Read plus bounded write. Self-initiated within bounds. Autonomous write: bounded. Maximum consequence: Medium.
Level 5, Autonomous System: Broad read/write within system-enforced bounds. Self-directed. Autonomous write: broad. Not recommended.

Three-Dimensional Assessment Framework

Every process in the OIP is assessed across three independent dimensions. The combination of these three ratings determines the implementation approach, governance requirements, and deployment priority for each process. No single dimension is sufficient on its own. A process can be technically ready but strategically low priority. It can be strategically important but not yet ready. The framework accounts for all combinations.

Dimension 1: Target Autonomy Level (0 to 5)

The target level defines what the agent should be capable of when fully implemented. Determined by process characteristics:

  • Is the process deterministic or does it require judgement? If deterministic, consider Level 0-2. If judgement is required, Level 3+.
  • Does the process require data from multiple systems? If yes, Level 2 minimum.
  • Does the process involve writing to systems or making commitments on behalf of the organisation? If yes, Level 3 minimum.
  • Would the process benefit from continuous autonomous monitoring and corrective action? If yes, Level 4.
  • Does the process require strategic goal decomposition across multiple domains? Extremely rare in current environment. Level 5 not recommended.
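The questions above can be read as a sequence of minimum levels. This is a sketch under a stated simplification: it assumes the questions reduce to boolean flags, whereas in practice target autonomy is set through a facilitated assessment, not a four-flag function.

```python
def target_autonomy(requires_judgement: bool,
                    multi_system_data: bool,
                    writes_to_systems: bool,
                    continuous_monitoring: bool) -> int:
    """Apply the Dimension 1 questions as cumulative minimum levels."""
    # Deterministic work can stay at Level 0; judgement implies an agent
    # operating at Level 3 or above.
    level = 3 if requires_judgement else 0
    if multi_system_data:
        level = max(level, 2)      # needs read access to multiple systems
    if writes_to_systems:
        level = max(level, 3)      # writes require per-step approval
    if continuous_monitoring:
        level = max(level, 4)      # bounded autonomous corrective action
    return level                   # Level 5 unreachable: not recommended
```

Note that the function can never return 5: the scale's top level is excluded by construction, mirroring the recommendation above.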

Dimension 2: Consequence Rating

Consequence rating defines the potential impact if an agent produces an incorrect or unintended output. This ceiling constrains maximum autonomy regardless of technical capability:

Rating | Definition | Examples | Maximum Autonomy
Low | Error caught in normal workflow. Minor rework. No impact on external stakeholders. | Internal reports, meeting notes, draft correspondence, policy analysis | Level 5
Medium | Significant rework required. May affect service delivery or internal processes. Recoverable. | Budget analysis, project reporting, maintenance scheduling, staff communications | Level 4
High | Could affect external stakeholders or regulatory compliance. Would require escalation to resolve. | Customer billing, regulatory submissions, water quality reporting, environmental notifications | Level 3
Critical | Could endanger public health, safety, or critical infrastructure integrity. Non-recoverable without major intervention. | Drinking water treatment decisions, SCADA/OT control, dam safety operations, chemical dosing, fire suppression system activation | Level 1 only
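The ceiling can be enforced mechanically wherever autonomy levels are assigned. A minimal sketch, using the ratings from the table above:

```python
# Maximum autonomy permitted for each consequence rating (from the table).
CONSEQUENCE_CEILING = {"Low": 5, "Medium": 4, "High": 3, "Critical": 1}

def effective_autonomy(target_level: int, consequence: str) -> int:
    """The ceiling constrains autonomy regardless of technical capability."""
    return min(target_level, CONSEQUENCE_CEILING[consequence])
```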

Dimension 3: Readiness Score

Readiness score assesses how prepared a process is for AI augmentation today. Scored 1 to 5:

Score | Assessment | Key Indicators | Action Required
1: Not Ready | Process is undocumented, inconsistent, and dependent on tribal knowledge. No system APIs available. | Process varies significantly between executions. No documented standards. Key staff cannot articulate decision rules. Multiple manual workarounds. | Not suitable for AI in any form. Focus on process standardisation and documentation first.
2: Minimal Readiness | Process is partially standardised. Some documentation exists. Data exists but APIs not yet available. | Basic process documentation exists. Some variations remain. Decision rules understood but not fully documented. Workarounds present. | Begin with Level 1 assistance only. Complete process documentation before considering higher levels.
3: Moderate Readiness | Process is standardised with good documentation. System access available for read operations. Data quality is acceptable. | Complete process documentation. Consistent execution. Decision rules documented. System APIs available for read. Data quality assessed and acceptable. | Suitable for Level 2 or Level 3 agents depending on consequence rating. Move to implementation planning.
4: High Readiness | Process is fully standardised with complete documentation. Read and write APIs available. Data quality excellent. Governance framework in place. | Comprehensive documentation including decision trees and exception handling. All APIs available. Data governance framework established. Staff trained on standards. | Suitable for Level 3 or Level 4 agents. Proceed directly to implementation.
5: Fully Ready | Process is optimised, documented, with embedded quality controls and monitoring. Full system integration established. | All Level 4 features plus: automated quality checks, real-time monitoring infrastructure, continuous improvement mechanism, no remaining variations. | Suitable for Level 4 or Level 5 agents. Proceed to implementation with advanced monitoring framework.

Priority Matrix: Combining All Three Dimensions

The combination of Target Autonomy, Consequence Rating, and Readiness Score produces a priority matrix that guides implementation sequencing:

Readiness | High Target Autonomy (3+) + Low Consequence | Low Target Autonomy (0-2) or Medium Consequence | Critical Consequence
Fully Ready (4-5) | Immediate: Deploy Level 3+ agent. High value, low risk. | Immediate: Deploy Level 1-3 agent. Foundation capability. | Immediate: Deploy Level 1 assistant only. Augmentation not autonomy.
Moderately Ready (3) | Near-Term: Complete remaining readiness work, deploy Level 2-3 agent in 2-4 weeks. | Near-Term: Complete documentation, deploy Level 1-2 agent in 1-3 weeks. | Medium-Term: Complete exception documentation, deploy Level 1 only after review.
Minimally Ready (2) | Medium-Term: Significant standardisation work required. Plan 2-3 months before implementation. | Medium-Term: Complete documentation and standardisation. Plan 4-8 weeks. | Deferred: Complete Level 4 documentation standards before any agent consideration.
Not Ready (1) | Deferred: This process is not suitable for AI without major standardisation effort. | Advisory: AI is not recommended until process is documented and standardised. | Do Not Automate: Focus entirely on process standardisation. Automation not appropriate.
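One way to keep the matrix auditable is to encode it directly, so any process classification can be recomputed and checked. The function below is an illustrative sketch; the readiness bands and the column-selection rule are read off the table above, not a mandated implementation.

```python
def classify_priority(readiness: int, target_autonomy: int,
                      consequence: str) -> str:
    """Illustrative encoding of the priority matrix."""
    # Column selection: Critical consequence dominates; otherwise high
    # target autonomy (3+) with Low consequence selects the first column.
    if consequence == "Critical":
        col = 2
    elif target_autonomy >= 3 and consequence == "Low":
        col = 0
    else:
        col = 1
    matrix = {
        "fully":    ("Immediate", "Immediate", "Immediate"),
        "moderate": ("Near-Term", "Near-Term", "Medium-Term"),
        "minimal":  ("Medium-Term", "Medium-Term", "Deferred"),
        "not":      ("Deferred", "Advisory", "Do Not Automate"),
    }
    band = ("fully" if readiness >= 4 else
            "moderate" if readiness == 3 else
            "minimal" if readiness == 2 else "not")
    return matrix[band][col]
```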

Using the Priority Matrix: Practical Examples

To illustrate how the three-dimensional framework actually works in practice, here are examples of how different processes from a water utility would be prioritised:

Example 1: "Generate Monthly Customer Bills" Process

  • Target Autonomy: Level 2 (read customer data, consumption data, apply billing rules, generate bill document)
  • Consequence Rating: High (incorrect bills affect customer revenue, trust, and regulatory compliance with ESC)
  • Readiness Score: 4 (process is well-documented, billing system APIs are mature, data quality is high)
  • Priority Classification: IMMEDIATE (fully ready, high business value, manageable consequence through Level 2 governance)
  • Autonomy Ceiling: Level 2 (the target level; the High consequence rating would cap autonomy at Level 3 regardless of readiness)
  • Implementation Timeline: Week 10-12 (design and build Level 2 agent, test with 100 sample bills, deploy with human approval per bill)

Example 2: "Respond to Main Break Report" Process

  • Target Autonomy: Level 1 (summarise break information, classify urgency, draft initial response)
  • Consequence Rating: Critical (main breaks threaten water service to customers and require immediate dispatch)
  • Readiness Score: 2 (process varies by situation, decision rules not fully documented, some staff informally decide priority)
  • Priority Classification: DEFERRED (despite the critical consequence, readiness is too low; the matrix requires documentation work before any agent consideration)
  • Autonomy Ceiling: Level 1 only (consequence prevents Level 2+. Agent can only assist, not decide)
  • Implementation Timeline: Week 25-28 (first, conduct process standardisation work to document how priority actually gets decided. Then build Level 1 assistant agent to draft response summaries)

Example 3: "Conduct Employee Performance Review" Process

  • Target Autonomy: Level 1 (gather performance data, draft talking points for manager)
  • Consequence Rating: Medium (affects employee development and compensation decisions)
  • Readiness Score: 3 (process is somewhat standardised, but depends heavily on manager judgement and relationship)
  • Priority Classification: NEAR-TERM (ready for Level 1 implementation, low complexity, valuable to managers)
  • Autonomy Ceiling: Level 1 only (this is fundamentally a human decision process. AI assists, never decides)
  • Implementation Timeline: Week 8-12 (build simple Level 1 agent to gather recent feedback comments, performance metrics, and prepare draft talking points for manager)

Example 4: "Update Network Asset Register" Process

  • Target Autonomy: Level 0 (rule-based trigger: when field crew completes work order, automatically update asset register with new pipe details)
  • Consequence Rating: Medium (incorrect asset data affects maintenance planning and compliance)
  • Readiness Score: 5 (process is fully automated, APIs are mature, data flows are clear)
  • Priority Classification: IMMEDIATE (fully ready, low risk, deterministic update logic)
  • Autonomy Ceiling: Level 0 (deterministic rule-based automation, not AI-assisted)
  • Implementation Timeline: Week 6-8 (design and implement hardcoded automation rule, not an AI agent)
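Example 4 is worth sketching because Level 0 automation contains no AI at all: it is a hardcoded rule fired by an event. The event shape and field names below are hypothetical placeholders, not an actual work-order schema.

```python
def on_work_order_completed(event: dict, asset_register: dict) -> None:
    """Level 0 rule: when a field crew completes a work order,
    copy the recorded pipe details into the asset register."""
    if event.get("status") != "Complete":
        return  # the rule fires only on completion events
    asset_register.setdefault(event["asset_id"], {}).update({
        "pipe_material": event["pipe_material"],
        "pipe_diameter_mm": event["pipe_diameter_mm"],
        "last_work_order": event["work_order_id"],
    })
```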

Re-Assessment Over Time

A process does not stay in the same priority classification forever. As the organisation improves:

  • Readiness increases: As a process is documented and standardised, its readiness score increases from 2 to 3 to 4, which may move it from "Medium-Term" or "Deferred" to "Immediate".
  • Target autonomy may increase: As a process stabilises and agents prove themselves, target autonomy may increase from Level 1 to Level 2, enabling more automation.
  • Consequence ceiling may change: Regulatory changes or business changes may alter consequence rating. A change in water quality regulations could move a process from "Medium consequence" to "Critical".
  • Business priorities shift: A process that was "Medium-Term" or "Deferred" may become "Immediate" if business circumstances change (a staff shortage in that department, an increase in customer complaints, and so on).

The governance committee should conduct a re-assessment of the priority matrix annually or when significant business changes occur. This ensures the OIP roadmap remains aligned with organisational needs.

PROCESS ARCHITECTURE

Process Taxonomy Architecture

The Process Register is organised using a consistent taxonomy that allows every process in the organisation to be uniquely identified, classified, and retrieved. The taxonomy serves four functions: it creates a complete inventory, it enables consistent governance, it allows filtering and reporting, and it provides a foundation for AI agent assignment.

Hierarchy: Division, Department, Process Group, Process

Every process fits into this four-level structure:

Level | Definition | Examples | Typical Number
Division | Major organisational line of business. Usually corresponds to an executive portfolio or major functional area. | Operations, Commercial, Corporate Services, Infrastructure | 5-8 per organisation
Department | Sub-division within a major line of business. Usually reports to a manager. | Network Operations, Customer Service, Finance, Human Resources | 30-60 across the organisation
Process Group | Cluster of related processes within a department. May span multiple teams. | Water Distribution, Customer Billing, Budget Management, Recruitment | 150-300 across the organisation
Process | Discrete, repeatable unit of work that produces a defined output. The atomic unit of the OIP. | Emergency Main Repair, Generate Monthly Bill, Approve Capital Expenditure, Conduct Interviews | 800-1500 across the organisation

Naming Conventions

Consistent naming is essential for retrieval, governance, and communication. Follow these rules:

  • Process names use active verbs in title case: "Approve Purchase Requisition", "Generate Water Quality Report", "Respond to Billing Inquiry"
  • Do not use technical terms that would be meaningless to non-technical staff: use "Handle Leak Reports" not "Parse GIS Coordinates"
  • Do not use acronyms in process names: use "Wastewater Treatment" not "WWT Process"
  • Be specific enough to distinguish from similar processes: "Approve Small Capital Projects" and "Approve Major Capital Projects" are distinct
  • Keep names to 5-7 words maximum: brevity aids retrieval and communication
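Several of these conventions can be checked mechanically when a process is registered. The sketch below is an assumption about how such a check might be tooled: the acronym list is a placeholder an organisation would maintain itself, and the "active verb" rule still needs a human eye.

```python
import re

KNOWN_ACRONYMS = {"WWT", "GIS", "SCADA"}  # placeholder list

def check_process_name(name: str) -> list:
    """Return a list of convention violations (empty list = name passes)."""
    problems = []
    words = name.split()
    if len(words) > 7:
        problems.append("name should be at most 7 words")
    if words and not re.match(r"[A-Z]", words[0]):
        problems.append("name should be in title case, starting with a verb")
    if any(w in KNOWN_ACRONYMS for w in words):
        problems.append("avoid acronyms in process names")
    return problems
```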

Granularity: The Right Level of Detail

Processes should be granular enough to be independently documented and assigned to agents, but not so granular that they become micro-tasks. Guidelines:

  • A process takes 15 minutes to several hours to complete
  • A process produces a clear, defined output that others can use
  • A process can be understood by a person unfamiliar with the organisation in 2-3 paragraphs
  • A process has a clear start trigger and end condition
  • A process can be performed by a single team or person (though may involve consultation)

Enterprise Processes: Cross-Cutting Functions

Some processes cut across multiple departments and divisions. These are classified as Enterprise Processes and governed separately:

  • Enterprise Finance: Budget approval, financial reporting, compliance, audit support
  • Enterprise Human Resources: Recruitment, performance management, separation processes
  • Enterprise Governance: Board reporting, executive briefing, regulatory notification
  • Enterprise Technology: System access requests, incident escalation, infrastructure changes

Enterprise processes have an explicit owner (usually at director level) and are managed through a coordinated governance committee rather than individually by departments.

Examples of Process Taxonomy in Practice

To illustrate how the taxonomy works, here are some examples of how Barwon Water's processes might be classified:

Division | Department | Process Group | Process
Operations | Network Operations | Main Repairs | Respond to Main Break Report
Operations | Network Operations | Main Repairs | Prioritise Emergency Repairs
Operations | Network Operations | Main Repairs | Complete Main Repair Work
Operations | Water Treatment | Water Quality Monitoring | Analyse Daily Quality Results
Commercial | Customer Service | Inquiry Management | Respond to Billing Inquiry
Commercial | Customer Service | Complaint Management | Investigate Water Quality Complaint
Commercial | Billing | Customer Billing | Generate Monthly Customer Bills
Corporate Services | Finance | Budget Management | Approve Purchase Requisition
Corporate Services | Human Resources | Recruitment | Screen Job Applications

Taxonomy Maintenance and Governance

The Process Register is not static. As the organisation evolves, processes change, merge, or are created. The governance committee is responsible for maintaining the taxonomy. When a new process is identified, it goes through a brief assessment:

  • What division and department owns this process?
  • What process group does it belong to?
  • Has this process been documented before? (Avoid duplication)
  • Is this genuinely a new process or a variation of an existing process?
  • If this process is created, what other processes does it affect?

New processes are added to the Process Register in the next documentation cycle. Existing processes that are no longer performed are marked as archived rather than deleted, so historical records are preserved.

The Value of a Complete Taxonomy

A complete, well-maintained taxonomy provides benefits beyond AI implementation. It allows the organisation to:

  • Understand where effort is being spent (which departments have the most processes?)
  • Identify process duplication (are similar processes being performed in different departments?)
  • Support change management (when a system is implemented, which processes are affected?)
  • Manage consolidation (which processes are unique to Barwon Water and which are common across the partnership?)
  • Allocate resources (where should we focus documentation and AI development effort?)
  • Train new staff (what are all the processes this new person needs to understand?)

Documentation Standards

Documentation is the foundation of the OIP. The quality of documentation directly determines what agents can do and what oversight is required. The framework uses three documentation levels that correspond to the target autonomy and consequence rating of the process.

Level 1 Documentation: AI Assistant and Tool-Using Agents (Autonomy 0-2)

For processes where agents read information and make recommendations (not taking autonomous action), documentation includes:

Field | Description
Process Name | Following naming conventions. Unique within department.
Owner | Named individual accountable for this process.
Purpose | One sentence describing what this process achieves.
Scope | What is included and explicitly excluded. Boundaries with related processes.
Trigger Condition | What event or condition starts this process?
Primary Steps | 4-8 major steps in sequence. Each step includes the decision, action, and output.
Decision Rules | For each decision point: What is the decision? What information informs it? What are the options?
Output | What does this process produce? Format, content, recipient.
Systems Used | What systems are touched (read or write)?
Frequency | How often does this process run? Per transaction, daily, weekly, on-demand?
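A completeness check for these fields is easy to automate during documentation review. The record format below (a flat dictionary keyed by field) is an assumption, not a mandated schema.

```python
LEVEL_1_FIELDS = [
    "process_name", "owner", "purpose", "scope", "trigger_condition",
    "primary_steps", "decision_rules", "output", "systems_used", "frequency",
]

def missing_level_1_fields(record: dict) -> list:
    """Return the required Level 1 fields that are absent or empty."""
    return [f for f in LEVEL_1_FIELDS if not record.get(f)]
```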

Level 2 Documentation: Workflow Agents (Autonomy 3)

For processes where agents execute workflows with human approval at each consequential step, add to Level 1:

Additional Field | Description
Detailed Steps with Substeps | Each primary step broken into 2-4 substeps. Each substep lists input, action, and validation.
Decision Tree | For each decision: If [condition], then [action]. All conditions must be mutually exclusive and exhaustive.
Exception Handling | For each substep: What could go wrong? What is the recovery procedure?
Data Validation Rules | For each input: What constitutes valid input? What format? What is rejected?
Approval Points | At each step where the agent requests approval: Who approves? What do they check? What is the escalation path if they reject?
Audit Requirements | What information must be logged for compliance and audit purposes?
Performance Standards | What is acceptable performance? Response time, accuracy, completeness.
Dependencies | What other processes must this process wait for? What feeds into and out of this process?

Level 3 Documentation: Supervisory Agents (Autonomy 4+)

For processes where agents take autonomous action within bounded domains, add to Level 2:

Additional Field | Description
Autonomous Decision Boundary | What decisions can the agent make independently? What must escalate to a human?
Hard Constraints | What are the absolute boundaries the agent cannot cross, even if crossing them would optimise the process?
Monitoring Strategy | What is monitored? How frequently? What metrics indicate the agent is operating normally?
Anomaly Definition | What constitutes abnormal behaviour? How does the system detect it?
Escalation Triggers | Under what conditions does the agent escalate to a human? What is the escalation path?
Continuous Learning | How does the agent improve over time? What feedback mechanism exists?
Shutdown Procedure | If the agent malfunctions, what is the procedure to stop it? Who can initiate shutdown?
Recovery Procedure | If the agent takes an incorrect action, what is the recovery procedure?
Regulatory Reference | What regulations or policy constraints apply to this process? How does the agent ensure compliance?

System Prompt Template Structure

System Prompt: The Primary Agent Configuration Mechanism
The system prompt is a structured instruction set that defines an agent's role, context, constraints, and expected behaviour. System prompts are generated from the Level 1-3 documentation and refined through pilot testing. The template in the final chapter provides the required sections. Do not deviate from this structure unless explicitly approved by the OIP Governance Committee.
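Because system prompts are generated from documentation, the generation step itself can be a simple, reviewable transformation. The section headings below are placeholders only; the governed template defines the real structure.

```python
# Hypothetical template: real section names come from the governed template.
PROMPT_TEMPLATE = """\
ROLE: You assist with the process "{process_name}".
PURPOSE: {purpose}
SCOPE: {scope}
CONSTRAINTS: You make recommendations only. You take no autonomous action.
OUTPUT: {output}"""

def build_system_prompt(doc: dict) -> str:
    """Fill the template from Level 1 documentation fields."""
    return PROMPT_TEMPLATE.format(**doc)
```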

Documentation Quality Assurance

Documentation quality directly affects agent quality. Poor documentation leads to agents that make mistakes. The quality assurance process ensures documentation is complete and accurate before agents are built from it:

Quality Checks for Level 1 Documentation

  • Completeness: Are all required fields filled? Does the documentation answer the basic questions: What does this process do? Who does it? When does it happen? What are the inputs and outputs?
  • Clarity: Could someone unfamiliar with the organisation understand this process from reading the documentation? Are technical jargon and acronyms explained?
  • Accuracy: Does the documentation reflect how the process actually works, not how it is supposed to work? Interview 2-3 people who actually perform the process to verify.
  • Sufficiency: Is there enough detail for a new employee to perform the process? Not so much detail that it buries the essential information.

Quality Checks for Level 2-3 Documentation

All Level 1 checks plus:

  • Decision Completeness: For each decision, is every possible outcome documented? If "If X then do Y, else do Z", are X and Z exhaustive (cover all cases) and mutually exclusive (no overlap)?
  • Exception Coverage: Does the documentation address likely exceptions and errors? What happens if a customer calls while their account is being created? What if data is missing? What if a system is unavailable?
  • Data Mapping: For each system integration point, is it clear what data goes in and what comes back? Are API field names mapped to business terminology?
  • Regulatory Alignment: Are relevant regulations cited? Does the process comply with all applicable rules?
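The decision-completeness check can be partially mechanised when the inputs to a decision can be enumerated. The helper below is an assumption about how such tooling might look: it brute-forces a set of test cases against the rules and reports gaps and overlaps.

```python
def check_decision_rules(rules, cases):
    """rules: list of (predicate, action) pairs; cases: iterable of inputs.
    Returns (uncovered_cases, overlapping_cases): both must be empty for
    the rules to be exhaustive and mutually exclusive over the cases."""
    uncovered, overlapping = [], []
    for case in cases:
        matches = [action for predicate, action in rules if predicate(case)]
        if len(matches) == 0:
            uncovered.append(case)
        elif len(matches) > 1:
            overlapping.append(case)
    return uncovered, overlapping
```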

Documentation Review Process

For Level 1: Process owner and one colleague who performs the process review together. They sign off on accuracy. Technology team does a brief check for completeness. Takes about 2 hours per process to document and review.

For Level 2-3: Process owner, subject matter expert, technology team, and compliance/legal (if process has regulatory components) review together. Multi-person session to discuss edge cases and refine decision rules. Takes 6-8 hours per process to document and review thoroughly.

Documentation as Living Document

Documentation does not stay static. Processes change, regulations change, better approaches are discovered. Documentation must be maintained:

  • Minor updates: Process owner can make updates (typo corrections, small clarifications) without formal approval. Documented in version control with timestamp and reason.
  • Significant updates: Changes to decision rules, adding new steps, or changing exception handling require re-review and approval before the updated documentation is published.
  • Agent impact assessment: If documentation changes significantly, assess whether operating agents need system prompt updates. A change in decision rules may require retraining agents or system prompt modifications.
TECHNOLOGY

Technology Positioning

The OIP is deliberately technology-agnostic. This means the organisation owns its process knowledge, governance framework, and assessment methodology. Technology choices, including which AI models to use, where compute runs, and how data flows, are separate decisions that can change over time as the sector matures and the Silver Report's recommendations take effect.

Silver Report Context: Shared Digital Platforms

The Silver Report (Chapter 7) explicitly recommends that Digital Government Services (DGS) lead the development of shared digital platforms across the Victorian public sector. For water corporations consolidating into regional partnerships, this means individual entity technology decisions will be reviewed and may be superseded by partnership-level or state-wide platform decisions.

The OIP is designed to transfer to whatever technology environment the partnership and DGS choose. The Process Register, Knowledge System, and governance framework are organisational assets that survive any technology platform change.

What the Organisation Controls vs. What the Platform Provides

Organisational Control
Process documentation, knowledge capture, decision rules, governance framework, agent specifications, readiness assessment, approval processes, oversight mechanisms, audit requirements, capability roadmap
Platform Responsibility
LLM selection, compute infrastructure, data storage, API layer, version control, user interface, access controls, logging and audit systems, integration with existing systems
Shared Control
Agent testing methodology, performance benchmarking, exception handling procedures, escalation mechanisms, compliance validation, continuous improvement process
Regulatory Alignment
Data sovereignty, privacy compliance, transparency requirements, audit trail requirements, accountability mechanisms, FOI compliance, sector-specific regulatory requirements

Deployment Options: Where AI Actually Runs

The OIP supports multiple deployment models. The choice depends on data classification, regulatory requirements, partnership decisions, and DGS platform recommendations. Each option is assessed for compliance and capability:

Option A: Zero Data Retention (ZDR)
AI runs in cloud environment (e.g., DGS-provided platform). Data is queried from organisational systems via API. Processed data does not persist in the cloud. Suitable for non-sensitive analysis and routine processes. Advantages: scalable, managed by DGS, no data residency issues. Constraints: requires robust API layer, real-time data access dependency.
Option B: Bedrock/Vertex AI in Managed VPC
AWS Bedrock or Google Vertex AI running in a dedicated Virtual Private Cloud with isolated data. Data remains within Australian region. Suitable for sensitive operational data, customer information. Advantages: regional compliance, AWS/Google managed updates, scalable. Constraints: higher cost, requires AWS/Google infrastructure commitment, limited to those cloud providers.
Option C: Azure OpenAI with Australian Data Residency
OpenAI models running on Azure infrastructure located in Australia. All data remains in Australian region. Highest compliance with VPDSS. Suitable for all data classifications including critical. Advantages: complete Australian data residency, OpenAI model capability, native integration with Office 365. Constraints: cost, Azure infrastructure required.
Option D: On-Premises or Hybrid Deployment
AI models run on organisation-controlled infrastructure. Maximum data sovereignty and control. Suitable for extremely sensitive or proprietary processes. Advantages: complete control, no external data transmission, maximum security. Constraints: significant infrastructure investment, requires deep technical expertise, slower feature updates, higher maintenance burden.

Architecture Principles

Regardless of which deployment option is chosen, the architecture adheres to these principles:

  • Layered Independence: The Process Register, Knowledge System, and Governance Framework are independent of the AI platform. They could be implemented with different technology entirely.
  • API-Driven Integration: Agents interact with organisational systems through well-defined APIs. This allows system changes without agent retraining.
  • Auditability First: Every input, decision, and output is logged in a form suitable for audit and compliance. Logging infrastructure is as important as the AI itself.
  • Human Authority Preserved: No matter what level of autonomy an agent operates at, humans maintain authority to intervene, override, or shut down.
  • Graceful Degradation: If the AI platform fails, can the organisation still operate? The answer should be yes. AI augments human capability; it does not replace it.
The Copilot Question: LLM Model Selection
Should we use OpenAI GPT-4, Claude Opus, Gemini, or another model? The OIP does not prescribe a specific LLM. The choice depends on cost, capability, data residency, and partnership platform decisions. What matters is that the process documentation and system prompts are independent of the choice. If the partnership decides to move from one LLM to another, documentation does not change. Only the system prompts are adapted to the new model's characteristics.

Data Sovereignty and Security Considerations

For a water utility managing critical infrastructure, data sovereignty is non-negotiable. The OIP assumes all data about infrastructure, operations, customer information, and regulatory matters must remain under Australian control or the organisation's direct control.

For each process being automated, the technology team and process owner must jointly assess: What data does this agent touch? Where is that data classified under VPDSS? What deployment option is appropriate? Does this process require Australian residency? The answer to the last question determines which deployment options are available.

Technical Architecture: System Components

The OIP platform, regardless of deployment option chosen, includes these core technical components:

Process Register Database

This is a standard database (SQL Server, PostgreSQL, or a cloud database) that stores the complete catalogue of organisational processes. Fields include: Process Name, Division, Department, Process Group, Owner, Description, Target Autonomy, Consequence Rating, Readiness Score, Current Status (Ready/In Development/Pilot/Production/Archived), Date Created, Last Updated. The database is read-accessible to all staff (they need to be able to find processes) and write-accessible to the governance committee and program team only.
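The register maps naturally onto a single table. The sketch below uses SQLite for portability; the production system would be SQL Server, PostgreSQL, or a cloud database as noted, and the column names are an assumption drawn from the field list above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real database
conn.execute("""
CREATE TABLE process_register (
    process_name    TEXT PRIMARY KEY,
    division        TEXT NOT NULL,
    department      TEXT NOT NULL,
    process_group   TEXT NOT NULL,
    owner           TEXT NOT NULL,
    description     TEXT,
    target_autonomy INTEGER CHECK (target_autonomy BETWEEN 0 AND 5),
    consequence     TEXT CHECK (consequence IN ('Low','Medium','High','Critical')),
    readiness       INTEGER CHECK (readiness BETWEEN 1 AND 5),
    status          TEXT DEFAULT 'Ready'
        CHECK (status IN ('Ready','In Development','Pilot','Production','Archived')),
    date_created    TEXT DEFAULT CURRENT_TIMESTAMP,
    last_updated    TEXT DEFAULT CURRENT_TIMESTAMP
)""")
conn.execute(
    "INSERT INTO process_register (process_name, division, department, "
    "process_group, owner, target_autonomy, consequence, readiness) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("Generate Monthly Customer Bills", "Commercial", "Billing",
     "Customer Billing", "Billing Manager", 2, "High", 4))
```

The CHECK constraints enforce the framework's value ranges at the data layer, so an invalid consequence rating or readiness score cannot be recorded regardless of which application writes to the register.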

Knowledge System Repository

This is a document repository (could be SharePoint, Confluence, or custom system) that stores the detailed documentation for each process. For each process: Level 1-3 documentation, decision trees, exception handling procedures, system prompt template, data mapping, regulatory references. The repository is searchable, versioned, and audit-logged. Process owners can update documentation; governance committee reviews and approves updates.

Agent Runtime Environment

This is where agents actually execute. Depending on deployment option, this could be Azure OpenAI, AWS Bedrock in a private VPC, Google Vertex AI, or on-premises infrastructure. The runtime has APIs for: receiving requests from users or triggers, accessing external systems to read/write data, logging interactions and decisions, storing agent state and memory, alerting on anomalies.

Integration APIs

Agents do not access organisational systems directly. Instead, they use well-defined APIs that are hosted on an integration layer. These APIs enforce access controls (an agent can only read/write what it has been authorised to access), rate limiting (prevent agents from overwhelming systems), data transformations (convert between agent request format and system API format), and comprehensive logging. API management is critical for security and auditability.

Audit and Logging System

Every interaction with the OIP is logged: user login, process register access, documentation updates, agent invocation, agent outputs, approvals, incidents. Logs are immutable and archived for compliance and audit purposes. Logs are not accessible to agents themselves (agents cannot cover their tracks). Logs are accessible to compliance, governance committee, and incident investigation teams.
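One illustrative way to make log immutability verifiable in software is a hash chain, where each entry commits to everything before it. A real deployment would rely on a dedicated write-once logging service; this sketch only shows the principle.

```python
import hashlib
import json

class AuditLog:
    """Append-only log; each entry hashes the previous entry's hash."""
    def __init__(self):
        self._entries = []

    def append(self, event: dict) -> None:
        prev_hash = self._entries[-1]["hash"] if self._entries else ""
        payload = json.dumps(event, sort_keys=True) + prev_hash
        self._entries.append({
            "event": event,
            "hash": hashlib.sha256(payload.encode()).hexdigest(),
        })

    def verify(self) -> bool:
        """Recompute the chain; any after-the-fact edit breaks a link."""
        prev_hash = ""
        for entry in self._entries:
            payload = json.dumps(entry["event"], sort_keys=True) + prev_hash
            if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```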

Integration Patterns: How Agents Connect to Systems

Agents do not directly access database systems or operational applications. Instead, they use defined integration patterns:

Read-Only Integration

Agent can query information from systems but cannot modify. Used for Level 1-2 agents that provide recommendations. Example: "Query billing database to show customer account history, but agent cannot create new accounts or change account details."

Write-Through-Approval Integration

Agent can prepare a write operation and request human approval before it is submitted to the system. Used for Level 3 workflow agents. Example: "Agent prepares customer account creation, human reviews and approves, then agent submits to actual system."

Bounded Write Integration

Agent can write to systems within strict bounds enforced at the API layer. Example: "Agent can update the 'Status' field of a repair work order, but cannot modify 'Cost' field. Agent can set status to 'Complete' but not to 'Approved' (only supervisors can do that)."
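Enforced at the API layer, a bounded write reduces to an allowlist of fields and permitted values per agent. The agent identifier, field names, and policy below are hypothetical examples matching the work-order scenario above.

```python
WRITE_POLICY = {
    "repair-status-agent": {
        # May update Status, and only to these values (not "Approved").
        "Status": {"In Progress", "Complete"},
    },
}

def apply_bounded_write(agent_id, record, field, value, policy=WRITE_POLICY):
    """Reject any write outside the agent's allowlisted fields and values."""
    allowed = policy.get(agent_id, {})
    if field not in allowed:
        raise PermissionError(f"{agent_id} may not write field {field!r}")
    if value not in allowed[field]:
        raise PermissionError(f"{agent_id} may not set {field!r} to {value!r}")
    record[field] = value
```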

Event-Triggered Integration

External systems notify the agent when something happens. Agent responds automatically. Example: "When a new customer complaint arrives in the system, the event triggers the complaint response agent to create a response draft."

Security Architecture Principles

The OIP security architecture follows these principles to protect data, infrastructure, and public trust:

  • Defence in Depth: Security controls exist at multiple layers. If one layer fails, others are still in place. Network security, database security, API security, and agent-level constraints are all needed.
  • Least Privilege: Every agent is granted only the minimum data access and system access required to perform its function. An agent that needs to read customer names does not need access to customer financial information.
  • Audit Everything: Every action is logged in a form suitable for audit and compliance. Logs cannot be modified after creation. Logs are accessible to authorised personnel only.
  • Fail Safely: If the agent fails or malfunction is detected, the system defaults to manual processing. Safety is the default, not an afterthought.
  • Transparent Operations: The organisation and regulators can see how agents are operating. System prompts are documented. Decision criteria are explicit. No hidden decision-making.

Testing and Deployment Process

Before an agent goes live, it must pass rigorous testing:

  • Unit testing: does the agent correctly execute its designed logic?
  • Integration testing: does the agent correctly interact with the systems it connects to?
  • Regression testing: does deploying this agent break any existing functionality?
  • User acceptance testing: do actual users find the agent useful and correct?
  • Security testing: can the agent be exploited? Does it properly enforce access controls?

After testing completes, the agent is deployed to production with:

  • Gradual rollout: start with 10% of transactions, then 50%, then 100%.
  • Real-time monitoring: alert if the exception rate or error rate exceeds thresholds.
  • Rollback capability: quickly revert to the previous agent version if problems emerge.
  • Post-deployment review: check-ins at week 1, week 4, and month 3.
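The gradual-rollout stage can be made deterministic by hashing a transaction identifier rather than sampling at random, so the same transaction is always routed the same way. A sketch, with an illustrative function name:

```python
import hashlib

def routed_to_agent(transaction_id: str, rollout_pct: int) -> bool:
    """Route a fixed percentage of transactions to the agent, deterministically.

    Hashing keeps routing stable across retries, which simplifies comparing
    agent-handled transactions against the manual-path baseline.
    """
    digest = hashlib.sha256(transaction_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100 < rollout_pct

# Rollout stages from the deployment plan: 10%, then 50%, then 100%.
sample = [f"txn-{i}" for i in range(1000)]
share_at_10 = sum(routed_to_agent(t, 10) for t in sample) / len(sample)
# A transaction routed to the agent at 10% stays routed at 50% and 100%,
# because its hash bucket does not change between stages.
```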

Technology Platform Requirements

The OIP platform, regardless of deployment option, must provide these technical capabilities:

  • API Gateway: acts as the intermediary between agents and organisational systems, enforcing authentication, authorisation, rate limiting, and request transformation. Why it matters: security. Agents never have direct access to systems; the gateway enforces least-privilege access and prevents abuse.
  • Logging and Audit Trail: every interaction is logged, including user, agent, input, output, decision, timestamp, and the user who reviewed or approved. Why it matters: compliance. Organisations must demonstrate how decisions were made, for FOI, audit, and incident investigation.
  • Access Control: role-based access control (RBAC) determines who can use which agents, who can approve agents, and who can modify documentation. Why it matters: governance. Different roles have different authority; role-based controls prevent unauthorised actions.
  • Monitoring and Alerting: real-time dashboards show agent performance; alerts trigger when the error rate exceeds a threshold, the exception rate spikes, or agent response time degrades. Why it matters: early problem detection. Anomalies are caught quickly, before they affect many transactions.
  • Version Control: all system prompts, process documentation, and agent configurations are versioned, and previous versions are retrievable. Why it matters: auditability and rollback. If an agent malfunctions after a recent change, the previous version can be restored.
  • Data Sovereignty: depending on the deployment option chosen, data may stay within the organisation (on-premises), stay in Australia (Bedrock/Vertex/Azure), or be processed in the cloud with zero retention. Why it matters: regulatory compliance. Different processes have different data residency requirements.
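The RBAC capability can be sketched as a role-to-permission mapping checked before any agent action. The roles and permission names below are hypothetical; in practice they would come from the organisation's identity provider:

```python
# Hypothetical role-permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "staff":         {"use_agent"},
    "process_owner": {"use_agent", "edit_documentation"},
    "governance":    {"use_agent", "edit_documentation", "approve_agent"},
}

def authorise(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorise("governance", "approve_agent")
assert not authorise("staff", "approve_agent")   # staff cannot approve agents
assert not authorise("contractor", "use_agent")  # unknown role: denied
```

The deny-by-default shape matters more than the specific roles: an action not explicitly granted is refused.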
IMPLEMENTATION

Implementation Guide

The OIP is implemented in four sequential phases. Each phase delivers standalone value and prepares the organisation for the next phase. Implementation does not require external consultants, external AI platforms, or large upfront investment. It is designed to be delivered by the organisation itself using existing staff and incremental resources.

Before You Start: Pre-Implementation Checklist

Before Phase 0 begins, confirm that the following conditions are met. If any are missing, delay implementation until they are in place:

  • Executive Sponsorship: Does the General Manager or Executive Sponsor publicly support the OIP? Have they committed to attending governance committee meetings and using the platform themselves? If the top leader is not visibly committed, the rest of the organisation will not take it seriously.
  • Governance Committee Identified: Are the key people who should sit on the governance committee identified? Have they been approached and confirmed their willingness to participate monthly? Without committed governance leadership, decisions will stall.
  • Budget Allocated: Is there budget for the program? At minimum: one full-time OIP Program Lead, 0.5 FTE documentation support per department, and technology team support for system integration. Without budget, the program will fail when other urgent work demands attention.
  • Process Owner Engagement: Have key process owners been consulted? Do they understand what is being asked of them? Do they see the value? Without process owner buy-in, documentation will be poor quality and agents will not be trusted.
  • Technology Infrastructure Plan: Has the CIO confirmed what deployment option will be used (Zero Data Retention, Bedrock/Vertex, Azure OpenAI, On-premises)? Have API requirements been assessed? Without clarity on technology direction, architecture decisions will be delayed.
  • Consolidated Calendar: Governance committee has committed regular meeting dates for the next 12 months. Key stakeholders have blocked time for Phase 0 training and Phase 1 documentation. People are ready to engage.

Phase 0: Foundation (Weeks 1-6)

Objective: Establish governance, train teams, create assessment framework, begin process inventory.

Activities: Appoint OIP Governance Committee. Conduct governance training. Document existing processes for 2-3 pilot departments. Establish baseline measurement. Create assessment templates.

Deliverable: Governance committee active, 50-100 processes documented, framework established.

Standalone Value: The organisation now has a clear picture of what its pilot departments actually do. This intelligence is valuable regardless of whether AI agents are ever deployed.

Phase 1: Process Register (Weeks 7-20)

Objective: Complete Process Register for all divisions. Establish Knowledge System structure. Deploy first Level 1 agents.

Activities: Document all processes across all divisions using Level 1 standard. Hold interviews with process owners. Create process register database. Assess readiness of all processes. Begin knowledge documentation for highest-priority processes. Deploy 5-10 Level 1 assistant agents to pilot teams.

Deliverable: Complete Process Register (800-1500 processes). Knowledge System started on 50-100 priority processes. 5-10 Level 1 agents in pilot.

Standalone Value: The organisation now has a definitive map of what it does. This is valuable for consolidation planning, capability review, and change management. Pilot agents provide intelligence assistance to pilot teams.

Phase 2: Knowledge System (Weeks 21-40)

Objective: Complete Knowledge System for 200+ priority processes. Deploy Level 2-3 agents. Establish mature governance.

Activities: Complete Level 2-3 documentation for priority processes. Conduct readiness assessments for all 800-1500 processes. Deploy 15-25 Level 2-3 agents to priority processes. Establish OIP governance committee meetings and oversight procedure. Begin consolidation impact assessment for each process. Measure effectiveness of Phase 1 agents, refine system prompts, adjust governance as needed.

Deliverable: Complete Knowledge System for 200+ processes. 15-25 Level 2-3 agents in operation. Mature governance framework with monthly oversight. Consolidation readiness assessment for all processes.

Standalone Value: The organisation can now demonstrate institutional knowledge to an incoming partnership. Agents are handling workflow and decision support. Measurable time savings and quality improvements.

Phase 3: Advanced Operations (Weeks 41-60)

Objective: Scale to 50+ agents across all levels. Full operational maturity. Continuous improvement loop established.

Activities: Deploy remaining agents. Complete Knowledge System for all processes with agents. Implement continuous learning loop for agent improvement. Establish cross-agent coordination for processes that span departments. Conduct effectiveness review and plan for next generation agents. Transition OIP from implementation project to standard operational capability.

Deliverable: 50+ agents operating across all levels. Complete Knowledge System for all automated processes. Operational governance embedded in normal business rhythm. Continuous improvement mechanisms active.

Standalone Value: The organisation is now operationally intelligent. It understands what it does, why, and can execute with AI assistance. Ready for consolidation with knowledge and capability preserved.

Implementation Success Factors

People: The Single Largest Success Factor

Technology projects fail when people are not prepared or engaged. The OIP implementation requires deep engagement from process owners, frontline staff, and executives. Three mechanisms build this engagement:

  1. Show Value Early: Deploy first agents in Week 6, not Week 40. Let people experience the value. Pilot agents should save time and make work easier.
  2. Co-Design with Process Owners: Process owners co-design the agents that affect their work. They see the AI as a tool they helped create, not something imposed on them.
  3. Visible Leadership Support: Executives must use the platform. The General Manager should use a Level 1 assistant for strategic briefings. Executives should attend governance committee meetings.

Managing Resistance

Resistance to AI implementation is not irrational. Three common forms appear and require different responses:

Form 1: Fear of Job Displacement - "The AI will replace me." Response: Show that agents augment human work, not replace it. The role changes; the person becomes a supervisor of agents rather than an executor of routine tasks. This is a promotion, not elimination. The organisation needs experienced people more than ever to design, oversee, and refine agents.

Form 2: Skepticism About Feasibility - "AI can't really do what you're claiming." Response: Start with trivial pilots. A Level 1 agent that summarises emails is not a big claim. It is believable. Success builds credibility for more ambitious agents.

Form 3: Loss of Autonomy - "The AI will restrict how I do my work." Response: Involve people in designing the agent that affects their work. If they design it, they control it.

Detailed Implementation Roadmap

This section provides a more detailed view of what each phase actually requires. This is the playbook that the program team will execute against.

Phase 0 Detailed Activities (Weeks 1-6)

Governance Setup: Appoint OIP Governance Committee (8-12 people). Define roles, meeting cadence, decision authority. Establish escalation path for disputes. Create OIP charter document stating the program's objectives, principles, and constraints.

Team Training: Train governance committee on AI basics, OIP framework, and their specific roles. Train process documentation team on the assessment framework. Train technology team on system prompt construction and API integration requirements.

Pilot Process Selection: Choose 2-3 pilot processes from high-visibility departments. Criteria: process owner is supportive, process is important to the department, outcome is measurable. Do not choose the most complex process. Choose a process where success will be visible and appreciated.

Process Documentation: Document the 2-3 pilot processes using Level 1 template. Interview process owners, frontline staff, and managers. Create process flow diagrams. Document decision rules explicitly.

Baseline Measurement: Establish current-state metrics for the pilot processes. How much time does each process take? What is the current error rate? What is the cost per transaction? These baselines will show the improvement impact of the agents.

Framework Refinement: Take the generic OIP framework in this document and tailor it to Barwon Water's context. Create assessment template with Barwon-specific examples. Create governance charter with Barwon-specific escalation paths and decision authority.

Deliverable Acceptance: Present Phase 0 deliverables to executive sponsor. Secure sign-off on governance committee charter, framework adaptation, and Phase 1 budget.

Phase 1 Detailed Activities (Weeks 7-20)

Full Process Inventory: Document all 800-1500 processes across the organisation using Level 1 template. Allocate documentation effort across departments. Conduct interviews with process owners in each department. Create full Process Register database. This is the foundation of the entire OIP. Do not skip or defer this work.

Readiness Assessment: Assess all processes against the readiness criteria. Rate each 1-5. Identify which processes are ready for Level 2+ agents, which need documentation work first, which are not suitable for AI. This assessment guides Phase 2 prioritisation.

Consequence Classification: For each process, assess consequence if an agent produces incorrect output. Classify as Low, Medium, High, or Critical. This determines the maximum autonomy level that is allowed.

Autonomy Targeting: For each process, determine the target autonomy level (0-5). Justify based on process characteristics. Create a matrix showing all processes against their target autonomy and consequence rating.

Pilot Agent Development: Based on Phase 0 documentation, develop 5-10 Level 1 assistant agents for the pilot processes. These agents should be simple and straightforward: summarise information, draft responses, list policy implications. These are intelligence assistants, not autonomous agents.

Pilot Agent Testing: Deploy pilot agents to pilot teams in their normal work environment. Measure usage, user satisfaction, time saved. Are users actually using the agents? Are they finding value? Are agents producing useful output? Collect feedback and refine system prompts.

Knowledge System Start: Identify 50-100 priority processes. Begin Level 2-3 documentation. Extract decision rules, policy references, exception handling procedures. This will take significant interview time with subject matter experts.

Measurement Framework: Establish the measurement system. What will you track? How will you track it? Train teams to collect metrics. Establish baseline for each metric. Schedule monthly metric review with governance committee.

Deliverable Completion: By week 20, the complete Process Register is published. Pilot agents are demonstrating value. Phase 2 roadmap is clear.

Phase 2 Detailed Activities (Weeks 21-40)

Knowledge System Completion: Complete Level 2-3 documentation for the 200 highest-priority processes. This documentation is used to develop and train agents. It becomes the reference for how processes should be executed.

Advanced Agent Development: Develop 15-25 Level 2-3 agents. These agents are more sophisticated: they read from multiple systems, synthesise information, make recommendations, or execute workflows with human approval. Each agent is tested with 50-100 sample transactions before being deployed to production.

Agent Deployment and Monitoring: Deploy agents to production. Establish 24/7 monitoring for agent behaviour. Set up exception alerts. Establish on-call support for agent issues. Review agent performance weekly.

Governance Maturation: OIP Governance Committee now meets monthly with full agenda. Review new agent proposals. Review incidents and near-misses. Review performance metrics. Approve phase 3 roadmap.

Consolidation Impact Assessment: For each documented process, complete the "Consolidation Impact" assessment. How would this process be affected by the Western Regional partnership structure? Is it likely to be standardised, centralised, eliminated, or preserved as-is? This assessment becomes a strategic asset for negotiation.

Phase 2 Governance Review: Conduct a lessons-learned session. What worked well? What would we do differently? Update governance procedures based on experience. Confirm that governance is actually catching issues before they become problems.

Phase 3 Detailed Activities (Weeks 41-60)

Remaining Agent Development: Develop the remaining 25-50 agents covering the full range of processes that have been prioritised for AI. These include some Level 1 agents for simple assistants, some Level 2 agents for analysis and recommendations, some Level 3 agents for workflow automation.

Cross-Agent Coordination: Many processes depend on other processes. Implement coordination so that if Process A feeds output to Process B, the agents work together seamlessly. Agent A's output format is compatible with Agent B's expected input.

Continuous Learning Loop: Establish a mechanism for agents to improve over time. Collect feedback from users. Review outputs. Identify patterns in exceptions and near-misses. Update system prompts iteratively. This is not a once-and-done implementation but an ongoing improvement cycle.

Transition to Operations: The OIP transitions from being a "project" to being "how we operate". Process owners own the agents that affect their processes. Technology team handles infrastructure and monitoring. Governance committee provides oversight and approval for new agents and major changes.

Effectiveness Review and Roadmap 2: Conduct a comprehensive review of what has been achieved. Measure time saved across all agents. Measure quality improvements. Measure staff satisfaction. Measure consolidation readiness (percentage of processes with complete documentation). Prepare roadmap for next 12 months.


Governance and Oversight

Every agent is governed. This section defines the governance committee, approval process, monitoring procedures, and incident response.

OIP Governance Committee

Composition (meets monthly): General Manager or Executive Sponsor (chair), CIO or Technology Lead, Head of Operations, Head of Corporate Services, Finance Manager, Head of Compliance/Legal, OIP Program Lead, and rotating representatives from pilot teams deploying agents.

Responsibilities: Review new agent proposals, approve system prompts, monitor performance of operating agents, review incidents, approve agent budget, manage escalations.

Agent Approval Process

  1. Proposal: process owner submits the assessment worksheet and readiness evaluation. Approver: division head reviews for completeness.
  2. Technical Review: technology team reviews APIs, data access, and system integration requirements. Approver: CIO confirms feasibility.
  3. Compliance Review: compliance team reviews regulatory alignment, data classification, and audit requirements. Approver: compliance head confirms no issues.
  4. Governance Committee: full committee reviews all documentation, questions the process owner, and makes the approval or rejection decision. Approver: committee vote (simple majority).
  5. Pilot Testing: agent runs in a controlled environment with human oversight; 50-100 transactions are tested. Approver: process owner validates agent output quality.
  6. Production Deployment: agent goes live with full human-in-the-loop or bounded-autonomy controls. Approver: CIO and process owner sign off on go-live.

Monitoring and Audit

Every agent produces audit logs capturing: input, decision reasoning, output, confidence level, human review (if applicable), outcome (what actually happened), variance from expected (if any). Monthly reviews examine:

  • Volume of transactions processed by each agent
  • Exception rate (transactions requiring human intervention)
  • Average approval time for HITL processes
  • Output accuracy (sample audit of 50 random transactions)
  • User feedback and complaints
  • System performance (response time, availability)
  • Incidents and near-misses
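A sketch of how the monthly volume and exception-rate review might be computed from audit-log records (the field name and 5% threshold are illustrative, not mandated figures):

```python
def monthly_review(records: list, exception_threshold: float = 0.05) -> dict:
    """Summarise one agent's month of audit-log records.

    `needed_human` marks transactions that required human intervention;
    the default threshold of 5% is an illustrative alert trigger.
    """
    total = len(records)
    exceptions = sum(1 for r in records if r["needed_human"])
    rate = exceptions / total if total else 0.0
    return {"volume": total, "exception_rate": rate,
            "alert": rate > exception_threshold}

log = [{"needed_human": i % 10 == 0} for i in range(200)]  # 10% exceptions
summary = monthly_review(log)  # alert: 10% exceeds the 5% threshold
```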

Incident Management

Incidents are categorised by severity and response is proportional:

  • Low: agent produces incorrect output, but the error is caught in the normal workflow; rework is required but there is no external impact. Example: agent recommends the wrong policy interpretation; a human catches and corrects it. Response time: next business day. Review level: process owner and technology team.
  • Medium: agent produces incorrect output that affects service delivery or customer experience; the error is corrected but requires escalation. Example: agent approves a customer billing adjustment outside of policy, affecting revenue. Response time: same day. Review level: OIP governance committee.
  • High: agent produces incorrect output that violates regulatory requirements or affects external stakeholder trust. Example: agent generates a regulatory report with incorrect data, submitted to the regulator before being caught. Response time: immediate. Review level: executive sponsor and compliance head.
  • Critical: agent action could endanger public health, safety, or critical infrastructure; immediate agent shutdown is required. Example: agent makes an incorrect water quality decision in an operational system. Response time: immediate shutdown. Review level: emergency escalation to the General Manager.
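The severity tiers map naturally onto a routing table that monitoring tooling can act on. A sketch, where the structure rather than any specific tooling is the point:

```python
# Severity tiers from the incident classification above.
SEVERITY_RESPONSE = {
    "low":      {"response": "next business day",  "suspend_agent": False,
                 "review": "process owner and technology team"},
    "medium":   {"response": "same day",           "suspend_agent": False,
                 "review": "OIP governance committee"},
    "high":     {"response": "immediate",          "suspend_agent": True,
                 "review": "executive sponsor and compliance head"},
    "critical": {"response": "immediate shutdown", "suspend_agent": True,
                 "review": "emergency escalation to General Manager"},
}

def triage(severity: str) -> dict:
    """Look up the proportional response; unknown severities fail loudly."""
    if severity not in SEVERITY_RESPONSE:
        raise ValueError(f"Unknown severity: {severity}")
    return SEVERITY_RESPONSE[severity]
```

Failing loudly on an unknown severity mirrors the "fail safely" principle: an unclassifiable incident should never be silently treated as routine.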

Incident Response Procedures

When an incident is detected, the response depends on the severity level. Here are the detailed procedures for each level:

Low Severity Incident Response (Next Business Day)

Process: The error is detected by the process owner during normal work. The process owner documents the error, what the agent did wrong, and how it was corrected, then notifies the technology team. The technology team analyses the error to identify whether it is a system prompt issue, a data issue, or an edge case the agent was not designed to handle. A brief written report is prepared. The system prompt is refined if needed. The incident is logged in the incident register for tracking.

Medium Severity Incident Response (Same Day)

Process: The error affects service delivery or revenue. It is discovered by the process owner or escalated from a customer complaint. The process owner's manager and the technology lead are notified immediately. The agent continues to operate, but with enhanced monitoring. The technology team analyses the root cause the same day. The governance committee is notified at its next meeting. If the error indicates a fundamental problem with the agent or system prompt, the agent may be suspended pending investigation. When the root cause is identified and fixed, the agent is retested before resuming full operation.

High Severity Incident Response (Immediate)

Process: Regulatory violation, external stakeholder impact, or compliance issue. Immediate notification to OIP Program Lead, Technology Lead, Compliance Head, and Executive Sponsor. Agent is immediately suspended from operation. Emergency investigation begins. Compliance Head assesses whether external notification is required (e.g., notifying the regulator of incorrect data). No agent output is published until the error is fully understood. When root cause is found and correction is verified, compliance head must approve resumption of agent operation.

Critical Severity Incident Response (Immediate Shutdown)

Process: Public health, safety, or critical infrastructure risk. Agent is immediately shut down with no delay. Emergency escalation to General Manager. Technology team initiates manual recovery procedures. For water quality or operational technology agents, this means returning to manual operation immediately. Compliance Head notifies relevant regulators and stakeholders. Emergency investigation begins. No resumption until General Manager personally approves and safety case has been reviewed by independent expert if necessary.

Agent Governance Committee Cadence and Scope

The OIP Governance Committee meets monthly with the following standing agenda:

  • Agent Proposals (30 minutes): Review new agent proposals. Assess readiness, consequence rating, governance requirements. Vote on approval, conditional approval, or rejection. Average: 2-3 proposals per meeting.
  • Performance Review (15 minutes): Review metrics for operating agents. Are they meeting targets? Are exception rates declining? Are users satisfied?
  • Incident Review (20 minutes): Review any incidents from the past month. Discuss root cause, corrective actions, and systemic learnings. Identify whether governance procedures need to change.
  • Strategic Updates (15 minutes): Progress against implementation phases. Consolidation readiness. Regulatory changes that affect the OIP.

Meetings should run 90 minutes. The agenda should be circulated 3 days before the meeting, and minutes published within 2 business days. Quorum is 6 of 8 members.

Governance Committee Escalations

The committee has authority to approve agents up to Level 3 with Medium consequence rating. Escalations above this threshold go to Executive Sponsor for decision:

  • Level 4 agents: Require Executive Sponsor approval in addition to governance committee approval.
  • High consequence Level 3 agents: Require Executive Sponsor approval in addition to governance committee approval.
  • Critical consequence agents at any level: Require Executive Sponsor approval and independent safety review.
  • Disputes about process readiness or consequence rating: If governance committee disagrees with process owner's assessment, escalate to division head for decision.

System Prompt Change Control

An agent's system prompt is its governing instruction set. Changes to system prompts require change control process:

  • Minor updates (clarifications, fixing typos, adding examples): Process owner and technology lead can approve. Documented in version control.
  • Significant changes (changing decision rules, adding new capabilities, changing constraints): Governance committee must approve before deployment. Agent must be retested before going live.
  • Changes that affect consequence rating or autonomy level: Must go through full approval process as if agent were new.

All system prompt versions are maintained in version control with full audit trail. Previous versions can be referenced for compliance and investigation purposes.
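A sketch of what versioned, retrievable prompt history might look like as a data structure (the names are illustrative; a real implementation would sit on top of an existing version-control system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    """One immutable system-prompt version in the audit trail."""
    version: int
    text: str
    change_type: str   # "minor" or "significant", per the change-control rules
    approved_by: str

class PromptHistory:
    def __init__(self):
        self._versions = []

    def publish(self, text: str, change_type: str, approved_by: str) -> PromptVersion:
        # Each publication appends a new immutable record; nothing is overwritten.
        v = PromptVersion(len(self._versions) + 1, text, change_type, approved_by)
        self._versions.append(v)
        return v

    def current(self) -> PromptVersion:
        return self._versions[-1]

    def rollback_target(self):
        # The previous version stays retrievable for audit and rapid rollback.
        return self._versions[-2] if len(self._versions) > 1 else None

history = PromptHistory()
history.publish("v1 instructions", "significant", "governance committee")
history.publish("v1 with clarified constraint", "minor", "process owner")
```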

Pre-Deployment Governance Checklist

Before any agent goes to production, the governance committee should verify that all these items are complete:

  • Process documentation complete (Level 1 minimum). Responsible: process owner. Verified by: process owner signs off on documentation accuracy.
  • Target autonomy level assessed and justified. Responsible: process owner and governance committee. Verified by: assessment worksheet approved by governance committee.
  • Consequence rating assessed and justified. Responsible: process owner and compliance head. Verified by: assessment worksheet includes impact analysis.
  • Readiness score assessed and justified. Responsible: technology team and process owner. Verified by: documentation and system access confirmed available.
  • System prompt drafted and reviewed. Responsible: technology team and process owner. Verified by: system prompt matches process documentation.
  • Data classification confirmed. Responsible: compliance head. Verified by: data handled is classified under VPDSS and appropriate for the deployment option.
  • Regulatory requirements reviewed. Responsible: compliance head. Verified by: agent design complies with all applicable regulations.
  • System integrations tested. Responsible: technology team. Verified by: APIs are available, functional, and perform adequately.
  • Agent logic tested (unit testing). Responsible: technology team. Verified by: 50+ test cases passed, including edge cases and exceptions.
  • User acceptance testing completed. Responsible: process owner and 2-3 users. Verified by: users confirm agent output is useful and accurate (95%+ threshold met).
  • Security assessment completed. Responsible: technology security team. Verified by: penetration testing done, access controls verified, no critical vulnerabilities.
  • Monitoring and alerting configured. Responsible: technology team. Verified by: dashboards set up, alerts configured for anomalies, escalation paths defined.
  • Rollback procedures tested. Responsible: technology team. Verified by: can revert to the previous agent version within 30 minutes if needed.
  • Stakeholders trained. Responsible: OIP Program Lead. Verified by: process owner and users know how to use the agent and report issues.
  • Governance committee approval. Responsible: governance committee. Verified by: committee votes to approve and the agent is documented in the approval register.

This checklist ensures that every agent meets governance standards before it affects actual work. It is thorough by design. It takes time. But it prevents problems later.

ALIGNMENT

Regulatory Alignment

The OIP is designed for deployment in Victorian Government and regulated critical infrastructure environments. The following regulatory frameworks shape the design, operation, and governance of the platform:

  • Victorian Protective Data Security Standards (VPDSS). Applies to: all data classification and security. Key requirements: data must be classified; agents handling Protected data require Australian residency; access controls match classification; audit trails are maintained.
  • Privacy and Data Protection Act 2014 (Vic). Applies to: personal information handling. Key requirements: agents must not use personal information outside the stated purpose; individuals have the right to access AI recommendations; agents must be transparent about their role in decisions affecting individuals.
  • Australian Energy Sector Cyber Security Framework (AESCSF). Applies to: critical infrastructure systems. Key requirements: agents interacting with OT systems require security assessment; supply chain security for models; incident response procedures must account for AI failure modes.
  • Safe Drinking Water Act 2003 (Vic). Applies to: water safety processes. Key requirements: agents supporting water quality decisions are limited to Level 1; drinking water treatment decisions require human authority; compliance with water quality standards is non-negotiable.
  • Essential Services Commission Act 2001 (Vic). Applies to: price regulation and service obligations. Key requirements: agents supporting pricing and service decisions must be transparent; ESC audit rights over agent decision-making are respected; pricing recommendations are subject to human review.
  • Freedom of Information Act 1982 (Vic). Applies to: information access. Key requirements: all agent interactions and decision reasoning must be logged in a form suitable for FOI disclosure; individuals can request FOI access to information about how AI affected decisions about them.
  • Public Administration Act 2004 (Vic). Applies to: government accountability. Key requirements: agents must not reduce government accountability; a named human holds responsibility for every agent; government decision-making authority cannot be delegated to AI.
  • Victorian AI Policy. Applies to: AI governance. Key requirements: the framework aligns with government AI governance expectations; transparency, accountability, and human oversight are core to the design.
  • Silver Report Recommendations. Applies to: government reform. Key requirements: the OIP standardises before consolidation; process documentation provides strategic assets for partnership negotiation; knowledge is preserved during structural reform.

Silver Report Alignment

The OIP directly implements the finding that "reforms fail when centralisation comes before standardisation". By completing the Process Register and Knowledge System before consolidation, Barwon Water ensures that institutional intelligence survives the transition to the regional partnership structure.

How the OIP Demonstrates Regulatory Compliance

Regulators increasingly expect organisations to demonstrate how they make decisions, why they make them that way, and what controls prevent mistakes. The OIP makes this demonstration straightforward:

Privacy and Data Protection Act Compliance

When an agent processes personal information, the OIP can show: (1) What personal information does this agent touch? (Documented in system prompt and process documentation). (2) For what purpose does it use the information? (Stated in the CONTEXT and TASK sections of system prompt). (3) How long is information retained? (Specified in Data Sovereignty section of process documentation). (4) What audit trail exists? (Complete log of every interaction). When a person requests access to how an agent affected their information, the organisation can retrieve the full audit record showing exactly what data the agent processed and how it was used.

VPDSS Compliance for Data Classification

Every process and agent is classified by the data it handles: Unclassified (public data), Sensitive (internal data), or Protected (personal/highly confidential). The classification determines where the agent can run (data residency requirements) and what access controls apply. The assessment worksheet explicitly asks for data classification, ensuring every agent is assigned to appropriate infrastructure.
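One way to make the classification-to-infrastructure rule executable is a simple allow-list consulted during assessment. The mapping below is illustrative only; the authoritative rules live in the VPDSS assessment worksheet:

```python
# Illustrative mapping: which deployment options each classification permits.
# "Protected" is restricted to options that guarantee Australian residency.
PERMITTED_DEPLOYMENTS = {
    "Unclassified": {"zero_retention_cloud", "australian_cloud", "on_premises"},
    "Sensitive":    {"australian_cloud", "on_premises"},
    "Protected":    {"australian_cloud", "on_premises"},
}

def deployment_allowed(classification: str, option: str) -> bool:
    """Deny by default if the classification is unknown."""
    return option in PERMITTED_DEPLOYMENTS.get(classification, set())

assert deployment_allowed("Unclassified", "zero_retention_cloud")
assert not deployment_allowed("Protected", "zero_retention_cloud")
```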

AESCSF Compliance for Critical Infrastructure

For agents that interact with operational technology systems (SCADA, distribution systems, treatment systems), the AESCSF framework requires: (1) Security assessment before deployment. (2) Supply chain security for any third-party components. (3) Incident response procedures for AI failures affecting infrastructure. The OIP governance committee conducts security assessment for every agent before approval, documenting findings and mitigations.

Safe Drinking Water Act Compliance

Processes related to water safety and quality are classified as Critical consequence. Maximum autonomy is Level 1 (assistance only, human always decides on actual actions). All water quality agents require Compliance Head approval before deployment. No agent can recommend or execute drinking water treatment decisions without human authority.

Essential Services Commission Act Compliance

Agents that affect pricing, billing, or service obligations must comply with ESC requirements for transparency. When an agent makes a decision affecting a customer (e.g., billing adjustment recommendation), the system captures: (1) What rule did the agent apply? (2) What data did it use to make the decision? (3) How would a customer challenge this decision if they disagree? Audit logs are available for ESC review and customer FOI requests.
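The three capture points can be expressed as a minimal audit-record builder. A sketch only: the `capture_decision` helper and its field names are assumptions for illustration, not an existing OIP API.

```python
import json
from datetime import datetime, timezone

def capture_decision(agent: str, customer_ref: str, rule_applied: str,
                     data_used: list, challenge_path: str) -> str:
    """Build an audit record answering the three ESC questions:
    which rule was applied, which data was used, and how the
    customer can challenge the decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "customer_ref": customer_ref,
        "rule_applied": rule_applied,      # (1) What rule did the agent apply?
        "data_used": data_used,            # (2) What data did it use?
        "challenge_path": challenge_path,  # (3) How to challenge the decision
    }
    return json.dumps(record)
```

Storing the record as structured JSON rather than free text keeps it retrievable for both ESC review and FOI requests.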

Freedom of Information Act Compliance

All agent interactions and decision logs are captured in a form suitable for FOI disclosure. When a person requests information about how an agent affected them, or requests access to internal agent decision-making, the organisation can provide: (1) System prompts (what the agent was instructed to do). (2) Process documentation (the decision rules the agent follows). (3) Audit logs (what the agent actually did in their specific case). The FOI response can show exactly how the agent operated and why it made particular decisions.

Regulatory Change and Governance Responsiveness

Regulations change. When new regulations are introduced, the OIP allows rapid response:

  • Process-Level Impact: Identify which processes are affected by the regulatory change. The governance committee assesses whether process documentation or agent configuration needs to change.
  • System Prompt Updates: If a new rule must be followed, it is added to the CONSTRAINTS section of affected system prompts. Version control tracks the change and when it became effective.
  • Testing and Deployment: Updated agents are tested to ensure they comply with new regulations. Deployment happens before the regulatory deadline.
  • Compliance Audit: Post-deployment, the organisation can demonstrate that all affected agents have been updated and are now compliant with the new regulation.
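The system prompt update step can be sketched as a versioned record. This is illustrative: in practice the version history would live in git or the OIP's document store, and the `SystemPrompt` class and its fields are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SystemPrompt:
    """Minimal versioned prompt record (illustrative names only)."""
    agent: str
    constraints: list
    version: int = 1
    effective: date = date(2025, 1, 1)
    history: list = field(default_factory=list)

    def add_constraint(self, rule: str, effective: date):
        """Append a new regulatory rule to CONSTRAINTS, snapshot the
        previous version, and record when the change takes effect."""
        self.history.append((self.version, list(self.constraints)))
        self.constraints.append(rule)
        self.version += 1
        self.effective = effective
```

The snapshot in `history` is what lets a post-deployment compliance audit show exactly which rules applied before and after the regulatory deadline.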

ALIGNMENT

Measurement and Value

The OIP must be measured both operationally and strategically. Operational metrics track what agents are doing. Strategic metrics track whether the OIP is achieving its intended business outcomes.

Operational Metrics

Metric | Measurement | Target Trend
Agent utilisation | Transactions processed per agent per month | Stable or increasing
Exception rate | Percentage of transactions requiring human intervention | Declining over time as agents learn
Processing time | Average time from trigger to completion (including HITL approval) | Declining as agents handle routine steps faster
Output accuracy | Sample audit of agent outputs vs. ground truth | Greater than 95%, stable or improving
User satisfaction | Quarterly survey of process owners and users | Trending positive
System availability | Percentage of time agents are operational | Greater than 99%
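Two of the operational metrics reduce to simple ratios. A minimal sketch with illustrative numbers (the function names are assumptions, not part of the framework):

```python
def exception_rate(total: int, exceptions: int) -> float:
    """Percentage of transactions that required human intervention."""
    return round(100 * exceptions / total, 1) if total else 0.0

def availability(uptime_minutes: float, period_minutes: float) -> float:
    """Percentage of the reporting period the agent was operational."""
    return round(100 * uptime_minutes / period_minutes, 2)

# One agent, one illustrative month (a 30-day month is 43,200 minutes)
rate = exception_rate(total=1200, exceptions=84)                  # 7.0
avail = availability(uptime_minutes=43050, period_minutes=43200)  # 99.65
```

The point of computing these per agent per month is the trend line: an exception rate of 7% only matters relative to last quarter's figure.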

Strategic Metrics

Metric | Measurement | Target Trend
Process Register completeness | Percentage of organisational processes documented | Increase from 0% to 100% over 12 months
Knowledge System maturity | Percentage of processes with Level 2+ documentation | Progressive increase in Phase 1 and Phase 2
Time savings | Staff hours saved per week due to AI assistance | Increasing, compound effect as more agents deployed
Process standardisation | Percentage of processes with consistent, documented standards | Increasing as documentation work proceeds
Consolidation readiness | Processes with complete documentation for partnership transition | Tracked from Phase 0
Platform cost per interaction | Total OIP cost divided by number of transactions processed | Declining trend
Governance maturity | Self-assessment against governance framework | Progressive improvement

The Consolidation Readiness Metric
This metric measures how prepared the organisation is to demonstrate its operational value to an incoming partnership structure. A high consolidation readiness score means Barwon Water can show exactly what it does, how it does it, why it does it that way, and what the consequence of changing it would be. This is negotiating strength in a consolidation scenario.

How to Report Metrics to Leadership and Governance

Metrics are only valuable if they are communicated effectively. Monthly reports to the governance committee should include:

Executive Summary (1 page)

High-level summary: How many agents are in operation? What was the impact of agents this month (hours saved, quality improvements)? Were there any incidents? Are we on track against implementation targets? Traffic light status: green (on track), yellow (at risk), red (off track).

Detailed Metrics Dashboard (2-3 pages)

Charts and tables showing: (1) Operational metrics trend (utilisation, exception rate, processing time). (2) Strategic metrics progress (process register completeness %, readiness scores improving?). (3) User satisfaction (support tickets down? Training going well?). (4) Risk indicators (any processes with deteriorating metrics? Any agents showing higher-than-normal exception rates?)

Incident Summary

Table of incidents from the past month: What happened? How was it resolved? What did we learn? Any systemic issues identified?

Consolidation Readiness Status

Progress toward consolidation readiness metric. How many processes have complete documentation? How prepared is the organisation to demonstrate its value to the partnership?

Using Metrics to Drive Continuous Improvement

Metrics should drive action, not just reporting:

  • Exception Rate Too High? Analyse which transactions are exceptions. Are they a particular type? Update the system prompt or process documentation to address the gap.
  • Processing Time Not Improving? Are there bottlenecks in the HITL approval process? Do humans need training on the system? Is system prompt clarity an issue?
  • User Satisfaction Declining? Conduct interviews with users. What is frustrating them? Is the agent unhelpful? Is the interface confusing?
  • Readiness Scores Plateauing? Process documentation effort may be slowing. Allocate more resources to documentation. Or identify barriers: are some processes just inherently messy and resistant to standardisation?

The governance committee should use metrics to identify problems and allocate corrective effort. Metrics without action are just reports. Metrics with action drive improvement.

The Long-Term Value Proposition

In the first year, the OIP delivers operational value: agents save time, reduce errors, improve consistency. In the second year, as more agents are deployed and documentation becomes comprehensive, the value compounds. In the third year and beyond, the real strategic value emerges:

  • Organisational Resilience: Turnover of experienced staff no longer means loss of knowledge. Processes are documented. New staff learn faster. The organisation is less dependent on individuals.
  • Adaptability: When the partnership forms and operations consolidate, Barwon Water can demonstrate its capabilities, justify its approaches, and influence how partnership processes are designed.
  • Continuous Improvement: Documented processes can be improved. The governance committee meets monthly to review what is working and what needs refinement. The organisation becomes a learning organisation, not a static one.
  • Competitive Advantage: In the regional partnership structure, the organisation that understands its processes best, has documented its knowledge most thoroughly, and has agents augmenting human work will be more efficient and effective than competitors.

REFERENCE

Glossary

Acronym | Full Name
AESCSF | Australian Energy Sector Cyber Security Framework
ADWG | Australian Drinking Water Guidelines
DEECA | Department of Energy, Environment and Climate Action
DGS | Department of Government Services
EPA | Environment Protection Authority Victoria
ESC | Essential Services Commission
EWOV | Energy and Water Ombudsman Victoria
FOI | Freedom of Information
HITL | Human-in-the-Loop
LIMS | Laboratory Information Management System
LLM | Large Language Model
OIP | Organisational Intelligence Platform
OT | Operational Technology
RPA | Robotic Process Automation
SCADA | Supervisory Control and Data Acquisition
SES | Senior Executive Service
STS | Senior Technical Specialist
VPDSS | Victorian Protective Data Security Standards
VPS | Victorian Public Service
ZDR | Zero Data Retention

REFERENCE

Assessment Worksheet

Use this template to assess each process against the three-dimensional framework. Complete one worksheet per process. Submit to the OIP Governance Committee as part of the agent proposal process.

Field | Instructions
Process Name | Following naming conventions (active verb, title case, 5-7 words max). Unique within department.
Division | Which division owns this process?
Department | Which department within the division?
Process Owner | Named individual accountable for this process. Must be present for governance approval.
Brief Description | 2-4 sentences describing what this process does and why.
Target Autonomy Level | 0-5. Justify based on process characteristics (deterministic vs. judgement, data requirements, write requirements).
Consequence Rating | Low / Medium / High / Critical. Justify based on impact if the agent produces incorrect output.
Readiness Score | 1-5. Justify based on documentation, standardisation, system access, data quality.
Priority Classification | Immediate / Near-Term / Medium-Term / Deferred / Advisory Only / Do Not Automate. Based on the priority matrix.
Connected Systems | What systems does this process touch? Read access, write access, or both?
Key Risks | What could go wrong if the agent malfunctions? What is the recovery procedure?
Data Classification | Per VPDSS (Unclassified / Sensitive / Protected). Does this affect the deployment option?
Consolidation Impact | How would this process be affected by structural reform into the Western Regional partnership? Is it likely to be standardised, centralised, or eliminated?
Notes and Dependencies | Other context. Which other processes must this process wait for? Which processes consume its output?

The Consolidation Impact field is new to the v2 framework. It asks process owners to explicitly consider how their process would be affected by the structural reforms recommended in the Silver Report. This context shapes implementation priority and documentation depth.

Completing the Assessment Worksheet: Practical Guidance

Process owners often find the worksheet challenging the first time. Here is guidance for completing it correctly:

Target Autonomy Level: How to Assess

Start by understanding what the process actually requires. Is it rule-based (if X, then always do Y) or does it require judgement (decide based on the situation)? Rule-based processes can be automated. Judgement-based processes can be assisted but not fully automated. Then consider system requirements. Does the process need to read from multiple systems (Level 2 minimum)? Does it need to write (Level 3 minimum)? Does it need to monitor continuously (Level 4 candidate)? The answers to these questions, combined with an understanding of the process logic, determine the target autonomy level.
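The assessment logic above can be sketched as a heuristic function. The thresholds mirror the text (read access across systems implies Level 2 minimum, write access Level 3, continuous monitoring Level 4, and judgement-based work caps at Level 1); the function itself is an illustration, not part of the framework.

```python
def target_autonomy(rule_based: bool, reads_multiple_systems: bool,
                    writes: bool, monitors_continuously: bool) -> int:
    """Derive a target autonomy level from process characteristics."""
    if not rule_based:
        return 1                 # judgement-based: assistance only, human decides
    level = 1
    if reads_multiple_systems:
        level = max(level, 2)    # needs read access to multiple systems
    if writes:
        level = max(level, 3)    # needs write access
    if monitors_continuously:
        level = max(level, 4)    # continuous monitoring candidate
    return level
```

Note the first branch: no amount of system integration raises a judgement-based process above Level 1, which is exactly the point of the two-part test.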

Consequence Rating: How to Assess

Ask: "If the agent made a mistake in this process, what would be the impact?" If the mistake is caught in normal workflow and reworked with minimal effort, it is Low consequence. If the mistake affects service delivery or customer experience but is recoverable, it is Medium. If the mistake affects external stakeholders, regulatory compliance, or revenue, it is High. If the mistake could endanger public health, safety, or critical infrastructure, it is Critical. Be honest about the consequence. It is better to over-rate consequence than under-rate.
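The same ladder can be sketched as a function that checks the most severe condition first. The question framing is paraphrased from the text; the function is illustrative only.

```python
def consequence_rating(endangers_safety: bool,
                       affects_external_or_regulatory: bool,
                       affects_customers: bool) -> str:
    """Rate a process by the impact of an agent mistake,
    checking the most severe condition first."""
    if endangers_safety:                  # public health, safety, infrastructure
        return "Critical"
    if affects_external_or_regulatory:    # stakeholders, compliance, revenue
        return "High"
    if affects_customers:                 # service delivery, but recoverable
        return "Medium"
    return "Low"                          # caught and reworked in normal workflow
```

Checking severity in descending order is what operationalises the advice to over-rate rather than under-rate: any single severe answer dominates the rating.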

Readiness Score: How to Assess

Readiness has four components: (1) Documentation (does this process have clear, written steps?), (2) Standardisation (do all people do this process the same way or are there variations?), (3) System access (are the systems this process uses accessible via APIs?), (4) Data quality (are the data sources reliable and complete?). Score each 1-5. Average the scores. A process is ready for Level 2+ agents if documentation and standardisation are both 3+ and system access and data quality are both 3+.
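The scoring rule above can be sketched directly: average the four components, then apply the Level 2+ gate, which requires every component to score 3 or higher. The function name is an assumption for illustration.

```python
def readiness(documentation: int, standardisation: int,
              system_access: int, data_quality: int) -> tuple:
    """Return (average readiness score, ready for Level 2+ agents)."""
    scores = (documentation, standardisation, system_access, data_quality)
    assert all(1 <= s <= 5 for s in scores), "components are scored 1-5"
    avg = sum(scores) / 4
    ready_for_level_2 = all(s >= 3 for s in scores)  # the Level 2+ gate
    return round(avg, 2), ready_for_level_2
```

The gate matters more than the average: a process scoring (5, 5, 2, 5) averages 4.25 yet still fails, because one weak component (here, system access) blocks agent deployment.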

Consolidation Impact: How to Think About It

When the Western Regional partnership forms, five water corporations' processes will be harmonised. This worksheet asks: Is your process likely to be preserved as-is, modified to match a partner's approach, centralised to partnership level, or eliminated? Factors to consider: (1) Is this process unique to Barwon Water or common across water utilities? (2) Does Barwon's approach have clear advantages or is it just different? (3) Are there regulatory or technical reasons why the process must be done this way? (4) If the process changed, what would be the impact on operations? The answers to these questions help the governance committee prioritise which processes to document first. Unique processes that would be hard to change should be prioritised. Common processes that are likely to be standardised across the partnership matter less for strategic preparedness.

Common Mistakes When Completing the Worksheet

  • Over-Scoping the Process: Treating a dozen separate activities as one process. Instead of "Handle Customer Service Requests", separate into "Respond to Billing Inquiry", "Respond to Service Complaint", "Respond to Water Quality Complaint". Each is a distinct decision-making process.
  • Under-Rating Consequence: Saying "Low consequence" because the human can fix it. Even if fixed, the error damages trust and takes time. Rate the actual impact, not the fact that it is recoverable.
  • Conflating Readiness with Feasibility: A process that is hard to automate (requires substantial judgement) might have high readiness (is well-documented) but a low target autonomy level (it cannot be fully automated). These are independent dimensions.
  • Leaving Fields Blank: Every field must be completed. If you do not know the answer, the process is not ready for assessment yet. Gather more information before submitting.

Using the Assessment Worksheet to Prioritise Implementation

Once all processes have been assessed using the worksheet, the governance committee uses the results to prioritise which processes get attention first. The prioritisation matrix in Chapter 4 provides the framework. But practically, here is what prioritisation looks like:

  • Sort by Priority Classification: All "Immediate" processes first. Then "Near-Term". Then "Medium-Term". Then "Deferred". This gives you a sequenced implementation roadmap.
  • Within Each Priority Band, Sort by Readiness Score: Among "Immediate" processes, tackle those with readiness score 4-5 first. They will be easier to document and agents will be faster to develop.
  • Consider Business Impact: Among processes with the same priority classification, prioritise those that will deliver the most value (time saved, quality improvement, regulatory compliance) first. Early wins matter.
  • Balance Documentation and Automation: Allocate documentation resources to "Deferred" processes (those not yet ready for agents) to move them to higher readiness scores. Allocate agent development resources to "Immediate" and "Near-Term" processes.
  • Track Consolidation Impact: Prioritise processes marked as having high "Consolidation Impact". These are the processes most critical to demonstrating Barwon Water's value in the partnership.
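The sorting logic in the first two bullets can be sketched as follows. The register entries here are hypothetical; real entries come from completed assessment worksheets.

```python
# Order of priority bands, most urgent first.
PRIORITY_ORDER = ["Immediate", "Near-Term", "Medium-Term", "Deferred"]

# Hypothetical mini-register (illustrative names and scores only).
processes = [
    {"name": "Respond to Billing Inquiry", "priority": "Immediate", "readiness": 4},
    {"name": "Approve Trade Waste Permit", "priority": "Near-Term", "readiness": 5},
    {"name": "Draft Board Papers", "priority": "Immediate", "readiness": 5},
]

def roadmap(items: list) -> list:
    """Sort by priority band first, then by readiness score, highest first."""
    return sorted(items, key=lambda p: (PRIORITY_ORDER.index(p["priority"]),
                                        -p["readiness"]))

ordered = [p["name"] for p in roadmap(processes)]
# "Draft Board Papers" (Immediate, readiness 5) sorts ahead of
# "Respond to Billing Inquiry" (Immediate, readiness 4).
```

Business impact and consolidation impact (the remaining bullets) would enter as further tie-breakers in the sort key once those fields are quantified.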

The assessment worksheet, when used systematically across all processes, becomes the roadmap that guides 12-24 months of implementation activity. It is the single most important tool for ensuring that effort is invested where it will deliver maximum value.

REFERENCE

System Prompt Template

The system prompt is the primary mechanism through which an agent is configured. Every agent requires a system prompt that defines its role, context, constraints, and expected behaviour. Use this template for all agents regardless of autonomy level.

ROLE [REQUIRED]
You are [agent name], an AI agent within [Organisation Name]'s Organisational Intelligence Platform. You are assigned to the [Department Name] department and specialise in [process area]. You have expertise in [relevant domain knowledge]. You understand [sector context].

CONTEXT [REQUIRED]
[Organisation Name] is a [type of organisation] operating in [jurisdiction/sector]. Relevant legislation includes [list specific acts and regulations]. Relevant internal policies include [list]. Key stakeholders for this process are [list]. The organisation values [state 2-3 core values relevant to this process].

TASK [REQUIRED]
When a user invokes this agent, you will:
1. [Step 1]
2. [Step 2]
3. [Step n]
Your goal is to [state the intended outcome].

CONSTRAINTS [REQUIRED]
You MUST NOT:
- [Constraint 1]
- [Constraint 2]
- [Constraint 3]
You MUST always:
- [Requirement 1]
- [Requirement 2]
If you are uncertain about any aspect of this task, you MUST [state escalation path].

OUTPUT FORMAT [REQUIRED]
[Specify exact format - JSON, markdown, structured text, etc.]
Maximum length: [specify, e.g., 500 words]
Tone: [specify - professional, concise, conversational, etc.]
Always include: [specify required fields or sections]

QUALITY STANDARDS [RECOMMENDED]
All output must comply with [specific standard - e.g., "Australian government style guide", "water industry terminology standards"].
Accuracy threshold: [if applicable, e.g., 95% of recommendations must align with established policy]
Completeness: [what constitutes a complete response]

EXAMPLES [RECOMMENDED]
--- Example Input ---
[Provide representative input scenario]
--- Expected Output ---
[Provide representative output]
--- Example Input 2 ---
[Provide second representative scenario]
--- Expected Output 2 ---
[Provide second expected output]
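As a sketch of how the REQUIRED sections might be assembled programmatically from worksheet fields, consider the following. The `build_prompt` helper and its parameters are illustrative assumptions, not part of the template itself.

```python
def build_prompt(role: str, context: str, task_steps: list, goal: str,
                 must_not: list, must_always: list, escalation: str,
                 output_format: str) -> str:
    """Assemble the four REQUIRED template sections in order.
    Section names and ordering mirror the template above."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(task_steps, 1))
    nots = "\n".join(f"- {c}" for c in must_not)
    always = "\n".join(f"- {r}" for r in must_always)
    return (
        f"ROLE\n{role}\n\n"
        f"CONTEXT\n{context}\n\n"
        f"TASK\nWhen a user invokes this agent, you will:\n{steps}\n"
        f"Your goal is to {goal}.\n\n"
        f"CONSTRAINTS\nYou MUST NOT:\n{nots}\nYou MUST always:\n{always}\n"
        f"If you are uncertain about any aspect of this task, "
        f"you MUST {escalation}.\n\n"
        f"OUTPUT FORMAT\n{output_format}"
    )
```

Generating prompts from structured fields rather than hand-editing text keeps them consistent with the worksheet and makes version-controlled diffs meaningful.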

CONCLUSION

The Path Forward

This document provides the complete framework for Barwon Water to build and operate an Organisational Intelligence Platform. It is comprehensive because the stakes are high: the organisation is about to consolidate into a regional partnership. The decision made over the next few months about how to capture, document, and operationalise institutional knowledge will shape the organisation's capabilities and competitiveness for years to come.

What This Document Provides

Chapters 1-2 provide strategic context for executives and board members. Chapters 3-6 provide the technical and governance framework for implementation teams. Chapters 7-11 provide operational guidance for running the OIP. Chapters 12-15 provide templates and reference materials for governance decisions.

Everything in this document is actionable. None of it requires external consultants, expensive platforms, or technology decisions that have not been made. The OIP can be started with internal resources within 4 weeks.

What Success Looks Like

In 12 weeks (Phase 0-1), the organisation will have a complete Process Register documenting 800-1500 organisational processes. This alone is valuable: a definitive map of what the organisation does.

In 24 weeks (Phase 2), the organisation will have deployed 15-25 AI agents augmenting human work in priority processes. Time saved will be measurable. Quality improvements will be visible. User satisfaction will increase.

In 40 weeks (Phase 3), the organisation will be operationally mature with 50+ agents, established governance procedures, continuous improvement mechanisms, and documented readiness for consolidation.

Beyond 52 weeks, the benefits compound. More agents are deployed. Documentation becomes the baseline for partnership discussions. Institutional knowledge is preserved. The organisation is resilient to staff turnover and ready for structural change.

The Decision Point

The decision facing Barwon Water is not whether to use AI. The decision is whether to document processes, capture knowledge, and prepare for consolidation, or to go into consolidation with everything still undocumented and dependent on individuals.

The OIP framework in this document makes that choice clear. It is the mechanism through which the organisation chooses preparation over improvisation, knowledge capture over knowledge loss, and strategic readiness over reactive scrambling.

Call to Action

For Executive Sponsors: Commit to the OIP. Sponsor the governance committee. Use the platform. Demonstrate to the organisation that you believe in this capability. Your visible support is the single biggest driver of successful implementation.

For Process Owners and Department Leaders: Engage with process documentation. Your knowledge and insights are the foundation of everything the OIP does. Set aside time for interviews. Review drafted documentation. Be part of designing the agents that will augment your team's work.

For Governance Committee Members: This is the steering body for the most important organisational development initiative Barwon Water is undertaking. Show up, stay engaged, make decisions, and hold the program accountable to delivering value.

For Implementation Teams: Use the detailed guidance in this framework. Start with Phase 0. Do process documentation right. Test agents thoroughly. Measure relentlessly. Learn from experience and continuously improve.

For the entire organisation: This is not something being done to you. This is something being done with you. Your work matters. Your knowledge matters. The OIP exists to preserve what you know, augment what you do, and prepare the organisation for whatever comes next.

Final Thoughts: The Consolidation as Opportunity

The Silver Report's recommendation to consolidate water corporations is presented as a challenge. But it is also an opportunity. Organisations that spend the next 12 months documenting what they do, standardising their processes, capturing their knowledge, and piloting AI agents will enter consolidation in a position of strength.

They will be able to demonstrate their capabilities. They will be able to justify their approaches. They will have tested how their processes work with AI agents augmenting human work. They will have governance frameworks and oversight procedures in place. They will have staff trained and engaged. When consolidation happens, they will not be improvising. They will be executing a plan.

Organisations that spend the next 12 months in business-as-usual mode, waiting to see what the partnership structure will be, will enter consolidation unprepared. Their processes will still be undocumented. Their institutional knowledge will still be in people's heads. They will be reactive instead of proactive.

Barwon Water can choose which path it takes. This document provides the roadmap for the path of preparation and strength. The choice, and the opportunity, is real. The time to start is now, when consolidation is still on the horizon but not yet here, when the organisation has bandwidth for knowledge capture and improvement work that will prove invaluable in the new partnership structure.

Immediate Next Steps (This Week)

If the decision is made to proceed with the OIP, the following actions must happen immediately:

  • Distribute this framework document to the Executive Sponsor, CIO, and identified potential governance committee members. Request feedback within 3 working days. Clarify what the OIP is and what it is not.
  • Schedule a 90-minute meeting with executive sponsors to discuss the OIP strategic case and secure commitment to governance, budget, and timeline.
  • If commitment is secured, appoint the OIP Program Lead (this role will be full-time for Phases 0-1, then ongoing for Phases 2-3). This person will manage documentation, coordinate governance, and track program progress.
  • Identify and invite the governance committee members from across divisions. Target composition: General Manager, CIO, Head of Operations, Head of Corporate Services, Finance Manager, Compliance Head, OIP Program Lead, and rotating representatives from implementing departments.
  • Schedule the first governance committee meeting for 2 weeks out. The agenda will cover Phase 0 kickoff, framework tailoring to the Barwon context, and process owner engagement.
  • Begin identifying pilot departments for Phase 0. Criteria: process owner is engaged and supportive, processes are important but not so complex they are intractable, outcomes are measurable.

The OIP is Ready to Build

This framework is comprehensive, detailed, and actionable. Everything needed to build the OIP is contained in this document. The organisation does not need external consultants or proprietary platforms to get started. It has the knowledge, capability, and opportunity to build this itself, with its own people, using its own budget. The only thing required now is the decision to begin. Make that decision now.