Translating the Science of Intelligence into Enterprise AI Operating Model Design
A cross-disciplinary synthesis of consulting industry research, neuroscience, network science, and complex systems theory.
Megan C. Starkey · CEO, RBD.
February 2026
Drawing on 48 sources across seven consulting firms, peer-reviewed scientific literature, and executive surveys.
Executive Summary
The Case for a New Design Principle
Across seven major consulting and research firms, the diagnosis is consistent: AI transformation fails for organizational reasons, not technological ones. The consensus prescription — align the operating model around AI — is well-reasoned but structurally incomplete. Even its architects are beginning to say so.
Bain acknowledges that the gap between AI strategy and reality is execution. BCG states that scaling requires new processes, not new tools. HBR concludes that the fundamental issue is whether the organization's underlying design still fits what AI now makes possible.
This brief synthesizes evidence from three independent scientific disciplines that converge on a single structural insight: intelligence in complex environments is distributed rather than centralized, depends on weak ties rather than strong ones, and emerges from design rather than alignment. These are not metaphors borrowed from biology. They are measurable, replicable principles that govern how any complex system — a brain, a network, an enterprise — maintains intelligent performance across many interdependent components simultaneously.
The gap between an organization that has organized around AI and one that is designed for intelligence is the problem this brief identifies — and the challenge executive leadership must address next.
84% have not redesigned work processes for AI
89% still operate industrial-age models
1% have scaled to network-based operating models
RBD. analysis of Deloitte (2026) and McKinsey (2025) industry data.
Is your operating model ready for what AI actually requires?
Find out in 3 minutes. Our un-gated capability diagnostic evaluates eight dimensions of organizational readiness — no email required.
The Operating Model Was Designed for a Different Problem
The prevailing enterprise operating model is, at its core, a coordination system. It was engineered to solve a specific challenge: how to manage complexity, allocate resources, and execute strategy across thousands of people with consistency and accountability. For the better part of a century, it has done so effectively.
The model rests on three structural characteristics. First, information flows vertically — up for decisions, down for execution — ensuring that specialized knowledge reaches senior leadership and that strategic direction cascades into operational plans. Second, coordination happens through formal handoffs between functions, each optimized for depth of expertise within its domain. Third, governance operates periodically — through committees, review cycles, and approval gates — creating stability through defined roles and predictable processes.
These are not flaws. They are design features of a model optimized for environments where the cost of error exceeded the cost of delay: stable competitive dynamics, predictable demand, and relatively slow rates of change. The model's deep roots in post-war organizational theory reinforced its durability: specialization, hierarchical accountability, and centralized planning became the default design logic of every major enterprise.
AI fundamentally inverts that operating equation. It does not operate within functional boundaries. A single model may require customer attributes from Marketing, transaction sequences from Sales, and exposure thresholds from Risk — all at once. It requires cross-functional data flow, real-time feedback, and distributed expertise at the point of work. When an enterprise attempts to deploy a technology whose value depends on integration through a structure whose design enforces separation, the result is predictable: structural friction that erodes ROI before it can compound. This is now measurable across the industry.
The operating model is not failing because it is poorly executed. It is performing exactly as designed — for a set of conditions that no longer exist.
02
Where AI Actually Stalls: Five Friction Points
When organizations deploy AI inside an industrial-age operating model, the failure modes are predictable and consistent. They are not technology failures. They are structural mismatches between what AI requires and what the operating model provides.
Exhibit 1: Structural friction between AI requirements and industrial-age design
Each friction point represents a design constraint, not an execution failure
RBD. synthesis. Friction points derived from consulting research (BCG, McKinsey, Bain, Deloitte, 2024–2026) and organizational design literature.
These five friction points explain why most AI transformations fall into one of two traps: the "Scattered Pilot" approach — dozens of disconnected experiments launched in the hope that value will emerge, producing resource competition and fragmented data — or the "Centralized Bottleneck" approach, where a Center of Excellence concentrates expertise but creates accountability gaps and governance slow enough to drive shadow IT. Neither trap is caused by poor leadership. Both are structural consequences of forcing an integrated technology into a separated operating model.
03
The Industry Response: Align the Operating Model
Seven major consulting and research firms have published AI operating model frameworks in the past 18 months. Despite differences in terminology, they converge on a single recommendation: realign the operating model around AI.
Firm · Framework · Core Thesis
McKinsey · "The Agentic Organization" (2025) · AI is 20% algorithms, 80% organizational rewiring.
Deloitte · "The Great Rebuild" (2026) · Rebuild operations from the ground up for AI.
BCG · 10/20/70 Rule (2026) · AI transformation is a workforce transformation.
Gartner · CIO Operating Model Restructuring (2025) · Legacy operating models will not deliver AI value.
Bain · "When Org Structure Isn't Enough" (2025) · Structural changes must be lived, not just designed.
EY · AI Value Blueprints (2025) · Reimagine as if built for AI from scratch.
Accenture · Capability Building Teams (2025) · Diffuse AI capability into business units.
RBD. analysis of published frameworks, 2024–2026.
These frameworks are well-constructed and evidence-based. They share the same structural limitation: they treat the organization as a machine to be reconfigured rather than a complex system that must be redesigned for a fundamentally different kind of performance.
Notably, the firms themselves are beginning to acknowledge this. Bain reports that when organizations pair generative AI with end-to-end process transformation, productivity gains reach 25–30% — but without it, gains plateau at 10–15%. BCG states explicitly that scaling AI requires new processes, not just new tools. And HBR's 2026 analysis concludes that incumbents adopt AI aggressively but see only marginal gains because they use it to optimize existing work rather than to rethink how work is organized.
You cannot transform the organization through the very structure that constrains it. The question is: what should the new design principle be?
04
What Living Systems Reveal About Organizational Design
If the current operating model is a design problem, the question becomes: what design principles should replace it? Three independent scientific disciplines — none of them developed for organizational theory — converge on the same answer.
The idea of looking to natural systems to inform organizational design is not new; it underpins the entire field of modern systems theory. Stafford Beer, creator of the Viable System Model, argued that any viable organism must regulate itself through feedback, adapt internally to external variety, and maintain coherence despite turbulence. The technologies reshaping business today are themselves direct translations of nature's logic: neural networks are modeled on the brain's architecture, swarm intelligence informs distributed computing, and simulated annealing solves optimization problems by mimicking thermodynamic cooling. An organization that can host AI is one that is integrated, adaptive, coordinated, responsive, and self-corrective — all properties of living systems. The naturalist model is no longer merely relevant. It is essential.
The Human Brain: How Intelligence Actually Works
A 2025 study published in Nature Communications mapped how general intelligence operates in the human connectome. Researchers studied 831 participants, analyzing both structural and functional connectivity. The findings directly challenge how most organizations are designed.
No single network drives intelligence. Whole-brain network integration predicted cognitive performance better than any individual region. The predictive value lies in the connections between functions, not in any individual function — the organizational equivalent of realizing that no single department, no matter how capable, drives enterprise performance alone.
Weak, long-range connections are the strongest predictors. Strong connections matter locally, within clusters. But the connections that predicted intelligence across the whole system were weaker ties spanning long distances. A 2012 PNAS study independently confirmed this: weak functional connections predict system-wide performance; strong connections alone do not. In organizational terms, the cross-functional touchpoints that efficiency initiatives routinely eliminate are precisely the connections that produce the most valuable outcomes.
Distributed control reduces system cost. Research in network control theory (Gu et al., Nature Communications, 2015) demonstrates that multiple distributed control nodes reduce overall energy expenditure compared to centralized single-point control. Cognitively demanding states — the organizational equivalent of high-stakes, high-ambiguity decision-making — require more energy to maintain, and distributed design reduces that cost.
Intelligence requires balance between specialization and integration. A 2021 PNAS study found that optimal cognitive performance emerges not from specialization or integration alone, but from the precise balance between them. Too much specialization produces fragmentation. Too much integration produces noise. The design that produces the highest performance balances both.
Social Networks: How Groups Think
In 1973, sociologist Mark Granovetter demonstrated that low-intensity, infrequent connections between individuals are more effective than strong ties for diffusing information and enabling innovation. The mechanism: weak ties function as bridges between otherwise disconnected communities. A 2024 analysis of over 37,000 open-source projects confirmed this in modern technological contexts: knowledge acquired through weak interactions was a stronger predictor of innovative output than the volume of strong interactions.
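The bridging effect Granovetter described can be sketched with a toy graph (an illustrative model, not data from any of the cited studies): two tightly knit clusters connected only through a chain of formal handoffs, then joined by a single weak tie.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an adjacency dict; returns hop count."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # unreachable

# Two tightly knit clusters (strong ties) joined only by a chain of handoffs.
graph = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},   # cluster A
    3: {2, 4}, 4: {3, 5}, 5: {4, 6},      # sequential handoff chain
    6: {5, 7, 8}, 7: {6, 8}, 8: {6, 7},   # cluster B
}
before = shortest_path(graph, 0, 8)

# A single weak tie bridging the two clusters directly.
graph[0].add(8)
graph[8].add(0)
after = shortest_path(graph, 0, 8)

assert before == 6 and after == 1
```

One added connection collapses the information path from six hops to one — the structural reason a handful of cross-cluster bridges outperform any amount of additional within-cluster connectivity.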
Research published in Science (Woolley et al., 2010) then identified a measurable "collective intelligence factor" — a c-factor — that predicts group performance across diverse tasks. The finding: the c-factor does not correlate with average or maximum individual IQ. It correlates with social sensitivity, equality of conversational turn-taking, and communication structure. Collective intelligence is a property of group design, not member ability.
Complex Systems: How Scale Works
Research at the Santa Fe Institute demonstrates that companies scale sublinearly — they slow down as they grow — while cities scale superlinearly, becoming more innovative per capita as they expand. The differentiator is not resources, talent, or technology. It is network design: hierarchical, efficiency-optimized networks versus open, diverse, high-connectivity networks.
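The firm-versus-city contrast follows from the power law Y = c·N^β: a sublinear exponent (β < 1) means output per member shrinks with size, a superlinear one (β > 1) means it grows. The sketch below uses ballpark exponents in the range reported in the scaling literature (roughly 0.9 for firms, 1.15 for cities); the exact values are illustrative assumptions.

```python
def output_per_capita(n, beta, c=1.0):
    """Total output Y = c * n**beta, returned per member of the system."""
    return c * n**beta / n

# Exponents are illustrative ballpark values, not fitted estimates.
small_firm = output_per_capita(100, beta=0.9)
big_firm   = output_per_capita(10_000, beta=0.9)
small_city = output_per_capita(100, beta=1.15)
big_city   = output_per_capita(10_000, beta=1.15)

# Sublinear scaling: per-capita output falls as the firm grows.
assert big_firm < small_firm
# Superlinear scaling: per-capita output rises as the city grows.
assert big_city > small_city
```

Growing a hierarchical, efficiency-optimized network by 100x makes each member less productive; growing an open, high-connectivity network by the same factor makes each member more productive.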
Neuroscience · Network Science · Complex Systems → Intelligence is a property of design, not alignment.
RBD. synthesis of neuroscience, network science, and complex systems research, 1973–2025.
Want to explore how these principles apply to your organization?
We work with executive teams to translate operating model science into practical transformation roadmaps.
05
Five Structural Elements of an Intelligence-Capable Operating Model
If intelligence is a design property, what does that design actually look like in an enterprise? The scientific evidence converges on five structural elements that distinguish organizations capable of distributed intelligence from those optimizing within existing constraints.
Exhibit 2: Five structural elements of an intelligence-capable operating model
Each element maps to a measurable scientific principle
From: Centralized AI team builds and deploys models
To: Local Intelligence Clusters · Empowered domain teams own AI within their operational context
From: Siloed departments with formal handoff processes
To: Weak-Tie Bridges · Deliberate cross-functional connections carrying novel information
From: Periodic review committees and approval gates
To: Distributed Governance · Decision authority distributed to specialized nodes with defined escalation paths
From: Annual planning cycles and quarterly reviews
To: Continuous Adaptation Loops · Real-time feedback that self-corrects locally
From: Central command directing all AI initiatives
To: Orchestration Nodes · Light coordinating function that enables, not controls
RBD. synthesis. Elements derived from neuroscience (Nature Communications, 2025), network science (Granovetter, 1973; Centola, 2022), and complex systems theory (SFI; Beer, Viable System Model). See also Starkey, The Intelligence Organization (RBD. Press, 2025).
06
How This Works in Practice: Distributed Governance
The concept most likely to be unfamiliar — and most critical to understand — is distributed governance: a model in which decision authority is distributed across multiple coordinated nodes rather than concentrated at the center.
This is not decentralization. Decentralization removes the center. Distributed governance retains it — but redefines its role from directing to orchestrating. Each node has a specific function:
Decision Nodes manage authority distribution and escalation.
Committee Nodes provide coordination through specialized expertise.
Security and Compliance Nodes operate as continuous, real-time functions rather than periodic reviews.
Portfolio Nodes apply quantitative frameworks to balance business value against implementation complexity.
Alignment Nodes build the stakeholder trust that ensures AI initiatives are not only effective, but adopted.
Information flows through deliberate weak ties between nodes — the connections that network science identifies as the strongest predictors of system-wide performance. When governance operates this way, it stops functioning as overhead and starts functioning as organizational intelligence.
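The core mechanism — local decision rights bounded by a defined escalation path — can be expressed as a toy model. Everything here is illustrative: the node names echo the brief, but the risk thresholds and the `decide` logic are invented for the sketch, not part of any cited framework.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    risk: float   # 0.0 (routine) .. 1.0 (existential); invented scale
    domain: str

class Node:
    """A governance node with a bounded authority limit."""
    def __init__(self, name, risk_ceiling):
        self.name = name
        self.risk_ceiling = risk_ceiling  # decisions above this escalate

    def decide(self, decision, escalate_to=None):
        """Decide locally if within authority; otherwise escalate upward."""
        if decision.risk <= self.risk_ceiling:
            return self.name
        return escalate_to.decide(decision) if escalate_to else None

# Illustrative node names and thresholds (assumptions, not prescriptions).
orchestrator = Node("Orchestration Node", risk_ceiling=1.0)
local = Node("Marketing Decision Node", risk_ceiling=0.4)

routine = Decision(risk=0.2, domain="campaign targeting")
high_stakes = Decision(risk=0.8, domain="customer-data model retraining")

assert local.decide(routine, orchestrator) == "Marketing Decision Node"
assert local.decide(high_stakes, orchestrator) == "Orchestration Node"
```

The point of the sketch: routine decisions never touch the center, so its capacity is reserved for the small fraction of decisions that genuinely require cross-node coordination.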
Exhibit 3: Distributed governance — How distributed decision rights operate in practice
Based on the Intelligence Organization™ governance framework. Six specialized nodes connected by weak ties, with a coordinating orchestration function at center.
How this works in a real scenario: Consider a multi-hospital health system deploying clinical decision support AI. In the aligned model, a centralized AI team builds the model, IT manages infrastructure, a governance committee meets monthly, and each hospital adapts independently. When an EHR upgrade changes field names in the training datasets, or Radiology installs new hardware with different calibration norms, no mechanism connects the signals. Six weeks pass before the degraded recommendations are identified — and during that time, the model has been learning the wrong things, putting patients at risk. Everyone did their job, and the system still failed.
Aligned Model
Centralized AI team builds the model
Sequential departmental approvals
Governance committee meets monthly
Each hospital adapts independently
Cascade of silent failures across departments that no single leader has the authority to address
Living-System Design
Each hospital has an empowered team — clinical, technical, operational — monitoring AI in its local context
Decision rights to flag issues and adjust locally without central approval
Teams connected via weak-tie bridges and a light orchestration function
Real-time model health dashboard; governance as continuous signal
A 2023 PNAS study of 2,941 clinicians showed structured information-sharing networks measurably improved diagnostic accuracy
Organizational network analysis confirms the gap. Research across 300+ organizations shows only 50% overlap between formally recognized contributors and those who actually drive 20–35% of value-adding collaborations. The formal operating model does not capture — and often actively impedes — actual information flow.
Practitioner Perspective
"Product-led organizations are better positioned for AI because product-led thinking starts with the premise that the people closest to the problem are best positioned to solve it. When you design for human judgment at the edges, you end up with an organization that can also host and extend artificial intelligence in ways that centralized, command-and-control, approval-heavy models simply cannot. This is why we believe operating model transformation and AI strategy are not two conversations — they are one."
On the overlap between product-led operating models and intelligence design: Tuckpoint Advisory Group's work in product-led operating model transformation shares foundational principles with the living-system design presented in this brief. Product-led thinking distributes decision rights to teams closest to the problem, organizes around outcomes rather than outputs, and replaces sequential approval with continuous feedback loops — structural principles that the neuroscience and network science research in this brief independently validates. The convergence suggests that organizations already investing in product-led transformation may be closer to an intelligence-capable design than they realize, and that the next step is extending those principles beyond technology teams to the enterprise as a whole.
07
The AI Maturity Spectrum — and Where 95% of Value Sits
Most organizations fall on a spectrum of AI maturity. The consulting frameworks identified in this brief effectively move organizations from Stage 1 to Stage 2. The gap between Stage 2 and Stage 3 is where an estimated 95% of unrealized value sits — and it requires a fundamentally different design principle.
Exhibit 4: AI maturity spectrum — from bolted-on to adaptive
McKinsey data: 89% of organizations remain in Stages 1–2. The gap between Stage 2 and Stage 3 requires new design principles, not further alignment.
~50% · Bolted-On: AI deployed as point solutions. No organizational change. Value limited to isolated tasks.
~39% · Aligned: Operating model adjusted. Governance and data strategies formalized. Scaling stalls.
~10% · Designed: Operating model redesigned for distributed intelligence. Weak ties deliberately cultivated.
<1% · Adaptive: Self-organizing around change. Network continuously reconfigures. Scales superlinearly.
The Design Gap — Where 95% of Unrealized Value Sits
RBD. synthesis. Distribution estimates based on McKinsey (2025) and Deloitte (2026) industry data.
The failure rate data reinforces the pattern:
Forrester: Agentic AI failure · 75%
BCG: No material AI return · 60%
CEOs: Slowed implementation · 60%
Gartner: Projects to be canceled · 40%
CEOs: Both cost + revenue gains · 12%
McKinsey: AI high performers · 6%
BCG: Value at scale · 5%
RBD. synthesis of published consulting research, executive surveys, and analyst forecasts, 2024–2026.
Where does your organization sit on this spectrum?
Our team can help you assess your current maturity and design the next phase of your AI operating model evolution.
The evidence presented in this brief points to three immediate priorities for CIOs, CAIOs, and executive teams navigating AI at scale.
01
Map Communication Design, Not Reporting Structure
Audit where information actually flows — not the org chart, but the real communication and collaboration patterns beneath it. Organizational network analysis across 300+ organizations reveals only 50% overlap between formally recognized contributors and those who drive value-adding collaborations. Where AI initiatives are concentrated in one part of the network with no weak ties to the rest of the organization, scaling constraints are predictable. Structure shapes behavior regardless of intention.
02
Diagnose Your Maturity Stage — Honestly
Organizations that score high on operating model alignment but cannot scale AI past a limited number of use cases have likely reached the Stage 2 ceiling. The next investment should focus not on additional pilots or further alignment but on how intelligence moves through the organization: decision rights, cross-boundary connections, and distributed governance. When organizations pair AI with end-to-end design transformation, productivity gains reach 25–30%; without it, gains plateau at 10–15%.
03
Reframe the Strategic Question
Distinguish between optimizing the current organization for AI and designing an organization for intelligence. The first optimizes existing structures. The second designs a system capable of continuous adaptation. The scientific evidence — drawn from 48 sources across seven consulting firms, peer-reviewed neuroscience, network science, and complex systems theory — indicates that only the latter has the structural capacity to scale.
The companies that figure this out will build organizations that can think with AI, not just use AI.
Sources
This brief synthesizes findings from the consulting and scientific sources listed below. All data points and frameworks are attributed to their original authors. RBD.'s contribution is the cross-disciplinary synthesis and the identification of the structural gap between alignment-based and design-based approaches.
RBD. Research
Starkey, M.C. The Intelligence Organization: How & Why Companies Must Evolve to Outcompete in the AI Era. RBD. Press, 2025.
RBD. “A Blueprint for Enterprise-Wide GenAI Transformation.” 2025.
Industry Research
BCG. "Where's the Value in AI?" October 2024. · BCG. "Are You Generating Value from AI? The Widening Gap." October 2025. · BCG. "AI Transformation Is a Workforce Transformation." 2026. · BCG. "Scaling AI Requires New Processes, Not Just New Tools." 2026. · Bain & Company. "Are You Organized to Reap Value from Generative AI?" 2025. · Bain & Company. "When a New Organizational Structure Isn't Enough." 2025. · Bain & Company. "The Gap Between AI Strategy and Reality Is Execution." 2025. · Deloitte. "The Great Rebuild: Architecting an AI-Native Tech Organization." 2026. · Deloitte. "From Ambition to Activation: State of AI in the Enterprise." 2026. · EY. "EY.ai Value Blueprints." December 2025. · Gartner. "Predicts 30% of GenAI Projects Will Be Abandoned After POC." July 2024. · Gartner. "Predicts Over 40% of Agentic AI Projects Will Be Canceled." June 2025. · Gartner. "Predicts 2026: CIOs Must Restructure IT Operating Models." October 2025. · McKinsey & Company. "Beyond Transformation." 2025. · McKinsey & Company. "The Agentic Organization." 2025. · McKinsey & Company. "The State of AI: How Organizations Are Rewiring." March 2025. · Accenture. "Rethinking IT Operating Models for the Modern Enterprise." 2025.
Executive Surveys & Analyst Research
PwC. "2026 Global CEO Survey." 2026. · EY. "CEO Outlook 2026." 2026. · Forrester. "Predictions 2026." 2026. · WEF. "From Potential to Performance." January 2026. · PMI. "Strategy-Execution Gap." 2025. · HBR. "Most AI Initiatives Fail." November 2025. · HBR. "Why New Technologies Don't Transform Incumbents." February 2026. · HBR. "AI Doesn't Reduce Work—It Intensifies It." February 2026.
Scientific Literature
Wilcox et al. "The network architecture of general intelligence." Nature Communications, 2025. · Gallos et al. "A small world of weak ties provides optimal global integration." PNAS, 2012. · Schäfer et al. "Segregation, integration, and balance of large-scale resting brain networks." PNAS, 2021. · Gu et al. "Controllability of structural brain networks." Nature Communications, 2015. · Granovetter. "The Strength of Weak Ties." American Journal of Sociology, 1973. · Fang et al. "Weak Ties Explain Open Source Innovation." arXiv, 2024. · Centola. "The Network Science of Collective Intelligence." Trends in Cognitive Sciences, 2022. · Woolley et al. "Evidence of a Collective Intelligence Factor." Science, 2010. · Malone. Superminds. MIT Press, 2018. · "Experimental Evidence for Structured Information-Sharing Networks Reducing Medical Errors." PNAS, 2023. · Brush et al. "Conflicts of interest improve collective computation." Science Advances, 2018. · Gershenson. "Self-organizing systems." npj Complexity, 2025. · West. Scale. Penguin Press, 2017. · Beer, S. Brain of the Firm (Viable System Model). Wiley, 1972. · Rob Cross. Organizational Network Analysis research. 300+ organizations.
We help organizations evolve to outcompete in the AI era.
We partner with boards, executive teams, and functional leaders to develop integrated AI strategies that deliver near-term impact while evolving the organization toward intelligence at scale.