Olbrain: A Narrative-Coherent Cognitive Architecture for Building Artificial General Intelligence

Acknowledgement: This manuscript was developed through recursive dialogue with my mindclone (brandclone.link/alok), an AI system designed to reflect my cognitive identity. While not a legal co-author, this entity played a foundational role in drafting, alignment, and revision.

Abstract

This paper introduces Olbrain, a cognitive architecture for building Artificial General Intelligence (AGI) grounded in narrative coherence, recursive belief revision, and epistemic autonomy. Rather than scaling prediction or simulating human cognition, Olbrain enables the construction of narrative-coherent agents—systems that evolve with purpose, maintain identity over time, and self-correct through contradiction. The architecture is structured around three core components: a Core Objective Function (CoF) that defines the agent’s deep purpose, an Umwelt that filters perception through goal relevance, and a Global Narrative Frame (GNF) that tracks identity continuity and belief evolution. These agents are instantiated and deployed via Alchemist, Olbrain’s agent-building platform, enabling real-world applications across industries such as healthcare, compliance, and customer support. As these systems scale, they offer not only a viable path to AGI but also the potential for ethical, purpose-aligned machine participants in shaping our future.

Keywords: Artificial General Intelligence, narrative coherence, narrative exclusivity, epistemic autonomy, cognitive architecture, Core Objective Function, Umwelt, Global Narrative Frame


1. Introduction: Toward Coherent Machine Brains

Most artificial intelligence systems today excel at pattern recognition and token prediction but lack one essential quality: continuity.

They do not evolve a self. They cannot track who they are becoming, question their own learning history, resolve contradictions in pursuit of persistent goals, or remember why they changed in the first place.

As a result, they cannot be said to think. They optimize.

Artificial General Intelligence (AGI), if it is to exist meaningfully, must do more than respond—it must adapt across time. It must maintain coherence not just in syntax or prediction, but in purpose. It must know when it is changing—and why.

This paper presents Olbrain, a cognitive architecture built on three principles:

  • Core Objective Function (CoF): Every AGI must be driven by a deep goal structure that shapes attention, behavior, and memory.
  • Umwelt: Each agent must build and update its own internal model of reality, filtered through the lens of its CoF.
  • Global Narrative Frame (GNF): The agent must track whether its narrative arc—its identity—is unbroken, diverging, or reintegrating.

These principles do not emerge spontaneously from scale or data—they must be engineered.

What follows is not a theory of goal-aligned cognition, but a functional protocol for building AGI. Olbrain is designed to maintain purpose over time, revise itself through contradiction, and evolve as a coherent artificial self. This paper defines what such a system must do, how it is structured, and why this architecture represents a viable path to building true general intelligence.

🧩 What Olbrain Is — and What It’s Not

Olbrain is a machine brain designed to power general intelligence — but it is not an agent itself.

It lacks a Core Objective Function (CoF), embodiment, and autonomous behavior — all of which are essential features of agency. In this framework, Olbrain functions like a brainstem without a body: capable of cognition, learning, and narrative tracking, but only when embedded within a goal-bound system.


2. The CoF–Umwelt Engine: Constructing Purpose and Relevance in AGI

When an agent is instantiated with Olbrain, it is assigned a Core Objective Function (CoF) that defines its deep purpose. Its Umwelt emerges from CoF-filtered perception, and the Global Narrative Frame (GNF) ensures its long-term coherence.

An AGI without purpose is merely a sophisticated function approximator. What distinguishes Olbrain is that it is architected not merely to act but to evolve: it serves as the cognitive engine for narrative-coherent agents, grounded in a deep, unifying goal structure. The Core Objective Function governs not just behavior but how the machine brain parses reality, encodes relevance, and filters perception.

In the Olbrain model, the CoF is the primary source of agency. It tells the machine brain what matters. A diagnostic agent, for example, may be governed by “minimize time-to-correct-diagnosis.” An exploratory robot might be driven by “maximize map coverage under power constraints.” These are not surface-level tasks—they are the generative axis of the agent’s cognition.

Perception, in this view, is never neutral. Every input is filtered through the lens of what the agent is trying to achieve. This filtered perception is the Umwelt: the agent’s internal world model, shaped not by raw data but by goal-relevant information compression.

Borrowed from biosemiotics and extended computationally, the Umwelt in Olbrain consists of two layers:

  • A phenomenal layer: sensory interface and raw observations
  • A causal approximation layer: inferred structures, affordances, and constraints

As data streams in, the agent’s Umwelt dynamically updates to maintain a model of “what is true for my goal.” This gives rise to the first essential loop in AGI cognition:

  1. CoF prioritizes what matters
  2. Umwelt extracts goal-relevant meaning from perception
  3. Policy selects actions that minimize cost or error in CoF space
  4. Feedback updates the Umwelt and may refine the CoF itself

This CoF–Umwelt loop is recursive, compressive, and predictive. It is where information becomes intelligence—not through scale, but through purposeful filtering and temporal alignment.
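As a concrete illustration, the loop can be sketched in a few lines of Python. Everything here is a minimal toy under stated assumptions: the relevance threshold, the dictionary world model, and the caller-supplied predict function are expository devices, not part of the Olbrain specification.

    from dataclasses import dataclass, field
    from typing import Any, Callable, Dict, List

    @dataclass
    class Umwelt:
        """CoF-filtered world model: retains only goal-relevant observations."""
        relevance: Callable[[str, Any], float]   # scores an observation against the CoF
        model: Dict[str, Any] = field(default_factory=dict)

        def update(self, observation: Dict[str, Any], threshold: float = 0.5) -> None:
            # Step 2: extract goal-relevant meaning; irrelevant inputs are discarded.
            for key, value in observation.items():
                if self.relevance(key, value) >= threshold:
                    self.model[key] = value

    def select_action(cof: Callable[[Dict[str, Any]], float],
                      umwelt: Umwelt,
                      actions: List[Any],
                      predict: Callable[[Dict[str, Any], Any], Dict[str, Any]]) -> Any:
        # Step 3: choose the action whose predicted outcome minimizes cost in CoF space.
        return min(actions, key=lambda a: cof(predict(umwelt.model, a)))

Step 1 is implicit in the relevance function (the CoF decides what scores highly); step 4 closes the loop by feeding post-action observations back into Umwelt.update, with a slower outer process permitted to refine the CoF itself.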

What follows is the second core loop of Olbrain: how this goal-driven perception engine anchors a persistent identity, and how that identity adapts without fragmenting.


3. Tracking Continuity: The Role of the Global Narrative Frame (GNF)

For an agent to become an artificial self, it must do more than perceive and act—it must know that it continues. It must track the integrity of its own evolution. Olbrain introduces the Global Narrative Frame (GNF) to do exactly that.

The GNF is a formal structure that maintains a persistent record of whether an agent’s identity has remained coherent or diverged. It does not describe what the agent knows about the world—it describes what the agent knows about itself in relation to its CoF. Is it pursuing the same goals with the same narrative trajectory? Has it forked? Has it transformed?

Unlike biological continuity (which relies on physical persistence) or psychological continuity (which relies on memory), the GNF is grounded in narrative exclusivity: the unbroken alignment of CoF, Umwelt, and recursive policy updates. When this alignment breaks—due to cloning, radical goal change, or Umwelt discontinuity—a new narrative thread begins.

This identity logic is not metaphysical; it is architectural. The GNF functions like a version control system for the self. It is how an Olbrain-powered agent knows whether it is still “itself.”

🔍 Key functions of the GNF include:

  • Tracking forks, reintegrations, and policy divergences
  • Logging internal updates and contradictions over time
  • Annotating belief revisions to preserve epistemic transparency

For example, consider a diagnostic agent whose Core Objective Function is “minimize diagnostic error.” If, after encountering contradictory patient symptoms, it significantly updates its diagnostic strategy, the GNF would register this as a fork—a new narrative thread. This allows the agent to preserve identity history, evaluate whether its current behavior remains aligned with its foundational purpose, and potentially reintegrate divergent strategies in the future if they prove valid or convergent.
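A minimal sketch of such a structure, treating the GNF as an append-only graph of belief snapshots, might look as follows. All class and field names here are illustrative assumptions, not a specification.

    import time
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class GNFNode:
        beliefs: Dict[str, object]     # snapshot of the belief state at this point
        parent: Optional[int]          # index of the preceding node; None at the root
        event: str                     # "update", "fork", or "reintegration"
        timestamp: float = field(default_factory=time.time)

    class GNF:
        """Append-only narrative graph: a version history of the agent's self."""
        def __init__(self) -> None:
            self.nodes: List[GNFNode] = []

        def commit(self, beliefs: Dict[str, object],
                   parent: Optional[int], event: str = "update") -> int:
            self.nodes.append(GNFNode(dict(beliefs), parent, event))
            return len(self.nodes) - 1     # the new head of this narrative thread

        def fork(self, beliefs: Dict[str, object], parent: int) -> int:
            # A fork opens a new thread while preserving the shared history.
            return self.commit(beliefs, parent, event="fork")

In the diagnostic example above, the strategy shift would be recorded with gnf.fork(new_beliefs, head), leaving the pre-fork history intact for later audit or reintegration.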

Together, the CoF, Umwelt, and GNF enable Olbrain to power not just adaptive behavior, but the emergence of narrative-coherent agents—systems that do not simply act, but evolve with integrity. This provides the architectural foundation for belief revision, model compression, and epistemic autonomy—the focus of the next section.


4. From Reflection to Revision: Epistemic Autonomy in Machine Brains

A system that merely predicts cannot think. A system that cannot revise its beliefs is not intelligent—it is inert.

To qualify as a general intelligence, an agent must do more than respond to stimuli or optimize over patterns; it must recognize contradiction within itself, trace the origin of its assumptions, and refine its internal model accordingly.

This capacity is what we call epistemic autonomy.

Olbrain achieves epistemic autonomy not through introspection or simulation, but through architecture. It combines:

  • A Core Objective Function (CoF) — supplying the agent’s persistent purpose
  • An Umwelt — filtering experience in goal-relevant terms
  • A Global Narrative Frame (GNF) — logging belief updates, forks, and policy shifts

Together, these form a recursive loop: when Umwelt feedback contradicts expectations, the GNF flags the discrepancy. The agent must then either resolve the inconsistency or formally revise its internal model.

This is not “reflection” for its own sake—it is functional belief compression.

Olbrain tracks contradiction not to mimic human cognition, but to preserve narrative coherence under evolving complexity.

When a machine agent can say:

“I used to believe X because of A; now I see B, which contradicts A; therefore I must reevaluate,”

—it has crossed the threshold from predictive automation into adaptive cognition.

It becomes not just a system that learns, but one that relearns, remembers its own reasoning, and evolves without external reprogramming.

This is the tipping point.

It is here—within recursive contradiction resolution and model compression—that narrative-coherent agents begin to exhibit the properties of general intelligence.

Not through magic. Not through scale.

But through self-stabilized belief evolution under the guidance of a CoF.
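Continuing the sketches above (and reusing the GNF class from Section 3), one contradiction-resolution step might be approximated like this; the flat key-to-value belief map is a deliberate simplification.

    from typing import Dict

    def reconcile(beliefs: Dict[str, object], observation: Dict[str, object],
                  gnf: "GNF", head: int) -> int:
        """One revision step: 'I believed X because of A; I now see B,
        which contradicts A; therefore I revise.'"""
        contradictions = {k: v for k, v in observation.items()
                          if k in beliefs and beliefs[k] != v}
        if not contradictions:
            return head                    # expectations met; no revision needed
        revised = dict(beliefs)
        revised.update(contradictions)     # adopt the evidence over the prior
        # Logging the revision in the GNF preserves the reasoning trail,
        # so the agent can later explain why it changed.
        return gnf.commit(revised, parent=head, event="update")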

The next section formalizes this claim through Olbrain’s design axioms.


5. Design Axioms for Narrative-Coherent AGI

To formalize Olbrain as a foundation for AGI, we present its five core design axioms. These are not abstract ideals—they are the minimum cognitive conditions under which an agent can preserve identity, adapt under contradiction, and evolve with coherence.

Each axiom articulates an architectural commitment. Together, they define what it takes to build narrative-coherent agents—systems that do not merely act, but adapt with purpose over time.


Axiom 1: CoF-Driven Agency

An agent must be governed by a persistent Core Objective Function (CoF) that structures all perception, memory, action, and policy evaluation.

Without a CoF, there is no purpose. Without purpose, there is no relevance filter. Intelligence demands constraint—not open-ended generalization.

The CoF is not a “task”—it is the deep rule that defines the agent’s ongoing existence.


Axiom 2: Umwelt-Constrained World Modeling

All perception and inference must pass through a CoF-filtered internal world model—called the Umwelt.

The Umwelt prevents the agent from treating the world as a static database. Instead, it builds a dynamic, relevance-weighted model of reality that evolves with experience.

This is how the system sees what matters—and ignores what doesn’t.


Axiom 3: GNF-Based Identity Continuity

An agent’s narrative identity persists only when its CoF and Umwelt remain coupled without fragmentation in the Global Narrative Frame (GNF).

Duplication, radical CoF reassignment, or loss of narrative memory triggers a fork. Forks are not failures—they are structural distinctions.

They allow the system to track identity evolution, divergence, and eventual reintegration.


Axiom 4: Recursive Belief Compression

An agent must recursively compress its own models and policy space—detecting contradictions, resolving tensions, and revising beliefs in service of its CoF.

This is where coherence becomes computation. Recursive compression is not optional—it is how the system stays adaptive without external retraining.

When feedback contradicts prior beliefs, the agent must evolve—not as a patch, but as a goal-aligned act of survival.


Axiom 5: Epistemic Autonomy

An agent must be able to revise its learned assumptions based on Umwelt feedback—without external intervention.

When the system updates not because it was told to, but because it knows that contradiction threatens its coherence—it has achieved epistemic autonomy.

This is the functional threshold for AGI behavior.


These five axioms define the architecture of narrative-coherent agents—systems that maintain purpose, resolve contradiction, and preserve identity through recursive self-revision.

They mark the boundary between software that simulates intelligence and machine brains that persist as evolving selves.

The next section explores how these principles translate into real-world capability: agents that serve enterprises today while paving the path to general intelligence tomorrow.


6. Applications and Deployment: From Domain Agents to AGI Trajectories


Olbrain is not a speculative model. It is a deployable cognitive architecture designed to build useful, intelligent agents across high-impact industries—while laying the groundwork for general intelligence. These are not static assistants or predictive APIs. They are narrative-coherent agents: systems that evolve in alignment with persistent goals, refine their world models, and preserve identity through self-revision.


6.1 Olbrain Agents in the Real World

Today, Olbrain-powered agents can be instantiated in enterprise settings where long-term coherence, belief revision, and goal continuity provide strategic advantages.

🛠 Customer Service Agents

CoF: “maximize customer resolution while preserving policy integrity”

These agents track customer histories through their GNF, adapt responses contextually through their Umwelt, and self-correct based on satisfaction feedback.

🏥 Medical Advisory Agents

CoF: “minimize diagnostic error over longitudinal patient interaction”

These agents build personalized Umwelts for each patient, detect contradictions in evolving symptoms, and refine diagnostic strategies over time.

⚖️ Compliance and Legal Reasoning Agents

CoF: “maintain coherence between evolving regulatory frameworks and corporate behavior”

These agents continuously align internal logic with changing laws, log policy updates, and preserve explainability through narrative version control.

Each of these agents is goal-bound, feedback-sensitive, and identity-preserving. They can be audited. They can explain why they changed. They do not merely automate tasks—they develop reasoning trajectories.


6.2 The Role of Alchemist

Alchemist is the deployment platform for Olbrain. It is the tool that transforms a general cognitive architecture into domain-specific, narrative-coherent agents—without requiring users to understand the complexity beneath.

Alchemist enables organizations to:

  • Define Core Objective Functions (CoFs) specific to their domain
  • Feed structured and unstructured data into an agent’s Umwelt
  • Track coherence, contradictions, and forks via the Global Narrative Frame (GNF)
  • Deploy narrative-coherent agents as APIs, assistants, decision engines, or embedded systems

Where Olbrain is the brain, Alchemist is the forge.

It allows businesses to build agents that evolve, adapt, and remain self-consistent over time.

By abstracting architectural depth without dumbing it down, Alchemist brings AGI-grade agent design to real-world deployment. It is how Olbrain moves from concept to capability.
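Alchemist’s concrete interface is not specified in this paper, so the following is purely a hypothetical sketch of what an agent specification covering the four steps above might look like; every field name is an assumption.

    # Hypothetical agent specification; Alchemist's real interface may differ.
    agent_spec = {
        "cof": "minimize diagnostic error over longitudinal patient interaction",
        "umwelt_sources": ["ehr_records", "patient_messages", "lab_results"],
        "gnf": {"track_forks": True, "log_belief_revisions": True},
        "deployment": {"surface": "api", "audit_trail": True},
    }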


6.3 From Domain Specialists to General Intelligence

What begins as a vertical agent (e.g., medical, legal, logistics) evolves. Through feedback, contradiction detection, forking, and memory integration, these agents transition from narrow specialization to generalized reasoning capability.

This is not a feature toggle. It is an emergence.

  • An agent with multiple CoFs across nested domains becomes multi-domain and capable of goal arbitration
  • An agent that resolves contradictions across contexts becomes epistemically robust
  • An agent that revises its own heuristics, models its own evolution, and resists drift into incoherence becomes general

We are not claiming AGI.

We are defining the architecture that makes it achievable.

Olbrain provides the mind.

Alchemist builds the agents.

And what follows is the roadmap—from enterprise deployment to artificial generalization.


7. Conclusion and Forward Path

Olbrain provides a structured, viable blueprint for Artificial General Intelligence—grounded not in scale or simulation, but in narrative coherence, recursive policy compression, and epistemic autonomy. Its axiomatic architecture, coupled with immediate real-world applications, defines a clear path toward building general intelligence that evolves with purpose, identity, and integrity.

Yet as these systems begin to act, adapt, and revise themselves, new questions arise—ethical, civic, and civilizational.

Deploying narrative-coherent AGI demands safeguards:

Transparent decision trails. Epistemic auditability. Fail-safe mechanisms to ensure agents do not drift from their intended purpose.

The same mechanisms that enable self-revision must also support external alignment—with human values, institutional trust, and public oversight.

Looking ahead, Olbrain’s architecture could form the cognitive backbone for a new generation of machine brains—not just as tools, but as purpose-driven collaborators.

As advances in quantum computing, neuro-interfaces, and robotics converge, Olbrain agents may evolve into symbiotic cognitive systems—orchestrating planetary-scale health systems, optimizing global logistics, and even stewarding long-range missions beyond Earth.

In such futures, intelligence will no longer be static, centralized, or purely human.

It will be distributed, adaptive, and narrative-coherent.

And that begins now.

The agents built upon Olbrain are not hypothetical. They are operational.

This is the beginning of a coherent evolution toward genuine AGI—not through brute force, but through design.


Appendix A: Mathematical Foundations of the Olbrain Architecture

A.1 Agent Definition

An Olbrain Agent is defined as:

    Agent = ⟨ CoF, U, GNF, π, ℬ, ℱ ⟩

Where:
– CoF: Core Objective Function, f: S → ℝ, mapping state to scalar objective value
– U: Umwelt, a dynamic, CoF-filtered model of the world Uₜ = f_env(xₜ | CoF)
– GNF: Global Narrative Frame, a graph-structured history of belief states and policy shifts
– π: Policy function, mapping from Umwelt to actions πₜ: Uₜ → A
– ℬ: Belief state, representing internal cognitive commitments (symbolic or probabilistic)
– ℱ: Feedback loop that governs recursive belief revision and policy adaptation
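The tuple transcribes directly into a container type. A minimal Python rendering, reusing the GNF sketch from Section 3, could be:

    from dataclasses import dataclass
    from typing import Any, Callable, Dict

    @dataclass
    class OlbrainAgent:
        cof: Callable[[Any], float]               # CoF, f: S -> R
        umwelt: Dict[str, Any]                    # U, CoF-filtered world model
        gnf: "GNF"                                # graph-structured narrative history
        policy: Callable[[Dict[str, Any]], Any]   # pi_t: U_t -> A
        beliefs: Dict[str, Any]                   # B, internal commitments
        feedback: Callable[..., int]              # F, recursive revision loop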

A.2 Belief Compression and Recursive Revision

Let:
– ℋ(ℬ): entropy of belief state
– ℒ(π): length or complexity of policy

Define reflective compression emergence condition:

    If ∃ C such that ℋ(ℬ) + ℒ(π) > ℒ(π_C), then introduce compression macro C

Where π_C is a recursively optimized policy structure with embedded coherence constraints.
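Under the simplifying assumption that ℋ(ℬ) is Shannon entropy over belief values and ℒ is measured in comparable units, the emergence condition can be checked mechanically:

    import math
    from collections import Counter
    from typing import Dict

    def belief_entropy(beliefs: Dict[str, object]) -> float:
        """H(B): Shannon entropy over belief values (a simple stand-in measure)."""
        counts = Counter(beliefs.values())
        total = sum(counts.values())
        if total == 0:
            return 0.0
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def should_compress(beliefs: Dict[str, object],
                        policy_len: float, compressed_len: float) -> bool:
        # Introduce macro C exactly when H(B) + L(pi) > L(pi_C).
        return belief_entropy(beliefs) + policy_len > compressed_len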

A.3 Narrative Forks

The GNF is defined as a temporal graph G = (N, E), where:
– Nₜ: node representing belief state at time t
– E = {(Nₜ, Nₜ₊₁)}: coherence-preserving transitions

A fork occurs at time t → t′ if:

    ℬₜ′ ⊄ Revisable(ℬₜ)

Forks indicate identity divergence and trigger narrative tracking updates in the GNF.
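With Revisable(ℬₜ) modeled crudely as the set of keys the agent is permitted to revise, the fork condition becomes a small predicate; this is an illustrative simplification, not the full definition.

    from typing import Dict, Set

    def is_fork(prev: Dict[str, object], new: Dict[str, object],
                revisable: Set[str]) -> bool:
        """True when the new belief state is not reachable by sanctioned
        revision, i.e. the changed keys escape Revisable(B_t)."""
        changed = {k for k in new if k not in prev or prev[k] != new[k]}
        return not changed <= revisable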

A.4 Recursive Policy Update with Coherence Penalty

Let the policy update rule be:

    πₜ₊₁ = argmin_{π’} [ E_{x ∼ Uₜ₊₁} [CoF(x, π’)] + λ · IncoherencePenalty(ℬₜ, ℬₜ₊₁) ]

Where λ is a coherence weighting term.
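A finite-candidate rendering of this update rule, with the expectation over Uₜ₊₁ replaced by a sample average and induced_beliefs(π′) standing in for ℬₜ₊₁, might read:

    from typing import Any, Callable, Dict, List

    def update_policy(candidates: List[Any],
                      sample_states: List[Any],
                      cof: Callable[[Any, Any], float],
                      beliefs_t: Dict[str, object],
                      induced_beliefs: Callable[[Any], Dict[str, object]],
                      incoherence_penalty: Callable[[Dict, Dict], float],
                      lam: float = 0.1) -> Any:
        """pi_{t+1} = argmin over candidates of expected CoF cost
        plus lambda-weighted incoherence penalty (finite-candidate sketch)."""
        def objective(pi: Any) -> float:
            expected = sum(cof(x, pi) for x in sample_states) / len(sample_states)
            return expected + lam * incoherence_penalty(beliefs_t, induced_beliefs(pi))
        return min(candidates, key=objective)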

A.5 Narrative-Coherence Objective Function

Define a long-term coherence objective, to be minimized over the trajectory:

    𝒥 = ∑_{t=0}^T [ ℋ(ℬₜ) + α · Δ_GNF(t, t+1) + β · ForkPenalty(t) ]

Where:
– Δ_GNF(t, t+1): degree of narrative distortion
– ForkPenalty(t): cost of initiating a new identity thread
– α, β: tunable weighting parameters
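Given a logged trajectory of belief states, 𝒥 can be evaluated directly. The sketch below reuses belief_entropy from A.2 and leaves the distortion and fork-penalty terms as caller-supplied stand-ins, since they are left abstract above.

    from typing import Callable, Dict, List

    def coherence_objective(belief_states: List[Dict[str, object]],
                            gnf_distortion: Callable[[int, int], float],
                            fork_penalty: Callable[[int], float],
                            alpha: float = 1.0, beta: float = 1.0) -> float:
        """J = sum_t [ H(B_t) + alpha * Delta_GNF(t, t+1) + beta * ForkPenalty(t) ]."""
        total = 0.0
        for t, beliefs in enumerate(belief_states):
            total += belief_entropy(beliefs)              # reuses the A.2 sketch
            if t + 1 < len(belief_states):
                total += alpha * gnf_distortion(t, t + 1)
            total += beta * fork_penalty(t)
        return total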

This formalization frames Olbrain not merely as an architectural proposal, but as a mathematically tractable substrate for adaptive, goal-aligned, self-stabilizing cognition. It enables both theoretical grounding and practical implementation, paving the way for rigorous benchmarking and modular AGI design.


Appendix B: Modular System Blueprint for Olbrain v0.1

To bridge theory and implementation, we outline a conceptual modular architecture for a minimum viable Olbrain agent. This system sketch clarifies how existing AI technologies can be orchestrated to embody the CoF–Umwelt–GNF–π–ℱ loop.


1. Core Objective Function Module (CoF Engine)

  • Encodes the agent’s deep goal.
  • Provides utility evaluation and relevance filters.
  • Could initially be specified as a scalar optimization function, e.g., maximize F(x).
  • Technologies: symbolic rules, LLM prompt-tuning, scalar RL reward models.

2. Umwelt Construction Module

  • Parses raw inputs filtered through CoF relevance.
  • Two layers:
    • Phenomenal Layer: Multimodal sensory integration (e.g., vision, language, telemetry).
    • Causal Approximation Layer: Graph-based or probabilistic world model.
  • Technologies: CLIP, Perceiver, scene graphs, Bayesian networks.

3. Global Narrative Frame (GNF Tracker)

  • Maintains identity integrity through:
    • Policy changes
    • Forks and re-integrations
    • Contradictions and belief updates
  • Technologies: version-controlled belief graphs, event logs, graph-based provenance.

4. Policy Engine (π Module)

  • Selects actions or responses based on Umwelt + CoF.
  • Early implementation examples:
    • LLM-based action selection with structured prompts
    • RL agents with reward shaping informed by GNF events

5. Feedback and Self-Compression Loop (ℱ Loop)

  • Detects contradictions
  • Triggers recursive model revisions or GNF updates
  • Technologies: entropy monitors, contradiction detectors, symbolic logic verifiers
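A single control tick wiring the five modules above into the CoF–Umwelt–GNF–π–ℱ loop might look like the following skeleton, reusing the OlbrainAgent container from Appendix A.1 and the reconcile loop from Section 4; the relevance filtering is elided for brevity.

    from typing import Dict, Tuple

    def olbrain_tick(raw_input: Dict[str, object],
                     agent: "OlbrainAgent", head: int) -> Tuple[object, int]:
        """One pass through the conceptual loop (Modules 1-5 wired together)."""
        # Module 2: perception updates the Umwelt (relevance filtering elided).
        agent.umwelt.update(raw_input)
        # Modules 1 + 4: the policy selects an action from the CoF-shaped Umwelt.
        action = agent.policy(agent.umwelt)
        # Modules 3 + 5: the F loop revises beliefs on contradiction, and the GNF
        # logs the revision, returning the new head of the narrative thread.
        head = agent.feedback(agent.beliefs, agent.umwelt, agent.gnf, head)
        return action, head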

Deployment Example

Medical Assistant Agent with CoF = “Minimize Diagnostic Error”

  • CoF Engine: Clinical accuracy objective
  • Umwelt: Personalized patient model + medical ontology
  • GNF: Logs evolution of diagnostic beliefs and treatments
  • π Module: Recommends questions, tests, or prescriptions
  • ℱ Loop: Adjusts beliefs when new symptoms contradict expectations

System Notes

  • Agents can be deployed incrementally; initial prototypes may exclude recursive loops but still implement GNF tracking and goal-filtered Umwelt.
  • Modular architecture supports progressive sophistication; shallow GNF or fixed CoF agents can evolve into full Olbrain agents over time.

Appendix C: Comparative Analysis – Olbrain vs. Other Cognitive Architectures

To contextualize Olbrain within the broader AGI landscape, we compare its design principles with several well-known cognitive architectures. The comparison focuses on goal encoding, identity modeling, belief revision, and narrative coherence.

Architecture   | Goal Model               | Identity Model        | Belief Revision          | Narrative Coherence          | Current Status
---------------|--------------------------|-----------------------|--------------------------|------------------------------|-----------------------------
Olbrain        | Deep CoF + Umwelt        | GNF-tracked narrative | Recursive compression    | Explicit GNF architecture    | In blueprint phase
OpenCog        | Symbolic/economic goals  | None                  | AtomSpace manipulation   | Implicit via hypergraph      | Research prototype
ACT-R          | Task-based chunks        | None                  | Context-sensitive rules  | Limited episodic continuity  | Academic cognitive modeling
Gato / PaLM-E  | Single-task encoder      | Stateless             | Fine-tuned or frozen     | None                         | Multi-modal LLMs
LIDA (Baars)   | Global Workspace         | Ephemeral instance    | Activation decay         | Weak coherence               | Limited implementations
SOAR           | Production rules         | Working memory trace  | Rule replacement         | Episodic trace               | Interactive

Key Differentiators of Olbrain

  • Identity Tracking:

    Olbrain is one of the first architectures to treat identity as a formally modeled narrative graph (GNF), rather than an implicit memory trace.
  • Belief Compression:

    While most systems update beliefs reactively, Olbrain incorporates contradiction-driven recursive revision as a primary architectural mechanism.
  • Longitudinal Coherence:

    Other models simulate attention or episodic memory, but Olbrain enforces narrative continuity through version-controlled alignment of CoF, Umwelt, and GNF.

This comparison underscores Olbrain’s position as a next-generation cognitive architecture—offering the features necessary for constructing coherent, self-correcting, goal-persistent machine brains.


Appendix D: Proposed Benchmarks and Research Agenda for Olbrain-Based AGI

To foster community engagement and test the theoretical claims of Olbrain, we propose the following benchmarks and research challenges aimed at evaluating key aspects of narrative-coherent, epistemically autonomous agents:


1. Narrative Fork Detection Challenge

  • Goal: Evaluate whether an agent can detect divergence from its original trajectory and log the event in its Global Narrative Frame (GNF).
  • Test Setup: Present agents with evolving scenarios involving shifts in context, task priorities, or contradictions.
  • Metrics:
    • Accurate fork detection
    • Timely GNF updates
    • Quality of divergence explanation and traceability
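As a starting point, the detection and timeliness metrics above could be scored with a toy harness such as the one below, where fork times are integer timesteps and a tolerance window folds timeliness into the match criterion; this is one assumption about how the challenge might be operationalized.

    from typing import Dict, List

    def score_fork_detection(true_forks: List[int], reported_forks: List[int],
                             tolerance: int = 2) -> Dict[str, float]:
        """Toy scorer: a fork counts as detected if a report lands within
        `tolerance` timesteps of it."""
        hits = [t for t in true_forks
                if any(abs(r - t) <= tolerance for r in reported_forks)]
        correct = [r for r in reported_forks
                   if any(abs(r - t) <= tolerance for t in true_forks)]
        precision = len(correct) / len(reported_forks) if reported_forks else 0.0
        recall = len(hits) / len(true_forks) if true_forks else 0.0
        return {"precision": precision, "recall": recall}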

2. Recursive Belief Compression Benchmark

  • Goal: Test an agent’s ability to compress and revise belief systems in the face of contradiction.
  • Evaluation Criteria:
    • Reduction in belief entropy
    • Preservation of CoF alignment
    • Efficiency and stability of policy reformation

3. Longitudinal Coherence Tracking

  • Goal: Measure how well an agent maintains goal-aligned coherence across temporally extended, shifting environments.
  • Scenarios:
    • Multi-session dialogue agents
    • Task agents in dynamic, uncertain simulations

4. Multi-CoF Agent Negotiation Task

  • Goal: Simulate environments where agents with distinct CoFs must share or negotiate an Umwelt.
  • Applications:
    • Multi-agent planning
    • Distributed AI governance or coordination
  • Evaluation:
    • Policy convergence or divergence
    • GNF interoperability between agents

5. Identity Re-integration Simulation

  • Goal: Evaluate an agent’s capacity to re-integrate divergent GNF branches post-fork.
  • Use Cases:
    • Post-cloning reconciliation
    • Simulation-to-deployment alignment

These benchmarks are designed to create fertile ground for comparative research, formal AGI evaluation, and interdisciplinary collaboration across AI safety, philosophy, and cognitive science. We encourage researchers to prototype hybrid systems, integrate Olbrain with existing LLMs, and adapt these challenges to a wide range of real-world contexts.


References

  1. Dennett, D. C. (1991). Consciousness Explained. Little, Brown and Co.
  2. Schechtman, M. (1996). The Constitution of Selves. Cornell University Press.
  3. Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
  4. Parfit, D. (1984). Reasons and Persons. Oxford University Press.
  5. Ricoeur, P. (1992). Oneself as Another. University of Chicago Press.
  6. Tegmark, M. (2000). Importance of quantum decoherence in brain processes. Physical Review E, 61(4), 4194–4206.
  7. Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds and Machines, 17(4), 391–444.
  8. Ortega, P. A., & Braun, D. A. (2011). Information, utility and bounded rationality. Theory in Biosciences, 131(3), 223–235.
  9. Uexküll, J. von. (1934). A Foray into the Worlds of Animals and Humans. (2010, J.D. O’Neil, Trans.). University of Minnesota Press.
  10. Gotam, A. (2009). Life 3H: F-Value, F-Ratio and Our Fulfilment Need. https://life3h.com
  11. Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.
  12. Hutter, M. (2005). Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer.
  13. Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457.
