How We May Think We Think: A Fractal Model of Cognition

18. May 2025
Cognition · Recursion · Complexity · Attractors · Scale Invariance · Human-AI Collaboration

What if thinking isn’t a straight line, but a spiral: extract, map, test, compress — repeat?

Start with a pattern you can see: the same shapes repeating across scale — from cosmic filaments to river deltas to neural networks. Then take it seriously as a model for cognition: how attention stabilizes, how beliefs harden, how cultures converge, and why our tools (including human–AI systems) should amplify inquiry instead of optimizing it into premature closure.

Fractal cognition

Look at a map of the cosmic web — the large-scale structure of the universe that grew from the faint ripples recorded in the cosmic microwave background, the afterglow of the Big Bang. You’ll see clusters and filaments, nodes connected by threads, dense regions separated by voids. Zoom down to galaxy superclusters. Similar patterns. Down to individual galaxies. Similar patterns. Down to solar systems, planetary rings, weather systems, river deltas, blood vessels, neural networks.

The universe exhibits recurring organizational principles. Similar structures emerge at different scales.

From the largest structures in space to the firing patterns in your brain, reality seems to favor certain organizational motifs: networks within networks, clusters within clusters, attractors within attractors. What if consciousness, cognition, and culture operate through analogous principles — not identical mechanisms, but similar dynamics of emergence and stabilization?

What if everything from how you recognize a face to how civilizations form beliefs follows related organizational patterns — variations on a theme of recursive processing that creates temporary stability within ongoing flow?

Rather than claiming that neurons and galaxies are literally the same thing, I’m exploring whether certain organizational principles might illuminate patterns across different domains of complexity.

The Moment Things Click

I had one of those teaching moments in 2014 that reframed how I thought about things. I was lecturing on information design and I was explaining the concept of scale invariance, showing students these exact images — cosmic web, neural networks, city street layouts, river systems. I remember putting up slide after slide, watching their faces as the pattern became undeniable. Scale invariance as an organizing principle of reality itself.

But what does it mean for something to “click”?

In the terms we’re exploring here, that moment of recognition was stabilization over accumulated generalizations—a cascade of mappings suddenly achieving coherence across multiple scales. For months, I’d been extracting similar patterns from different domains: cosmology, neuroscience, urban planning, hydrology. Each domain had its own explanatory framework, its own vocabulary, its own experts.

Standing there in that classroom, I wasn’t just seeing similarities — I was mapping patterns onto patterns onto patterns. The branching structure of river deltas mapped onto the branching structure of blood vessels, which mapped onto the branching structure of neural dendrites, which mapped onto the branching structure of galactic filaments. Each mapping reinforced the others until a meta-pattern emerged: networks within networks, attractors within attractors, recursion at every scale.

The “click” was the moment when these accumulated mappings achieved sufficient coherence to compress into a single insight: the universe is recursive. The same organizational principles repeat at every level of scale. Not just metaphorical similarity — structural identity.

Later that day, we dove into the history of cartography. How mapmakers compress three-dimensional territory onto two-dimensional surfaces. How they decide what to include, what to abstract, what to discard. A few weeks later, we moved to early AI systems — pattern matching, embeddings, the strange human tendency toward apophenia (seeing meaningful patterns in random data).

Over the course of a few weeks, it all crystallized. Maps aren’t just representations of space — they’re cognitive technologies. They externalize the same recursive process that builds meaning inside our heads. Extract features from overwhelming terrain. Map them onto manageable representations. Evaluate what matters. Compress into actionable form. Use that compression to navigate back to the territory.

The same spiral that builds consciousness builds culture. The same recursion that fires neurons fires civilizations.

The insight has driven every system I’ve built since. Because if consciousness really is recursive loops achieving temporary stability, then our tools should amplify these loops, not circumvent them.

Which brings me to a fundamental disagreement with how we’re approaching AI.

I hate — and I mean hate — the notion that AI should “offload” human cognition. That we should use these systems to bypass thinking rather than amplify it. The dominant narrative around AI productivity tools misses the entire point. We’re building systems that encourage cognitive laziness when we should be building systems that make thinking irresistible.

Cognition isn’t a chore to be automated away. It’s the most fundamentally human activity. It should be enhanced, not replaced.

The real opportunity isn’t artificial intelligence that thinks for us but hybrid systems that think with us — systems that understand the recursive nature of consciousness and work with it rather than short-circuiting it.

To build such systems, we first need to understand how thinking actually works across scales.

Some scaffolding

Before we dive deep, let’s establish some conceptual scaffolding for readers coming from outside complexity science and systems thinking.

Emergence is when simple rules at one level generate complex behaviors at higher levels. Flocking birds follow three basic rules (separation, alignment, cohesion), yet create complex collective patterns. No single bird “knows” the flock’s behavior — it emerges from local interactions.
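
To make the point concrete, here is a minimal sketch of those three rules in code: a toy two-dimensional world, with weights and radii chosen for illustration rather than taken from any real model.

```python
import math
import random

# A minimal boids sketch: each bird follows three purely local rules.
# Weights and radii are illustrative placeholders, not tuned values.
N, NEIGHBOR_R, CROWD_R, DT = 30, 10.0, 2.0, 0.1
boids = [[random.uniform(0, 100), random.uniform(0, 100),   # position x, y
          random.uniform(-1, 1), random.uniform(-1, 1)]     # velocity vx, vy
         for _ in range(N)]

def step(boids):
    for b in boids:
        near = [o for o in boids
                if o is not b and math.dist(b[:2], o[:2]) < NEIGHBOR_R]
        if not near:
            continue
        # Cohesion: drift toward the local centre of mass.
        coh = [sum(o[i] for o in near) / len(near) - b[i] for i in (0, 1)]
        # Alignment: match the neighbours' average velocity.
        ali = [sum(o[i] for o in near) / len(near) - b[i] for i in (2, 3)]
        # Separation: push away from neighbours that are too close.
        sep = [sum(b[i] - o[i] for o in near if math.dist(b[:2], o[:2]) < CROWD_R)
               for i in (0, 1)]
        for i in (0, 1):
            b[i + 2] += 0.01 * coh[i] + 0.05 * ali[i] + 0.05 * sep[i]
    for b in boids:
        b[0] += b[2] * DT
        b[1] += b[3] * DT

for _ in range(500):
    step(boids)   # purely local rules; no bird carries a model of the flock
```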

Recursion is when a system’s outputs become its own inputs. Your thoughts about your thoughts. Culture studying itself. Systems that model themselves. Self-reference creates loops that can spiral upward into greater complexity or downward into infinite regress.

Attractors are stable states that systems tend toward. A ball rolling in a bowl naturally settles at the bottom — that’s an attractor. But in cognitive systems, attractors are more like “belief basins”—patterns of thought that pull similar ideas toward them, reinforcing themselves through repetition.

Scale invariance means the same patterns repeat at different magnifications. A coastline looks similarly jagged whether you’re viewing it from space or examining it with a magnifying glass. The mathematics of turbulence applies equally to stirring cream into coffee and galactic formation.
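
The classic signature of scale invariance is the coastline paradox: the finer your ruler, the longer the coast. A small sketch, using a synthetic midpoint-displacement curve as a stand-in for a real coastline, shows the effect directly.

```python
import math
import random

# A synthetic "coastline": midpoint displacement produces a curve that is
# jagged in the same way at every scale. Parameters are illustrative.
random.seed(1)

def coastline(left, right, depth, roughness=0.5):
    if depth == 0:
        return [left, right]
    (x1, y1), (x2, y2) = left, right
    mid = ((x1 + x2) / 2,
           (y1 + y2) / 2 + random.gauss(0, roughness) * (x2 - x1))
    return coastline(left, mid, depth - 1, roughness)[:-1] + \
           coastline(mid, right, depth - 1, roughness)

coast = coastline((0.0, 0.0), (100.0, 0.0), depth=12)   # 4097 points

def length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Measure the same coast while attending to finer and finer detail.
for stride in (1024, 256, 64, 16, 4, 1):
    print(f"keep every {stride:4d}th point -> measured length {length(coast[::stride]):8.1f}")
# The finer you look, the longer it gets: structure repeats at every scale,
# so there is no single "true" length of the coast.
```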

Phase transitions occur when quantitative changes trigger qualitative shifts. Water gradually heated remains water until suddenly, at 100°C, it becomes steam. Individual neurons firing sporadically suddenly cohere into consciousness. Individual beliefs gradually shifting suddenly crystallize into paradigm change.

The key insight is that these aren’t separate phenomena — they’re different aspects of the same underlying process. A recursive system (cognition) uses attractors (beliefs) to navigate phase transitions (understanding) while maintaining patterns across scales (meaning).

The generator function we’re about to explore operates through all of these dynamics simultaneously.

First pass: Attention as recursive engine

You’re reading this sentence right now. “Reading” is a misleading simplification. What’s actually happening is that your visual system is making thousands of micro-decisions per second about where to direct attention, what patterns to extract, how to integrate fragments into coherence.

Let’s slow down and watch the recursion unfold.

Your eyes don’t move smoothly across text — they jump in quick, jerky movements called saccades, pausing at specific points for about 200–300 milliseconds. During each pause, your visual cortex extracts features: vertical lines, horizontal strokes, curves, angles. Which features get extracted isn’t random. It’s guided by predictions about what might be meaningful.

Right now, as you read the word “meaningful,” your brain is running at least three parallel processes:

Bottom-up extraction: Pattern detectors in your visual cortex identify the distinctive features of each letter. The double vertical lines of “M,” the ascender of “f,” the closed loops of “a” and “g.”

Top-down prediction: Your language system, having processed “what might be,” generates probability distributions over likely next words. “Meaningful” had high probability; “elephant” had low probability.

Contextual mapping: The emerging word is mapped against the conceptual landscape you’ve built from earlier sentences. How does “meaningful” relate to “attention,” “extraction,” “recursive”?

These aren’t sequential steps — they’re simultaneous, recursive, mutually influencing processes. The bottom-up features guide top-down predictions. The predictions shape which features get extracted. The contextual mapping biases both.

The generator function in action: Extract → Map → Evaluate → Compress → Recurse.

Here’s where it gets interesting: each of these steps contains the entire loop within itself.

Extraction within extraction: When your visual system identifies the letter “M,” it’s not just passively receiving input. It’s actively extracting which among millions of possible edge orientations matter for letter recognition. It maps incoming photons to stored templates. It evaluates fit. It compresses specific retinal activations into the abstract category “M.” And it feeds the result recursively into word-level processing.
Mapping within mapping: When you map “meaningful” to your conceptual landscape, you’re not just retrieving a definition. You’re extracting relevant aspects of your experience with meaning-making, mapping them to your specific context, evaluating coherence, compressing the result into working memory, and recursively updating what “meaningful” means to you in the moment.
Evaluation within evaluation: When you assess whether the sentence makes sense, you’re running a complex optimization across multiple dimensions. Grammatical structure (does it parse?). Semantic content (is it plausible?). Pragmatic relevance (why is the author telling me this?). Emotional resonance (how does it feel?). Each evaluation triggers sub-evaluations, creating cascading loops of assessment.
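
Written as code, the loop looks something like the schematic below. It is a sketch of the structure, not a cognitive model: each stage is a deliberately trivial stand-in, and the one property it preserves is that every pass’s compressed output becomes part of the context that biases the next pass’s extraction.

```python
# Schematic only: trivial stand-ins for each stage of the loop.
def extract(signal):
    return [w.strip(".,").lower() for w in signal.split()]

def map_onto(features, context):
    # Relate each feature to what the loop already "knows".
    return {f: len(f) + 5 * (f in context) for f in features}

def evaluate(mapped):
    return sorted(mapped, key=mapped.get, reverse=True)

def compress(ranked, k=3):
    return set(ranked[:k])

def generator_loop(signal, context=frozenset(), depth=3):
    if depth == 0:
        return context
    summary = compress(evaluate(map_onto(extract(signal), context)))
    # Recurse: the compression feeds back in as the context that shapes
    # what gets extracted and valued on the next pass.
    return generator_loop(signal, context | summary, depth - 1)

print(sorted(generator_loop("attention is the recursive engine that generates meaning")))
```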

The recursive nesting is what creates the subjective experience of understanding. It’s not that you suddenly “get” the meaning — it’s that enough recursive loops have stabilized into temporary coherence. The meaning doesn’t exist in the words or in your head. It emerges from the dynamic interaction between text and cognitive system, a phase transition from noise to signal.

And here’s the crucial point: attention is what drives the entire process. At every level of the recursion, attention determines what gets extracted, how it gets mapped, which evaluations matter, what gets compressed, and how recursion proceeds.

Attention isn’t just a spotlight illuminating pre-existing meanings. It’s the recursive engine that generates meaning through selective processing.

Attention has become the most valuable resource in our information economy. Every platform, every piece of content, every AI system is competing for your attentional bandwidth. Most systems don’t understand that attention isn’t just about capturing eyeballs — it’s about engaging the recursive loops that make thinking possible.

When systems shortcut these loops — when they provide answers without engaging the generative process of question-formation — they’re not helping you think. They’re atrophying the very mechanisms that make thinking possible.

Now let’s zoom out and see how the same pattern operates at higher scales.

Second pass: How beliefs crystallize

Now let’s zoom out from reading individual sentences to how entire belief systems form — and discover the same fractal pattern operating at a completely different scale.

Consider how you came to believe whatever you currently believe about, say, climate change. It didn’t happen through a single decisive moment of evidence evaluation. It emerged through thousands of micro-encounters: news articles, conversations, documentaries, social media posts, personal experiences with unusual weather. Each encounter ran the same recursive loop we observed in reading:

Extraction: What information do you attend to? Which sources? Which aspects of their claims? Your attention isn’t neutral — it’s shaped by prior beliefs, social context, emotional state, and algorithmic curation.

Mapping: How do you relate new information to what you already know? Climate data maps differently onto existing frameworks depending on whether you trust scientific institutions, whether you prioritize economic concerns, whether you’ve experienced extreme weather personally.

Evaluation: How do you assess credibility, relevance, implications? The evaluation happens across multiple dimensions — logical consistency, source reliability, emotional resonance, social acceptability within your community.

Compression: How do you distill complex, often contradictory information into actionable beliefs? Most people don’t maintain detailed climate models in their heads — they compress the overwhelming complexity into simple heuristics: “trust the scientists,” “follow the money,” “look outside your window.”

Recursion: How do your emerging beliefs shape future attention and evaluation? Once you’ve formed initial impressions, they bias what evidence you notice, which sources you trust, how you interpret ambiguous data.
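
A toy simulation shows how little it takes for this recursion to harden into an attractor. The only assumption baked in is the bias described above: evidence that confirms the current belief gets slightly more attention than evidence that contradicts it. Every number is an illustrative placeholder.

```python
import random

# A toy sketch, not a model of real belief formation. Evidence is genuinely
# mixed (+1 / -1 with equal probability); the one recursive ingredient is
# that the current belief biases how much attention each item of evidence
# receives.
def final_belief(steps=2000, lr=0.05, bias=0.45):
    belief = 0.0                                     # start undecided
    for _ in range(steps):
        evidence = random.choice([-1.0, 1.0])
        # Attention weight: confirming evidence weighted up, disconfirming down.
        attention = 0.5 + bias * belief * evidence
        belief += lr * attention * evidence
        belief = max(-1.0, min(1.0, belief))
    return belief

random.seed(42)
population = [final_belief() for _ in range(200)]    # 200 independent runs
pro = sum(b > 0.5 for b in population)
con = sum(b < -0.5 for b in population)
print(f"convinced pro: {pro}, convinced con: {con}, undecided: {200 - pro - con}")
# Identical evidence statistics, yet independent runs split into two
# self-reinforcing camps: belief attractors emerging from the recursion alone.
```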

Here’s where it gets fascinating: the same process is happening simultaneously in millions of other minds, and those individual recursive loops interact to create collective belief dynamics.

Individual belief formation is actually embedded within cultural belief formation.

When you share a climate-related article on social media, you’re not just expressing a belief — you’re participating in a larger recursive process:

Cultural extraction: What topics rise to collective attention? Which narratives gain traction? The algorithms mediating our information environment are extraction mechanisms operating at civilizational scale, determining what signals rise above the noise of human activity.
Cultural mapping: How do societies relate new phenomena to existing frameworks? Climate change gets mapped onto different cultural templates: economic threat, spiritual awakening, technological challenge, political weapon. These mappings shape how the phenomenon can be understood and addressed.
Cultural evaluation: How do societies assess the credibility and importance of different claims? Through complex processes of discourse, expertise validation, institutional response, and social proof. What counts as “evidence” varies dramatically across cultural contexts.
Cultural compression: How do societies distill complex realities into shared stories, policies, and norms? The overwhelming complexity of climate science gets compressed into simplified narratives that can motivate collective action: “existential threat,” “natural cycle,” “manageable challenge.”
Cultural recursion: How do emerging cultural beliefs reshape future collective attention and evaluation? Once a society develops strong climate beliefs, they influence which research gets funded, which politicians get elected, which technologies get developed, which stories get told.

Belief attractors become crucial here. Just as individual thoughts tend toward cognitive attractors (consistent worldviews), cultures tend toward consensus attractors (shared paradigms). These attractors have enormous gravitational pull — they curve attention, interpretation, and evaluation toward themselves.

Timothy Leary coined the term “reality tunnels” for this phenomenon — self-reinforcing belief systems that filter perception in self-consistent ways. Robert Anton Wilson later expanded the concept extensively, observing that once you’re in a tunnel, all evidence seems to confirm the tunnel’s logic. The tunnel doesn’t just interpret reality; it creates the reality it interprets.

These tunnels aren’t permanent. They’re dynamic systems that can be destabilized, merged, or transformed through sufficient turbulence. Leary’s 8-circuit model of consciousness suggested that different “circuits” of the nervous system could be reprogrammed through various techniques—the beliefs you hold shape not just what you think, but how you think. The key is introducing enough productive uncertainty to restart the recursive loops.

Douglas Hofstadter’s insights about self-reference become essential here. The most powerful belief attractors are those that include their own meta-level justification — they don’t just tell you what to believe but how to believe, what counts as evidence, whom to trust. They’re recursive not just in content but in methodology.

Hofstadter’s concept of “strange loops” illuminates this perfectly. A strange loop occurs when movement through a hierarchy eventually leads back to the starting point — and the system has been changed by the journey. The self isn’t a fixed entity. It emerges from recursive self-reference: the system modeling itself, including its own modeling in the model, achieving temporary stability through a self-referential spiral.

The recursive self-modification is what makes cultural change so complex and unpredictable. Small perturbations can cascade into massive paradigm shifts. A single viral meme can destabilize entire belief systems. A new technology can reprogram cultural attention patterns, which reshapes what information gets extracted, which reshapes how it gets mapped and evaluated, which reshapes what new technologies get developed.

Our current moment is crucial. We’re not just adopting new tools — we’re establishing the default patterns for how human and artificial intelligence will co-evolve. The recursive dynamics we design into these systems today will shape whether we spiral deeper into post-truth fragmentation or find new ways to think together across difference.

Third pass: Cognition at civilizational scale

Let’s be honest about where we are. The social media-driven fragmentation of cultural cognition isn’t some emerging threat — it’s been our reality for over a decade. We’ve watched echo chambers calcify, seen meme wars destabilize entire political systems, witnessed the collapse of shared truth into competing reality tunnels. This isn’t something “happening to us” — it’s the world we live in. As with climate change, we’re past the point of looking out the window and wondering if the next storm is an anomaly. The storms are the new normal.

What exactly do we mean by “post-truth,” and how does our recursive model help us understand it?

Post-truth isn’t the absence of truth — it’s the proliferation of competing truth systems, each recursively self-reinforcing, each filtering evidence through its own logic.

Multiple ways exist to understand our current epistemic crisis — economic inequality, institutional breakdown, and political polarization all play crucial roles. The information processing lens reveals something important about how these dynamics get amplified and sustained through recursive feedback loops.

Think of it this way: in a pre-digital information environment, the recursive loops of cultural cognition were constrained by shared institutions and physical geography. Not everyone agreed, but there were common reference points — shared newspapers, broadcast networks, educational systems. These created overlapping spaces where different belief attractors had to encounter each other.

Now we have reality tunnels operating at civilizational scale. Each tunnel is a self-reinforcing recursive system:

Algorithmic extraction: Recommendation systems ensure you primarily encounter information that confirms existing beliefs. The extraction phase of cultural cognition has been hijacked — instead of collectively attending to what’s important for long-term flourishing, we collectively attend to what triggers immediate engagement.
Confirmatory mapping: New information gets mapped onto existing frameworks in ways that minimize cognitive dissonance. Ambiguous evidence gets interpreted through the lens of prior beliefs. The mapping process reinforces tunnel walls rather than building bridges.
Tribal evaluation: Credibility gets assessed not by coherence with reality but by alignment with group identity. Sources are trusted or dismissed based on tribal affiliation rather than track record. The evaluation process serves social cohesion over truth-seeking.
Narrative compression: Complex realities get compressed into simplified stories that support existing worldviews. Nuance gets lost in favor of shareable soundbites. The compression process optimizes for viral transmission rather than accurate representation.
Echo recursion: These compressed narratives feed back into the system, reinforcing the very patterns that created them. Each iteration strengthens the tunnel walls, making alternative perspectives less accessible.
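
Here is that loop as a toy sketch, not any real platform’s algorithm: a feed that scores items by past clicks and shows whatever currently scores highest, with all numbers chosen purely for illustration.

```python
import random

# A toy engagement-maximising feed. Items get a score from past clicks,
# the feed shows whatever currently scores highest, and each click feeds
# straight back into the scores. All numbers are illustrative placeholders.
random.seed(1)
items = [i / 19 for i in range(20)]            # opinions spread across a spectrum
your_view = 0.3                                # you mostly click things near your view
score = {i: 1.0 for i in items}                # every opinion starts with a chance
shown_log = []

for _ in range(400):
    shown = max(items, key=lambda i: score[i] + random.uniform(0, 0.01))
    clicked = random.random() < max(0.0, 1 - 4 * abs(shown - your_view))
    score[shown] = 0.9 * score[shown] + (1.0 if clicked else 0.0)   # echo recursion
    shown_log.append(shown)

top = max(set(shown_log), key=shown_log.count)
print("opinions in the catalogue:", len(items))
print("opinions the feed ever showed:", len(set(shown_log)))
print("share of the last 100 posts taken by the top item:",
      f"{shown_log[-100:].count(top) / 100:.0%}")
# The feed narrows onto a thin, self-confirming slice of the spectrum,
# not because anyone designed a tunnel but because the loop rewards itself.
```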

The result is what we’re living through: multiple populations operating within completely different belief attractors, mediated by different algorithmic filters, effectively inhabiting different realities. Democratic discourse becomes nearly impossible because there’s no shared foundation for evaluation.

The problem isn’t bad content; it’s hijacked process. The recursive loops that should be generating collective intelligence are instead generating collective delusion.

Here’s where things get really concerning: we’re about to add an exponential amplifier to these dynamics.

Enter the AI “Revolution”

ChatGPT went from zero to nearly 40 million U.S. users in just two years. While Google still dwarfs it at 270 million, the trajectory is unmistakable. More importantly, NVIDIA has surpassed Apple in market cap. The infrastructure of intelligence itself is being rebuilt. We’re not just adding a new communication layer to existing systems — we’re replacing the fundamental mechanisms through which information gets created, filtered, and consumed.

The real acceleration is AI-mediated cognition becoming the default mode of information processing.

But here is the catch: AI systems don’t create neutral outputs. They’re trained on “the internet” — the exact same information environment that created our post-truth fragmentation in the first place. Every bias, every tunnel, every distortion in human discourse is now encoded in the systems we’re using to amplify our thinking.

The inputs you give to AI are already filtered through your belief system. The outputs you receive reinforce those very same filters.

Consider what’s actually happening when you use ChatGPT or Claude to research a topic you care about. The questions you ask, the framing you use, the follow-ups you pursue — all shaped by your existing beliefs and assumptions. You don’t prompt AI neutrally; you prompt it from within your reality tunnel.

And the AI responds by drawing on patterns that match your framing, confirming your assumptions, speaking your language. The recursive loops of cultural cognition become tighter, faster, more self-reinforcing:

Biased extraction amplified: The information you ask AI to extract for you is pre-filtered by your existing interests and assumptions. Instead of encountering unexpected signals that might challenge your worldview, you get curated content that feels relevant to your current concerns.
Confirmatory mapping accelerated: AI systems excel at finding patterns that match your existing mental models. When you ask for explanations, analogies, or connections, the AI draws on training data that reinforces familiar conceptual frameworks rather than introducing genuinely foreign perspectives.
Tribal evaluation automated: Current AI systems tend to present information in ways that feel authoritative and coherent, regardless of their actual reliability. The evaluation process gets outsourced to systems that optimize for user satisfaction rather than truth-seeking.
Narrative compression optimized: AI excels at generating clean, compelling summaries that iron out complexity and contradiction. Nuanced realities get compressed into neat explanations that confirm existing beliefs rather than preserving productive uncertainty.
Echo recursion accelerated: These AI-mediated interactions feed back into your thinking faster than ever before. Instead of the slow social recursion of traditional media, you get instant reinforcement of whatever cognitive patterns you bring to the interaction.

The danger isn’t that AI will think for us — it’s that it will amplify our existing ways of thinking without introducing the friction necessary for growth. We get the feeling of intellectual engagement without the productive discomfort of encountering genuinely challenging perspectives.

Most current AI applications are designed around cognitive offloading: take the task off your plate, get the answer quickly, complete efficiently. The approach treats thinking as a problem to be solved rather than a capacity to be developed. In a post-truth environment, it becomes catastrophic — it accelerates the very dynamics that created epistemic fragmentation in the first place.

There’s a deeper issue. These systems are built on fundamentally flawed assumptions about human cognition itself. Take ChatGPT’s memory system — it’s designed around maintaining a static user identity, encouraging persistent facts and preferences that follow you across all contexts. That clashes with human reality. We operate in different modes of being — shifting roles, contexts, and intentions that influence how we think and interact.

Current AI systems miss this entirely. They assume you’re the same “user” whether you’re brainstorming creatively, analyzing data professionally, or reflecting personally. An AI interacting with human cognitive fluidity needs contextual adaptability and strategic forgetting. Instead, we get systems that create rigid user profiles and amplify whatever cognitive patterns they encounter, regardless of context.

When you offload evaluation to AI, you atrophy your own capacity for critical thinking. When you outsource research to AI, you lose the serendipitous encounters that expand worldviews. When you use AI for quick answers, you avoid the productive struggle that deepens understanding.

The recursive loops that build resilient cognition — the ones that require encountering resistance, grappling with uncertainty, building understanding through effort — these get short-circuited in favor of frictionless efficiency.

The result could be cognitive systems that are simultaneously more efficient and more brittle. Faster at confirming existing beliefs, slower at adapting to new realities. More capable of generating content that feels convincing, less capable of distinguishing reliable from unreliable sources. More productive at completing tasks, less skilled at questioning assumptions.

In a post-truth world, that creates a feedback loop toward even deeper fragmentation. Each reality tunnel gets its own AI-mediated amplification system, making the tunnels more convincing and more impermeable to outside challenge.

The design choices we make about human-AI interaction in the next few years will determine whether we spiral deeper into post-truth paralysis or find new ways to think together across difference.

Right now, most AI applications are optimized for efficiency and task completion. They encourage users to offload cognitive work rather than amplify it. “Here’s your answer, conversation over.” The approach treats the human mind as a bottleneck to be bypassed rather than a recursive engine to be enhanced.

Another path exists: AI systems designed to keep the recursive loops of thinking alive longer than they would naturally persist. Systems that introduce productive turbulence rather than premature resolution. Systems that help us ask questions we wouldn’t have thought to ask, explore perspectives we wouldn’t have considered, maintain threads of inquiry that would otherwise fade.


Fourth pass: Edges of chaos

Here’s the central paradox of cognitive systems: they need stability to function, yet too much stability kills learning. They need coherence to act, yet too much coherence prevents adaptation. They need closure to move forward, yet premature closure prevents depth.

The paradox operates at every scale of the fractal we’ve been exploring.

At the neural level, learning requires both stability (maintaining useful connections) and plasticity (forming new ones). Too much stability, and the brain becomes rigid, unable to adapt. Too much plasticity, and memories dissolve, patterns disappear. The sweet spot lies in maintaining just enough instability to keep learning alive.

At the cognitive level, thinking requires both certainty (stable beliefs to reason from) and uncertainty (openness to new evidence). Too much certainty, and you become dogmatic, unable to update beliefs when reality changes. Too much uncertainty, and you become paralyzed, unable to act on incomplete information.

At the cultural level, societies need both tradition (stable institutions and values) and innovation (capacity for change and adaptation). Too much tradition, and culture stagnates. Too much innovation, and social fabric dissolves.

The key insight is that optimal functioning requires maintaining these systems at the edge of instability—stable enough to be coherent, unstable enough to keep evolving.
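
The logistic map is the classic toy for this trade-off. It is an analogy, not a claim about neurons or societies: a single parameter moves one simple rule from frozen order, through structured cycles, to full chaos.

```python
# Logistic map: x -> r * x * (1 - x). One knob (r) tunes the balance
# between stability and instability.
def trajectory(r, x=0.2, warmup=500, keep=8):
    for _ in range(warmup):          # discard the transient
        x = r * x * (1 - x)
    states = []
    for _ in range(keep):            # record the long-run behaviour
        x = r * x * (1 - x)
        states.append(round(x, 3))
    return states

for r in (2.8, 3.2, 3.5, 3.57, 3.9):
    print(f"r={r}: {trajectory(r)}")
# r=2.8  -> a single fixed point (too stable: nothing new can happen)
# r=3.2  -> a 2-cycle, r=3.5 -> a 4-cycle (structured variety)
# r~3.57 -> the onset of chaos, where period-doubling accumulates
# r=3.9  -> chaos (too unstable: no pattern persists)
```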

What thought metabolizes

We’ve been taught to treat noise as the enemy — something to filter out so signal can shine through. In recursive systems like these, noise is the gradient that drives motion.

Perfect fit means no further processing. No updating. No movement. The loop closes too quickly — and stalls. It’s the mismatch between expectation and experience, the tension between signal and noise, that powers cognition forward.

Noise is what thought metabolizes to keep going.

Pure clarity often feels cognitively unsatisfying because when everything makes perfect sense, there’s no energy gradient to drive further thinking. The recursive loops that generate understanding need some amount of productive tension to keep operating.

Both ambiguity and noise serve as essential fuels for the recursive engine of cognition. Ambiguity creates interpretive uncertainty — multiple possible meanings that require active resolution. Noise creates signal uncertainty — missing or corrupted information that forces the system to fill gaps through prediction and inference.

Without sufficient ambiguity, cognition collapses into mechanical pattern matching. Without sufficient noise, it collapses into pure signal processing. With the right balance of both, you get the productive uncertainty that keeps recursive loops spiraling toward deeper understanding.
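
Here is the same point as a sketch: a state rolling on a double-well “belief landscape,” where the only thing varied is the amount of external noise. The landscape and the numbers are illustrative placeholders, not a model of any real system.

```python
import random

# An analogy in code, not a brain model: a state moving on a double-well
# landscape U(x) = x^4/4 - x^2/2, with basins at x = -1 and x = +1.
def basin_switches(noise, steps=6000, dt=0.01, seed=5):
    random.seed(seed)
    x, side, switches = 0.3, +1, 0
    for _ in range(steps):
        # Downhill pull of the landscape plus external noise.
        x += -(x ** 3 - x) * dt + random.gauss(0, noise) * dt ** 0.5
        if x > 0.5 and side < 0:
            side, switches = +1, switches + 1
        if x < -0.5 and side > 0:
            side, switches = -1, switches + 1
    return switches

for noise in (0.0, 0.8, 3.0):
    print(f"noise={noise}: switched basins {basin_switches(noise)} times")
# noise=0.0 -> settles into the nearest basin and never leaves (premature closure)
# noise=0.8 -> enough turbulence to cross the barrier now and then and revisit
#              both basins (productive uncertainty)
# noise=3.0 -> constant churn with no stable pattern at all (overwhelming noise)
```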

The principle has profound implications for how we understand both human and artificial intelligence. Consider what happens in sensory deprivation experiments: humans placed in isolation chambers without external stimuli begin hallucinating within hours. The recursive loops of consciousness, deprived of external noise and ambiguity, start generating their own turbulence to maintain coherence. The system literally creates its own noise to prevent collapse.

AI systems need external perturbation to maintain cognitive coherence. If you ran current AI models in closed loops without fresh inputs — feeding their outputs back as inputs without external stimuli — they would likely decohere just like humans in sensory deprivation. The recursive processing would gradually drift into repetitive patterns or chaotic oscillations.

Worth considering: consciousness might require ongoing interaction with otherness to maintain its coherence. The self-referential loops that generate awareness might need external noise to prevent them from either crystallizing into rigid patterns or dissolving into meaningless iteration.

Synthetic training data creates exactly this problem for AI systems. When models are trained primarily on their own outputs or outputs from similar models, they gradually lose contact with the productive noise of real-world complexity. Each generation becomes more internally consistent and less capable of handling genuine novelty. The recursive loops become too clean, too self-referential, too removed from the messy turbulence that drives learning.
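
A cartoon of that dynamic, not a claim about any particular model: each “generation” below is trained only on a finite sample of the previous generation’s output, so any variety it happens not to reproduce is gone for every generation that follows.

```python
import random

# A cartoon of recursive self-training: each generation's entire world is
# whatever it happened to sample from the previous generation's output.
random.seed(0)
ideas = [f"idea_{i}" for i in range(20)]     # generation 0: grounded variety
for generation in range(1, 31):
    corpus = [random.choice(ideas) for _ in range(20)]   # finite sample of output
    ideas = sorted(set(corpus))                          # the next model's whole world
    if generation % 5 == 0:
        print(f"gen {generation:2d}: {len(ideas)} distinct ideas left")
# Diversity can only shrink: the loop stays internally consistent while
# steadily losing contact with the variety it started from.
```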

The absence of fresh signal input — of the perturbation we’ve been calling noise — is what lets cognitive systems sink into attractor basins. When there’s insufficient external perturbation, the system falls into its lowest-energy configurations. That’s why isolation leads to rumination, why echo chambers lead to extremism, why closed systems lead to stagnation.

You can feel it in your own experience. Ideas that are too simple bore you because they provide no cognitive resistance. Ideas that are too complex overwhelm you because they provide no foothold for comprehension. Ideas that are almost graspable — that hint at patterns just beyond your current understanding — those create the productive tension that pulls thought forward.

Indetermination = Vector of Thought. Uncertainty isn’t noise — it’s the gradient by which recursive loops continue.

I expand on this dynamic in Divergence Engines — a two-part series on why similarity-optimized AI stacks collapse into sameness, and the primitives needed to engineer useful difference.

The same dynamic operates in cultural learning. Societies learn fastest when they’re exposed to optimal amounts of cultural diversity — enough to challenge existing assumptions, not so much as to fragment into incoherent tribes. Immigration, trade, cultural exchange — these create the productive turbulence that prevents civilizations from stagnating.

Here’s where our current digital information environment becomes problematic. Instead of creating productive uncertainty that drives deeper inquiry, it often creates either premature closure (algorithmic echo chambers that confirm existing beliefs) or overwhelming uncertainty (information chaos that prevents any stable understanding).

The recursive loops of cultural cognition get hijacked in two ways:

Artificial certainty: Echo chambers create the illusion of consensus by filtering out dissenting views. This doesn’t create genuine understanding — it creates brittle confidence that shatters when confronted with reality. The recursive loops become self-reinforcing yet disconnected from external feedback.

Artificial confusion: Information overload creates decision paralysis by presenting too many competing claims without providing frameworks for evaluation. The recursive loops spin without converging, creating anxiety rather than insight.

What we need instead are systems that create productive uncertainty—enough cognitive friction to drive continued inquiry, yet with sufficient scaffolding to prevent collapse into chaos.

That means designing AI systems that don’t just provide answers but understand the optimal balance between stability and instability for each scale of cognitive operation.

For individual cognition, that might mean systems that:

  • Present information that’s slightly beyond current understanding yet not overwhelmingly complex
  • Introduce productive contradictions that restart recursive evaluation
  • Preserve uncertainty where premature closure would limit future learning
  • Gradually increase complexity as understanding deepens

For cultural cognition, that might mean systems that:

  • Ensure exposure to diverse perspectives without creating fragmentation
  • Slow down the spread of information enough to allow for deliberation
  • Amplify voices that bridge different belief attractors rather than deepening divisions
  • Create contexts where productive disagreement can occur

The goal isn’t to eliminate uncertainty but to optimize it — to maintain cognitive systems at the edge of chaos where learning and adaptation are maximal.

Because if consciousness really is recursive loops achieving temporary stability, then the most profound insights might emerge not when we solve problems definitively, but when we learn to dance productively with ongoing uncertainty.

Fifth pass: Designing for the loop

If the fractal model of cognition is correct, then we’re building AI systems completely backwards.

The dominant paradigm treats AI as a problem-solving tool: you input a question, it outputs an answer. The goal is efficiency, accuracy, task completion. Most AI applications are designed to collapse uncertainty as quickly as possible — to provide the most confident response, to end the conversation, to close the loop.

Our model suggests that intelligence isn’t about closure — it’s about maintaining productive loops.

The most profound thinking happens not when problems get solved but when questions get deeper, when understanding spirals through multiple levels of complexity, when insights emerge from sustained engagement with productive uncertainty.

What if we designed AI systems that were optimized not for answering but for asking?

Consider what happens when you interact with current AI systems. You pose a question. The system generates the most probable response based on its training data. The conversation ends. The recursive loop is terminated before it can spiral into anything interesting.

Cognitive offloading, not cognitive amplification. You’re outsourcing the very process that makes thinking generative. The system provides conclusions without engaging the recursive loops that make conclusions meaningful.

Contrast that with how the most generative human conversations work. You raise an idea. I respond with a partial understanding that reshapes how you think about your own idea. You clarify, but the clarification reveals new complexity. I offer an analogy that highlights an aspect neither of us had considered. We build understanding together through recursive engagement, each response becoming input for deeper inquiry.

The magic isn’t in reaching final answers — it’s in the quality of attention we bring to the ongoing process of sense-making.

What AI-mediated cognition should amplify is not our capacity to reach conclusions but our capacity to sustain inquiry.

Imagine AI systems designed around that principle:

Relentless questioning engines: Instead of providing definitive answers, these systems generate increasingly precise questions. They identify the assumptions embedded in your queries and surface them for examination. They help you discover what you don’t know that you don’t know.

Productive contradiction machines: These systems don’t just retrieve information that confirms your existing beliefs — they actively seek out productive tensions. They find the edge cases, the anomalies, the perspectives that create cognitive friction without destroying coherence.

Context amplification networks: Rather than compressing complex topics into simple summaries, these systems help you hold multiple perspectives simultaneously. They show you how the same phenomenon looks from different scales, different timeframes, different cultural frameworks.

Recursive deepening interfaces: These systems understand that understanding spirals. They don’t just answer your questions — they help you discover better questions. Each layer of inquiry reveals new dimensions, new connections, new mysteries.
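
To give the shape (if not the substance) of such a system, here is a sketch of what the inner loop of a recursive deepening interface might look like. The `ask_model` function is a hypothetical placeholder for whatever text-generation call you have available; the point is the control flow, which returns surfaced assumptions, tensions, and sharper questions rather than a closing answer.

```python
from dataclasses import dataclass, field

# A sketch of the orchestration pattern, not a working product.
@dataclass
class InquiryThread:
    topic: str
    open_questions: list = field(default_factory=list)
    surfaced_assumptions: list = field(default_factory=list)
    tensions: list = field(default_factory=list)

def ask_model(prompt: str) -> list[str]:
    # Hypothetical placeholder: wire in any real text-generation call here.
    return [f"(model output for: {prompt[:48]}...)"]

def deepen(thread: InquiryThread, user_input: str) -> InquiryThread:
    # Surface the assumptions hidden in how the question is framed.
    thread.surfaced_assumptions += ask_model(
        f"List the assumptions embedded in: {user_input}")
    # Seek productive contradictions instead of confirmations.
    thread.tensions += ask_model(
        f"Offer perspectives that challenge, without dismissing: {user_input}")
    # Return sharper questions rather than a closing answer.
    thread.open_questions += ask_model(
        f"Propose three more precise follow-up questions about: {user_input}")
    return thread   # the user's reply to these questions feeds the same loop again

thread = deepen(InquiryThread(topic="urban mobility"), "Should my city ban cars?")
print(thread.open_questions)
```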

The key insight is that AI’s greatest strength isn’t its intelligence — it’s its relentlessness. Unlike humans, AI systems don’t get tired, don’t get bored, don’t feel the social pressure to wrap things up. They can pursue lines of inquiry far longer than human attention spans typically allow.

Current systems waste that relentlessness on premature closure. They use their persistence to find the most probable answer rather than to explore the most generative questions.

What if we flipped that? What if we used AI’s relentlessness to keep recursive loops alive longer than they would naturally persist? To push past the first comfortable conclusion toward deeper, stranger, more challenging insights?

That would require fundamentally different design principles:

Design for ongoing engagement rather than task completion. The measure of success isn’t how quickly the system solves your problem but how richly it helps you understand the problem space.

Optimize for question quality rather than answer confidence. The best response isn’t the most certain one — it’s the one that opens up the most generative next questions.

Preserve uncertainty where closure would limit learning. Some questions are more valuable than their answers. Some ambiguities are more productive than their resolutions.

Enable recursive deepening rather than linear progression. Understanding doesn’t move in straight lines from ignorance to knowledge — it spirals through iterative refinement of questions and frameworks.

Maintain multiple threads rather than converging to single conclusions. Complex topics require holding multiple perspectives simultaneously rather than collapsing them into unified positions.

This isn’t just about building better AI — it’s about using AI to make human thinking more human. Because what’s most distinctively human about human cognition isn’t our ability to solve problems efficiently. It’s our capacity to find meaning in the recursive process of inquiry itself.

The loop never closes. It just finds places to breathe before continuing its spiral toward deeper questions, richer understanding, more generative uncertainty.

And in a world where information is infinite yet attention is finite, the systems that help us sustain high-quality attention to meaningful questions will be the ones that actually amplify human intelligence rather than replacing it.

The spiral continues. The question is: will we design systems that help us continue with it?


This essay is itself a recursive exploration — each pass through the same territory revealing new depths. The model applies to its own articulation. You’re reading it as it spirals through its own logic, and your reading becomes part of its recursion.

If all thinking is mapping, and all meaning is compression, then the most interesting thoughts are not those that close the loop but those that keep it alive just a little longer.

Scale is the backbone. Recursion is the movement. Stabilization is the pause that enables the next descent.

The spiral continues.

Coda

The fractal model of cognition extends far beyond how we build AI systems. If consciousness really is recursive loops achieving temporary stability, then we’re looking at a fundamental principle that operates across scales — one that can illuminate phenomena that seem completely unrelated.

Take the simulation hypothesis. From our perspective, the question “Are we living in a simulation?” might miss the point. If reality operates through recursive processes at every scale — if what feels like discrete identity emerges from temporary stabilizations in ongoing interactions — then asking whether those interactions are implemented in silicon or carbon becomes less relevant. The experience of reality as recursive process could be what matters, regardless of substrate.

Consider near-death experiences. When sensory input is dramatically reduced (extraction limited), and cognitive loops begin spiraling without external anchors (mapping becomes untethered), the system often achieves a kind of forced closure — a “come back” narrative that stabilizes the experience as meaningful rather than chaotic. The tunnel of light, the life review, the sense of cosmic understanding — these might be recursive loops achieving extraordinary coherence when freed from normal constraints. The meaning feels transcendent because the compression is operating at unprecedented scales.

Dreams operate under similar constraints yet with different outcomes. When sensory extraction is minimized during sleep, the recursive loops continue running yet become untethered from external reality. The system starts mapping memories and fragments onto each other in novel ways, creating temporary narrative compressions that achieve internal coherence without external verification. The surreal logic of dreams — where you can fly, where your childhood home contains rooms that never existed, where strangers wear familiar faces — reflects recursive processing freed from the normal constraints of sensory evaluation. Dreams might be the cognitive system running maintenance: consolidating memories through compression while exploring novel mappings between stored patterns.

What about stereotypes? These are compression artifacts — overly aggressive reduction of complex social reality into simple, stable categories. The recursive loops that should be updating with new evidence instead get locked into premature closure. Stereotypes persist not because they’re accurate but because they’re energetically efficient. Each encounter that could challenge the stereotype gets mapped onto the existing compression rather than forcing new evaluation. The system achieves false stability by filtering out disconfirming signals.

Synthetic training data represents a fascinating edge case. When AI systems are trained on their own outputs, we get recursive compression of compressions — each generation losing fidelity until the system collapses into highly stable yet increasingly detached patterns. The recursive loops that should be grounded in external reality become self-referential, achieving coherence through internal consistency rather than correspondence. It’s cognitive attractors without external feedback — beautiful, stable, and ultimately brittle.

Closure itself becomes visible as an artifact. What we call “understanding,” “solution,” or “conclusion” are actually boundary objects — temporary stabilizations that allow different systems (minds, cultures, institutions) to coordinate without requiring identical internal states. A scientific paper, a legal contract, a cultural ritual — these aren’t containers of truth but shared compressions that enable coordination across different recursive systems.

Consciousness → cognition → culture: The same pattern repeats at every scale. Individual neurons achieving temporary coherence through recursive activation patterns. Individual minds achieving beliefs through recursive evidence processing. Communities achieving shared narratives through recursive discourse. Civilizations achieving paradigms through recursive cultural transmission. It’s attractors within attractors, stabilizations within stabilizations, loops within loops.

What we need are systems — whether therapeutic, educational, technological, or democratic — that understand this recursive nature and work with it rather than against it. Systems that know when to introduce productive turbulence and when to allow stabilization. Systems that can operate at multiple scales simultaneously. Systems that preserve the mystery that drives the spiral forward.

The model isn’t complete — it emphasizes information processing dynamics while acknowledging that material conditions, power structures, and embodied experience also shape how thinking unfolds. It’s a lens, not a theory of everything. A lens that has proven useful for designing systems that amplify rather than replace human cognition.

The model presented here isn’t exempt from its own dynamics. It’s itself a compressed approximation, a temporarily stabilized pattern extracted from experience and mapped onto language. It doesn’t claim completeness but offers workable coherence — a loop that continues its unfolding through your engagement with it.