Too Much to Process (Transcript)

June 2019
Talk • Transcript • Knowledge Navigation • Complexity

In a hyper-networked world under algorithmic influence, how can we navigate the complex digital landscapes that shape our understanding of reality? This talk explores the challenges of knowledge management in the networked age and proposes new metaphors and approaches for making sense of our increasingly entangled information ecosystems.

This talk was held in summer 2019 as part of the re:publica conference in Berlin. The theme was “tl;dr” — dedicated to long-form content, detailed examination, and rigorous research.

Video

Prefer watching? The video recording is available on the main talk page.


Alright, I’ll dive right in — the intro was already very kind.

I want to talk about knowledge. More specifically, I want to talk about how we manage knowledge, and by the end, shift our focus toward the metaphors we use when we do that.

I’m a communication designer by background, so I tend to think metaphorically — using analogies and imagery to grasp and explain abstract ideas. This year’s re:publica theme fits perfectly, because we are, without doubt, living in complex times. Things don’t have just one angle. We need tools that allow us to see from multiple perspectives.

Some of the questions I’ve been asking myself lately:

  • How can we better share abstract mental models?
  • How can design and technology help us make sense of our reality?
  • How do we retrieve, rediscover, or explore information more effectively?
  • And how do we detect patterns in noise — those underlying structures that help us challenge and enrich our worldview?

Mental Filters

One idea that’s helped me here is the concept of mental filters.

Imagine you’re on a medieval sea voyage. There’s this legendary object — kind of mythic, maybe real, maybe not — called a sunstone. Supposedly, you could hold it up against the cloudy sky and still make out where the sun was. You could orient yourself.

To me, mental filters are like that. They’re conceptual tools — lenses we install into our bio-hardware — that help us see more clearly. Interpretive aids.

One of the first filters I ever had installed came from two speakers at early Chaos Computer Club conferences. They talked about truth and what “really happened,” and they introduced a distinction I still find essential:

  • There’s your story.
  • There’s my story.
  • Then there’s the truth.
  • And then… there’s what actually happened.

That distinction — between “truth” and “what happened” — is crucial. Because we only ever operate on what we perceive. There’s this incoming data stream, and our brains break it down. We extract shapes, colors, patterns — say, an apple: it’s roundish, red, maybe has a stem.

We fit that into schemas. We associate it with other things we’ve seen. We embed it into our mental framework.

Compression and Symbols

The cool thing is: we can compress all of that into a symbol. That’s what we do all the time: we engineer meaning and squash it into something communicable. But compression comes at a cost. You lose fidelity. It’s just like image compression: we group pixels together, and the overall picture stays readable, but it’s not the same.

This happens at every level. From raw perception to shared knowledge: it’s always a compression. We abstract, and then we lose nuance.
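To make that lossiness concrete, here is a toy sketch in Python (my own illustration, not something from the talk): average every 2×2 block of pixels into one value. The picture stays readable, but the detail is gone for good.

```python
# Toy lossy compression: collapse each 2x2 pixel block to its mean.
# The array below is a stand-in for an image.
import numpy as np

image = np.arange(16, dtype=float).reshape(4, 4)

# Group pixels: one value now represents four originals.
compressed = image.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(image)       # 16 values of detail
print(compressed)  # 4 values remain; the rest is unrecoverable
```

Decompressing can only smear those averages back out; the original nuance is simply not in the data anymore.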

Still, the advantage is: we can communicate. We turn these concepts into signs, into language. But communication isn’t binary. It’s not: “I say X, you hear X.” It’s a gradient. Sometimes it works. Sometimes it doesn’t. It depends on context:

  • What do you already know?
  • What have your life experiences taught you to expect?
  • How are your mental structures wired?

Interpretation is always contextual. Reality is, arguably, constructed, or at least subjectively filtered. There’s a whole branch of science and philosophy, constructivism, that holds that reality itself is a mental construct. Donald Hoffman, for instance, says: “Our perceptions are gambles.” What we experience is not reality itself, but our best guess at it.

That’s manageable — within small groups. But the moment it scales to large, connected networks, things get gnarly.

The Problem of Scale

And here comes the problem: a lot of our knowledge formation today happens socially and digitally. That means we’re operating within increasingly algorithmically shaped perception environments.

Bruce Sterling had a great metaphor for this. He called it the “consensus narrative.” Basically: in large groups, there’s always a dominant version of events. Not because it’s true. Not because it’s what happened. But because it was easiest to agree on. It had narrative traction. And — crucially — it was easy to tell.

Stories that survive aren’t the most accurate. They’re the most tellable. The ones that reinforce what we already believe. That’s how social reinforcement works.

That was around the year 2000. Things have only gotten more complex since.

Why? Because the actors involved aren’t the same anymore.

Modern Complexities

Today we have new players with very specific interests — commercial, political, ideological. And there’s a whole lot of research on how these actors engineer digital environments to capture and hold attention.

Take the “bottomless bowl” study — if you give people a soup bowl that secretly refills itself, they just keep eating. Same with feeds. Instagram scrolls forever. Two-thirds of views on YouTube happen through autoplay.

These platforms are built around algorithms that decide what you see. They know what holds your attention — probably better than you do. And they reinforce your existing worldview.

Add bots, political influence operations, and re-uploaded videos with frames swapped to evade detection systems, and you get massive asymmetries in opinion shaping. Some actors have tools and reach that individuals simply don’t.

All of this distorts what gets heard — and what gets buried.

Now, I know this is all a bit abstract — and simplified — but the scale and entanglement of these systems make it hard to cut through. It’s not just about “bad actors”; it’s about structural asymmetry.

Belief Attractors

Joscha Bach has a great metaphor here too: belief attractors. Imagine them like gravity wells. Maybe it’s religion, maybe ideology, maybe nationalism — whatever. The key idea is: once you’re inside the gravity well, it takes real energy to escape.

And often, we don’t want to. We’re happy inside. We feel validated. It’s hard to integrate opposing views.

But if we want to make sense of individual decisions, to trace why people believe what they do, we need tools that help us reconstruct those paths of thought.

Changing the Metaphors

Which brings me to the second half of this talk: What could we actually do?

I think we need to change the metaphors we use when we design interfaces.

Most current systems still operate with metaphors from a different era — an institutional, hierarchical era. Not a networked one.

Take this photo: it’s not about the vintage computer. It’s about the filing cabinets behind it. That’s what the computer replaced — the same logic of folders, drawers, hierarchy. Even today, most of our digital tools are just deeper filing cabinets.

Even newer tools that claim to break away from this model still often operate within it.

But others saw this coming. As early as the 1960s, Douglas Engelbart argued that simulating paper on screens was a dead end. He wanted cross-referencing systems, early hypertext, that mirrored how we actually think.

Networks All the Way Down

Because if you look closely — zoom in or out on almost anything — what you find are networks. Galactic filaments. Neurons. Internet maps. Fungal networks. Social graphs. It’s all structure, all the way down. Fractal. Interconnected. Hierarchy is just a tiny slice we isolate from the larger network.
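To put a number on “hierarchy is just a slice,” here is a hedged sketch (my example, assuming the networkx library is available) using its bundled karate club social network: extract one possible tree from the graph and count how many lateral links vanish.

```python
# Hierarchy as a slice of a network: derive one possible tree from a
# densely connected graph and compare the edge counts.
import networkx as nx

G = nx.karate_club_graph()       # small real-world social network
tree = nx.bfs_tree(G, source=0)  # one hierarchy, rooted arbitrarily

print("network edges:", G.number_of_edges())    # 78
print("tree edges:   ", tree.number_of_edges()) # 33
```

More than half of the structure disappears the moment we insist on a tree.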

William Gibson had a great quote about this too — he coined “cyberspace” — and said:

“Cyberspace consists of clusters and constellations of data… that look like cities.”

Information is spatial. Not metaphorically. Literally.

When we read a book, we don’t absorb the whole thing. We pick out a chapter. A paragraph. A sentence. Even a word. What we need is the ability to link from that single word to other concepts — across books, across media, across systems.

That’s not how current systems work. They don’t allow lateral jumps. They don’t support associative exploration.
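What would lateral jumps look like in practice? A minimal sketch of an associative link index, with hypothetical names and data, assuming nothing beyond the Python standard library:

```python
# A two-way index from terms to the contexts they appear in, so a single
# word becomes a jumping-off point across books, media, and systems.
from collections import defaultdict

links = defaultdict(set)  # term -> {(source, location), ...}

def mention(term, source, location):
    links[term].add((source, location))

mention("sunstone", "talk transcript", "Mental Filters")
mention("sunstone", "viking-navigation.pdf", "p. 12")  # hypothetical source
mention("compression", "talk transcript", "Compression and Symbols")

# Lateral jump: from one word to every context it connects to.
for source, location in sorted(links["sunstone"]):
    print("sunstone ->", source, f"({location})")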

But information always lives in spatial relations. We should be able to map it that way — literally. And while we’re seeing early explorations of spatial UIs, they haven’t yet made it into everyday tools. We’re still stuck with desktops, folders, and trees.

The Visualization Challenge

One challenge is: when you’re dealing with networks this complex — tens of thousands of nodes — it becomes impossible to “see” the whole. So we abstract. We reduce everything to dots and clusters. But that hits a wall.

Color-coding and point clouds only get you so far. You can reduce complication, but not complexity. That’s a key difference.

This is where machine learning actually becomes interesting again. Not to feed us more, but to restructure how we explore.

ML can project concepts — words, ideas — into vector space. Then you can cluster those, and render the clusters spatially. And humans are very good at recognizing spatial patterns.
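A minimal sketch of that pipeline, assuming scikit-learn, with random vectors standing in for embeddings from a real model (word2vec, sentence encoders, and so on):

```python
# Project concept vectors to 2-D and cluster them, so related ideas
# land near each other on a "map". The embeddings are random stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
concepts = ["apple", "pear", "neuron", "synapse", "router", "packet"]
vectors = rng.normal(size=(len(concepts), 300))  # stand-in embeddings

coords = PCA(n_components=2).fit_transform(vectors)             # spatial layout
labels = KMeans(n_clusters=3, n_init=10).fit_predict(vectors)   # grouping

for name, (x, y), cluster in zip(concepts, coords, labels):
    print(f"{name:>8}  cluster={cluster}  pos=({x:+.2f}, {y:+.2f})")
```

Swap the random vectors for real embeddings and the printed coordinates become a crude map: related concepts cluster together, and distance starts to mean something.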

Knowledge as Territory

So here’s the punchline: what if we treated our knowledge systems as maps?

What if knowledge became territory — navigable, interconnected, zoomable? A personal knowledge archipelago. An emergent landscape.

Earlier this week, someone brought up the idea of “interface flattening” — this tendency of modern UIs to remove dimensionality. She argued we should become “trackers of our own traces.”

If things get difficult, space becomes our refuge.

And maybe, as Alan Kay said:

“Don’t teach children the truth. Teach them how to take interesting paths.”

We need to bring that exploratory character back into our tools. That’s how we extend mutual understanding — by expanding the spaces we’re allowed to move through.

Thanks for listening.