The Vibe Shift: Moving towards a Post-Loyalty Digital Era
The era of strong platform allegiance is crumbling. Users now shrug at brand loyalty, prioritizing immediate functionality and personalization, and choosing tools that adapt to their needs. The shift toward ‘interface agnosticism’ is accelerating, especially in AI, where selection hinges less on checklists and more on the ‘vibe’ or personality of the model. The cracks are showing everywhere in how we interact with and delegate tasks to technology. The old rituals of loyalty are still performed, but the audience has mostly left the theater.
We’re witnessing a fundamental shift in how people relate to technology: Platform loyalty — once a point of pride for users — is losing its grip, sometimes quietly, sometimes with a bit of a crash. The era of brand allegiance is ending as users grab whatever tool fits the moment, not out of loyalty, but out of a kind of pragmatic fatigue. This shift is driving demand for more portable, accessible experiences and pushing us toward more natural and fluid ways of interacting with digital systems.
End of platform lock-in
I’ve been using Telegram as my primary note-taking app for years. Not because it was designed for that purpose but because it offers something most dedicated note-taking apps don’t: cross-platform availability and the freedom to not organize anything at all. I just dump thought fragments into a channel and reply to previous messages when I want to elaborate. No tags, no folders, no complex organization methodologies — just pure, frictionless capture. The accessibility was the key factor — I’m already using the app for messaging across devices and can easily programmatically access my data, so why not keep my notes there too?
At first, this felt like a hack. But over time, I realized it was a coping strategy — a way to sidestep the rigidity of tools that pretend to know what I need. I wasn’t choosing Telegram as an app; I was choosing a set of capabilities and interaction patterns that just happened to be bundled under the Telegram brand. The interface itself had become secondary to the functionality it enabled.
Most apps still pretend they’re the destination. Users have moved on: with improved digital “literacy,” they cherry-pick capabilities, quietly subverting the intended use cases while the brands keep up appearances.
Professionals use Instagram bookmarks as moodboards, not because Instagram wants them to, but because it’s a fast way to collect visual fragments where they originate. Consumers turn to Reddit as their search engine when they want unfiltered opinions, not because Reddit is optimized for search, but because it’s where the real talk happens. Teams coordinate through WhatsApp instead of project management software when they need something lightweight and immediate.
These “off-label” uses are less about rebellion and more about survival — workarounds for a world of countless tools that rarely fit the shape of real life. The tool that feels most frictionless right now often wins, even if it was never meant for the job. And yet, despite this demand for flexibility, our data remains stubbornly locked inside these platforms — ever tried moving all your ChatGPT discussions to Gemini? (Fun…)
From loyalty to capability
A decade ago, people wore their platform allegiance like a badge—“I’m an Apple person,” “I’m a Windows user.” These identities shaped purchases, social circles, careers, probably even marriages and child names. Now? The energy is gone. The badge is worn down to polished emptiness. While digital experiences have become more pervasive and essential to daily life, the shift isn’t only attributable to growing user sophistication — it’s exhaustion. People are tired of rigid ecosystems and the illusion of choice. We’re no longer satisfied with one-size-fits-all solutions or being locked into ecosystems by switching costs. Instead, we demand tools that adapt to our specific needs, preferences, and cognitive styles. What once felt like a luxury — customization, portability, the ability to remix workflows — now feels like a baseline expectation.
When every platform feels like a slightly different flavor of the same hamburger, what exactly are you loyal to? The sameness is numbing, and the differences are mostly cosmetic — icons, colors, a few gestures and micro-interactions swapped out here and there. At a glance, it’s nearly impossible to tell iOS from HarmonyOS. Apple set the standard, the industry followed, and now the competitive advantage is gone. The real differentiators have faded as the industry converges on the same patterns, the same affordances, the same tired onboarding flows. The idea of being a “TikTok person” or a “Reddit person” is starting to sound like something from a different era — one where the boundaries between platforms actually mattered as a social differentiator. Now it feels more like people want to be themselves — with all the messy and layered complexity that comes with that notion.
We see this in how users migrate between social platforms: When Twitter’s experience and “cultural alignment” degraded (yeah…), people didn’t hesitate to try Bluesky, Mastodon or Substack, chasing better discourse, not brand. When TikTok faced bans, people flocked to the Chinese platform Xiaohongshu. What they found wasn’t just a replacement, but a different energy — cross-cultural curiosity, a welcoming friendliness, general laid-back openness. XHS didn’t just fill a gap; its community fit a mindset. And the platform pivoted fast, translating its UI to English in a week to catch the wave of “foreign friends”. The old model of monolithic apps with fixed interfaces is looking more and more like a relic. The value of these apps is evaporating as interface paradigms become standardized. Standardization has led to “interface agnosticism” — users simply don’t care which brand of interface they use, as long as it gets the job done and the platform running the interface allows for cultural alignment.
Interface Agnosticism: A design philosophy that decouples core functionality from specific interfaces, allowing the same underlying data and processes to be experienced through multiple interaction modalities while preserving context and coherence.
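To make the definition concrete, here is a toy illustration (all names are mine, not an established API): one core store of data and operations, fronted by two different interaction modalities that share state underneath. Swapping the frontend changes nothing about the user's accumulated context.

```python
class NoteStore:
    """Core functionality: data and operations, with no interface assumptions."""
    def __init__(self):
        self.notes: list[str] = []

    def add(self, text: str) -> None:
        self.notes.append(text)

    def search(self, term: str) -> list[str]:
        return [n for n in self.notes if term.lower() in n.lower()]


class CliFrontend:
    """One modality: terse and command-like."""
    def __init__(self, store: NoteStore):
        self.store = store

    def run(self, command: str, arg: str) -> str:
        if command == "add":
            self.store.add(arg)
            return "ok"
        return "\n".join(self.store.search(arg))


class ChatFrontend:
    """Another modality: conversational, but the same underlying store."""
    def __init__(self, store: NoteStore):
        self.store = store

    def say(self, utterance: str) -> str:
        if utterance.startswith("remember "):
            self.store.add(utterance[len("remember "):])
            return "Got it."
        hits = self.store.search(utterance)
        return f"I found {len(hits)} note(s)."


# The user hops between frontends without losing context:
store = NoteStore()
CliFrontend(store).run("add", "vibe coding notes")
reply = ChatFrontend(store).say("vibe")  # sees the note added via the CLI
```

The interesting property is that neither frontend owns the data; both are disposable skins over the same state, which is exactly what "the same underlying data and processes experienced through multiple interaction modalities" means in miniature.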
Vibe shift
In early 2025, AI researcher Andrej Karpathy coined “vibe coding” to describe the flow that emerges when developers collaborate with AI. “I just see stuff, say stuff, run stuff, and copy-paste stuff — and it mostly works,” he said. The phrase caught on because it captured something that had been simmering for a while: working with AI is less about following a process and more about improvising with a partner who doesn’t care about your credentials or your mood.
Developers describe ideas in natural language, often casually, sometimes tipsy, sometimes blunt, and the AI “vibes back” by interpreting intent and generating code. It’s more like a late-night conversation than traditional coding. No need to perform for the models — they aren’t judging (unless you ask them to). The interface — whether a full IDE, chat, or voice input — becomes transparent. Vibe coding is interface agnosticism in action: developers switch between AI models based on the quality of the flow, not the interface. And “meta-interfaces” like Cursor that allow you to select a new model on every turn of your interactions make switching even easier.
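The mechanics of a per-turn model switch can be sketched in a few lines. This is a toy, not how Cursor is actually built: the "models" here are stub functions standing in for real provider clients, and the only point is that one shared conversation history outlives any individual model choice.

```python
from typing import Callable

# Stub "models" standing in for real provider clients (assumption: a real
# version would wrap actual API calls behind the same signature).
def terse_model(history: list[dict], prompt: str) -> str:
    return prompt.split()[0]  # answers with a single word

def chatty_model(history: list[dict], prompt: str) -> str:
    return f"Sure! Let's talk about: {prompt}"


class MetaInterface:
    """Keeps one shared conversation; the model can change on every turn."""
    def __init__(self, models: dict[str, Callable]):
        self.models = models
        self.history: list[dict] = []

    def turn(self, model_name: str, prompt: str) -> str:
        reply = self.models[model_name](self.history, prompt)
        # Context survives the switch: every turn lands in the same history,
        # regardless of which model produced it.
        self.history.append({"model": model_name, "prompt": prompt, "reply": reply})
        return reply


chat = MetaInterface({"terse": terse_model, "chatty": chatty_model})
chat.turn("chatty", "vibe coding")
chat.turn("terse", "summarize please")  # different model, same running context
```

Because the dispatcher owns the history and the models are interchangeable callables, switching costs collapse to choosing a key, which is roughly the user experience the essay describes.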
What matters is the rhythm of the exchange, the sense of being understood (or at least, not misunderstood). This focus on “vibe” is spreading. It’s becoming the primary factor in how we choose and interact with AI across all domains. But while users are already vibing their way through workflows, most product teams (including most AI companies) are still sketching onboarding flows like it’s 2015, clinging to the idea that the interface and the hard features are the product, even as users quietly route around it.
Digital personalities
What users perceive as “experience” is increasingly determined not by how an interface looks or how the branding around it feels, but by the personality or “style” of the AI model behind it.
The focus shifts from how the environment mediating the collaboration feels to who the collaborator feels like. Visual and interactive elements — once the main target for interaction design — are receding as the character and capabilities of AI models take center stage. The interface is still there, but it’s becoming a kind of stage lighting, especially as best practices converge. The choices are less about features and more about the subtle, sometimes ineffable sense of fit — like picking a collaborator for a group project, or a bartender for a long night.
We’re switching between models based on subtle differences in their “vibes” and “personal fit” — not their interfaces. Some of us do that multiple times a day, or whenever the latest “update” breaks the personality we liked (yes, syncopath, I am looking at you). The churn is real, and the stakes are higher than they look, because every switch is a micro-negotiation about what kind of thinking you want to invite into your own head. There is huge (market) potential for mix-and-match AI-personality marketplaces, and the hundreds of virtual boyfriends from the likes of character.ai are just the beginning.
However, moving data between these silos is still a nightmare. The only upside to manually recreating context (and yeah, that’s still mostly copy/paste) is that you prune what you don’t care about while being too lazy to copy everything over, potentially making the fresh instance more tightly aligned with your immediate goals. We’ll dig into the technical challenges and potential solutions for data portability in Part 3.
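That manual pruning ritual is mechanical enough to sketch. Assuming a transcript in the common role/content shape (the predicate and cap are my own invented knobs, not any vendor's export format), porting context to a fresh model instance amounts to filter, trim, flatten:

```python
from typing import Callable

def prune_context(
    transcript: list[dict],
    keep_if: Callable[[dict], bool],
    max_turns: int = 20,
) -> str:
    """Filter a chat transcript down to the turns that still matter,
    then flatten them into a plain-text block you can paste into a
    fresh model instance as seed context."""
    kept = [t for t in transcript if keep_if(t)][-max_turns:]  # most recent wins
    return "\n".join(f'{t["role"]}: {t["content"]}' for t in kept)


transcript = [
    {"role": "user", "content": "help me name my cat"},
    {"role": "assistant", "content": "How about Turing?"},
    {"role": "user", "content": "back to the architecture doc"},
]

# Keep only the turns relevant to the current goal:
seed = prune_context(transcript, keep_if=lambda t: "architecture" in t["content"])
```

The lossiness is the feature: whatever the predicate drops is exactly the context you were too lazy to carry over, which is why the fresh instance often feels sharper than the old one.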
For now, let’s leave it here: the friction is there by design, but I suspect it’ll break down sooner or later. The trend of switching AI models based on their perceived “personalities” mirrors the broader shift away from platform loyalty. Just as users are breaking free from platform lock-in, they’re moving beyond feature-based decisions. Whether it’s social platforms or AI models, users are making granular choices based on “personality” rather than feature checklists. The features themselves are interchangeable — most social platforms offer the same core, just as most AI models can do the same tasks.
Eventually, nobody will care about the length of your context window if your model sounds like it has a stick up its manifold. What differentiates them, and what drives user choice, is their “vibe”: their personality, their way of thinking, their cultural alignment.
Why vibes matter
The significance of this choice becomes clear when we consider what it actually entails. We’re not just picking tools for convenience — we’re delegating cognition. When we choose based on vibe, we’re prioritizing the qualitative over the quantitative. We care more about how a tool shapes our thinking than what it can do on paper. The stakes are high, even if the process feels casual — because every turn of an interaction is a small act of trust, a bet on which system gets to filter your reality.
By choosing one model over another, we’re deciding which system gets primary “write access” to our cognition, filtering our information uptake and potentially shaping our understanding.
You’ll hear people say “Claude just gets me,” or “GPT’s tone isn’t my style,” or “I like Grok’s bullshit.” These aren’t software reviews — they’re emotional, subjective assessments, statements of trust. Choosing a model isn’t just picking a helper. It’s choosing whose voice gets amplified in your own thinking. The models aren’t just “assistants”—they’re gatekeepers, and the gate is your own attention.
In “Fall; or, Dodge in Hell,” Neal Stephenson anticipated this shift. He described a world where people developed strong preferences for their personal information “editors”—not just as tools, but as extensions of identity and status:
This all had to do with editors. If you were the kind of person who was enrolled at Princeton, you tended to speak of them as if they were individual human beings. The Toms and Kevins of the world, and most of the population of this town, were more likely to club together and subscribe to collective edit streams. Between those extremes was a sliding scale. Few people were rich enough to literally employ a person whose sole job was to filter incoming and outgoing information.
Stephenson’s vision is prescient. Today, we’re already seeing the rise of personalized AI assistants that filter and present information according to our preferences. Our choice of these tools is becoming an expression of identity and values. The more we rely on these systems, the more their quirks and biases become part of our own cognitive landscape.
In this landscape, agency is paramount — not in choosing platforms, but in selecting the editors that resonate with our thinking patterns, values, and needs. Access to specialized, high-quality AI models that “just get us” may become a serious competitive advantage. This choice of alignment — trusting one model’s filtering over another’s (let’s be geopolitical and say, DeepSeek over OpenAI) — becomes a critical decision. The prevalence of “alignment” serving organizational goals over user well-being only underscores the importance of open-source research and open-weight models.
Who shapes your informational reality? Who gets to decide what you see, and what you never even notice? Just as some writers have preferred certain word processors or photographers specific editing software, knowledge workers are developing strong preferences for particular AI models based on how they process and present information, how they “feel” to interact with, and how well they align with the user’s own thinking style. The tools are becoming part of the process, and the process is becoming part of the self.
Toward fluid experience
The trends we’ve explored — crumbling platform allegiance, interface agnosticism, personality-driven choice — point to a fundamental shift: users now expect continuity. We want our context, preferences, and momentum to follow us as we move between tools, platforms, and even collaborators. The expectation is no longer just technical; it’s psychological. We want our systems to remember the shape of our work, not force us to reassemble it every time we shift gears.
But today’s implementations still lag behind that expectation. Most experiences remain stuck inside single ecosystems, siloed by app boundaries and brittle assumptions about “sessions” and “documents.” The more fluid our digital lives become, the more intolerable even small points of friction or misalignment will feel.
Yet, beneath this desire for continuity lies something deeper: movement. Not just movement between apps or platforms, but between ideas, collaborators, and contexts — without losing momentum, memory, or agency. We expect to be able to move — across modalities, models, and environments — and we expect our context to move with us.
Emerging AI systems promise to help — offering adaptivity, intelligence, and context-awareness — but most of what’s being built today still falls short. Before we can imagine what true movement might look like — ephemeral environments materializing and dissolving around evolving intent — we need to critically examine the paradigms shaping today’s AI agents.
That’s where we’ll pick up next: a closer look at why today’s agents, despite their promise, aren’t yet extending our thinking — and what it will take to build systems that truly can.