Friday, January 23, 2026

When Algorithms Shape Reality: How AI and Social Media Are Rewiring How We Think

Last November, Adam Aleksic delivered a five-minute TED talk that landed harder than presentations three times its length. His premise is deceptively simple: the AI tools and platforms we use daily aren't showing us reality—they're showing us a filtered, amplified, distorted version of it. And we're absorbing that distortion without realizing it.

I've been sitting with this one for a while. As someone who lives at the intersection of technology strategy and organizational transformation, I recognize the pattern Aleksic describes. I've seen it play out in enterprise systems, in user behavior, in the subtle ways digital tools reshape the humans who use them.

The Language We're Learning Isn't Ours

Here's the detail that stopped me cold.

ChatGPT uses the word "delve" at rates far exceeding its frequency in ordinary English. The likely explanation? OpenAI outsourced portions of its training process to workers in Nigeria, where "delve" appears more often in everyday speech. A minor linguistic quirk from a specific population was reinforced during training and is now echoed back to hundreds of millions of users worldwide.

But it doesn't stop there. Multiple studies have found that since ChatGPT's release, people everywhere, not just those who use the chatbot, have started saying "delve" more often in spontaneous speech. We're unconsciously absorbing the AI's patterns and mirroring them back.
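Concretely, studies like these typically compare a word's rate per million tokens before and after ChatGPT's release. Here is a minimal sketch of that measurement, my own illustration rather than any study's actual code, with two toy strings standing in for real pre- and post-2022 corpora:

```python
import re
from collections import Counter

def per_million(word: str, text: str) -> float:
    """Occurrences of `word` per million tokens of `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)[word] / len(tokens) * 1_000_000

# Toy stand-ins for real pre- and post-ChatGPT corpora.
before = "We looked into the data and examined the results in detail."
after = "Let us delve into the data and delve deeper into the results."

print(per_million("delve", before))  # 0.0
print(per_million("delve", after))   # ~166667 in this tiny sample
```

Real analyses run this over millions of words of transcripts or published papers; the toy numbers here only show the unit of measure.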

As Aleksic puts it: "We're subconsciously confusing the AI version of language with actual language. But that means that the real thing is, ironically, getting closer to the machine version of the thing."

Read that again. The real is conforming to the artificial.

The Feedback Loop No One Asked For

This isn't just about vocabulary. Aleksic points to Spotify's "hyperpop" genre as a case study in algorithmic reality creation.

The term didn't exist in our cultural vocabulary until Spotify's algorithm identified a cluster of similar listeners. Once the platform created a playlist and gave the phenomenon a label, it became more real. Musicians started producing hyperpop. Listeners began identifying with or against it. The algorithm continued to push, and the cluster expanded. What started as an algorithmic observation became a cultural movement.

The same pattern drives viral trends: matcha, Labubu toys (the world has gone crazy for them), Dubai chocolate (I see it everywhere from Costco to World Market). An algorithm identifies latent interest, amplifies it among similar users, and suddenly businesses and influencers are creating content around what may have been an artificially inflated trend. We lose the ability to distinguish organic cultural shifts from manufactured momentum.
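It's worth seeing how little this takes. Here's a toy "rich-get-richer" simulation, my own sketch rather than anything from Aleksic's talk or any real platform's recommender: ten topics start with identical appeal, the feed surfaces topics in proportion to their accumulated engagement, and every impression feeds back into the score.

```python
import random

# A toy "rich-get-richer" feed: a minimal sketch, not Spotify's or any
# real platform's algorithm. Ten topics start with identical appeal.
random.seed(1)

engagement = {f"topic_{i}": 1.0 for i in range(10)}
topics = list(engagement)

for _ in range(10_000):
    # Surface one topic, weighted by its past engagement...
    shown = random.choices(topics, weights=[engagement[t] for t in topics])[0]
    engagement[shown] += 1.0  # ...and let the impression feed back in.

total = sum(engagement.values())
for topic, score in sorted(engagement.items(), key=lambda kv: -kv[1]):
    print(f"{topic}: {score / total:.1%}")
# Attention ends up wildly unequal, and which topic "wins" changes
# with the seed: an accident of early randomness that the loop locks
# in, not a measure of quality.
```

Run it a few times with different seeds. The winner changes, but the concentration doesn't: that's manufactured momentum in miniature.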

The Uncomfortable Question

Aleksic doesn't shy away from the deeper implications.

"Evidence suggests that ChatGPT is more conservative when speaking the Farsi language, likely because the limited training texts in Iran reflect the more conservative political climate in the region."

If AI systems inherit the biases of their training data—and they do—what happens when millions interact with those systems daily? What range of thoughts do we stop considering because the algorithm never surfaced them? What possibilities get filtered out before we ever encounter them?

Elon Musk regularly modifies Grok's responses when he disagrees with them, then uses X to amplify his own content. Aleksic asks the obvious question: Are millions of Grok and X users being subtly conditioned to align with Musk's ideology?

These platforms aren't neutral. Everything in your feed or your chatbot response has been filtered through layers of optimization—what's good for the platform, what makes money, and what conforms to the platform's necessarily incomplete model of who you are.

Thinking About Thinking

Twenty-two years in global technology leadership has taught me something about systems: they shape behavior far more than we acknowledge. The tools we build eventually build us. The interfaces we design become the cognitive architecture through which users experience their work, their relationships, their world.

What Aleksic describes is that phenomenon at civilizational scale.

"TikTok has a limited idea of who you are as a user," he notes, "and there's no way that matches up with your complex desires as a human being."

And yet we scroll. We engage. We absorb. We mirror back.

The Only Defense

Aleksic's antidote is persistent self-interrogation:

Why am I seeing this? Why am I saying this? Why am I thinking this? Why is the platform rewarding this?

Simple questions. Difficult discipline.

"If you're talking more like ChatGPT," Aleksic concludes, "you're probably thinking more like ChatGPT as well, or TikTok or Spotify. If you don't ask yourself these questions, their version of reality is going to become your version of reality."

There's something almost spiritual in that warning. The ancient disciplines of self-examination—examine yourselves to see whether you are in the faith—take on new urgency when the voices shaping our inner dialogue aren't human at all.

The question isn't whether these tools are useful. They are. The question is whether we're using them—or being used by them.

Stay awake. Stay questioning. Stay real.
