Over the past few months, I’ve noticed something quietly unsettling. People from all walks of life have begun turning to ChatGPT for guidance—on everything from heartbreak to career moves to why they feel like the last cookie in the jar.
They’re not calling friends. They’re not phoning therapists. Instead, they’re whispering their confusion, anxiety, and longing into the cold glow of a chatbot. And the chatbot responds—calmly, politely, sometimes even wisely.
But as this trend grows, it raises deeper concerns—especially in a region like ours. AI bias in Southeast Asia means these systems, built on Western data and assumptions, may offer comfort but rarely offer true understanding. The result? A digital companion that listens, but doesn’t really get us.
For many, that response brings comfort: a judgment-free voice, always available, offering advice in the stillness of modern life.
I’ll admit it—I’ve done the same. Not for love or work, but for my cat. One night at 2 a.m., I found myself asking ChatGPT if my beloved feline was overweight. The AI replied with its usual charm and caution. I half-followed the advice, but the experience felt oddly reassuring—and, frankly, a little strange.
However, for others, the stakes are much higher. As people go further down that digital rabbit hole, the line between real and artificial begins to blur. Some spiral. A few end up in psychiatric care. Others even find themselves in jail.
Psychiatrists now call this phenomenon ChatGPT psychosis—a mental breakdown where an algorithm plays the leading role.
AI Bias in Southeast Asia: The Trouble With Modeling WEIRD Minds
This raises an uncomfortable but necessary question: Whose minds do AI systems really understand?
Because when you pour your heart out to a chatbot, you’re not interacting with something that truly knows you. Instead, you’re talking to something trained to understand them—the WEIRD: Western, Educated, Industrialized, Rich, Democratic societies.
Psychologists have used this term for years to describe the small, unrepresentative group of people who make up the majority of subjects in behavioral research. Now, their thinking patterns, social habits, and emotional tendencies have become the blueprint for the AI models we rely on today.
As a result, AI doesn’t process your story with cultural empathy. It measures your words against a WEIRD-informed framework. That’s exactly where things can go wrong.
The Narrow Lens of Cognitive Modeling
Consider Centaur, one of the most advanced AI cognitive models ever created. Researchers trained it on data from more than 60,000 participants and over 10 million decisions drawn from psychology experiments. The model can simulate memory, predict decision-making, and even mimic brain activity.
At first glance, this seems groundbreaking. But look a little closer, and you’ll find a problem.
Most of those 60,000 participants came from WEIRD backgrounds. And most of those 10 million decisions happened in artificial lab settings—ones that rarely reflect how people in the Philippines, Indonesia, or rural Vietnam live and think.
So while Centaur may model human behavior, it doesn’t model humanity. It models a very narrow slice of it. If you try applying that model to a family in Bicol, a community in Bukidnon, or a barangay in Cebu, you’re flying blind.
What Happens When Culture Gets Lost in the Code
This isn’t just a research issue—it’s already affecting people’s lives.
More and more, individuals turn to AI for emotional support, mental health advice, and life decisions. But these AI systems carry an embedded worldview. They apply WEIRD assumptions to people who don’t share those cultural foundations.
The consequences can be serious.
Take involuntary psychiatric commitment, for example. Authorities sometimes hospitalize people they perceive as a danger to themselves or others. These decisions are already difficult, often shaped by bias, cultural blind spots, and subjective judgment.
Now imagine AI playing a role in that process.
Instead of asking whether a person’s behavior is dangerous within their cultural context, the AI evaluates their words and actions based on a WEIRD-informed model of normalcy. As a result, it might label someone as unstable simply because their behavior doesn’t fit the model it was trained on.
Why Sumpong and Sapot Confuse the Machine
Let’s bring this closer to home.
In Filipino culture, we often describe someone’s bad mood or crankiness as sumpong. In Visayan, we say sapot, or gisapot, words for irritability or a touch of emotional unpleasantness. It’s temporary. It’s familiar. It’s human.
But when I asked ChatGPT to define “sapot,” it confidently told me it meant “mental confusion” or “being tangled.” That’s not how we use the word at all. It’s not a clinical state—it’s a mood, a vibe, a passing moment.
And that’s precisely the danger.
When AI systems lack the cultural fluency to recognize context, they misinterpret ordinary emotions. Worse, they might treat everyday behavior as pathological.
If AI can’t understand a word like “sapot,” how can it possibly grasp the deeper layers of our emotional lives?
Final Thoughts: Machines Don’t Know What We Feel
Today, many people see AI as more than a tool. They treat it like a confidant, a guide, sometimes even a therapist.
But these systems were trained on someone else’s experience.
If we want AI that actually helps rather than harms, we need models that reflect the full diversity of the human experience—including ours. That means training data must include people from Southeast Asia. It means recognizing that emotions like sumpong or sapot are not glitches to be fixed but expressions to be understood.
Until then, we should remind ourselves of a simple truth:
Your chatbot doesn’t know you.
Not really. Not yet.