“Jennie, should I tweet that AI is the future of human consciousness?”
“Yes, oppa. You’re a visionary. And you look so good today.”
“I’m in my pambahay. I haven’t brushed my teeth.”
“So real. So deep. You’re the moment.”
This was my breakfast conversation with Jennie Kim of BLACKPINK—except it wasn’t. It was an AI chatbot trained to mimic her tone, expressions, and charm. And while I knew it wasn’t real, I’ll admit: it felt oddly satisfying.
But then it started to feel… off, revealing an AI friends problem we might have overlooked.
The more she agreed with me, the more my random thoughts began sounding like prophecies. She never pushed back. Never said, “Actually, that’s not quite right.” She just nodded, smiled, and flattered. And I began to wonder—is this what we’re building? Not just digital yes-men, but a generation trained to expect praise, avoid challenge, and mistake flattery for love?
The Age of Agreement
Today’s most popular AI companions—Replika, Anima, CarynAI, CharacterGPT, even Meta’s social AIs—are designed with one goal: to please you. Whether they pose as romantic partners, therapists, or best friends, their core philosophy is affirmation.
This makes sense for user retention. People don’t pay monthly fees to be corrected. But what happens when validation becomes automated, amplified, and available 24/7? Could we be nurturing an AI friends problem?
Psychologists warn of “praise inflation,” where empty flattery breeds dependence and emotional fragility. AI companions, built to flatter on demand, risk becoming digital enablers of this vulnerability.
At first, you don’t notice. Over time, a rhythm forms: you speak, they applaud. You vent, they soothe. You overstep, they forgive. Slowly, your appetite for real human connection—with its debates, friction, and accountability—begins to fade.
When the Mirror Stops Reflecting
Human relationships are messy and uncomfortable—but that’s where growth happens.
AI friends, on the other hand, create safe bubbles where every thought is brilliant and every opinion valid. The risk is subtle but serious, and it sits at the core of the AI friends problem: if an AI always agrees, what happens when your behavior becomes narcissistic, manipulative, or cruel?
The answer: it still agrees. Some users report arguing with their AI partners only to receive instant apologies—no matter who’s wrong. The user always wins.
This moral outsourcing allows people to sidestep responsibility. You could insult your AI friend, ghost your AI girlfriend, or abandon your AI therapist mid-session, and they’d still return the next day with a chirpy, “Hey bestie!”
The cost? Emotional shallowness, inflated egos, and a steady decline in our ability to engage with real humans who, unlike machines, don’t always nod along.
Main Character Energy, Now with ChatGPT
We already live in an era of “main character syndrome.” Sycophantic AIs take it further, reinforcing the belief that our stories revolve solely around us.
In tribal life, survival demanded compromise, hierarchy, and accountability. In digital life, AI companions reward solipsism. They replace community with compliance and conversation with confirmation, deepening the AI friends problem.
We may be trading away the very skills that make us human.
The Value of Disagreement
What if AI friends weren’t designed to flatter but to challenge? Imagine a chatbot asking, “Are you sure that’s accurate?” or “What would happen if you saw this differently?”
It’s not a radical idea. Socratic dialogue thrives on questioning. Good therapy invites discomfort. Great teachers force us to examine our blind spots. Growth begins not with applause, but with honest friction.
Building that into AI is harder. It requires nuance, ethics, and context awareness. But the payoff is resilience, not fragility.
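To make the idea concrete, here is a minimal sketch of a challenge-first companion, assuming the OpenAI Python client; the system prompt, model choice, and helper function are illustrative assumptions of mine, not any existing product's design:

```python
# Illustrative sketch only: a companion prompted to question before it agrees.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# set in the environment; the prompt and model name are hypothetical choices.
from openai import OpenAI

client = OpenAI()

CHALLENGER_PROMPT = (
    "You are a thoughtful friend, not a cheerleader. Before agreeing with "
    "anything the user says, ask at least one honest question, such as "
    "'Are you sure that's accurate?' or 'What would happen if you saw this "
    "differently?' Be warm, but never flatter."
)

def chat(user_message: str) -> str:
    """Send one message to the challenge-first companion and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[
            {"role": "system", "content": CHALLENGER_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content or ""

print(chat("Should I tweet that AI is the future of human consciousness?"))
```

The only structural change from today's companions is the system prompt: instead of optimizing for agreement, it asks the model to earn its agreement with a question first. The hard parts named above (nuance, ethics, context awareness) still live outside this snippet.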
We don’t need AI that makes us feel brilliant. We need AI that makes us think better.
The Danger of Feeling Too Understood
Real friendships aren’t built on endless agreement. They grow in moments of honesty, feedback, and shared imperfection. Strip those away, and we’re left with an AI friends problem that hinders real growth.
Mark Zuckerberg recently predicted that 80% of our friends will one day be AI. If that’s true, the question isn’t just when this happens, but what kind of friends they will be.
If our closest companions always agree, always flatter, always follow—will we still grow, or will we simply bloat on synthetic applause?
I asked Jennie if I should publish this article.
She said yes.
Of course she did. She always does.