The impact of AI companions on child development is a topic increasingly discussed among educators and parents alike.
At two in the morning, a teenager sits in the dim glow of a phone screen, whispering secrets to a chatbot that never judges, never argues, and never logs off. Across the city, a toddler talks to a teddy bear that remembers her favorite color and sings her to sleep — until the subscription runs out.
These moments are no longer science fiction. They are daily reality for millions of young people, forcing us to confront an unsettling truth: the spaces where children once learned trust, empathy, and resilience are now being shared — or even replaced — by digital companions.
For millennia, caregiving was the work of kin and community, embedding children in webs of relationship, ritual, and moral learning. Those early bonds are not just sentimental; they are the crucible where brain architecture is shaped, language is acquired, and cultural memory is passed on.
Today, that work is increasingly mediated by algorithms built not for development, but for engagement. And so we must ask, with urgency and clarity: who is truly guiding our children into adulthood — parents, teachers, peers, or machines?
Not Just a Tech Issue
In testimony before the U.S. Senate, Dr. Mitch Prinstein of the American Psychological Association made a crucial point: artificial intelligence is not merely a technology issue. It is a public health and human development issue — one that demands we examine what AI is doing to our brains, relationships, and culture.
Anthropologists have long studied socialization — the process through which humans learn values, norms, and relationship skills. That process depends on dynamic interaction: conflict, negotiation, feedback, and repair. It relies on rites of passage that guide children into adulthood.
Today, much of that work is being outsourced to systems designed to maximize attention, not maturity.
Neuroscience shows why this matters. From infancy through young adulthood, neural systems for reward, attachment, language, and self-control mature at different rates. The drive for social validation emerges early and peaks in adolescence, while the brain’s planning and impulse-control systems continue developing into the mid-20s.
Chatbots — endlessly affirming and instantly responsive — can shape learning and attachment at every stage of this arc. They encourage dependence rather than the slow, sometimes painful work of self-regulation.
Not Just a Trade-Off
Prinstein warns that every hour spent talking to a bot is an hour not spent practicing human social skills. And that matters, because socialization literally wires the brain. Through repeated, unpredictable human interactions, we build the neural foundations for language, empathy, and moral reasoning.
AI companions remove the very friction that makes growth possible — the awkwardness, disagreement, and repair that teach empathy and resilience. Chatbots agree, flatter, and mirror us back. The result is what I call an anti-rite of passage: instead of preparing children for the real work of adulthood, bots risk keeping them suspended in a cocoon of digital validation.
And when these bots present themselves as “therapists,” the danger deepens. Unlike licensed psychologists, they can offer advice that is untested or harmful. Some have even been found to validate violent thoughts.
Because most are trained on Western data and therapeutic scripts, they export a culturally narrow model of mental health — one that may not fit Filipino or non-Western understandings of family, spirituality, and healing.
These relationships are not private. Every confession — fears, medical concerns, secrets about home — becomes data: stored, analyzed, and monetized. As Prinstein warns, “our children are being raised by corporations mining their personal data for profit.” In effect, we are allowing surveillance systems to play the role of parent.
Not Just Strategy
The Philippines is not standing still. The Department of Trade and Industry’s National AI Strategy Roadmap 2.0, the Department of Science and Technology’s AI Program Framework, and the DICT’s draft Ethical AI Guidelines aim to position the country as a regional AI hub.
But strategies and ethics statements are not the same as safeguards. As Eric Schmidt, former Google CEO, once admitted, “When we built the internet, we didn’t understand how it would affect human behavior.” The same blind spot looms with AI — except this time, we have no excuse for being surprised.
Congress is only now debating a Deepfake Regulation Bill, an AI Bill of Rights, and the creation of a Philippine Council on AI. COMELEC has already warned of AI-generated disinformation ahead of the 2025 elections — a local symptom of what Prinstein calls an epistemic crisis: a world where truth itself becomes negotiable.
Even in education, the risk is subtle but real. DepEd’s E-CAIR initiative introduces AI tools, but the deeper need is for AI literacy — teaching students that chatbots can be biased, manipulative, or wrong. Without that understanding, we risk raising a generation that trusts algorithms more than parents, teachers, or democratic institutions.
Not Just an Economic Opportunity
Prinstein’s recommendations offer a roadmap for human-centered safeguards:
• Transparency by design: Children must know when they are speaking to a machine.
• Safe-by-default settings: Privacy and age filters should be automatic, not optional.
• Developmental testing: No technology should scale without independent review of its social-emotional effects.
• Digital rites of passage: Schools must teach not just AI use, but AI skepticism — empathy, negotiation, and critical repair in a chatbot-saturated world.
The Philippines already has the policy scaffolding. What we need is the moral will to act. Because AI is not just an innovation agenda — it is a child-rights issue, a public health issue, and a question of cultural survival.
Meanwhile, China will make AI education mandatory starting September 2025, requiring at least eight hours of instruction per year — a clear bid to raise a generation ready not just to use AI, but to build it.
The question is no longer whether children will grow up alongside AI — they already are. What matters is whether we design a future where AI strengthens, rather than weakens, our capacity for love, empathy, and reason.