In our popular imagination, AI often arrives wearing the face of dystopia. We picture The Terminator: machines gaining autonomy, machines making decisions, machines determining the fate of humanity. It’s a cinematic fear that refuses to die, especially as AI systems grow more capable each year.
Yet every time I hear someone in tech confidently describe their system as “end-to-end,” “fully automated,” or “human out of the loop,” I can’t help but notice the irony. The real concern isn’t that machines might someday become like Skynet — it’s that we routinely forget how deeply dependent these systems still are on human intelligence at both ends of the process.
Every workflow begins with a human and ends with a human. AI occupies only the middle. It accelerates, predicts, drafts, summarizes, and transforms — but it does not originate meaning, and it does not finalize consequence. As 2025 draws to a close, the picture is clearer than ever: the future is not a machine that replaces us, but a loop that depends on us from start to finish.
Humans at the Start — Meaning, Purpose, Boundaries
Every AI task begins with a human decision long before the model generates a single word. A human chooses the problem worth solving. A human sets the purpose, defines the ethical boundaries, curates the training data, and determines what counts as a “good” answer.
AI does not wake up one day and decide to diagnose a disease, summarize a bill, analyze a paragraph, draft a policy recommendation, or interpret a cultural reference. Someone has to determine which task matters, which data is appropriate, and which risks must be avoided.
In anthropology, meaning always precedes action. Human beings don’t simply do things — we interpret them. We assign value, danger, urgency, relevance, and purpose long before any tool enters the picture. The same is true in AI: before a model can “know,” humans decide what knowing means. Before it can “reason,” humans define what counts as reasoning. Before it can “stay safe,” humans articulate what harm looks like.
My work as an AI trainer confirms this. Humans set the tone, context, truthfulness, and social obligations a model must respect. We decide what is misleading, what is responsible, what is biased, and what is acceptable. AI begins not with computation, but with culture, cognition, and judgment.
Machines in the Middle — Powerful, But Not Decisive
Once the groundwork is laid, AI becomes extraordinarily powerful. This is where the system shines: processing vast amounts of data, detecting patterns, generating drafts, and executing tasks at a speed impossible for humans.
But this power does not equal autonomy.
AI does not understand the stakes of a medical summary, the implications of a legal nuance, the emotional weight of a condolence message, or the cultural layers inside a Filipino metaphor. It does not understand why accuracy matters or when empathy is required. It performs transformations — not interpretations.
Anthropologically, AI resembles earlier tools. A stone blade could cut meat or carve wood — yet it never decided why or when. Tools magnified intention; they did not originate it. AI is simply a more complex continuation of that lineage. It extends human capacity but cannot choose goals, meaning, or purpose.
In the middle, the machine is brilliant — efficient, scalable, astonishing.
But it is not decisive.
Humans at the End — Interpretation, Context, Accountability
When the model completes its task, the workflow returns to us. Humans must assess whether the output is accurate, ethical, respectful, or meaningful. We decide whether something is usable, harmful, incomplete, or culturally off-key.
Interpretation is fundamentally human. It draws on lived experience, moral reasoning, emotional intelligence, and social accountability — none of which AI possesses. A model may propose an answer, but only humans can determine whether that answer belongs in the world.
And accountability remains the crucial closing principle. When AI causes harm — whether through misinformation, bias, or carelessness — it is not the model that bears responsibility. It is individuals, institutions, and societies. Consequence is not automatable.
This is what keeps AI grounded: humans revise its drafts, evaluate its claims, and make the final call. The last step — judgment — cannot be delegated.
The Human–Machine–Human Loop
The myth of fully autonomous “end-to-end” AI persists because it is appealing. It imagines a world where machines shoulder complexity while humans enjoy frictionless efficiency.
But that has never been how technology functions.
Tools — whether stone flakes or neural networks — are extensions of human intention. They amplify capacity but do not replace meaning, responsibility, or agency.
Which brings us back to Skynet.
The dystopia of machines taking over begins not with the machine but with the human choices that built it, deployed it, and then abandoned oversight. The danger has never been autonomous intelligence. It has always been human decision-making: our ambition, our carelessness, our willingness to remove ourselves from the loop.
As 2026 approaches, the most honest thing we can acknowledge is this: AI begins with us and ends with us.
The machine — for all its power — remains in the middle.
The workflow is, and has always been:
Human → Machine → Human.
Rather than designing systems that erase us from the loop, perhaps the next phase of AI should deliberately honor the human bookends, because meaning, purpose, and accountability remain ours.
Even in the darkest science fiction futures, the machines never start the war.
Humans do.