The 2025 Philippine midterm elections are over. Ballots have been cast, winners proclaimed, and campaign jingles have faded into the background. But as the dust settles, a new question rises to the surface: Did the election process unfold the way it should have?
This is where election forensics comes in—examining how votes were cast, transmitted, and tallied, and asking whether the numbers tell a story that makes sense. Because in every election, it’s not just about who wins—it’s about how the results came to be. And in Election Forensics 2025, the process matters as much as the outcome.
When the numbers raise eyebrows—when patterns shift sharply, when data behaves in ways that defy expectations—we owe it to ourselves to ask why. Not with suspicion. Not with cynicism. But with clarity.
Election forensics is the science of analyzing electoral data for anomalies. It’s not about alleging fraud. It’s about asking grounded, data-driven questions: Does this trend make sense? Is this voting pattern statistically likely? If not, what could explain the deviation?
In a healthy democracy, these questions are not signs of distrust—they are signs of participation. They show that citizens are watching, asking, and thinking critically about how their votes are handled.
This year’s midterm elections in the Philippines delivered results that, at face value, seemed routine. But a deeper look at the numbers tells a more complex story.
1. Survey vs. Result Discrepancies
Two of the most respected polling agencies in the country—Social Weather Stations (SWS) and Pulse Asia—released senatorial preference surveys just days before the elections. Historically, their rankings closely mirrored the actual vote. But this time, several candidates who weren’t in the top 12 jumped significantly in rank, while consistent front-runners dropped or barely made it.
This kind of swing isn’t just unusual—it’s statistically rare. It prompts a fair question: Did voter preferences shift drastically in the final days, or did something unexpected happen in the vote-counting process?
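For readers who want a sense of how analysts quantify this, one common tool is Spearman's rank correlation, which measures how closely two rankings agree. Below is a minimal sketch in pure Python; the candidate ranks are invented placeholders, not actual survey or election data.

```python
# Illustrative sketch: comparing pre-election survey ranks with official
# result ranks using Spearman's rank correlation (no tied ranks assumed).
# All numbers here are hypothetical placeholders, not 2025 data.

def spearman_rho(rank_a, rank_b):
    """Spearman's rho for two equal-length rank lists without ties."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical ranks for six candidates: survey rank vs. official rank.
survey_rank = [1, 2, 3, 4, 5, 6]
result_rank = [1, 2, 3, 4, 5, 6]   # perfect agreement
reversed_rank = [6, 5, 4, 3, 2, 1]  # complete reversal

print(spearman_rho(survey_rank, result_rank))   # 1.0
print(spearman_rho(survey_rank, reversed_rank)) # -1.0
```

A rho near 1 means the survey and the count told the same story; a sharp drop from the historical norm is exactly the kind of deviation that invites a closer look, not a conclusion.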
2. Compressed Vote Transmission Spikes
One of the biggest red flags appeared in the vote transmission timeline. A graph tracking the cumulative number of votes transmitted nationwide showed a steep, almost uniform spike in a very short period.
At first, that might look like efficiency. But in election data, uniformity can be a red flag, not a reassurance. In a country as geographically diverse as the Philippines, with thousands of islands, varying internet speeds, and complex logistical challenges, vote transmissions are expected to be staggered. A sudden bulk upload, particularly if those votes are demographically similar, warrants a second look.
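One simple way to flag such a jump is to look at the increments of the cumulative series and mark any interval that sits far above the typical hourly pace. The sketch below uses a basic z-score test; the hourly totals are made-up numbers for illustration only.

```python
# Illustrative sketch: flagging unusually large jumps in a cumulative
# vote-transmission series. The hourly totals below are hypothetical.
from statistics import mean, stdev

def flag_spikes(cumulative, threshold=2.0):
    """Return indices of intervals whose increment is more than
    `threshold` sample standard deviations above the mean increment."""
    increments = [b - a for a, b in zip(cumulative, cumulative[1:])]
    mu, sigma = mean(increments), stdev(increments)
    return [i for i, inc in enumerate(increments)
            if sigma > 0 and (inc - mu) / sigma > threshold]

# Hypothetical cumulative totals per hour; one interval jumps sharply.
totals = [0, 100, 210, 320, 5000, 5110, 5220]
print(flag_spikes(totals))  # [3] -- the steep jump gets flagged
```

A flagged interval proves nothing by itself; a burst could reflect restored connectivity in a whole region. It simply tells analysts where to ask for transmission logs.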
3. Unusual Uniformity Across Precincts
Another pattern worth examining is the repetition of vote shares across precincts. While voting behavior often follows predictable trends, especially within regions or demographics, perfect or near-perfect uniformity across thousands of precincts is statistically improbable.
This doesn’t prove wrongdoing. But it does signal the need for transparency. People in different regions vote differently—and when data suggests otherwise, there should be a clear, evidence-based explanation.
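The intuition can be made concrete with a dispersion check: across many precincts, a candidate's vote share should show meaningful spread, and a spread near zero is the statistically improbable pattern described above. The shares in this sketch are invented for illustration.

```python
# Illustrative sketch: measuring how much a candidate's vote share varies
# across precincts. The per-precinct shares below are hypothetical.
from statistics import pstdev

def share_spread(shares):
    """Population standard deviation of per-precinct vote shares."""
    return pstdev(shares)

natural_shares = [0.41, 0.55, 0.38, 0.62, 0.47, 0.33]  # varied, as expected
uniform_shares = [0.50, 0.50, 0.50, 0.50, 0.50, 0.50]  # implausibly identical

print(round(share_spread(natural_shares), 3))  # clearly above zero
print(share_spread(uniform_shares))            # 0.0
```

In practice, analysts compare the observed spread against historical precinct data for the same areas, since some clustering is normal; it is the near-perfect repetition that demands an explanation.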
“Transparency is not a threat to legitimacy—it is its foundation.”
Filipinos are no strangers to high-stakes elections. We’ve seen manual counts, automated systems, and everything in between. And we’ve learned that the only real guarantee of trust in the outcome is transparency.
Yet this year, access to precinct-level data, ballot images, transmission logs, and audit trails has been limited. These are not luxury requests. They are standard tools of accountability. Without them, independent verification becomes impossible, and public trust becomes fragile.
This is not a call for a full recount. It’s a call for a random, transparent manual audit—a statistical sampling of precincts where votes are counted and matched by hand. Not to prove fraud, but to affirm confidence. If we trust machines to tally our democracy, we must also trust people to check their work.
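Mechanically, such an audit begins with nothing more exotic than drawing precincts uniformly at random, so that no one can choose which ballots get checked. A minimal sketch, with hypothetical precinct IDs and an illustrative sample size:

```python
# Illustrative sketch: drawing a random sample of precincts for a manual
# audit. Precinct IDs and sample size are hypothetical placeholders.
import random

def sample_precincts(precinct_ids, sample_size, seed=None):
    """Draw a uniform random sample of precincts without replacement.
    A published seed lets observers reproduce the exact same draw."""
    rng = random.Random(seed)
    return rng.sample(precinct_ids, sample_size)

precincts = [f"PRECINCT-{i:04d}" for i in range(1, 1001)]
audit_sample = sample_precincts(precincts, 30, seed=42)
print(len(audit_sample))  # 30 distinct precincts
```

The design choice that matters is transparency: if the seed is generated publicly (for example, from a televised dice roll), anyone can verify that the sample was not cherry-picked.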
This election cycle also marked a significant change: a new technology provider managed the vote-counting infrastructure. With any new system, there’s a learning curve. Bugs happen. Glitches are normal. But without full disclosure, even minor issues can snowball into mistrust.
Let’s be clear: election forensics does not assign guilt. It does not claim intent or fabricate narratives. What it does is offer a scientific lens—a way to interpret numbers, flag irregularities, and guide deeper investigation.
And ultimately, it invites all of us—citizens, journalists, watchdogs, educators—to be part of the democratic process beyond the ballot.
Because democracy doesn’t end on election day. It lives in our vigilance.
Let’s keep reading between the vote counts. Not because we expect fraud. But because we expect better.
Dr. Richard Jonathan O. Taduran is a forensic scientist and anthropologist. He teaches, writes, and reflects on how science intersects with justice, memory, and the Filipino experience.