Zero-Days Disclosed (2025): 97 ▲ 14.1% | Global Cyber Losses: $10.5T ▲ 22.4% | Ransomware Avg Payout: $2.73M ▲ 18.7% | Nation-State APT Groups: 184 ▲ 9.2% | AI-Generated Phishing: +340% ▲ YoY | Critical CVEs (2025): 1247 ▲ 31.6% | Election Systems Probed: 23 ▲ Countries | Cyber Insurance Premiums: $33.4B ▲ 26.1% |

Election Security 2028: Voting Infrastructure Vulnerabilities, AI Disinformation, and the Battle for Democratic Integrity

A comprehensive threat assessment of election security challenges facing the 2028 U.S. presidential election and global democratic processes — covering voting machine vulnerabilities, AI-generated disinformation, foreign interference campaigns, and insider threats.


The 2028 election cycle will unfold against a cybersecurity threat landscape that has been transformed by artificial intelligence, deepened geopolitical hostility, and an erosion of public trust in democratic institutions that itself creates vulnerabilities adversaries can exploit. Every U.S. presidential election since 2016 has faced documented cyber interference, and the capabilities available to both state and non-state actors have advanced dramatically with each cycle. The 2028 election will be the first conducted in an environment where AI-generated content is virtually indistinguishable from authentic material, where foreign interference operations have a decade of refinement behind them, and where the domestic information environment is fractured to a degree that makes consensus about basic facts — including election outcomes — genuinely uncertain.

The Voting Infrastructure Attack Surface

The physical infrastructure of U.S. elections encompasses approximately 180,000 polling places, operated by over 8,000 local jurisdictions, using voting equipment from a handful of vendors — primarily Election Systems & Software (ES&S), Dominion Voting Systems, and Hart InterCivic. This decentralization is often cited as a security strength: compromising the election would require attacking thousands of independent systems rather than a single centralized platform. But decentralization also means wildly inconsistent security practices, limited cybersecurity expertise at the local level, and a patchwork of aging equipment that receives infrequent security updates.

The Cybersecurity and Infrastructure Security Agency (CISA), established in 2018 partly in response to the 2016 election interference, has made significant progress in hardening election infrastructure. Albert sensors deployed at state and local networks provide intrusion detection. The Election Infrastructure Information Sharing and Analysis Center (EI-ISAC) facilitates threat intelligence sharing. Security assessments and penetration testing services are available to jurisdictions that request them.

However, fundamental vulnerabilities remain. Voter registration databases — centralized state-level systems that determine who is eligible to vote — represent high-value targets that could enable large-scale disenfranchisement if compromised. Russian operatives scanned voter registration systems in all 50 states in 2016, and in at least one case (Illinois) successfully exfiltrated voter data. The integrity of these databases remains a critical concern.

Electronic poll books — the tablets and laptops used at polling places to check in voters — have expanded the digital attack surface at individual polling locations. A compromise of poll book systems could create chaos on Election Day by preventing voters from being verified, creating long lines, and generating confusion about eligibility that could suppress turnout.

The post-election audit trail has improved substantially since 2016. The vast majority of U.S. voters now cast ballots on systems that produce a voter-verified paper record, enabling manual audits that can detect electronic manipulation. Risk-limiting audits, which use statistical methods to verify election outcomes with high confidence, have been adopted by a growing number of states. These paper-trail improvements represent the single most important security advancement of the past decade.
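To illustrate the statistical idea behind risk-limiting audits, here is a minimal sketch of the BRAVO ballot-polling test for a two-candidate contest. The function name and interface are illustrative, not taken from any audit software; real audits handle multi-candidate contests, sampling logistics, and escalation rules that this sketch omits.

```python
import math

def bravo_audit(sampled_ballots, reported_winner_share, risk_limit):
    """Sequential BRAVO ballot-polling test for a two-candidate contest.

    sampled_ballots: iterable of bools, True = ballot shows the reported winner.
    reported_winner_share: the winner's reported vote share (must exceed 0.5).
    risk_limit: the audit's risk limit alpha, e.g. 0.05.

    Returns (confirmed, ballots_examined). The audit confirms once the
    likelihood ratio for "the reported winner really won" exceeds 1/alpha.
    """
    threshold = math.log(1.0 / risk_limit)
    log_ratio = 0.0
    n = 0
    for n, ballot in enumerate(sampled_ballots, start=1):
        if ballot:  # ballot shows the reported winner
            log_ratio += math.log(reported_winner_share / 0.5)
        else:       # ballot shows the reported loser
            log_ratio += math.log((1.0 - reported_winner_share) / 0.5)
        if log_ratio >= threshold:
            return True, n
    # Sample exhausted without reaching the risk limit: draw more
    # ballots, or escalate to a full hand count.
    return False, n
```

With a comfortably reported margin and a 5% risk limit, a random sample that matches the reported result confirms after examining only a small fraction of ballots; a sample that looks like a 50/50 race never confirms, forcing escalation toward a full hand count. That asymmetry is what makes the audit "risk-limiting": a wrong reported outcome survives with probability at most alpha.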

But paper trails only work if audits are actually conducted — and conducted competently. The political environment surrounding election administration has become intensely polarized, with audit processes themselves becoming battlegrounds. The 2021 Arizona “audit” conducted by the Cyber Ninjas firm demonstrated how post-election review processes can be co-opted for partisan purposes rather than serving as genuine integrity mechanisms.

AI-Generated Disinformation: The Scale Problem

The disinformation threat to the 2028 election operates on a fundamentally different scale than any previous cycle. Generative AI enables the creation of unlimited quantities of convincing synthetic content — text, images, audio, and video — at negligible cost and without requiring specialized skills.

The scenarios are not hypothetical. In January 2024, an AI-generated robocall impersonating President Biden discouraged New Hampshire voters from participating in the primary. The technology used was rudimentary by current standards. By 2028, the quality and accessibility of deepfake technology will have advanced dramatically.

Consider the following plausible scenarios for 2028:

Synthetic candidate statements. AI-generated audio or video of a presidential candidate making inflammatory statements, leaked to social media days before the election. Even if debunked within hours, the initial virality could influence voters who see the original but not the correction. The “liar’s dividend” — the ability of authentic recordings to be dismissed as deepfakes — further compounds the problem.

Fabricated election night chaos. AI-generated videos appearing to show ballot tampering, voter intimidation, or election equipment malfunction, distributed across social media platforms during the vote-counting period. The period between Election Day and the certification of results is the most vulnerable window for disinformation, as genuine uncertainty about outcomes creates space for manufactured narratives.

Targeted micro-disinformation. AI-powered systems generating personalized disinformation tailored to the specific concerns, fears, and information environment of individual voters or micro-communities, delivered through social media, messaging apps, and even synthetic phone calls. This represents a qualitative leap beyond the broad-stroke disinformation campaigns of previous cycles.

Infrastructure disruption narratives. Cyberattacks on non-election infrastructure — power grids, communications networks, or financial systems — timed to coincide with the election, combined with disinformation attributing the disruptions to election manipulation. The goal would not be to change votes but to undermine confidence in the process.

The platform defense landscape is also shifting. The major social media companies have significantly reduced their content moderation and election integrity teams since 2022. Twitter/X has largely dismantled its election integrity infrastructure. Meta has scaled back political content recommendations. The fragmentation of the information ecosystem across multiple platforms — including Telegram, Truth Social, Rumble, and newer entrants — makes coordinated content moderation vastly more difficult than in previous cycles.

Foreign Interference: The Usual Suspects and New Entrants

The foreign interference threat to the 2028 election includes familiar actors pursuing familiar objectives through increasingly sophisticated means.

Russia will almost certainly conduct information operations aimed at deepening societal divisions, undermining confidence in democratic institutions, and supporting candidates perceived as favorable to Russian interests. The Internet Research Agency model — creating fake social media personas and amplifying divisive content — has been supplemented by more sophisticated techniques, including the compromise and weaponization of authentic media organizations, the use of unwitting Americans as amplifiers of Russian narratives, and the exploitation of AI tools to generate content at scale.

China has traditionally been more restrained in election interference than Russia, focusing on espionage and influence operations rather than active disruption. However, Chinese information operations targeting U.S. elections have increased in sophistication since 2020, and the intensification of U.S.-China strategic competition creates stronger incentives for Beijing to attempt to shape U.S. political outcomes. The Salt Typhoon compromise of U.S. telecommunications infrastructure provides China with surveillance capabilities that could inform and enhance election interference operations.

Iran has demonstrated willingness to conduct election interference, including the 2020 operation in which Iranian operatives sent threatening emails to Democratic voters while impersonating the Proud Boys. Iran’s growing AI capabilities and its motivations — shaped by sanctions pressure, regional conflict, and the desire to retaliate against U.S. policies — make it a credible threat actor for 2028.

The emergence of non-state actors with the resources and motivations to conduct election interference is an underappreciated development. Criminal organizations, ideological groups, and even wealthy individuals now have access to AI tools and information warfare techniques that were previously the exclusive domain of intelligence agencies.

Insider Threats and the Politicization of Election Administration

Perhaps the most insidious threat to 2028 election security comes not from foreign adversaries or criminal hackers but from insiders within the election administration system itself. The post-2020 period has seen unprecedented politicization of election administration, with partisan actors seeking positions as election officials, poll workers, and county commissioners specifically to gain access to election systems and processes.

The most dramatic example was the 2021 security breach in Mesa County, Colorado, where the county clerk — an election denier — copied hard drives containing voting system software and shared the images with unauthorized individuals. The breach compromised the security of the county’s voting equipment and resulted in criminal charges, but it also demonstrated that insiders with legitimate access to election systems can cause significant damage.

The insider threat is particularly difficult to mitigate because election administration is inherently a public-trust operation. Poll workers, election judges, county clerks, and secretaries of state must have access to sensitive systems and processes to do their jobs. Traditional cybersecurity controls — background checks, access restrictions, monitoring — can reduce but not eliminate the risk from insiders who are motivated by ideological conviction rather than financial gain.

Recommendations for 2028 Election Security

Defending the 2028 election requires action across multiple domains simultaneously. No single intervention will be sufficient.

Voter registration database security must be treated as critical infrastructure defense, with continuous monitoring, incident response planning, and redundancy mechanisms that ensure voters can cast ballots even if registration systems are compromised on Election Day.

Post-election audits using voter-verified paper records must be mandated and conducted competently in every jurisdiction. Risk-limiting audits represent the gold standard and should be adopted nationwide.

AI-generated disinformation preparedness must extend beyond platform content moderation to include public education, rapid-response debunking infrastructure, and legal frameworks that address the malicious use of synthetic media in election contexts without restricting legitimate speech.

The election administration workforce must be protected from intimidation, provided with cybersecurity training appropriate to the threat environment, and subject to security clearance and monitoring processes that account for the insider threat without discouraging civic participation.

Federal funding for election security, which has been inconsistent since the initial HAVA appropriations, must be sustained and expanded. The decentralized nature of U.S. election administration means that the weakest jurisdictions — typically small, rural counties with minimal budgets — define the overall security posture unless federal resources are available to bring all jurisdictions to a baseline standard.

The 2028 election will be a defining test of whether democratic institutions can maintain integrity in an environment of AI-powered disinformation, sophisticated foreign interference, domestic polarization, and an attack surface that encompasses everything from voting machines to the information environment itself. The preparations made — or not made — in the months ahead will determine whether that test is passed.
