A.I. TARGETS THE ELECTION

Voting is based on information, and where that information comes from matters enormously. Every campaign engages in targeted marketing, soliciting donations to fund itself and outpace its competitors. How facts are presented in that marketing, whether accurate, partially accurate, out of context, or outright fictitious, is key to winning the public's allegiance and votes.

In today's world, facts and information can be heavily manipulated no matter what medium they arrive in, from postcards, social media, television ads, emails, and texts to town hall meetings, televised debates, and political rallies. The onslaught of political rhetoric, misinformation, and disinformation aims to fuel voter engagement and garner votes. Unfortunately, it can also fuel verbal and physical violence.

There is enormous power in how information is obtained and distributed to the masses, even when that information is only partially correct or wholly false. Today, one of the leading fields shaping legislative policy is "information warfare" (IW): the management of information and communication technology to gain competitive advantage over others.

IW is advanced through well-developed propaganda with real facts carefully interspersed. The media distributes this heavily manipulated information from both sides of the congressional aisle. It is then up to voters to interpret what they receive and make informed decisions. But they have little ability to determine whether a piece of information, whether written, photographic, audio, or video, is real, doctored, or completely fake.

Media, especially social media, now has the upper hand in distributing information to countless voters. Artificial Intelligence (AI), as an advanced emerging technology, has changed the election playing field and will continue to do so in campaigns well into the future.

In addition to the use of AI in ongoing deceptive ad campaigns, malicious cyberattacks have targeted campaigns and voters, something that was already notable back in 2016. Since then, AI has become an increasingly effective disruptive element. A recent Pew Research Center study on the 2024 presidential election found the following:

• 39% of Americans say that AI will be used mostly for bad purposes during the presidential campaign, compared with only 5% who say it will be used mostly for good purposes. Another 27% say it will be used about equally for good and bad.
• 57% of US adults — including nearly identical shares of Republicans and Democrats — say they are extremely or very concerned that people or organizations seeking to influence the election will use AI to create and distribute fake or misleading information about the candidates and campaigns.

Based on this research, majorities in both parties continue to hold AI tech companies accountable. They believe these companies have a societal obligation to prevent their platforms from being misused to distribute AI-generated fake information.

The research also found that Democrats are more likely than Republicans to express this view. Democrats' views have changed little since 2020, while the share of Republicans who say AI tech companies bear this responsibility is higher than it was four years ago.

Let's take a look at the technology behind AI disinformation and deepfakes. It starts with the AI algorithms that enable machines to learn from the world. Generative AI, like ChatGPT, is built on complex neural networks that power these algorithms. Unfortunately, the programs cannot distinguish good human behavior from bad. With generative AI, candidates can pinpoint the issues, topics, brands, and influencers that sway and motivate voters and consumers.

AI can deliver messages in a voice and language that speaks to individuals as though it were a fellow voter. Political campaigns are getting smarter, using these techniques to convince voters that their candidate is the best.

Generative AI therefore poses the most significant and immediate threat to election platforms and systems. It excels at imitating authoritative sources, making it easier to deceive individuals and the general public by impersonating election officials or even forging official election documents.

The US has many adversaries who will do anything to affect the 2024 election, and they will continue to do so in future elections. They have been developing their own AI technologies to target US networks and election platforms. Going forward, adversaries could use generative AI to attack election platforms with far fewer resources than ever before, and that adversary can be an individual or a nation-state.

Since voters widely rely on and trust election officials and their websites to provide accurate information in the periods before and after Election Day, those officials and their websites become attractive targets for impersonation, spoofing, and hacking.

Generative AI makes it easier to simulate election websites by producing HTML code, stock images, extremely realistic portrait photos, and website text. It also allows an antagonist to create convincing fake audio and video content with only a few clicks.
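One common spoofing tactic is registering a lookalike domain that differs from an official election site by only a character or two. As a rough illustration (the domain names and distance threshold below are invented, not drawn from any real system), a simple edit-distance check can flag near-matches of known official domains:

```python
# Illustrative sketch: flag lookalike ("spoofed") domains by measuring
# edit distance against a list of known official election domains.
# All domain names here are hypothetical examples.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def looks_spoofed(domain: str, official: list[str], max_dist: int = 2) -> bool:
    """A domain close to, but not equal to, an official one is suspicious."""
    return any(0 < edit_distance(domain, real) <= max_dist for real in official)

official_sites = ["votecounty.gov", "elections.state.gov"]
print(looks_spoofed("votec0unty.gov", official_sites))  # True: one-char substitution
print(looks_spoofed("votecounty.gov", official_sites))  # False: exact match
```

Real defenses are far more involved (homoglyph detection, certificate checks, registrar monitoring), but the sketch shows why near-identical domains are the low-hanging fruit for impersonation.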

In performing their duties, election officials make public appearances and give interviews, creating exactly the source material needed to generate fake content. This amounts to a virtually unlimited cache from which deepfakes of public officials can be generated.

Unfortunately, the threat of impersonation extends to fake social media accounts as well. After X (formerly Twitter) changed its verification rules, a wave of imposter accounts posing as federal and local agencies flooded the platform.

Starting in 2022, X offered government accounts, commercial companies, business partners, major media outlets, publishers, and some public officials an "official" gray checkmark indicating that their identity has been verified. To date, the accounts of many of the country's most populous localities still cannot be verified.

Tech in politics is here to stay. But more needs to be done about vetting and accountability so that voters can vote with confidence knowing that the information they’re receiving is legitimate.