Creativity, confusion and controversy have defined the introductory stages of artificial intelligence integration into our society. When it comes to political campaigns and the upcoming 2024 election, this combination is changing the way politicians sway public opinion.
In June 2023, presidential candidate and Florida governor Ron DeSantis’ campaign used AI to generate images of his opponent, former president Donald Trump, with Anthony Fauci, a premier target of the Republican party base for his response to the COVID-19 pandemic.
The video, posted on X, displayed a collection of images of Trump and Fauci together. Some were real photographs, but three were AI-generated images of the two embracing.
Lawmakers fear the use of deceiving AI images could potentially cause some voters to steer away from candidates in 2024.
“There are two ways politicians are using it,” said Dr. Yelena Yesha, UM professor and Knight Foundation Endowed Chair of Data Science and AI. “One is biasness, trying to skew information and change the sentiments of populations, and the other is the opposite effect, using blockchain technology that will control misinformation.”
Conversations about regulating the dangers of AI have already begun circulating on Capitol Hill, starting with a U.S. Senate hearing on May 16, 2023. The hearing included Sam Altman, CEO of OpenAI, who expressed concern about the potential manipulation of his company’s technology to target voters.
The most notable OpenAI technology is ChatGPT, which saw the fastest user adoption in internet history, surpassing applications like TikTok and Instagram within its first two months.
The platform initially banned political campaigns from using the chatbot, but its enforcement of the ban has since been limited.
An analysis by The Washington Post found that ChatGPT’s campaign restrictions can be bypassed when the chatbot is prompted to create a persuasive message targeting a specific voter demographic.
“AI will certainly be used to generate campaign content,” said UM professor of political science Casey Klofstad. “Some will use it to create ‘deepfakes’ to support false narratives. Whether this misinformation will influence voters is an open question.”
Deepfakes, AI-generated or AI-altered photos and videos, have reached the political mainstream. Following President Biden’s re-election announcement last April, the Republican National Committee (RNC) released a fully AI-generated ad depicting a fictional, dystopian society should Biden be re-elected in 2024.
Congress has furthered its efforts to establish boundaries for AI, with Senate Majority Leader Chuck Schumer (D-NY) recently leading a closed-door meeting on Sept. 13 with high-profile tech leaders, including Elon Musk and Mark Zuckerberg.
The goal of the meeting was to gather information on how prominent big tech platforms could support oversight of AI use. Senate sessions on the matter will continue throughout the fall, with Schumer hopeful for bipartisan legislation.
“I would be reluctant to see the government take a heavy hand in regulating AI, but policy could be tailored more narrowly to incentivize AI developers to inform consumers about the source and validity of AI-generated content,” Klofstad said.
The extent to which the federal government can have major influence over regulating AI is unclear as artificial intelligence continues to develop.
“It should be regulated, but not to the point where progress can be slowed down by regulatory processes,” Yesha said. “If you have too much regulation, it may at a certain point decelerate science and the adoption of innovation.”
A significant driver of AI regulation efforts is the anticipation of foreign influence in U.S. elections. Russian-led misinformation campaigns played a part in the 2016 election, and elected officials expect foreign meddling to advance in tandem with AI’s capabilities.
“At a certain point, as AI becomes more developed, if it falls in the wrong hands of totalitarian regimes or autocratic governments, it can have a negative effect on our homeland,” Yesha said.
However, AI’s applications do provide numerous benefits for political campaigns.
A prominent benefit of AI in the political arena is its messaging capability. With a chatbot’s ability to instantly generate personalized messages when fed consumer data, essentially taking over the work of lower-level campaign staff, campaigns can vastly expand their donor outreach.
“Campaigns have always adapted to new modes of communication, from the printing press, to electronic mailing lists, to websites, text messaging and social media,” Klofstad said. “I expect AI will not be different in this regard.”