- The growing role of AI in politics
- Tsunami of disinformation: the dark side of AI
- Regulatory frameworks struggle to keep up with AI’s rapid evolution
Technology has played an increasingly prominent role in political campaigning over the years. Starting with the advent of radio and television broadcasts, then advancing to the internet and social media, it has continually reshaped the way candidates communicate with voters, how campaigns manage and analyse data, and even how electoral strategies are formulated. Today, we stand on the threshold of another monumental shift, as the political arena prepares to welcome a powerful new force: artificial intelligence (AI). This technology promises to revolutionise political campaigning, offering a new level of precision and efficiency in campaign management, voter targeting, and policy development. However, as AI permeates the political world, it also brings with it a host of ethical, legal, and societal challenges. As we embrace the power of machine intelligence in politics, it is critical to consider the risks it may pose to our democratic processes. In this article, we delve into the real-world applications of AI in the political sphere, shedding light on both its potential benefits and the challenges that come with this new digital era.
“It is vitally important for every voter to be able to have an answer to their question and not simply rely on a moderator to hopefully ask it for them”.
Governor Asa Hutchinson, Republican candidate for President
The growing role of AI in politics
During his run for the Republican presidential nomination earlier this year, Miami Mayor Francis Suarez launched an AI-powered chatbot designed to answer voters’ questions about the Mayor’s agenda. Built using VideoAsk, an interactive video platform developed by the software company Typeform, the chatbot analyses questions submitted in spoken language and instantly delivers the most appropriate response. Rather than answering a voter’s question directly, however, it points them to whichever of a set of pre-made videos, each featuring an AI-generated avatar of Suarez delivering a speech, most closely matches the question posed. Unsurprisingly, this approach produced mixed results: some answers proved fairly on point, while others fell wide of the mark. In some cases, the chatbot was unable to answer the question at all, delivering an ‘error’ message instead.
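The campaign hasn’t disclosed how the matching actually works, but a chatbot of this kind can be thought of as nearest-neighbour retrieval over the transcripts of the pre-made videos. The sketch below is purely illustrative: the video transcripts are invented, and a simple word-overlap (Jaccard) score stands in for whatever speech recognition and semantic model VideoAsk really uses.

```python
# Hypothetical sketch: route a voter's question to the closest pre-made video.
# A real system would use speech-to-text plus semantic embeddings; here a
# simple word-overlap (Jaccard) score stands in for the similarity model.

def tokenize(text):
    """Lowercase the text and split it into a set of punctuation-stripped words."""
    return {w.strip(".,?!").lower() for w in text.split()}

# Invented transcripts standing in for the campaign's pre-made videos.
VIDEO_TRANSCRIPTS = {
    "housing": "We will make housing affordable by cutting permit red tape.",
    "climate": "Miami must lead on climate resilience and rising sea levels.",
    "economy": "Lower taxes and new tech jobs will grow our economy.",
}

def best_video(question, threshold=0.1):
    """Return the key of the closest-matching video, or None (the 'error' case)."""
    q = tokenize(question)

    def score(transcript):
        t = tokenize(transcript)
        return len(q & t) / len(q | t)  # Jaccard similarity in [0, 1]

    key, transcript = max(VIDEO_TRANSCRIPTS.items(), key=lambda kv: score(kv[1]))
    # Below the threshold, no video is a credible match -> show an error instead.
    return key if score(transcript) >= threshold else None

print(best_video("What is your plan for affordable housing?"))  # housing
print(best_video("Who won the match last night?"))  # None -> 'error' message
```

The threshold is what produces the behaviour described above: a strong overlap routes the voter to a video, while an off-topic question falls below it and triggers the ‘error’ response.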
Soon after, another Republican candidate for President took a similar approach. After failing to meet the requirements for participation in the second Republican presidential debate, Governor Asa Hutchinson decided to launch an AI chatbot of his own, one that lets voters get answers to a wide range of policy-related questions and offers insight into the Governor’s stance on those issues. “It is vitally important for every voter to be able to have an answer to their question and not simply rely on a moderator to hopefully ask it for them. That is why I am excited to unveil our Ask Asa platform this afternoon. This tool will allow voters to ask questions on their most important issues and get a response back from me”, said Hutchinson. While it hasn’t been disclosed which AI model the chatbot is based on, its training dataset included Hutchinson’s past remarks, interviews, and speeches, allowing it to provide fairly accurate responses to voters’ questions.
The AI startup Delphi then took this concept one step further by releasing Chat2024, a new platform that will enable voters to interact with AI-powered avatars of 17 leading presidential candidates, including Joe Biden, Donald Trump, Cornel West, Robert F. Kennedy Jr., and Vivek Ramaswamy. Voters will be able to either chat with an avatar one-on-one or ask all of them to reply to the same question at once. They will also have the option to pick two avatars and pit them against one another in a debate. To train the avatars, the Delphi team first had to manually compile information pertaining to each candidate, including videos of their public appearances, media reports, and their own writings. Whenever a voter asks a question, an avatar uses AI to analyse it, determine the voter’s intent, provide context, and offer a response. Notably, the avatars don’t rely on a single AI model to generate their output but on several, including models developed by OpenAI, Anthropic, and Hugging Face. Responses are delivered as text and audio, offering a fairly accurate impression of the avatars’ real-life counterparts.
Of course, American politicians are not the only ones to employ artificial intelligence in their campaigns. In Romania, Prime Minister Nicolae Ciuca recently announced the addition of a new honorary advisor to his team: an AI assistant called Ion. The new assistant’s role will be to “capture opinions in society” by gathering information from social media. This information will then be used to provide the government with valuable insights into what the Romanian public wants. The government is encouraging the public to engage with Ion and share their views on a wide range of topics, including government activity, the standard of living, healthcare, sports and entertainment, and other matters of national importance. In addition to interacting with Ion via the website and social media, citizens will also be able to do so at public events, where the assistant will appear in the shape of a full-length mirror. However, unlike the other AIs on this list, Ion is not a chatbot, so it won’t talk back. Instead, it will collect contributions from the public, analyse them, and compile a report that gives government officials a better idea of the public’s needs and satisfaction levels.
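Romania hasn’t published Ion’s internals, but the collect-analyse-report loop described above can be sketched in a few lines. Everything below is invented for illustration: the topic keywords, the example contributions, and the keyword-matching classifier all stand in for whatever language models the real system uses.

```python
# Hypothetical sketch of an Ion-style pipeline: gather public contributions,
# bucket them by topic, and compile a summary report for officials.
# Topic keywords and example messages are invented for illustration.
from collections import Counter

TOPIC_KEYWORDS = {
    "healthcare": {"hospital", "doctor", "clinic"},
    "living standards": {"prices", "wages", "rent"},
    "sports": {"football", "stadium", "team"},
}

def classify(message):
    """Assign a contribution to the first topic whose keywords it mentions."""
    words = set(message.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return topic
    return "other"

def compile_report(messages):
    """Tally contributions per topic, most-discussed topics first."""
    counts = Counter(classify(m) for m in messages)
    return counts.most_common()

contributions = [
    "The hospital in my town needs more staff",
    "Rent and prices keep going up",
    "Our football team deserves a better stadium",
    "Wages have not kept pace with prices",
]
print(compile_report(contributions))
```

The report is just a ranked tally, which matches Ion’s stated role: it doesn’t answer anyone; it aggregates what people say so officials can see which topics dominate.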
“We enter this world where anything can be fake — any image, any audio, any video, any piece of text. Nothing has to be real”.
Hany Farid, digital forensic expert and professor at the University of California, Berkeley
Tsunami of disinformation: the dark side of AI
As the use of artificial intelligence in political campaigns continues to grow, a number of experts have voiced their concerns that the technology could potentially be used to spread disinformation and discredit opposing candidates by generating false images, videos, and audio. Some even believe that there is a real risk that this fake information could influence the outcome of future elections. “Disinformation is going to be a huge problem in the campaign, and AI brings very powerful tools down to the level of almost anyone”, says Darrell West, a senior fellow at the Brookings Institution Center for Technology Innovation. “There’s a risk we’re going to see a tsunami of disinformation in the campaign, and it’s going to be hard for people to distinguish the real from the fake”. While politicians have indeed employed similar tactics in the past, the advent of generative AI has made it easier than ever to produce believable fake content. In fact, it’s not some distant reality we are talking about here; this is happening as we speak.
Following President Joe Biden’s announcement that he would seek reelection in 2024, the Republican National Committee released a 30-second, AI-generated video that shows a dystopian United States, with migrants flooding the country’s borders and armed soldiers patrolling the streets of cities under lockdown, suggesting that this is the future that will come to pass should Biden win a second term. Soon after that, Governor Ron DeSantis’ campaign published AI-generated images showing former President Donald Trump embracing and kissing infectious disease expert Dr Anthony Fauci. Over in the UK, an audio clip was released in which opposition leader Keir Starmer appeared to use abusive language with his staffers. While the audio was soon proven to be fake, it was still heard by more than 1.5 million people, with no way to tell how many actually believed it. Similar incidents were reported in election campaigns in Poland and Slovakia as well. “I think we are at the tip of the iceberg, which in many ways is frightening and fascinating and dangerous all at once”, says Isaac Goldberg, a Democratic political consultant and campaign strategist. “I think whatever we thought of as fake news or whatever we thought our definition of fake news or fake content was a couple of years ago will be laughable when we look back at the 2024 elections”.
One of the biggest problems associated with the use of AI-generated content in political campaigns is that it promotes a concept known as the liar’s dividend, which refers to the increasing ability of public figures to cast doubt on real events by simply claiming they were faked. “We enter this world where anything can be fake — any image, any audio, any video, any piece of text. Nothing has to be real”, says Hany Farid, a digital forensic expert and professor at the University of California, Berkeley. As generative AI continues to develop and becomes capable of producing ever more realistic content, we may no longer be able to believe even our own eyes and ears. In fact, this may happen sooner than anyone anticipated. “I think we’re in potentially the last days of where we have any confidence in the veracity of what we see digitally”, warns Russell Wald, the policy director at Stanford University’s Institute for Human-Centered AI. Such concerns are not unfounded: the European Union’s cybersecurity agency ENISA recently published its annual Threat Landscape report, which warns that AI chatbots and deepfake images and videos pose a serious risk to the integrity of next year’s European Parliament election.
Regulatory frameworks struggle to keep up with AI’s rapid evolution
This issue is further compounded by the fact that lawmakers are still struggling to keep up with the breakneck pace of AI development. In the US, for example, there are currently no federal rules preventing the use of AI-generated content in political campaigns. While the Federal Election Commission does have certain rules for TV and radio advertising, these don’t apply to online content. “Why can’t accurate campaign advertising follow our government’s own Federal Trade Commission regulations that don’t allow corporations to issue false, deceptive or misleading ads when it comes to commercial products and services?”, asks Wendy Melillo Farrill, a historian of advertising and professor at American University’s School of Communication. “Why don’t we have that same type of federal rulemaking apply to campaign ads?”
The situation is similar in the UK, where no existing regulator has the power to prevent electoral disinformation. The country does have an Electoral Commission, but its sole concern is campaign finance, not the content of ads. The Digital Regulation Cooperation Forum (DRCF), which could conceivably tackle AI-generated disinformation, has no statutory powers, while the Advertising Standards Authority (ASA) has no jurisdiction over campaign ads.
However, as awareness of the dangers of AI grows, this may be about to change. President Biden recently signed an executive order that aims to address safety and security concerns surrounding AI technology, while a bipartisan group of senators introduced new legislation to address transparency and accountability for AI systems. Senate Democrats also introduced a bill that would require parties to add a disclaimer to any political ads created with the help of AI. And the UK recently introduced the Online Safety Bill, which empowers Ofcom, the country’s communications regulator, to ensure that online platforms protect their users from harmful content. Critics have pointed out, however, that the bill only enables Ofcom to check whether platforms are following their own policies on disinformation; it cannot actually force them to take action against such content.
Closing thoughts
While AI promises to bring unprecedented levels of efficiency and precision to campaign management and voter engagement, it also comes with significant ethical, legal, and societal implications, primarily centred around the propagation of AI-generated disinformation and the manipulation of public opinion. With the current regulatory framework struggling to keep pace with the rapid evolution of AI, we are left with a dangerous vacuum that could be exploited to undermine democratic processes. Could we be ushering in an era where our very perception of truth is at stake, where distinguishing the real from the fake becomes an insurmountable task? As we embrace these advanced technologies, the question we must ask ourselves is: how can we ensure that AI serves to enhance, rather than undermine, the integrity of our democratic processes?