What Taylor Swift taught the world about the risks of AI-generated images in elections

Deepfakes can inflict a pernicious cost on electoral integrity, trust in institutions and democratic processes.

US singer Taylor Swift's Instagram post endorsing US Vice President and Democratic presidential candidate Kamala Harris (Pedro Ugarte/AFP via Getty Images)

Recent news of pop star Taylor Swift’s endorsement of US presidential candidate Kamala Harris stirred up the internet – the Instagram post in which she made the announcement garnered more than eight million likes.

While Swift’s post mobilised her fanbase in support of Harris, more importantly it highlighted to the public the dangers of generative AI, in particular manipulated images. In fact, Swift said her decision was motivated by Donald Trump’s AI-generated post in August on Truth Social falsely claiming that she had endorsed his presidential run.

Trump’s Truth Social post appeared to show Swift and her fans endorsing him, captioned with the words “I accept”. However, the images carried visible signs of AI generation – tell-tale details of fabrication such as glossy faces, inconsistent skin tones and frozen smiles – that allowed viewers to identify them as fake.

The episode underscored growing concerns over the deployment of AI-generated deepfakes in election campaigning and points to the potential for AI-fuelled disputes in elections in the years ahead.

Deepfakes are multimedia – images, video or audio, or a combination of the three – that have been synthetically created or manipulated. They have been deployed for myriad purposes, such as entertainment, education and public health awareness.

However, deepfakes are increasingly deployed during elections, including for campaigning and promotion, further blurring the line between reality and fiction. In January this year, an AI-generated deepfake of the late Indonesian President Suharto sparked debate about the use of AI in political influence campaigns.

Deepfakes have also been used maliciously to deter voter turnout and undermine electoral integrity. Also in January, voters in the US state of New Hampshire received an AI-generated robocall mimicking President Joe Biden’s voice, urging them to stay away from the polls during the primary.

In September last year, audio deepfake recordings allegedly featuring a conversation between a leading politician and a journalist went viral just days before Slovakia’s parliamentary elections, and may have affected the result. More recently, misleading deepfake videos of Bollywood celebrities were deployed during the Indian election campaign, which culminated in June.

The risks associated with deepfakes and other AI-generated content have raised public awareness of their capacity to shape opinion with false narratives and turbo-charge disinformation to sow discord during elections. However, the growing willingness to use AI-generated deepfakes in political campaigning looks set to mire the online information environment in disputes over the limits of AI-generated content.

The unchecked proliferation of malicious deepfakes can breed general scepticism towards all media seen online, sowing doubt even about content that is genuine and well documented as such. Increased distrust can lead people to dismiss authentic images, audio or video as fake. And where content cannot be conclusively verified, the mere allegation that it is fake can have dire consequences: misinformation about an alleged deepfake, for instance, contributed to an attempted coup in Gabon.

Countries globally are stepping up to try to mitigate the consequences of deepfakes for elections.

In the United States, while there are at present no federal rules regulating how political campaigns can deploy AI, some states have implemented legislation banning the use of deepfakes in state elections, for instance by prohibiting robocalls and manipulated advertisements during campaigning. The Federal Election Commission has said it would not vote on a proposed rule about deepfakes and AI ahead of this year’s US presidential election.

In Asia, countries have also moved to tackle the risks of deepfakes in elections by updating legislation. South Korea’s revision to the Public Official Election Act bans the use of AI-generated deepfakes in campaigning in the 90 days before an election. Singapore has proposed a law banning deepfakes and other digitally manipulated content depicting candidates during elections – the Elections (Integrity of Online Advertising) (Amendment) Bill, if passed, would prohibit digitally generated or manipulated content that realistically depicts a candidate saying or doing something they did not say or do.

Rapid advancements in generative AI, such as OpenAI’s “Strawberry” model, mean that the potential influence of deepfakes and other AI-generated content on public perceptions and opinions during election periods has yet to be fully understood.

As countries look towards enhancing or creating legislation to combat the malicious and harmful use of deepfakes in elections, the public too has a part to play. While we may not have the star power of Taylor Swift, we can each do our part by verifying content – checking its sources, intentions and context – to build up our information and media literacy.
