2024 is world’s biggest election year ever – and AI experts say we’re not prepared

The world is unprepared for the impact of AI on a historic year of elections, experts have warned.

More than two billion people across 50 countries could head to the polls in 2024 – a record high.

The UK is set to hold a general election, the US has a presidential election in November, and the world’s most populous country, India, will hold a general election that decides its next prime minister.

It will be the first time some of the world’s biggest democracies have held a national vote since generative AI tools, including ChatGPT and image creators like Midjourney, went mainstream.

Martina Larkin, chief executive of Project Liberty, a non-profit seeking to promote internet safety, warned politicians were “at the top of the pyramid” when it comes to AI-driven misinformation.

Deepfakes, where high-profile figures are digitally cloned in realistic videos, are of particular concern.

US President Joe Biden and Ukraine’s Volodymyr Zelenskyy have repeatedly fallen victim to such clips, while Labour leader Sir Keir Starmer’s voice was cloned for nefarious purposes.

Ms Larkin said such misinformation could spread “at a much bigger scale” in the run-up to 2024’s elections.

Video: Deepfake audio of Starmer released

UK government taking threat ‘very seriously’

Governments are considering how to regulate the technology, but some are moving faster than others.

Mr Biden unveiled proposals in October, which included mandating that AI-generated content be watermarked.

The EU has reached a deal on how to regulate AI, though it won’t take effect until 2025 at the earliest. The bloc holds parliamentary elections next year.

In the UK, the government has been cautious about the need for regulation, fearing it would stifle innovation.

Fact-checkers have called on the government to boost public awareness of the dangers of AI fakes, so that people can recognise fake images and question what they see online.

A government spokesperson said it took the threat of digitally manipulated content “very seriously”.

“We are working extensively across government to ensure we are ready to rapidly respond to any threats to our democratic processes, through our Defending Democracy Taskforce and dedicated government teams,” they said.

“Our Online Safety Act goes further by putting new requirements on social platforms to swiftly remove illegal misinformation and disinformation – including where it is AI-generated – as soon as they become aware of it.”

Under the act, media regulator Ofcom is tasked with helping to raise public awareness of misinformation online.

Social media companies are also legally required to take action against misinformation and disinformation where it amounts to a criminal offence, or risk a fine.

And the Elections Act requires anyone running political advertising, including AI-generated material, to include an imprint with their name and address.

Shivajee Samdarshi, chief product officer at cybersecurity firm Venafi, said regulation could only go so far without an agreed international approach.

“Think about bad actors in Russia or China – they don’t care about these guidelines anyway,” he said.

He warned AI-generated content was “completely knocking the foundation of trust” – and could have an even more significant impact on elections than social media.

Companies like Meta faced criticism for not doing enough to combat fake news during the 2016 US election and Brexit referendum, and, like governments, are under pressure to introduce guardrails.

Video: ‘AI will threaten our democracy’

How to protect yourself from AI fakes

Kunal Anand, who used to run security for the once-dominant social media site MySpace, said a combination of generative AI, bots, and social media could “accelerate false narratives” like never before.

Now at cybersecurity firm Imperva, he said platforms have a “responsibility” to take down fake content – but urged voters to prepare themselves too.

“People need to verify what they see, more than ever,” he said.

“It’s not easy to detect deepfakes. But if something looks questionable, verify it.

“Be aware of confirmation bias and diversify your news sources.

“And go and play with these generative AI tools, not just for writing content but with image and video generation.

“It will give you a sense of what these tools are and how they work.”