2024 is the year of global elections, with around half of the world’s population choosing new leaders for the next few years. It is also the first time that advanced generative AI tools are widely available for us mere mortals to use during campaigns. This could obviously be very bad for democracy, hurting it even more than all the fake news already in circulation. Since there is not much we can do about this new trend in politics, we can at least reflect on it.
Most of us have seen the funny videos of Biden and Trump playing Minecraft or Fortnite together. Paying close attention and exercising some critical thinking, I came to the conclusion that these are AI-generated. But entertainment is not the only use for these digital impersonations.
In January 2024, an audio message from Joe Biden was sent to New Hampshire residents, urging them not to vote in the state’s Democratic primary. The problem was that this audio did not come from Biden’s vocal cords, nor was it sent by his campaign team. It is considered the first large-scale use of AI-generated voice in American politics.
While this specific case did not cause too much damage, because it was promptly spotted by voice-fraud experts, it opens some worrisome perspectives. These voice generators can mimic anyone’s voice with only a one-minute sample, which means the trick can be pulled off at a local scale, mimicking a local candidate for instance, or deployed against entire demographics at once, simply drowning the experts in more work than they could ever undo.
Considering social media platforms’ reluctance and inability to moderate the fake news that already circulates, as well as their difficulties in moderating non-English content, controlling these fake voices will be a challenge. And even if we find a way to systematically flag them, these AI-generated audio clips will still spread, whether through private groups or simply because platforms will not delete them.
Indeed, as of today the main social media platforms do not consider the fact that audio is AI-generated a valid reason for deletion; they simply warn users whenever content looks like it might be AI-generated. In the era of digital social bubbles, where trust in these platforms is low for a variety of reasons, this may further reduce our hope of telling truth from fabrication.
Biden’s stance on Gaza offers an interesting opportunity to reflect on the potential uses of AI in a major election. For more than six months, the American president has been unconditionally supportive of the Israeli invasion. This position is increasingly risky for him, as his Democratic electorate is torn on the question. Keeping this stance could cost him his re-election, as people who voted for him in key swing states in 2020 might abstain because of it.
Currently, his administration is trying to portray him as more “neutral” by publicizing his feeble calls for a temporary ceasefire. While this strategy is not working, as shown by the record protest vote in the Minnesota primary (around 20% of voters chose “uncommitted” over Biden’s stance), AI-generated audio messages could change this.
In a hypothetical scenario, Biden’s team could use these tools to spread fake messages in which the president expresses stronger support for Gaza, or a condemnation of the Israeli invasion. Since social media is organized into independent and easily identifiable bubbles, these messages could reach exactly the right audience without ever reaching others, earning the candidate support from both groups.
Obviously, this immoral strategy would unleash a well-deserved storm of criticism on him. But as we have seen, creating these fake messages is child’s play, and there would be no way to trace them back to anyone responsible.
It may sound far-fetched put like that, and Biden’s team will surely never go down this road, but who knows. After all, something similar has happened before: in 2016, Trump’s campaign used the services of a firm called Cambridge Analytica to target very specific groups of people on very specific issues, sometimes sending them blatant lies. Maybe this is the future of politics, and 2024 will mark the first mass-scale use of AI-generated messages to target specific social groups. Maybe this will change how we do politics forever, or maybe it is just the logical next step in fake news, a much older problem.
In all this uncertainty, three things stand out. The first is that experts on AI got it wrong: they had been warning us for years about the risk of fake videos and photos, but the biggest problem, in politics at least, now comes from fake voices. The second is that fake voices are already here: in the Slovakian parliamentary elections of 2023, fake recordings of a candidate allegedly planning to buy votes were released days before the vote. That candidate went on to lose. The two events could be linked, to an extent we cannot know. And the third is that no one is prepared: social media platforms have no policy against fake voices and cannot moderate them efficiently, and the laws regulating these practices around the world are years behind.
So where does this leave us? As we experience reality with our five senses, are we in a “crisis of reality”, as some might say? Or is it simply the same old fake news we have always seen in elections, just a bit more technology-intensive this time? Who knows, really; only time will tell.
– Alexandre
Image from Unsplash
Find us on Instagram @basis.baismag