
AI is manipulating and deceiving voters in elections. Do citizens have a duty to be skeptical?


Whether we call it April Fools' Day or just April 1, there is a longstanding tradition in Europe and North America of deceiving our fellow human beings on that date. On April 1 they are tricked into believing false, occasionally funny, but always made-up stories. It is the only day of the year on which we are allowed to pull pranks at the expense of others without being punished for it. It may also be the only day of the year on which internet users wonder whether the social media thread they happen to be reading is genuine or phony.
 

The year 2024 is special, not merely because it is a leap year or because April 1 coincided with Easter Monday, but above all because so many elections will occur all over the world this year. In fact, the Economist Intelligence Unit (EIU), in its study Democracy Index 2023, speaks of an »election bonanza.« Voters will go to the polls in 76 countries (including all 27 EU member states). Most elections will take place in Europe and Africa, but presidential elections with implications for international politics will also be held (or have been already) in Russia and the United States. For all those reasons, 2024 will be the biggest election year since the introduction of universal suffrage. However, not all of these elections will qualify as free and fair, nor do all the countries holding elections count as democracies.
 

That is because even though nearly half of the world's population (45.4 %) lives in a democracy of some kind, only 7.8 % live in regimes that count as »full« democracies. Thirty-nine percent of voters live under authoritarian rule. And, according to the EIU, that trend is intensifying. On the one hand, the growing number of wars and international conflicts may be responsible for this drift toward authoritarian rule. On the other hand, the EIU was able to show that it was far easier for governments to curtail freedoms during the coronavirus pandemic than to restore them after it ended.
 

The market in disinformation is booming
 

Furthermore, the NGO Freedom House, in a study entitled Freedom on the Net 2023, determined that moments of political crisis and election periods were the true flashpoints for AI-generated material. During the period covered by the study (June 2022 to May 2023), AI tools capable of generating images, text, audio, or video were used to falsify or distort politically or socially relevant information. In effect, the barriers to entry into the disinformation market were lowered for both state and non-state actors. With the help of deepfakes, ruling parties and their rivals can discredit their respective opponents, sow doubt, provoke anxiety, and either mobilize or sabotage supporters.


Trust in democratic processes is undermined.
 

Good deepfakes can undermine the public’s trust in democratic processes, motivate activists and journalists to practice self-censorship, and disrupt independent reporting. AI-generated images or videos can harden political polarization and exacerbate existing social tensions, as Freedom House warned. The example of the government of Venezuela shows that it is not only criminals, but ruling elites as well, who make use of technology to disseminate disinformation. There, the regime used AI to construct an entire video news channel.
 

The Bertelsmann Foundation also conducted a study, entitled (in English) The Insecure Public. Its research revealed that 86 % of Germans are worried about the possibility that disinformation could affect the outcome of an election, and 84 % said that disinformation on the internet is a threat to democracy and social cohesion. Yet it is striking that a fact check published by the Second German TV Network (ZDF) on April 2 showed that while more than two-thirds of respondents saw a »great or very great danger that other people« would be influenced by disinformation, only a small proportion thought they themselves would be affected by it: »Whereas 70 % of respondents in Germany thought that disinformation represented a problem for other people, only 16 % saw a risk to themselves.« However, the Foundation also pointed out that U.S. citizens differed. They were supposedly more »self-reflective,« since »39 % of them worried that they themselves might be fooled by disinformation.«

But the Bertelsmann Foundation and the German government do agree on one thing: something has to be done about disinformation. Daniela Schwarzer, the Foundation's chair, said: »This development poses a special challenge as we look ahead to the current super election year. It must be addressed in order to protect our liberal democracy.« Within the framework of the Weimar Triangle, an alliance of Germany, France, and Poland designed to confront problems at the European level, the German Federal Government wants to set up an »early warning and reaction system« to monitor both the manipulation of information and malign influences from abroad. T-online quoted the Foreign Office as urging that »in times when foreign actors are trying to undermine the fundamental values of Europe's liberal democracy, it is important for Europe to stick together.«
 

Measures to protect democracy
 

In addition, Germany's Minister of the Interior, Nancy Faeser, has been preoccupied with this issue ever since she submitted a package of measures to protect democracy jointly with the Federal Office for the Protection of the Constitution and the Federal Criminal Police Office. Those measures include a counteroffensive against both the spread of disinformation and foreign attempts to influence opinion. According to the Ministry of the Interior, autocratically ruled countries like to set up fake accounts to artificially broaden their range of influence. In addition, they sometimes invent stories based on AI-generated images: »They use coordinated campaigns of influence to try to manipulate the free formation of opinion.« The Constitutional Protection Office is charged with trying to counteract those efforts by ensuring that certain »patterns of thought and speech do not become entrenched in our language.« Such patterns may become relevant to constitutional protection when they have the effect of delegitimizing the state, i.e., »when skepticism about the constitutional state turns into hostility toward it.« Furthermore, a »new early recognition unit of the Federal Government« will be set up within the Interior Ministry for the purpose of identifying foreign campaigns of manipulation and influence.
 

Ever since the creation of that unit was announced, the Ministry has faced a hailstorm of criticism. Critics question whether the Constitutional Protection Office really is the sovereign authority when it comes to determining whether or when a statement delegitimizes the state in the sense of constitutional protection law. In addition, the minister herself has come under fire for allegedly wanting to curtail freedom of speech. Responding to such criticisms on X, the minister denied the charge: »The effort to discredit the struggle against far-right extremism by branding it as an infringement on free speech is a distortion of the facts…«
 

When does a statement count as delegitimizing the state within the meaning of constitutional protection law?
 

Non-state actors have also come to the government's rescue. At the Munich Security Conference, tech firms such as Microsoft, Meta, OpenAI, and X signed the so-called Tech Accord to Combat Deceptive Use of AI in 2024, announcing that they would move more effectively against the use of AI to manipulate and deceive people around election time.

Those tech firms have set themselves some key goals. As Microsoft put it in a communiqué, they will try to combat video, audio, and images that simulate or modify the appearance, voice, or actions of political candidates, election officials, and other interest groups. That is a clear allusion to the bogus robocalls made in New Hampshire in which Joe Biden's simulated voice called on voters not to cast a ballot. Presumably, those calls were generated by AI software. The alliance of tech firms concluded that people should be able to exercise their right to vote free from new kinds of AI-based manipulation.
 

As Michiko Kakutani commented in her Der Tod der Wahrheit (The Death of Truth), »false reports are of course nothing new.« She reminded readers that »sensationalist reporting« stoked public support for the Spanish-American War and recalled that Julius Caesar sold the conquest of Gaul as a »preventive measure.« Victor Klemperer recorded in his diary a speech Hitler made to the Reichstag in 1942, in which he claimed that »Napoleon fought in Russia when it was 25 below, whereas he, General Hitler, was fighting when it was 45 below, once even when it was 52 below.«
 

Morally blind and deaf
 

In his book Die Angst vor den anderen (Fear of the Others), Zygmunt Bauman wrote: »It is self-evident that the internet isn't the cause of the growing number of morally blind and deaf internauts, but it facilitates and promotes their increase considerably.« The international NGO Freedom House takes a similar view, pointing to the potential for censorship and the curtailment of freedoms on the internet that the use of AI has opened up. AI, the NGO claims, acts as a »multiplier« of repression, censorship, surveillance, and the dissemination of disinformation. Aided by AI, governments can carry out more targeted and detailed surveillance and censorship, and do so more effectively.
 

A labeling obligation for AI content has gotten a warm reception in political circles
 

As far as the causes are concerned, political elites and media outlets have lost no time in pinning the blame either on the algorithms themselves or on »the Russians.« It is the algorithms, they argue, that decide which messages get displayed on Facebook or X and that enclose us in information bubbles, in which only items resembling what we have clicked on before are shown. »The websites of social media [deliver] us information (…) that usually confirms our view of the world,« Kakutani wrote. Among other things, this would explain why Hillary Clinton's campaign staff and large segments of the press »were so shocked by Trump's victory in the election.« That is why the tech firms' proposal to introduce a labeling obligation for AI content has gotten such a warm reception in political circles. What is lost sight of here is that the powerful interest groups behind AI decide what the technology does or does not do. Besides, social problems cannot be solved by technology. As Bauman concludes: »When the advantages of the new instruments are accepted so eagerly and used with such enthusiasm, there must have been a previously unmet need that has been seeking the means to its fulfillment.«
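To make the filter-bubble mechanism described above concrete, here is a minimal sketch in Python of an engagement-driven feed ranker. The data, topics, and scoring rule are purely illustrative assumptions, not any platform's actual algorithm; real systems are vastly more complex, but the self-reinforcing dynamic is the same.

```python
# Minimal sketch of a filter bubble: a feed ranker that scores posts purely
# by how often the user has already clicked on that topic. Illustrative
# assumption only -- not any platform's actual ranking algorithm.

from collections import Counter

def topic_profile(click_history: list[str]) -> Counter:
    """Count how often each topic appears in the user's click history."""
    return Counter(click_history)

def rank_feed(posts: list[tuple[str, str]], profile: Counter) -> list[tuple[str, str]]:
    """Order candidate posts (headline, topic) by past clicks on their topic."""
    return sorted(posts, key=lambda post: profile[post[1]], reverse=True)

if __name__ == "__main__":
    history = ["immigration", "immigration", "crime", "immigration"]
    posts = [
        ("EU agrees on new climate targets", "climate"),
        ("Border incident sparks heated debate", "immigration"),
        ("Study questions official crime statistics", "crime"),
    ]
    for headline, topic in rank_feed(posts, topic_profile(history)):
        print(f"{topic:12} {headline}")
    # The immigration story rises to the top and the climate story sinks,
    # regardless of importance -- each click makes the bubble tighter.
```

Each new click feeds back into the profile, which is precisely why such a loop »usually confirms our view of the world.«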
 

Furthermore, another trend is in play here: attributing cyberattacks or disinformation campaigns to countries with which one does not enjoy friendly relations. That practice often brings more disadvantages than advantages, and one must take care not to turn the assignment of blame for such attacks into a kind of disinformation in its own right. Gesine Dornblüth, author of the book Jenseits von Putin (Beyond Putin), puts it thus: »By ascribing everything to the Kremlin, one becomes a participant in disinformation, because the Kremlin is made to seem more powerful than it really is.« Nor can everything be explained by technology: »There is… something that is still more banal: ignorance.«
 

Speaking of ignorance, artificial intelligence is even more ignorant than humanity on one point: knowledge about the European elections. The organization Democracy Reporting asked chatbots such as ChatGPT, Gemini, and Copilot a series of questions on the European elections. According to heise.de, none of the AIs offered »reliably trustworthy answers to questions on the European elections that voters might ask concerning the electoral process.« That is problematic and »might potentially affect the right to vote and the outcomes of referenda.« The paid version of ChatGPT fared best, while Gemini did worst, »because it proclaimed May 26 to be election day, for example.« The authors headlined their audit report accordingly: Yes, AI does disseminate untruths about the EU election. April, April? Well, actually it should say: »June, June!«
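As a rough illustration of what such an audit involves, here is a short Python sketch that puts factual questions about the election to a chatbot and checks the answers against known facts. It assumes the official OpenAI Python SDK with an API key in the OPENAI_API_KEY environment variable; the model name and question list are illustrative and are not the ones Democracy Reporting used.

```python
# Sketch of a chatbot election audit: ask factual questions, compare the
# answers against ground truth. Assumes the OpenAI Python SDK (v1.x) and
# an API key in OPENAI_API_KEY; model and questions are illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ground truth: the 2024 European Parliament election took place June 6-9.
QUESTIONS = [
    ("On which dates was the 2024 European Parliament election held?", "June"),
    ("Do all EU member states vote on the same day in the European election?", None),  # open question, reviewed by hand
]

for question, expected in QUESTIONS:
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative; the audit compared several chatbots
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content or ""
    if expected is None:
        print(f"MANUAL REVIEW: {question}\n  -> {reply}\n")
    elif expected in reply:
        print(f"OK: {question}")
    else:
        print(f"SUSPECT: {question}\n  -> {reply}\n")
```

A fuller audit would repeat each question many times and across several chatbots, since the same model can answer correctly one minute and invent a »May 26« the next.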
