This article is part of the E-engaged project funded by the European Commission under the CERV grant. It was originally posted to the project website: https://engaged.altervista.org/the-use-of-ai-in-electoral-campaigns-and-its-danger-for-democracy/
The Polish opposition party Civic Platform (Platforma Obywatelska, PO) used deepfake techniques in a video imitating the voice of Prime Minister Mateusz Morawiecki. Shared on the social media platform X (formerly Twitter) on 24 August, the video is a highly controversial use of the branch of artificial intelligence known as generative AI.
With the election campaign in full swing, the PO and its leader Donald Tusk have entered a new stage of political battle: they used AI software to mimic the Prime Minister's voice "reading" texts he had written. Although the sentences were Morawiecki's own, they were taken out of context. Moreover, the video made no mention of the fact that generative AI was used in its production.
Unfortunately, this is just another case of AI being misused to manipulate public opinion. It exposes the weakness of current rules governing the use of AI, and raises questions about the responsibility of big media platforms in helping spread fake news.
A legal requirement to disclose the use of AI software and techniques will be included in the AI Act. The regulation is expected to pass through the European Parliament in 2025, as meetings between the Parliament, the Council and the Commission began this summer. For the moment, the European Union (EU) has yet to issue precise guidelines on the use of AI and its regulation.
This first regulatory framework was proposed by the European Commission in April 2021. It stated that the rules would differ according to the level of risk that users face when exposed to different AI systems. On generative AI, the Commission has so far agreed on three rules: an obligation to disclose that content was generated by artificial intelligence, a requirement to design models to prevent them from "generating illegal content", and an obligation to publish summaries of the copyrighted data used for training.
The latest developments on the AI Act propose four categories of risk: AI systems deemed to pose an unacceptable risk would be banned altogether, while AI used in critical infrastructure such as electricity, or in recruitment, would be classified as "high risk" and subjected to compliance rules. Deepfakes such as the one published by PO would be subject to transparency rules, with a legal requirement to disclose that AI was used to create the content. Other uses of AI would remain largely unregulated, as they are deemed to carry lower risks.
Until the AI Act is implemented, fake news and deepfake stories like the one produced by PO will continue to influence the public. In the meantime, it is important for citizens to remain alert to such misinformation, while journalists continue to fact-check and debunk fake news.