
Lessons from the pre-campaign: The challenge of combating deepfakes in the 2024 elections in Brazil



The proliferation of deepfakes will be difficult to measure


DFRLab and NetLab UFRJ


For the first time, Brazil will hold elections governed by electoral legislation that stipulates strict rules for the use of Artificial Intelligence (AI) and prohibits the dissemination of so-called deepfakes: content that modifies a person's body, face, or speech, attributing to them a statement, attitude, or event that did not occur. The Digital Forensic Research Lab (DFRLab), of the Atlantic Council, and NetLab UFRJ, of the Federal University of Rio de Janeiro, tested different methodologies to identify electoral deepfakes during the pre-campaign period, between June and August 2024, and encountered a series of difficulties in collecting data and recognizing synthetic content on social networks and messaging apps.


The new electoral rules were approved in February by the Superior Electoral Court (TSE), the body responsible for organizing and monitoring the election in Brazil, and are in effect for the elections on October 6, 2024, when approximately 155 million Brazilians will go to the polls to elect mayors and city councilors in the country's 5,568 municipalities.


The use of artificial intelligence to produce or amplify disinformation is one of the TSE's main concerns in these elections. The resolution therefore requires that any content created or edited with artificial intelligence carry a notice that the material received some form of AI treatment. It also provides for severe penalties when AI resources are used to produce fake content and deepfakes.


According to the text of the resolution: “The use of synthetic content in audio, video or a combination of both, which has been digitally generated or manipulated, even with authorization, to create, replace or alter the image or voice of a living, deceased or fictitious person (deepfake) to harm or favor a candidacy is prohibited.” In these cases, the candidate who used the strategy will have their candidacy registration or mandate revoked. For more details on the resolution, see the analysis by DFRLab and NetLab UFRJ published on May 29.


Agreements with platforms


The monitoring of the use of Artificial Intelligence in the 2024 elections fundamentally depends on the technical capacity to identify and archive synthetic media for later evaluation of the content. In August, the TSE signed memorandums of understanding with Meta, TikTok, LinkedIn, Kwai, X, Google and Telegram, in which the companies committed to act quickly and in partnership with Brazilian authorities to remove disinformation content. The agreements are valid until December 31, 2024.


In general, the platforms are notified by the Integrated Center for Combating Disinformation and Defending Democracy (CIEDDE) of complaints concerning disinformation, irregular use of AI, and other content that violates electoral law. CIEDDE comprises representatives of the TSE and six other Brazilian public institutions, including the National Telecommunications Agency (Anatel), the Federal Public Prosecutor's Office (MPF), the Ministry of Justice and Public Security (MJSP), and the Federal Police (PF). Launched in May, the center aims to centralize all complaints in Brazilian territory and forward them to the platforms.


However, these agreements do not deal with technical tools for the automated detection or suspension of deepfakes. They only address the obligations of the platforms in response to complaints received by CIEDDE.


In the agreement with Meta, it was established that the screening and initial examination of complaints are the responsibility of the TSE, which also undertakes to notify the company extrajudicially of each new complaint received. However, the agreement clarifies that receiving complaints does not obligate the company to take any action - such as removing content - that is not in line with its policies.


The agreement with Google follows the same path, providing for cooperation to act in a coordinated, rapid, and effective manner against the dissemination of disinformation. As announced at the end of April, Google decided to prohibit the publication of political ads starting May 1, claiming it lacks the technical capacity to comply with the Brazilian electoral law requirement to maintain a repository of political ads.


In July, NetLab UFRJ published the technical note “Google reduces transparency of political ads in Brazil and disobeys TSE resolution”, in which it shows that Google's measure not only fails to curb the publication of political ads but also makes it even more difficult to monitor the practice.


In addition to the agreements with the TSE, these companies signed the voluntary Electoral Agreement on AI, which sets out seven main objectives defining expectations for how signatories should manage the risks posed by misleading AI-generated electoral content.


Monitoring by DFRLab and NetLab UFRJ


With the aim of exploring research methods to identify deepfakes in the electoral context, DFRLab and NetLab UFRJ carried out joint monitoring of social networks, messaging apps, and search engines between June and August, the period corresponding to the so-called pre-campaign. Since deepfakes intended to spread disinformation or harm an opponent are not declared as such by their producers, the monitoring sought to identify discussions or complaints from users about political deepfakes, on the assumption that many of the complaints received by the TSE likely arise from such situations.


This method, however, reveals an inherent limitation not only for researchers, but also for the TSE itself: well-produced deepfakes or other AI-generated content may not be recognized as false by an average internet user, reducing the likelihood of a complaint. There may also be cases in which people recognize that the content is misleading, but do not use relevant keywords that would help researchers or monitoring agencies locate the synthetic content. For the TSE, there is the additional challenge of not being able to guarantee that internet users who identify potential violations of electoral rules will actually officially report these cases.


For monitoring social networks, the following platforms were considered: X, Facebook, Instagram, and YouTube. To this end, a database was created with the official profiles of all pre-candidates for mayor of Brazil's state capitals, totaling 815 URLs monitored with the Junkipedia tool.


Searches were conducted, in Portuguese, for the keywords “deepfake”, “deep fake” and “artificial intelligence” in 58,271 posts made by the pre-candidates between June 1 and August 15.
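For illustration, a minimal sketch of this kind of keyword pass, assuming the monitored posts are exported as a CSV with a text column (the file and column names here are hypothetical, not Junkipedia's actual export format):

```python
# Minimal keyword pass over an export of pre-candidate posts.
# Assumes a CSV export with a "text" column; names are illustrative.
import csv
import unicodedata

KEYWORDS = ["deepfake", "deep fake", "inteligência artificial"]

def normalize(text: str) -> str:
    """Lowercase and strip accents so 'Inteligência' also matches 'inteligencia'."""
    decomposed = unicodedata.normalize("NFD", text.lower())
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

NORMALIZED = [normalize(k) for k in KEYWORDS]

with open("junkipedia_export.csv", encoding="utf-8") as f:
    hits = [
        row for row in csv.DictReader(f)
        if any(k in normalize(row["text"]) for k in NORMALIZED)
    ]

print(f"{len(hits)} posts mention at least one keyword")
```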


The search for “deepfake” and “deep fake” returned only one result: a video posted on July 1 on Instagram by federal deputy and then pre-candidate for Mayor of São Paulo Kim Kataguiri. The images simulate President Luiz Inácio Lula da Silva running through the streets and fleeing from a group of men. The video contains the text: “When Lula goes out alone on the street after knowing he won’t be shot, only chicken feet.”


In the post, Kataguiri stated that the content was a deepfake: “Run Lula!! Hahaha Note: I warn you that it is a deepfake before you accuse me of fake news.”


Deepfake video published on July 1 by Kim Kataguiri, then a pre-candidate for Mayor of São Paulo. (Source: Kim Kataguiri/Instagram)

Although the video could be considered a parody, Brazilian electoral regulations do not distinguish between types of deepfakes and require candidates to inform the public about all content created or edited by AI. In this specific case, the candidate complied with this requirement.


A search for “artificial intelligence” identified 45 posts. Most concerned candidates’ proposals to use artificial intelligence to optimize processes in municipal management, or were simply news reports and discussions about the technology.


One post containing the term “artificial intelligence” referred to a potential deepfake case. On July 24, Dr. Furlan, candidate for Mayor of Macapá, published a video on Instagram alerting his followers that he had been the victim of audio clips that simulated his voice and were being intentionally shared to harm his reputation.


Video posted by candidate Dr. Furlan denouncing an alleged case of deepfake simulating his voice. (Source: Dr. Furlan/Instagram)

Another post citing the term “artificial intelligence” referred to a video edited with artificial intelligence to create a meme of Brazil’s Finance Minister, Fernando Haddad. The video, which can be considered a deepfake because it superimposes the minister’s face on a character from the film Gladiator (2000), was published on July 19 on Instagram and Facebook by Paulo Martins, candidate for vice mayor of Curitiba. In the post, the candidate wrote, “someone needs to stop artificial intelligence,” also informing his followers that the content was manipulated with AI.



Video edited with artificial intelligence to create a meme of Brazil's Finance Minister, Fernando Haddad. (Source: Paulo Martins/Instagram)

Messaging apps


Monitoring of messaging apps focused on WhatsApp and Telegram, considered the main platforms. On WhatsApp, 1,588 public groups and channels dedicated to political discussion were monitored, covering a universe of 47,851 users. On Telegram, 854 groups were monitored, with 76,064 users.


Data from Telegram was collected and analyzed through the API provided by the platform itself. The histories of the monitored groups were exported with Telethon, a Python library designed for use with Telegram, and later analyzed by the researchers.
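A minimal sketch of such an export, using Telethon's iter_messages to walk a group's history; the credentials, session name, and group handle below are placeholders, and the teams' actual pipeline may differ:

```python
# Export a monitored group's message history to CSV with Telethon.
# API_ID/API_HASH come from my.telegram.org; all values here are placeholders.
import asyncio
import csv
from telethon import TelegramClient

API_ID = 12345
API_HASH = "0123456789abcdef"

async def export_history(group: str, out_path: str) -> None:
    async with TelegramClient("monitoring_session", API_ID, API_HASH) as client:
        with open(out_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["message_id", "date", "text"])
            # iter_messages pages through the full group history, newest first
            async for msg in client.iter_messages(group):
                if msg.message:  # skip service messages and captionless media
                    writer.writerow([msg.id, msg.date.isoformat(), msg.message])

asyncio.run(export_history("example_public_group", "history.csv"))
```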


A different methodology was used to collect data from WhatsApp groups, since the platform does not provide an API. The groups were found through search engines or on specialized websites and added to a WhatsApp account linked to a valid cell phone number. Messages exchanged in these groups were stripped of associated personal data and stored in an SQLite database for analysis by the researchers.
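A minimal sketch of that storage step; the schema, hashing, and redaction rules here are illustrative assumptions, not the project's published pipeline:

```python
# Pseudonymize senders, redact quoted phone numbers, and store in SQLite.
# Schema and redaction rules are illustrative, not the project's own.
import hashlib
import re
import sqlite3

PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")  # rough phone-number pattern

def redact(body: str) -> str:
    """Mask phone numbers quoted inside the message text."""
    return PHONE_RE.sub("[REDACTED_PHONE]", body)

def pseudonymize(sender: str) -> str:
    """Replace the sender identifier with an irreversible hash."""
    return hashlib.sha256(sender.encode("utf-8")).hexdigest()[:16]

conn = sqlite3.connect("whatsapp_monitoring.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS messages (
           group_name TEXT, sender_hash TEXT, sent_at TEXT, body TEXT)"""
)

def store(group_name: str, sender: str, sent_at: str, body: str) -> None:
    conn.execute(
        "INSERT INTO messages VALUES (?, ?, ?, ?)",
        (group_name, pseudonymize(sender), sent_at, redact(body)),
    )
    conn.commit()
```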


Initially, searches were conducted using the keywords “deepfake” or “deep fake”. On both platforms, however, the results were irrelevant to the scope of the monitoring. Some messages alluded to cases of deepfake use in other countries, some erroneously reported legitimate content as deepfakes, and others warned users about the dangers of the technology. No electoral content of this type was found.


Given the scarcity of results, further tests were conducted on both apps, this time searching for self-reported uses of AI. Searches for expressions such as “made by AI”, “made by Artificial Intelligence”, “generated by AI”, “generated by Artificial Intelligence”, “manipulated by AI”, and “manipulated by Artificial Intelligence” (and their gender and number variations in Portuguese) returned results, but again none were relevant to the scope of the monitoring.
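Because Portuguese participles inflect for gender and number (“feito/feita/feitos/feitas”), a single regular expression can cover all of these variations. A sketch of that approach:

```python
# One pattern covering "made/generated/manipulated by AI" disclosures
# and their Portuguese gender/number variations.
import re

AI_DISCLOSURE = re.compile(
    r"\b(feit|gerad|manipulad)[oa]s?\s+por\s+"
    r"(ia|intelig[êe]ncia\s+artificial)\b",
    re.IGNORECASE,
)

def mentions_ai_disclosure(message: str) -> bool:
    return bool(AI_DISCLOSURE.search(message))

assert mentions_ai_disclosure("Vídeo feito por IA")
assert mentions_ai_disclosure("imagens geradas por inteligência artificial")
assert not mentions_ai_disclosure("discussão sobre inteligência artificial")
```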


Meta’s Platforms


Meta’s Ad Library, which serves as a repository for boosted content on Facebook, Instagram, and Messenger, was also monitored from June to August.


In May 2024, Meta committed to placing a warning on its platforms on all AI-generated or significantly manipulated content. Enforcement of this policy, however, has been inconsistent. During the first round of the European parliamentary elections in June 2024, the DFRLab identified AI-generated content circulated on Facebook by the French affiliate of the far-right coalition known as Identity and Democracy (ID). The following month, POLITICO reported on the circulation of AI-generated content on Meta’s platforms ahead of the second round of the French parliamentary elections. Furthermore, Meta’s Ad Library and API do not provide filters for AI-generated content that would allow researchers to collect data and identify synthetic content. Neither did CrowdTangle, Meta’s transparency tool that shut down in August. In the case of political, electoral, or politically themed ads, Meta has required advertisers to include the synthetic-content label since January 2024.




The company also states that the label appears both on ads shown to users and on the ad details page in the platform's Ad Library. However, it was not possible to find a way to filter for this content systematically.
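By way of illustration, a query to the public ads_archive endpoint of Meta's Ad Library API can filter by country, ad type, and search terms, but exposes no parameter for the AI-disclosure label, leaving keyword search as the closest workaround. A sketch, with a placeholder access token:

```python
# Query Meta's Ad Library API for Brazilian political ads mentioning AI.
# There is no parameter to filter by the AI-disclosure label itself.
import requests

URL = "https://graph.facebook.com/v19.0/ads_archive"
params = {
    "access_token": "PLACEHOLDER_TOKEN",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["BR"]',
    "search_terms": "inteligência artificial",
    "fields": "id,page_name,ad_creative_bodies,ad_delivery_start_time",
}

resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad["id"], ad.get("page_name"))
```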


To make matters worse, unlike on messaging apps, problematic content in Meta ads cannot be flagged by users, which makes finding this type of material a major challenge.


Press outlets or blogs


Finally, Google Alerts was used to monitor the publication of content on blogs and websites that mentioned the following keywords: “artificial intelligence”, “AI”, “deepfake”, “elections”, “candidate”, “female candidate”.
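Google Alerts can deliver each alert as an RSS feed rather than by email, which makes collection scriptable. A minimal sketch using the feedparser library; the feed URL is a placeholder:

```python
# Poll Google Alerts RSS feeds for new matches.
# Each alert must be configured to "Deliver to RSS feed"; URL is a placeholder.
import feedparser

ALERT_FEEDS = [
    "https://www.google.com/alerts/feeds/EXAMPLE_USER_ID/EXAMPLE_ALERT_ID",
]

for url in ALERT_FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        print(entry.get("published", ""), entry.title, entry.link)
```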


Monitoring via Google Alerts identified 16 deepfake cases reported by news outlets or blogs in the states of Amazonas, Rio Grande do Sul, Sergipe, São Paulo, Mato Grosso do Sul, Pernambuco, Paraná, Rio Grande do Norte, and Maranhão. According to the reports, most of this content circulated in WhatsApp groups and was identified by the victims themselves. In nine cases, the victims took legal action to remove the content and identify those responsible for the posts.


For example, on August 6, a case emerged in Igarapé do Meio, a city in the state of Maranhão. A deepfake video accused the local government of financial fraud in the public education system, manipulating genuine footage broadcast in January 2024 by Fantástico, on TV Globo.


The original footage referred to financial crimes committed by public authorities in the cities of Turiaçu, São Bernardo, and São José de Ribamar, all in the state of Maranhão. The deepfake video may have been created to damage the reputation of Almeida Sousa, mayor of Igarapé do Meio, who is supporting his wife, Solange Almeida, in her candidacy for Mayor of Santa Inês.



Screenshot of deepfake video falsely accusing the mayor of Igarapé do Meio of financial fraud. (Source: YouTube)

Another case emerged on the same day in Mirassol, a city in the state of São Paulo. A video incorporated a false narration simulating the voice of President Lula, inviting the local population to an electoral event in support of candidate Ricci Junior, even though the candidate belongs to a different political party than the president's. According to Junior's team, the video circulated in WhatsApp groups. Mirassol Conectada, which reported on the video, also noted that the municipal electoral court imposed a daily fine of R$2,000 (approximately US$368) on anyone who continued to circulate it.


Conclusion


As demonstrated in this analysis, there are limitations in the research methodologies available to monitor the proliferation of AI-generated images at scale.


The methodologies tested showed that deepfake cases were easiest to identify through keyword searches, which generally surfaced content in posts by pre-candidates, in discussions among internet users, and in press reports or ongoing legal cases.


Using Google Alerts proved efficient for collecting data on deepfakes in the electoral context, but it is limited to cases already in the public domain, covered by the press, or under judicial review.


Searching for reports from users of social networks and messaging apps indicating the occurrence of deepfakes proved promising. However, the difficulty of collecting information on the platforms themselves hinders the systematization of data. Many platforms do not allow, for example, the collection of comments, where user reports might be found, and alternative techniques are computationally very expensive. In addition, none of the platforms analyzed offers the possibility of collecting data based on the label indicating the use of artificial intelligence.


A further methodological complication is the likely undercounting in data collection, since it is difficult to quantify how much of the content displayed on the networks is synthetic. Given the platforms' limitations, identifying a deepfake quickly enough to prevent its use as a disinformation weapon in the electoral context requires constant, and sometimes manual, monitoring, whether by voters and candidates or by qualified professionals capable of recognizing potentially synthetic content and reporting it.


A potential solution to contain disinformation generated by synthetic content would be to create a direct channel to send content to fact-checkers, a method that has been effective in similar situations. For example, in Brazil, some fact-checkers, such as Agência Lupa, have WhatsApp accounts where the public can send content circulating on messaging apps and social media platforms for verification. A similar approach could be successful in tracking deepfakes.


Furthermore, it would be beneficial to improve platforms' internal search systems, allowing advanced filters to locate AI-generated content, or integration with tools already on the market that assess the origin of content, such as TrueMedia, which is currently free and open to everyone.


Alongside the methodological difficulties, another hypothesis that should be considered to explain the scarcity of synthetic content identified during the monitoring is the early timing of the research, since the campaign only officially began on August 16.


Lessons learned in Brazil and beyond


The TSE resolution was an important step toward regulating the use of Artificial Intelligence in political advertising, but the novelty of the rules and the lack of experience with their application in electoral contexts make the 2024 elections an important regulatory milestone. Brazilian electoral authorities should analyze the events of these elections to improve the rules for the next ones. In particular, the required label identifying the use of AI in each piece of advertising should be made accessible and useful for research purposes; in other words, platforms should make it possible to systematically collect at least ads on the basis of the AI-use label.


The rules should also require platforms to advance in the governance of the use of Artificial Intelligence, since self-declaration, where the user is responsible for indicating the use of AI, is not sufficient, and algorithmic review techniques are not effective.


The European Union regulatory initiative, known as the Artificial Intelligence Act, offers interesting learning opportunities. The bloc’s other legislation, known as the Digital Services Act, also has provisions that explicitly relate to detecting and mitigating AI content, especially in the run-up to elections.


The AI Act requires platforms to “identify and mitigate systemic risks that may arise from the dissemination of artificially generated or manipulated content, in particular the risk of actual or foreseeable negative effects on democratic processes, public debate and electoral processes, including through disinformation.”


While the AI Act was not yet in force during the recent European Parliament elections, platforms under the strictest oversight of the Digital Services Act have also committed to identifying, labeling, and sometimes removing harmful artificial content, although external research has shown that many AI-generated images, including some used in political propaganda, have not been labeled.


Going a step further, the legislation also addresses targeting mechanisms, requiring platforms to disclose whether their systems for targeting and distributing political ads are influenced by AI models. The March 2024 edition of the Official Journal of the European Union states that the law requires platforms “to provide, together with the indication that the advertisement is politically oriented, additional information necessary to enable the data subject to understand the underlying logic and the main parameters of the techniques used, in particular whether an artificial intelligence system was used for the targeting or distribution of the politically oriented advertisement, as well as any additional analytical techniques.” However, these provisions will not come into force until 2026, making it too early to assess their effectiveness.


This article was published as part of a collaboration between DFRLab and NetLab UFRJ, which published a version of this article in Portuguese. Both organizations are monitoring the use of AI tools during the 2024 Brazilian municipal elections to better understand their impact on democratic processes.
