
How AI was Used to Manipulate Rwanda’s Elections


Kagame’s Digital Manipulation Tactics Exposed


In late May 2024, a series of reports titled “Rwanda Classified” was released by Forbidden Stories, a network of investigative journalists. These reports shed light on the suspicious death of Rwandan journalist and government critic John Williams Ntwali and revealed Kigali’s attempts to silence dissenting voices.

Our investigation, conducted by the Media Forensics Hub, uncovered evidence of a massive online manipulation effort following the release of these reports. We found at least 464 social media accounts that were used to flood discussions with pro-Paul Kagame content. These accounts have been active on X/Twitter since January 2024 and have produced over 650,000 messages.

As Rwanda approaches its July 15, 2024 election, the result appears predetermined, shaped by the exclusion of opposition candidates, the harassment of journalists, and the assassination of critics. In the last election, in 2017, Kagame won with over 98% of the vote. Despite the foregone conclusion, the pro-Kagame network remains active in promoting his candidacy online. These inauthentic posts are likely intended to bolster the perception of Kagame’s popularity and legitimize the election results.

Our findings indicate that AI tools, including the large language model-powered chatbot ChatGPT, are being used to manipulate online discussions and push government narratives. The use of AI in these operations marks a troubling development, as it increases the sophistication of the methods used to control public perception and maintain power.

Generative AI enables the creation of vast amounts of varied content, making it far easier to flood online discussions than with purely human-operated campaigns. In this case, consistent posting patterns and content markers made the network detectable, but future campaigns may refine these techniques, making them harder to identify.
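To illustrate one such pattern, here is a minimal sketch, in Python, of how an unnaturally regular posting cadence can be flagged. The function name, input schema, and interpretation thresholds are hypothetical; real investigations combine many signals of this kind.

```python
from datetime import datetime
from statistics import pstdev

def posting_regularity(timestamps: list[datetime]) -> float:
    """Standard deviation of the gaps (in seconds) between consecutive posts.

    Human posting tends to be bursty and irregular; a near-zero value
    suggests scheduled, automated activity.
    """
    ordered = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ordered, ordered[1:])]
    return pstdev(gaps) if gaps else float("inf")
```

An account that posts every 15 minutes around the clock scores near zero here, while a genuine user's score is typically large and erratic.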

Coordinated influence operations have become common in African digital spaces. These networks aim to make fake content appear authentic by promoting certain messages and suppressing others. In East Africa alone, social media platforms have removed multiple networks that were designed to look legitimate but were used to spread false and biased political information.

Previous influence networks were often identified by their use of “copy-pasta,” or copying and pasting text from a central source across multiple accounts. However, the pro-Kagame network we identified employed ChatGPT to generate unique content with similar themes. This content was then posted alongside a variety of hashtags to flood online discussions.
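Copy-pasta is easy to catch precisely because the text is identical across accounts. The sketch below shows the core of that classic check: normalize each post and flag any text shared verbatim by many accounts. The input schema and the threshold are hypothetical.

```python
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Lowercase, drop URLs, and collapse whitespace so trivially edited copies collide."""
    text = re.sub(r"https?://\S+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def find_copypasta(posts: list[dict], min_accounts: int = 5) -> dict[str, set[str]]:
    """Map each normalized text to the accounts posting it, keeping only
    texts shared by at least `min_accounts` distinct accounts."""
    by_text: dict[str, set[str]] = defaultdict(set)
    for post in posts:  # each post: {"account": str, "text": str}
        by_text[normalize(post["text"])].add(post["account"])
    return {text: accounts for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}
```

Because ChatGPT can paraphrase the same talking point hundreds of different ways, an exact-match check like this returns nothing against the network described here, which is exactly what makes AI-generated campaigns harder to detect.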

Mistakes stemming from the inexperience of those managing the network allowed us to trace the accounts involved. Errors in the AI-generated texts sometimes revealed the very instructions used to create pro-Kagame propaganda, as the sketch below illustrates. These messages were used to disrupt genuine discussions with unrelated or pro-government content.
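One telltale error is a post that echoes the model's own refusal text or output framing. Here is a minimal sketch of scanning for such leaks; the marker list is hypothetical and would, in practice, be curated from observed slip-ups in several languages.

```python
import re

# Hypothetical markers: model self-references and echoed instructions
# that sometimes slip into carelessly copied AI output.
AI_MARKERS = [
    r"as an ai language model",
    r"i cannot fulfill (this|that) request",
    r"(here are|sure! here are) \d+ (tweets|posts)",
]
MARKER_RE = re.compile("|".join(AI_MARKERS), re.IGNORECASE)

def flag_leaked_prompts(posts: list[dict]) -> list[dict]:
    """Return posts whose text contains a known model self-reference
    or an echoed generation instruction."""
    return [p for p in posts if MARKER_RE.search(p["text"])]
```

Hits from a scan like this do not prove automation on their own, but they give investigators a starting point for mapping the rest of a network through shared behavior.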

In recent weeks, many of these accounts have focused on election-related hashtags such as #ToraKagame2024 (“tora” means “vote” in Kinyarwanda). The high volume of posts created by the network makes it likely that readers will encounter content that appears to be enthusiastic support for Kagame and his government.

The use of AI in propaganda campaigns presents several challenges. AI tools allow for the rapid production of large volumes of content that would otherwise require significantly more resources and time. They also facilitate cross-border influence through automated translation; the Rwandan network frequently targeted discussions of the conflict in the eastern Democratic Republic of Congo. Additionally, AI’s ability to create subtle variations in text complicates efforts to attribute influence campaigns to their source.

To address these challenges, citizens need to be aware of evolving digital threats. Governments, NGOs, and educators should expand digital literacy programs to help people recognize and resist these threats. Improved communication between social media platforms and AI service providers is also crucial: when inauthentic activity is linked to specific actors, measures such as temporary bans or outright expulsion from those services should be considered.

Finally, governments should increase the consequences for the misuse of AI tools. Without real penalties, such as restrictions on foreign aid or targeted sanctions, those engaging in these activities will continue to experiment with increasingly powerful AI technologies.

Morgan Wack is an Assistant Research Professor of Political Science at Clemson University’s Media Forensics Hub.