Meta's Findings on AI-Generated Election Misinformation
At the beginning of the year, there were widespread concerns that generative AI would be used to disrupt elections globally by spreading propaganda and misinformation. By year's end, however, Meta reported that these fears had not materialized, at least not on its platforms. The company said the impact of this technology across its apps (Facebook, Instagram, and Threads) was minimal.
Meta's analysis focused on major elections held in various countries, including the U.S., Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the U.K., South Africa, Mexico, and Brazil.
In a blog post, Meta stated, "While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content." The company found that during elections in the listed countries, AI-generated content related to elections, politics, and social issues accounted for less than 1% of all fact-checked misinformation.
To combat potential misuse of its Imagine AI image generator, Meta reported that it rejected over 590,000 requests to create images of notable political figures, including President-elect Trump and President Biden, in the month leading up to election day. This measure aimed to prevent the creation of misleading deepfakes.
Moreover, the company found that networks of accounts attempting to spread propaganda gained little in productivity or content output from using generative AI.
Meta emphasized that the use of AI did not hinder its efforts to dismantle covert influence campaigns. Its focus remains on the behavior of these accounts rather than on the specific content they create, regardless of whether it is produced by AI.
In its efforts to curb foreign interference, Meta took down approximately 20 new covert influence operations worldwide. The company noted that most of these networks lacked genuine audiences, and that many relied on fake likes and followers to create a facade of popularity.
Additionally, Meta criticized other platforms, highlighting that false videos tied to Russian influence operations regarding the U.S. election were frequently shared on services like X and Telegram.
As the year comes to a close, Meta stated, "As we reflect on what we’ve learned during this remarkable year, we will continue to review our policies and announce any changes in the forthcoming months."
Meta, AI, Misinformation