Deepfake Scams Misuse News Anchor Images on Social Media
Thousands of social media users recently encountered videos of CNN's Wolf Blitzer and CBS's Gayle King appearing to promote products: a diabetes drug and weight-loss supplements, respectively. Both videos were fakes, commonly known as 'deepfakes,' that maliciously exploited the identities of these trusted news figures to push bogus products and erode the credibility of reputable media sources.
The Emergence of Deepfake Technology
Deepfake technology uses artificial intelligence to generate realistic video or audio of people saying or doing things they never actually said or did. Established personalities from networks including Fox News, CBC, and BBC have been digitally replicated in unauthorized advertising campaigns. These deepfakes have been used to tout unproven health treatments, investment scams promising guaranteed returns, and manipulated footage of high-profile figures such as Elon Musk.
Countermeasures by Victims
Victimized journalists have taken to their social media accounts to warn the public. Gayle King, after being alerted to the deceptive use of her image, clarified via Instagram that she had no association with the products being advertised. Similarly, CNN's chief medical correspondent, Sanjay Gupta, cautioned viewers against using products fraudulently endorsed in his name, expressing concern that the unvetted treatments could harm people's health.
The Platforms' Response to Deepfakes
Meta, the company behind Facebook and Instagram, has banned deepfakes since early 2020, with exceptions for content such as satire. Other social media platforms have adopted similar policies. Despite these rules, fake content continues to spread, leaving users to distinguish authentic material from fraudulent.
Quality and Detection of Deepfakes
While some deepfakes are rudimentary and relatively easy to spot, others are becoming sophisticated enough to evade casual scrutiny. Experts caution that the underlying technologies are advancing rapidly. Recognizable TV figures are particularly vulnerable to such scams because the abundance of video footage of them provides ample material for training AI models on their likeness.
The Implications of AI-generated Misinformation
The rise of AI-manipulated content contributes to a 'crisis of trust' in media and institutions, deepening skepticism and cynicism among the public. People already wary of AI's influence fear its potential impact on politics, concerned that deepfakes could sway election outcomes. Users are therefore advised to maintain a healthy dose of skepticism and to take care before sharing content online.
At the core of the issue is the misuse of AI to fabricate convincing falsehoods. The problem is fundamentally tied to artificial intelligence because it relies on machine learning techniques to impersonate individuals, a branch of AI designed to create highly realistic synthetic media.
Tags: deepfake, scams, media