A recent investigation has uncovered that nearly 4,000 celebrities, including more than 250 British personalities, have become victims of deepfake pornography.
According to an analysis by Channel 4 News of the five most popular deepfake websites, a wide range of famous individuals, including female actors, TV stars, musicians, and YouTubers, have had their faces digitally manipulated and superimposed onto explicit content using artificial intelligence technology.
The investigation further revealed that these five websites collectively received a staggering 100 million views within just three months.
Cathy Newman, a presenter for Channel 4 News who was also identified as a victim of deepfake pornography, expressed her shock and distress, stating, “It feels like a violation. It just feels really sinister that someone out there who’s put this together, I can’t see them, and they can see this kind of imaginary version of me, this fake version of me.”
Although the Online Safety Act, in force since 31 January, makes it illegal in the UK to share such imagery without consent, the creation of deepfake content itself remains legal. The legislation was introduced in response to the rapid proliferation of deepfake pornography produced with AI tools and applications.
Researchers noted a sharp escalation: while only one deepfake pornography video was identified in 2016, some 143,733 new deepfake porn videos were uploaded to the 40 most visited deepfake pornography sites in the first nine months of 2023, more than in all previous years combined.
Sophie Parrish, a resident of Merseyside, shared her harrowing experience of discovering fabricated nude images of herself online before the introduction of protective legislation. She remarked, “It’s just very violent, very degrading. It’s like women don’t mean anything, we’re just worthless, we’re just a piece of meat. Men can do what they like. I trusted everybody before this.”
A consultation on how the Online Safety Act will be enforced and applied is ongoing, though the process has faced delays. The communications regulator Ofcom is tasked with overseeing these efforts.
An Ofcom spokesperson emphasized the gravity of illegal deepfake material, stating, “Illegal deepfake material is deeply disturbing and damaging. Under the Online Safety Act, firms will have to assess the risk of content like this circulating on their services, take steps to stop it appearing and act quickly to remove it when they become aware.”
Addressing the issue, a Google spokesperson pledged to strengthen existing protections and develop new tools for affected individuals, including mechanisms to remove search results featuring deepfake content and ranking changes to reduce its visibility.
Ryan Daniels, a spokesperson for Meta, which owns Facebook and Instagram, said the company strictly prohibits child exploitation and non-consensual intimate imagery, and that efforts are underway to remove ads promoting deepfake creation apps from app stores.
The prevalence of deepfake pornography underscores the urgent need for comprehensive measures to safeguard individuals’ digital identities and combat the harmful effects of manipulated content.