The proliferation of artificial intelligence (AI) tools has made deepfake pornography a pervasive issue, affecting individuals from various backgrounds, including celebrities and everyday people. Carrie Goldberg, a lawyer specializing in online harassment, emphasizes that “All we have to have is just a human form to be a victim,” highlighting the alarming reality that anyone can be targeted by these technologies, even if they have never shared intimate images themselves.
Goldberg made these comments during an interview on CNN’s tech podcast, “Terms of Service,” where she discussed the implications of AI-generated sexual imagery. She noted that these tools can graft a person’s face onto another’s body or alter existing pictures to create the illusion of nudity, with devastating consequences for victims. The emotional toll is significant; discovering that one has been depicted in deepfake porn can be a “terrifying and overwhelming ordeal,” particularly for young people.
In recent months, high-profile cases have emerged involving well-known figures such as Taylor Swift and Representative Alexandria Ocasio-Cortez. But the issue extends well beyond celebrities: teenage girls in high schools are also vulnerable to this form of harassment[4].
To combat this growing threat, Goldberg advises victims to take proactive steps. Individuals targeted by AI-generated sexual imagery should first take screenshots of the content, despite the instinct to have it removed immediately, since screenshots preserve evidence. Organizations such as StopNCII.org and Take It Down can then help get the content removed across multiple platforms.
Legislation is beginning to catch up with the technology. In August, a bipartisan group of senators urged tech companies, including X and Discord, to address nonconsensual images and deepfakes more effectively. Senator Ted Cruz has introduced a bill that would criminalize the publication of such images and require social media platforms to remove them at victims’ request.
Despite these efforts, many states still lack comprehensive laws addressing the creation or dissemination of explicit deepfakes involving adults. Currently, AI-generated sexual images of minors are typically classified under child sexual abuse material laws.
Goldberg also urges would-be offenders to think seriously about the consequences of their actions. And she stresses that while no one can be completely safe in a digital world, there are steps individuals can take to reduce the risk of becoming a victim.
For further insights and resources on this topic, listeners can access the full conversation with Carrie Goldberg on CNN’s “Terms of Service” podcast.
Sources:
[1] https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071
[2] https://theconversation.com/deepfake-porn-why-we-need-to-make-it-a-crime-to-create-it-not-just-share-it-227177
[3] https://theconversation.com/what-to-do-if-you-or-someone-you-know-is-targeted-with-deepfake-porn-or-ai-nudes-232175
[4] https://www.cnn.com/2024/11/12/tech/ai-deepfake-porn-advice-terms-of-service-wellness/index.html
[5] https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf
[6] https://www.washingtonpost.com/technology/2023/02/13/ai-porn-deepfakes-women-consent/
[7] https://amp.cnn.com/cnn/2024/11/12/tech/ai-deepfake-porn-advice-terms-of-service-wellness
[8] https://swgfl.org.uk/topics/synthetic-media-deepfake/support-and-advice-for-adults/