- Deepfakes use AI to create fake porn of real people, harming millions.
- Emerging tools can detect deepfakes and add friction against their creation.
- A combination of tech, laws, and platform policies aims to curb this AI misuse.
Deepfakes, AI-generated synthetic media that impersonate real people, are fueling an alarming rise in fake pornographic content.
One study found that deepfake pornography increased by over 130% in 2022 alone, harming millions of victims. The trauma is real: several reported cases have linked deepfakes to suicide.
Can technology counter deepfakes?
Thankfully, tools are emerging to counter this AI misuse. Digital watermarks embedded in AI content can help label it as synthetic. Services like Sensity also detect deepfake fingerprints, alerting unwitting viewers.
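To make the watermarking idea concrete, here is a deliberately tiny sketch. It is a toy illustration, not any vendor's actual scheme: it hides a short "AI" tag in the least-significant bits of pixel bytes, assuming an 8-bit grayscale image represented as a list of integers. Production watermarks for AI content are far more robust and survive compression and cropping.

```python
# Toy least-significant-bit (LSB) watermark: hides a short tag in pixel
# bytes. Illustrative only; real provenance watermarks use robust,
# imperceptible schemes designed to survive re-encoding.

def embed_tag(pixels, tag):
    """Write each bit of `tag` into the LSB of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_tag(pixels, length):
    """Read `length` bytes back out of the LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

pixels = [100] * 64            # stand-in for an 8-bit grayscale image
marked = embed_tag(pixels, b"AI")
print(extract_tag(marked, 2))  # b'AI'
```

Because only the lowest bit of each byte changes, no pixel shifts by more than one brightness level, so the label is invisible to viewers but machine-readable.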
More proactive “poison pills” add subtle perturbations to images so that AI models ingesting them produce corrupted output, blocking deepfake creation at the source.
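The “poison pill” idea is an adversarial perturbation: small, near-invisible pixel changes chosen to mislead a model. The sketch below is a stand-in under stated assumptions: the “model” is just a brightness threshold, whereas real tools in this space perturb images against deep feature extractors.

```python
# Toy adversarial perturbation ("poison pill") against a trivial model.
# The "model" classifies by mean brightness only; real poisoning tools
# optimize perturbations against deep neural feature extractors.

def toy_model(pixels):
    """Classify: 'face' if mean brightness >= 128, else 'not-face'."""
    return "face" if sum(pixels) / len(pixels) >= 128 else "not-face"

def poison(pixels, budget=3):
    """Nudge each pixel by at most `budget` brightness levels in the
    direction that flips this toy model's label, keeping the change
    visually negligible. Only works near the decision boundary."""
    mean = sum(pixels) / len(pixels)
    delta = -budget if mean >= 128 else budget
    return [min(255, max(0, p + delta)) for p in pixels]

image = [129] * 64                 # barely on the 'face' side
poisoned = poison(image)
print(toy_model(image))            # face
print(toy_model(poisoned))         # not-face
```

The point of the toy: a change a human cannot see (at most 3 brightness levels per pixel) is enough to change what the model sees, which is the principle these protective tools exploit at much larger scale.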
Solutions against AI misuse
Laws are evolving, too. Ten US states now specifically prohibit deepfakes, and new federal bills would let victims sue creators. Other countries criminalize distributing deepfakes. While hard to police, such laws are an important deterrent.
No solution will fully eradicate deepfakes. But combining protective tech, identity watermarks, legal deterrents, and platform pressure can introduce friction against the malicious use of AI. The goal is to prevent as much damage as possible from this growing threat.