
The Deepfake Deluge is Coming, and Kids Are a Major Target

Highlights

  1. Generative AI allows for endless permutations of deepfake pornographic content.
  2. There is one sure, albeit difficult, line of defense available to parents and children: don't share children's photos on social media.
  3. Creating fake nude photos has become remarkably easy. More challenging (and potentially more damaging) is the creation of pornographic deepfake videos.

Victims are usually alerted by a flood of text messages. On October 2, 2023, 14-year-old Elliston Berry woke up to a torrent of messages from friends saying that nude images of her were being circulated among classmates. But the teen from Aledo, Texas, hadn’t taken the pictures. No one had. Over the course of the day, explicit images of nine other Aledo High School girls circulated as well, all of them doctored.

In Berry’s case, the offender was a classmate. But the perpetrators of this growing crime are often complete strangers, and in those cases the stakes can be even higher.

Kentucky teenager Elijah Heacock was sent deepfake nude photos of himself by a stranger online, along with a demand for $3,000. Heacock scrambled for money, begging friends for loans to meet the demand. Able to muster only $50, and unable to cope with the fear that the doctored photos would be released to his family and peers, Heacock took his own life at the age of 16.

Berry and Heacock are just two children among a growing list of deepfake victims. The problem of deepfakes is only getting worse.

Doctored nude photographs have existed since at least the 1990s, made possible by photo-editing software like Adobe Photoshop. Through the early 2010s, fakes were largely confined to celebrities: producing doctored photos demanded a high degree of technical proficiency and a serious time investment, so creators focused on subjects with a large consumer audience.

But this changed in 2017. That year, a Reddit user named “deepfakes” released a doctored pornographic video of actress Gal Gadot, created by superimposing Gadot’s face onto a genuine pornographic video with the aid of machine learning. The feat marked a new degree of technical sophistication and a more potent form of media: the pornographic video.


This Reddit user continued to produce doctored celebrity videos, often on request, featuring Emma Watson, Taylor Swift, and Maisie Williams, among other female celebrities. In each case, a photo of the victim’s face was seamlessly superimposed onto the body of a porn actress. Later that year, the eponymous Reddit subforum r/deepfakes was launched, and the user released the face-swapping tool for public use: a simple Python script built on open-source machine learning packages.

A cottage industry of deepfake production has since boomed. Tools like FakeApp and DeepNude launched in the late 2010s, allowing users to digitally “undress” any woman in just a few clicks. Consumer tools like these have so far been largely confined to photographs; until recently, video was out of reach. That is about to change.

Creating fake nude photos has become remarkably easy. More challenging (and potentially more damaging) is the creation of pornographic deepfake videos. A plethora of online communities, often hosting their projects on GitHub, have been laboring intensely to make homemade deepfakes more accessible. These efforts have been aided by the release of powerful, open-source AI models like Alibaba’s Wan2.2-Animate. Step-by-step guides are now available on YouTube, and a number of public forums provide tips and debugging assistance.

The recipe is the same in every case: take an ordinary photo of the target, typically scraped from social media, and use machine learning to superimpose the victim’s face onto a pornographic video. Current AI models, many of which are trained on pornography, can also generate novel pornographic videos outright. There is consequently no limit to the scale, realism, or strangeness of the deepfakes that can be produced, a far cry even from the original deepfakes of 2017.

Generative AI allows for endless permutations of deepfake pornographic content. In 2024, Joanna Chew, an artist and actress with a small social media following, discovered that one fanatical devotee had created and posted thousands of deepfake images and videos of her, many of them pornographic. Female Twitch streamers have been particular targets of this kind of harassment.

Acquiring deepfakes is getting easier, even for the less technically savvy. Websites like Civitai, backed by a16z, allow users to place “bounties” for the creation of AI-generated videos, frequently pornographic videos of real women, including women with no serious public presence, according to 404 Media. Online marketplaces for deepfake pornography, which can target anyone with just a photo, are a click away.


Bipartisan legislative work is underway at the state and federal levels in response to the proliferation of deepfakes. We need more of it. So-called “nudify” apps appear on and off the Apple and Google app stores, often disguised as more benign face-swapping apps. The App Store Accountability Act, already passed in three states and currently before the U.S. Senate, would make it more difficult for children to be victimized by classmates in this way.

But the genie is out of the bottle. The technologies that create deepfakes are diffuse. Chatrooms, many beyond the reach of legislators, are the engine by which ever-improving deepfake models are conceived and disseminated. Sextortion cases involving minors have risen dramatically in recent years, often perpetrated by international sextortion rings. The FBI reports that sextortion cases rose 59% from 2023 to 2024, to 54,000 cases. Teenage boys like Elijah Heacock are the predominant targets, and many take their own lives out of desperation.

There is one sure, albeit difficult, line of defense available to parents and children. In virtually every case of deepfake pornography, photos of the victim are scraped from social media or exchanged digitally peer-to-peer, through apps such as Snapchat or Discord. Without these photos, deepfakes cannot be manufactured. A child who has no photos online and cannot send photos is far safer from deepfakes. In this vein, declining photo-release waivers from schools and youth organizations is a necessary step in protecting children from deepfake pornography. For children and teens whose photos are already on social media, simply deleting everything possible can go a long way.

Deepfake pornography will become more prevalent as it becomes easier to make. New AI models are lowering the barrier to entry and putting this capability into more hands. In response, legislators and parents must make it more difficult for these weapons to target America’s children.

Grant Bailey is a Research Fellow and IFS Insights Editor at the Institute for Family Studies. 

Editor's Note: This article discusses suicide and self-harm. If you or someone you love is struggling, you can dial or text 988 for the Suicide & Crisis Lifeline.

*Photo credit: Shutterstock
