Deepfake Cyberbullying: An Increasing Challenge for Educational Institutions

Schools are grappling with a troubling trend: students are increasingly using artificial intelligence to create sexually explicit deepfakes of their classmates. The consequences of these manipulated images can be devastating for the victims.

This issue came to a head this fall when AI-generated nude images circulated in a Louisiana middle school. The situation escalated to the point where two boys were charged, but not before one victim faced expulsion for confronting a classmate she believed was responsible for the images.

“While the ability to alter images has existed for decades, the rise of AI has made it accessible to anyone, regardless of their technical skills,” stated Lafourche Parish Sheriff Craig Webre in a news release. “This incident underscores a serious concern that all parents should discuss with their children.”

Here are some key takeaways from AP’s story on the rise of AI-generated nude images and the responses from schools.

More states pass laws to address deepfakes

The legal repercussions stemming from the Louisiana incident are believed to be the first under the state’s new law, according to Republican state Sen. Patrick Connick, who authored the legislation. This law is part of a broader movement across the country, with at least half of the states enacting legislation in 2025 to combat the misuse of generative AI for creating realistic but fabricated images and sounds, as reported by the National Conference of State Legislatures. Some laws specifically target simulated child sexual abuse material.

Students have faced prosecution in states like Florida and Pennsylvania, while expulsions have occurred in places such as California. In Texas, a fifth-grade teacher was charged with using AI to create child pornography involving his students.

Deepfakes become easier to create as technology evolves

Initially, deepfakes were used to embarrass political figures and celebrities. However, recent advancements have made it possible for anyone to create realistic deepfakes with minimal technical knowledge. “Now, you can do it on an app, you can download it on social media, and you don’t need any technical expertise whatsoever,” explained Sergio Alexander, a research associate at Texas Christian University.

The scale of the problem is alarming. The National Center for Missing and Exploited Children reported a staggering increase in AI-generated child sexual abuse images, jumping from 4,700 in 2023 to 440,000 in just the first half of 2025.

Experts fear schools aren’t doing enough

Sameer Hinduja, co-director of the Cyberbullying Research Center, urges schools to update their policies regarding AI-generated deepfakes and improve their communication about the issue. “Students need to feel that educators are aware of these problems, which may deter them from acting with impunity,” he said. Many parents mistakenly believe that schools are adequately addressing the issue.

“So many parents are unaware and uninformed,” Hinduja noted. “We often see the ‘ostrich syndrome,’ where they bury their heads in the sand, hoping this isn’t happening among their youth.”

Trauma from AI deepfakes can be particularly harmful

AI deepfakes differ from traditional bullying; they often involve viral images or videos that can resurface repeatedly, creating a cycle of trauma for victims. Many suffer from depression and anxiety as a result. “They feel powerless, as if there’s no way to prove the images are fake—because they look 100% real,” Alexander explained.

Parents are encouraged to talk to students

Parents can initiate conversations by casually asking their children about funny fake videos they’ve seen online. From there, they can discuss the implications of deepfakes and whether classmates have created any. “Based on the numbers, I guarantee they’ll say they know someone,” Alexander said.

It’s crucial for kids to feel they can talk to their parents about these issues without fear of punishment. Laura Tierney, founder and CEO of The Social Institute, emphasizes the importance of open dialogue. She suggests using the acronym SHIELD as a guide for responding to deepfake encounters: “Stop” before sharing, “Huddle” with a trusted adult, “Inform” social media platforms, collect “Evidence” without downloading it, “Limit” social media access, and “Direct” victims to appropriate help.

“The complexity of this issue is reflected in the six steps of the acronym,” she noted.

Copyright 2025 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
