Schools love a good photo, whether it’s from a trip to a castle, a science prize ceremony, or sports day shot from three angles. For two decades, celebratory images like these have gone straight onto school websites, captioned with a name and a grade. But those days are gone, because it’s the internet in 2026 and we can’t have nice things.
As first reported by the Guardian, experts are now urging schools to take those pictures down. According to the UK’s National Crime Agency, the Internet Watch Foundation, and an advisory body called the Early Warning Working Group (EWWG), blackmailers have been scraping ordinary school photos, feeding them through AI deepfake tools to manufacture child sexual abuse material (CSAM), and demanding payment to keep the images offline.
One school, 150 images
Late last year, cybercriminals contacted an unnamed UK secondary school with that demand. The IWF classified 150 of the resulting images as CSAM under UK law and generated digital fingerprints for each image so major platforms could block reuploads.
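The IWF hasn’t published the details of its fingerprinting pipeline, but the general technique is perceptual hashing: unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized, recompressed, or lightly filtered, so platforms can match reuploads by hash distance rather than exact bytes. Here is a minimal sketch of the idea, using Python’s open-source imagehash library and hypothetical filenames, not the IWF’s actual tooling:

```python
from PIL import Image
import imagehash

# Hypothetical blocklist built from hashes of known-bad images.
# In practice, organizations like the IWF distribute hash lists;
# platforms never need the original images themselves.
BLOCKLIST = {
    imagehash.phash(Image.open(path))
    for path in ["known_bad_1.jpg", "known_bad_2.jpg"]
}

def is_blocked(upload_path: str, max_distance: int = 5) -> bool:
    """Return True if an upload is perceptually close to a blocklisted image.

    Subtracting two imagehash values gives their Hamming distance, so
    minor edits (resizing, recompression, light filtering) still land
    within the threshold instead of producing a brand-new hash.
    """
    candidate = imagehash.phash(Image.open(upload_path))
    return any(candidate - known <= max_distance for known in BLOCKLIST)
```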
The IWF isn’t naming the school or the police force, and it doesn’t believe this was an isolated case. The EWWG says it’s “only a matter of time” before more schools face similar demands.
UK safeguarding minister Jess Phillips called it a “deeply worrying emerging threat.” In February 2025, the UK became the first country to ban AI tools designed specifically to generate CSAM.
How we got here
This threat didn’t appear overnight, and it isn’t limited to the UK. It’s an evolution of a long-standing threat: sextortion, in which someone uses intimate images to blackmail you. Traditionally, sextortion relied on real intimate images that were stolen or coaxed out of victims; deepfake AI has removed even that requirement.
The FBI’s Internet Crime Complaint Center (IC3) logged more than 16,000 sextortion complaints in the first half of 2021, with losses exceeding $8 million. By June 2023, the bureau warned the playbook had shifted: attackers were using ordinary social media photos to create fake explicit images and extort minors.
UK children’s counseling helpline Childline has seen similar shifts as deepfake tools become more accessible. It already handles a steady stream of sextortion cases each year, many from kids who were manipulated into sharing intimate images of themselves. Now, the organization is getting calls from children who are being sent deepfake CSAM images of themselves without any prior contact.
One 15-year-old girl, for example, was sent a “really convincing” fake nude built from her Instagram photos.
By November 2025, IWF reports of AI-generated CSAM had more than doubled year over year, rising from 199 to 426. Girls accounted for 94% of the victims. Reported cases included children ranging from newborns to two-year-olds, according to the organization.
The ecosystem around these tools is industrial. In April 2025, a researcher found an exposed AWS S3 bucket belonging to South Korean “nudify” app GenNomis containing 93,485 AI-generated images alongside the prompts that produced them.
What the schools are being told
The EWWG’s advice is to replace close-up, identifiable photos with images taken from a distance, blurred images, or photos shot from behind. It also advises schools to remove full names from captions, audit existing images, and ask parents to re-sign consent forms.
In fact, it advises schools to rethink whether they need to publish children’s photos online at all.
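For schools that want to keep some images up, the blurring advice is straightforward to apply before publishing. As a rough illustration (with a hand-picked region and hypothetical filenames; a real workflow would usually locate faces automatically with a detection model), a few lines of Python with the Pillow library can do it:

```python
from PIL import Image, ImageFilter

def blur_region(src: str, box: tuple[int, int, int, int], dest: str,
                radius: int = 12) -> None:
    """Blur a rectangular region (left, upper, right, lower) of an image.

    The coordinates here are chosen by hand for illustration; a production
    workflow would typically find faces with a detector instead.
    """
    img = Image.open(src)
    region = img.crop(box).filter(ImageFilter.GaussianBlur(radius))
    img.paste(region, box[:2])  # paste the blurred patch back in place
    img.save(dest)

# Hypothetical example: blur one face in a sports day photo before upload
blur_region("sports_day.jpg", (160, 80, 240, 160), "sports_day_blurred.jpg")
```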
Some schools have already acted. According to the Guardian, Loughborough Schools Foundation, a group of three private schools sharing a website, removed recognizable pupil images entirely last year.
The UK Information Commissioner’s Office (ICO) says that it “would still generally expect you to offer an opt-out to parents” when publishing an identifiable photo of a child, but notes this isn’t legally the same as consent, which sets a higher bar.
Things get murkier in the US, where states often have their own student privacy statutes. Broadly, though, under the Family Educational Rights and Privacy Act (FERPA), schools typically include identifiable photos of students under the category of directory information. This category also covers name, address, telephone listing, date and place of birth, participation in officially recognized activities and sports, and dates of attendance.
Under FERPA, schools can publish this type of information unless the child’s guardian specifically opts out. Schools must give notice, typically annually, of what they designate as directory information and offer a window to opt out, but that notice process may not apply indefinitely after a student leaves the school.
That means student photos and information can remain online long after families assume they have disappeared.
What happens next
Back in the UK, Childline’s Report Remove service allows children to flag explicit images or videos of themselves that have been posted online. The service took 394 blackmail reports from under-18s last year, up by one-third compared to 2024.
Meanwhile, the UK government is amending the Crime and Policing Bill to force platforms to take flagged intimate images down within 48 hours or face fines of up to 10% of global revenue.
We anticipate a race between regulators and AI-enabled cybercriminals. Right now, attackers still have to find the photos manually. The concern is that this step could soon be automated, allowing criminals to scrape names and photos from school websites and social media platforms at scale.
For parents, the simplest protection may be limiting how many identifiable pictures of your children are available online. That includes being vigilant not just with your child’s school, but their sports clubs, extracurricular activities, and social media accounts.