Sydney High School Student Under Investigation for Alleged Creation of Deepfake Pornography
A Year 12 male student from a public high school in Sydney's southwest is under investigation for allegedly using artificial intelligence to produce and disseminate explicit deepfake images of female classmates. The student purportedly took innocent photos from social media and school events and used them to generate pornographic content, which was then circulated via fabricated social media profiles.
The Allegations
Reports suggest that the student used advanced AI software to generate hyper-realistic explicit content by altering photos of classmates. These images were subsequently circulated online through fabricated accounts, raising significant privacy and safety concerns.
Investigation Underway
NSW Police initiated their inquiry on January 6, 2025, with officers from Campbelltown City Police Area Command leading the case. The investigation involves collaboration with the eSafety Commissioner and the New South Wales Department of Education. According to a police statement, authorities are working diligently to address the issue and ensure justice for the victims.
School and Department Response
The New South Wales Department of Education has condemned the alleged actions, emphasizing a zero-tolerance policy toward such behaviour.
“Our highest priority is to ensure our students feel safe”, a spokesperson stated.
Support services have been made available to the affected students, and a decision on the accused student's future at the school will be made once the investigation concludes.
Education Minister Prue Car labelled the incident "abhorrent" and commended the school's leadership for their prompt response.
“There will be disciplinary action for the student … Our priority is making sure that all the affected students are OK and that they are OK to return [to school] on day one, term one”, she said.
Broader Concerns About Deepfake Technology
This incident highlights growing concerns about the misuse of deepfake technology, which enables the creation of hyper-realistic, fabricated content. Popular apps like CapCut, Roop, and AISaver offer user-friendly tools for face swapping in videos, often with minimal technical knowledge required. While these tools serve creative and entertainment purposes, their misuse has raised ethical and legal questions worldwide.
Experts warn that the increasing accessibility of such technology poses significant challenges for both legal systems and societal norms. eSafety Commissioner Julie Inman Grant noted that deepfakes represent a severe invasion of privacy and can be devastating to individuals whose images are manipulated without consent.
Legal Framework and Protections in Australia
Australia has enacted several laws to combat non-consensual image-based abuse, including deepfakes:
Criminal Code Amendment (Deepfake Sexual Material) Act 2024: In August 2024, the Australian Parliament passed the Criminal Code Amendment (Deepfake Sexual Material) Act 2024, introducing new criminal offences targeting the non-consensual sharing of sexually explicit material, including content generated or altered using technologies like deepfakes. Under this legislation, individuals found guilty of sharing such material without consent face penalties of up to six years' imprisonment. If the offender also created the deepfake material, the penalty increases to up to seven years' imprisonment.
Online Safety Act 2021: The Online Safety Act 2021 established a civil penalties scheme administered by the eSafety Commissioner. This framework allows individuals to report the non-consensual sharing of intimate images, empowering the eSafety Commissioner to issue removal notices and enforce civil penalties against perpetrators. The Act aims to provide swift remedies to victims of image-based abuse.
Privacy Act 1988: The Privacy Act 1988 classifies biometric information, including facial images, as 'sensitive information'. The collection and use of such data are subject to strict regulations, requiring organisations to obtain consent before collecting or using an individual's facial data. Recent rulings have reinforced the necessity for explicit consent, especially concerning the use of facial recognition technology in public spaces.
State and Territory Legislation
In addition to federal laws, various Australian states and territories have enacted legislation addressing the non-consensual distribution of intimate images. For instance, Victoria has specific offences criminalising both the production and distribution of deepfake material. Penalties vary across jurisdictions but generally include substantial fines and potential imprisonment.
These legal frameworks collectively aim to protect individuals' privacy and combat the misuse of technologies that can infringe upon personal rights, ensuring that unauthorized use of facial images is addressed with appropriate legal consequences.
Support for Affected Students
The Department of Education is providing ongoing support to the affected students to ensure their well-being and safety. Parents and guardians have been notified, and counselling services are available to assist those impacted by this distressing incident.
Source: Daily Telegraph, News.com.au, Easy With AI, CapCut, SBS, Sensor Tower, Attorney-General, Image-Based Abuse Project, Australian Privacy Foundation, ParlInfo