When It’s Not You, But Your Face: Why the UN Cybercrime Convention Fails Victims of Morphed Sexual Images
Authored by: Ishita Sharma, 4th Year (BBA LLB), Jindal Global Law School, Sonepat.
Introduction
The United Nations Convention against Cybercrime was adopted by the General Assembly in December 2024 and is set to open for signature on 25 October 2025 in Hanoi, Viet Nam. This landmark event invites urgent reflection on whether the Convention’s current text adequately protects individuals against AI-driven harms.
Advances in artificial intelligence have made it easy to create deepfake images that place a person’s face onto an explicit body without their consent. These morphed images can spread quickly online and cause serious reputational and psychological harm, even though no real intimate photo of the person ever existed. This piece argues that the UN framework must be expanded, either by amending existing provisions or by adding a new offence covering non-consensual synthetic sexual imagery, so that victims of these deepfake harms can seek justice.
Treaty Analysis – Coverage and Limits
Under its current text, ‘computer-related forgery’ is limited to data falsification ‘for legal purposes,’ and non-consensual intimate-image offences seem to cover only genuine private recordings shared without consent. As a result, people whose likenesses are digitally fabricated into sexual content have no clear protection under the Convention.
Computer-Related Forgery (Article 11)
The UN Convention against Cybercrime requires States to criminalise computer-related forgery, defined as: “the input, alteration, deletion or suppression of computer data, resulting in inauthentic data with the intent that it be considered or acted upon as if it were authentic, for legal purposes”.
The key elements appear to be (1) inauthentic data – the data must not be genuine; (2) intentional deception – the actor must intend others to treat it as authentic; and (3) legal purposes – the deception must be aimed at influencing a legal or evidentiary process.
Because the provision is tied to legal or evidentiary fraud, it does not capture deepfake images created to shame, humiliate, or defame, since such images are not intended to mislead a court or government agency for any legal purpose.
Non-Consensual Intimate Images (Article 16)
Article 16 addresses the non-consensual dissemination of ‘intimate images’: “Each State Party shall adopt … criminal offences when committed intentionally and without right, the selling, distributing, transmitting, publishing or otherwise making available of an intimate image of a person … which was private at the time of the recording, without the consent of the person depicted in the image”.
This definition seems to require (1) a genuine recording – the image must originate from a real recording that was private; (2) sexual content – the image must show sexual parts or activity; and (3) an expectation of privacy – the subject must reasonably expect the image to remain private.
Morphed sexual images do not arise from an original private recording. They are wholly synthesised, so they cannot have been “private at the time of the recording” and thus fall outside the Convention’s protection.
Sexual Privacy as a Distinct Harm
Deepfake sex videos inflict deep psychological and social harm by stripping individuals of agency over their sexual identity, inducing anxiety, shame, and a pervasive fear of exposure that can lead to depression or suicidal ideation. Victims report ‘perpetual fear’ and intrusive trauma reminiscent of physical violation, while the viral dissemination of morphed imagery, such as the fake pornographic video of journalist Rana Ayyub, amplifies reputational damage and enables silencing through coordinated political shaming. Economic consequences follow as well: employers increasingly screen online profiles, with up to 80% reportedly refusing to hire candidates based on ‘unsuitable’ search results, disproportionately stigmatising women and minorities under pervasive gendered and racial stereotypes.
Comparative National Responses
In New Zealand, a Member of Parliament highlighted this problem by showing a deepfake nude image of herself during a debate, demonstrating how easily AI can create fake intimate content and urging that laws be updated to cover these abuses. Across other democracies, responses run the gamut. The legislation of the UK, Australia, and India illustrates different models of legal response to deepfake sexual imagery: the UK and Australia for their pioneering, source-agnostic approach, and India for its reliance on pre-AI obscenity and privacy laws, which reveals how traditional statutes fail to address wholly synthetic deepfakes.
United Kingdom
The UK’s response to deepfake pornography now rests on a suite of legislative reforms rather than existing statutes alone. In January 2025, Parliament tabled an amendment to the Data (Use and Access) Bill creating a standalone offence of intentionally producing a “purported sexual image”, defined to include any artificial or AI-generated depiction of a person naked or engaging in sexual activity, without consent and with intent to cause alarm, distress, or humiliation, or for sexual gratification; conviction carries an unlimited fine. This amendment builds directly on the Online Safety Act 2023, under which sharing or threatening to share intimate images (including deepfakes) was designated a “priority offence”. Further, the Government has committed in its Crime and Policing Bill to introduce new offences criminalising the taking of intimate images without consent and the installation of recording equipment for that purpose. Together, these measures constitute a layered legal framework explicitly targeting both the creation and the dissemination of non-consensual synthetic sexual imagery. The UK’s model offers a victim-centred, forward-looking approach that eliminates the need to prove falsity or the existence of original content, addressing the harm caused by synthetic sexual imagery directly. This stands in contrast to India’s reliance on outdated privacy and obscenity statutes that hinge on real recordings, leaving victims of AI-generated abuse without any real legal recourse.
Australia
Australia’s federal response is anchored in the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, which amends the Criminal Code Act 1995 to create a technology-neutral, consent-based offence of transmitting sexual material depicting an adult via any carriage service without consent or with reckless disregard for consent, applying ‘regardless of whether the material is unaltered or has been created or altered in any way using technology’. These amendments repeal the earlier ‘private sexual material’ definitions and streamline the test for offensiveness, ensuring that liability attaches regardless of when or how the image was produced. The offence relies on the Commonwealth’s carriage-service power (s 51(v)) and is complemented by the eSafety Commissioner’s removal and civil-penalty powers under the Online Safety Act 2021; together these form a strong framework against non-consensual synthetic sexual imagery. Australia’s technology-neutral, consent-based framework thus future-proofs the law by focusing on the harm to individual autonomy rather than the mode of image creation, allowing the law to evolve alongside generative technologies while ensuring that novel forms of synthetic abuse, regardless of format or platform, remain within its scope.
India
Under India’s Information Technology Act 2000, Section 67 makes it an offence to publish or transmit any electronic material that ‘is lascivious or appeals to the prurient interest,’ and Section 67A extends this to material containing a ‘sexually explicit act or conduct.’ Section 66E further criminalises the violation of privacy by capturing, publishing, or transmitting a private image without consent. Each provision carries fines and potential imprisonment, but each presupposes an underlying recording. The newer Bharatiya Nyaya Sanhita 2023 introduces broad forgery and defamation offences, yet none of these provisions employs a consent-based, ‘source-agnostic’ approach covering AI-generated or wholly synthetic deepfake pornography. (‘Source-agnostic’ in this context means that the offence does not depend on the existence of any underlying real photograph or recording.) As a result, non-consensual deepfake sexual imagery currently falls outside India’s cybercrime offences, leaving victims without a clear statutory remedy. India’s reliance on traditional privacy and obscenity laws reflects a deeper institutional and cultural discomfort with legislating around sexual autonomy and consent. This hesitation to confront the realities of digital sexual violence reflects enduring taboos around sex, agency, and bodily integrity, resulting in frameworks that sidestep the unique harms of synthetic sexual content.
Conclusion
While some jurisdictions such as the UK and Australia have enacted source-agnostic, consent-based offences expressly covering non-consensual deepfake pornography, and states like California have extended ‘revenge porn’ laws to AI-generated imagery, many jurisdictions lack provisions for wholly synthetic sexual content. The UN Cybercrime Convention’s narrow forgery and intimate-image articles similarly fail to capture harms arising from non-consensual synthetic sexual imagery. To achieve truly global protection, the Convention must be amended.
One option is to broaden Article 11 by replacing its ‘for legal purposes’ limitation with language criminalising any intentional creation of inauthentic data meant to mislead, defraud, degrade, humiliate, defame, or violate an individual’s dignity or autonomy, thereby capturing deepfake imagery designed to harm. Article 16 could simultaneously be amended to include ‘any synthetic image generated wholly or in part by digital, algorithmic or artificial-intelligence means’ within its definition of ‘intimate image’ and to add a new paragraph confirming that any depiction of a person’s likeness in a sexual context, real or synthetic, created without consent is punishable, regardless of the existence of an original private recording. Alternatively, a standalone provision could be added to criminalise the conduct separately.