Defamation by Deepfake: How the UK's Laws are Struggling to Protect Reputation in the Age of AI
Introduction
Deepfakes are AI-generated synthetic media, most commonly videos, that mimic the appearance or voice of real individuals. As AI tools become more widely used, deepfakes are no longer a technological novelty. They have been deployed to harm reputations, from fabricated videos of politicians calling for surrender to fake explicit images of celebrities [1] [2].
Their persuasive realism and ability to spread online rapidly raise important questions for English defamation law. Although the Defamation Act 2013 protects against false statements that cause ‘serious harm’, deepfakes sit uneasily within this framework.
This article will analyse how the UK’s current laws regulating defamatory content are insufficient to combat media generated by deepfake technology, identify the gaps in the law through a review of the current legislation, and argue that existing precedents place potential victims at a disadvantage by setting high thresholds for claiming compensation. It then considers suggestions for reform.
Why Deepfakes Pose a Unique Defamation Problem
Deepfakes present harms qualitatively different from ordinary defamatory publication. Deepfake technology can be used to create non-consensual explicit content. Significantly, the UK has laws regulating the sharing of photographs or films of another person in ‘an intimate state’ without their consent [3]. However, these provisions are grounded in privacy concerns: only real intimate images fall under the statutory umbrella. As Ellis argued, it is hard to bring a claim against a party for exposing the intimate details of an individual’s life when, technically, it is not that individual’s life being exposed [4]. For this reason, victims are unlikely to be able to rely on privacy laws.
As the use of AI becomes more prominent, it has become progressively harder to distinguish authentic video and audio from deepfake content. Their realism gives deepfakes an evidential power that a mere text-based claim lacks, since viewers are more likely to believe a video than a written allegation. As Chesney and Citron wrote, deepfake technology is likely to diffuse rapidly and be easily accessible to people with malicious intent. They argued that the growth of social media platforms facilitates global distribution of content and democratises access to communication to an unprecedented degree [5]. As deepfake technology improves, differentiating real from synthetic media will only become more challenging, placing ordinary individuals in an increasingly vulnerable position.
Defamation Laws and their Gaps
Amidst these difficulties, defamation law is the most feasible route for claiming compensation for non-consensual deepfakes. The UK’s defamation laws are governed by the Defamation Act 2013. Section 1 of the Act provides that a statement is not defamatory unless its publication has caused, or is likely to cause, ‘serious harm’ to the claimant’s reputation [6]. Tugendhat J held in Thornton v Telegraph Media Group that where a claimant had suffered no or only negligible damage to their reputation, imposing liability for defamation would interfere with the defendant’s right to freedom of expression under Art 10 ECHR [7].
In Lachaux v Independent Print Ltd, the Supreme Court further clarified that serious harm must be established by reference to the impact which the statement is shown actually to have had, determined by both the inherent tendency of the words and their actual effect on those to whom they were communicated [8]. While the courts must balance the right to freedom of expression against the right to respect for private and family life, proving ‘serious harm’ is itself difficult: the court will examine the actual substance of the content and its impact on the audience, making it hard for potential victims to claim compensation.
Given that deepfake videos do not make an assertion in linguistic terms, bringing a claim is difficult, as the Act is traditionally focused on statements. While English law has interpreted ‘statements’ broadly and has previously accepted that pictures can be defamatory, deepfakes stretch this principle further. Even where the defamatory meaning is clear, claimants face structural hurdles in identifying a publisher. Deepfakes are often created anonymously using open-source tools, leaving no realistic defendant. This raises an attribution problem: it is often practically impossible to trace the person responsible.
Platforms, meanwhile, fall outside the scope of primary liability unless they assume editorial control. This threshold was reaffirmed in Tamiz v Google, where it was held that Google, as the host of a blogging platform, could be regarded as a publisher only once it had been notified of the offending material and could have removed it [9]. To bring a claim against a platform, an individual would have to prove that the platform had control over the content and that the damage to reputation was more than ‘trivial’, a threshold that is difficult to meet.
Suggestions for Reform
The law must recognise that deepfakes are not merely a new form of publication but a qualitatively distinct phenomenon. One solution is the creation of a tort of ‘synthetic media harm’, focused on the misuse of identity rather than on proving a defamatory statement. Such a tort would avoid the limitations of defamation doctrine by centring on the unauthorised replication or manipulation of a person’s likeness. Introducing a new tort would provide legal certainty, clarify fault standards, potential defences and remedies, and give victims a more predictable legal route.
Alternatively, mandating watermarking for AI-generated content would assist courts in assessing authenticity. The EU has been actively exploring AI watermarking, which embeds specific signals or identifiers in generated content that can then be detected by specialised algorithms [10]. Incorporating similar obligations into UK law would reduce ambiguity about authenticity, assist evidential assessments, and deter malicious creation by increasing the risk of traceability. Furthermore, a system for tracing the original creators of such content would ease the attribution problem, making users more responsible and more aware of the potential consequences of creating synthetic content.
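For readers unfamiliar with the mechanism, the following is a minimal Python sketch of the embedding-and-detection idea, assuming a toy least-significant-bit watermark; the ‘AI-GENERATED’ tag and function names are purely hypothetical, and real schemes of the kind surveyed in the EU report use far more robust statistical techniques designed to survive compression and editing.

import numpy as np

WATERMARK = "AI-GENERATED"  # hypothetical provenance tag, not a real standard

def embed_watermark(image, tag=WATERMARK):
    # Convert the tag to a bit string and write it into the least
    # significant bit of the first pixels of the image.
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    if bits.size > flat.size:
        raise ValueError("image too small to hold the watermark")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def detect_watermark(image, tag=WATERMARK):
    # Read back the least significant bits and compare them with the tag.
    n_bits = len(tag.encode()) * 8
    bits = image.flatten()[:n_bits] & 1
    return np.packbits(bits).tobytes() == tag.encode()

# Example: watermark a synthetic frame, then test both versions.
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed_watermark(frame)
print(detect_watermark(marked))  # True
print(detect_watermark(frame))   # almost certainly False

Even this toy version illustrates why traceability obligations matter: detection is cheap and deterministic once the identifier scheme is known, which is precisely the evidential shortcut such mandates would give courts.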
Conclusion
Deepfake technology exposes weaknesses in the Defamation Act, particularly concerning authenticity and responsibility. Privacy doctrines cannot assist because no actual private information is disclosed, while defamation law struggles with synthetic statements and the ‘serious harm’ threshold. As deepfake technology becomes increasingly accessible, the law must develop mechanisms capable of addressing the dangers of synthetic media. Without reform, individuals will remain vulnerable in an online environment where fabricated videos can destroy reputations within minutes.
References
[1] Bobby Allyn, ‘Deepfake video of Zelenskyy could be “tip of the iceberg” in info war, experts warn’ (NPR, 16 March 2022) <https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia> accessed 16 Nov 2025
[2] Emine Saner, ‘Inside the Taylor Swift deepfake scandal: “It’s men telling a powerful woman to get back in her box”’ (The Guardian, 31 Jan 2024) <https://www.theguardian.com/technology/2024/jan/31/inside-the-taylor-swift-deepfake-scandal-its-men-telling-a-powerful-woman-to-get-back-in-her-box> accessed 16 Nov 2025
[3] Sexual Offences Act 2003, s 66B
[4] Emma Grey Ellis, ‘People Can Put Your Face on Porn—and the Law Can't Help You’ (Wired, 26 Jan 2018) <https://www.wired.com/story/face-swap-porn-legal-limbo/> accessed 16 Nov 2025
[5] Robert Chesney and Danielle K Citron, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) 107 California Law Review 1753 <https://scholarship.law.bu.edu/faculty_scholarship/640> accessed 16 Nov 2025
[6] Defamation Act 2013, s 1
[7] Thornton v Telegraph Media Group [2010] EWHC 1414 (QB)
[8] Lachaux v Independent Print Ltd [2019] UKSC 27
[9] Tamiz v Google [2013] EWCA Civ 68
[10] European Parliament, Generative AI and watermarking (PE 757.583, December 2023)
Image Credits: dlxmedia.hu on Unsplash, <https://unsplash.com/photos/a-person-holding-a-device-1hY4ktaF1nE>

