Abstract
In 2024, the impact of artificial intelligence on personal identity became a direct reality, as highlighted in the Annual Report published on July 15, 2025, by the Italian Data Protection Authority. It was the year in which the GDPR acted as an “ethical-legal brake”, anticipating the entry into force of the European AI Act and prioritizing the fight against the unlawful use of biometric data by generative AI systems.
The image in the age of digital manipulation
At the heart of the debate are deepfakes – artificially generated visual and audio content capable of altering the perception of reality (for more on the topic, see: From Plagiarism to the Risk of Public Manipulation: “Deepfakes” – Canella Camaiora Law Firm). Legal scholar Şeymanur Yönt, in her Discussion Paper The Deepfake Menace, states clearly: “Seeing is no longer believing!” — we can no longer trust what we see with our own eyes.
Yönt also highlights the potential benefits of these technologies:
- Restoring the voice to people who have lost it due to neurodegenerative diseases;
- Revitalizing historical figures or public personalities in educational and museum settings;
- Providing creative tools for filmmakers, artists, and musicians.
However, enthusiasm must be tempered. As Wired points out, in the case of the film “Fast & Furious 7”, the digital reconstruction of Paul Walker after his death — which occurred during filming — raised profound questions: the image becomes a contractual commodity, and the industry progressively frees itself from the constraints of the individual. The actor is “recreated” out of production necessity, and the face — not the person — is what matters (see Fast&Furious 7, Paul Walker and the New Right to the Digitalized Image, Wired Italia, April 2, 2025).
The result is a stark dualism: on one side, social, cultural, and therapeutic potential; on the other, the tangible risk of manipulation, economic exploitation, and loss of control over one’s image — even after death.
Thus, while the Italian Data Protection Authority has set a solid legal safeguard (the GDPR), visual trust has been eroded, and we must respond with digital literacy, transparency, and governance tools. But if one’s personal image becomes a battleground, can we still talk about privacy and image rights?
Deepfake technology allows the manipulation of photos, videos, or audio of a real person through AI, creating falsified but often realistic content. This raises serious legal issues relating to the protection of personal identity, making the consequences of a distorted use of artificial intelligence all too tangible.
“Seeing is no longer believing!”: The deepfake dilemma
“Seeing is no longer believing!”. This is how the Discussion Paper The Deepfake Menace: Legal Challenges in the Age of AI, published by the TRT World Research Centre, begins. The statement sounds almost like a warning: we must no longer trust what we see with our own eyes. Yet vision remains one of the five essential senses through which we know the world around us.
The statement opens the door to two possible scenarios. In the first, AI proves to be a highly effective tool for producing images and audio that, although not real, closely resemble reality and offer users significant savings in the cost and time required to create photos and videos (think of the final scenes of “Fast & Furious 7”, in which AI-generated images were used to honor the memory of the late actor Paul Walker). In the second, this falsification of reality fuels the malicious intentions of those who publish artificially manipulated material to promote the spread of fake news and the proliferation of disinformation.
What should we do, then? Clearly, we cannot solve the problem by avoiding reality altogether (an approach one might sum up as “Not seeing for believing”). On the other hand, deepfakes force us to navigate daily life with a sharper critical sense than was required not long ago. Compared to just a few years back, we are now far more exposed to the risk of encountering false information and images. We must therefore train ourselves, day by day, to identify reliable sources of information (a different mantra: “Training ourselves to see in order to believe”).
Can we at least protect our personal image?
In Italy, our Constitution protects certain fundamental values whose safeguarding may potentially conflict in the context of the “deepfake phenomenon”.
On one side, “the Republic recognizes and guarantees individuals inviolable rights” under Article 2 of the Italian Constitution, including — according to case law — the right to personal identity, understood as “the right to be oneself” or to demand a truthful representation of oneself in social life. The Italian Supreme Court, in its ruling no. 3769 of June 22, 1985, put it this way: “Each individual has an interest, generally deemed worthy of legal protection, in being represented, in social life, with their true identity, as it is known or could be known in the general or particular social reality, applying the criteria of normal diligence and subjective good faith; that is, an interest in not having their intellectual, political, social, religious, ideological, professional, etc. heritage altered, distorted, obscured, or contested as it had been expressed or appeared, based on concrete and unequivocal circumstances, destined to be expressed in the social environment”.
On the other side, “everyone has the right to freely express their thoughts by word, in writing, and by any other means of dissemination” (Article 21 of the Italian Constitution). Naturally, deepfakes are also a means of expressing one’s thoughts, but — like speech and writing — they are subject to certain limits aimed at safeguarding fundamental principles and rights guaranteed to all citizens (found not only in the Constitution but also in supranational treaties such as the Treaty on European Union and the European Convention on Human Rights, both directly binding in Italy under Article 117 of the Italian Constitution).
Anyone who appears in a deepfake without having given consent loses control over their image, which can be falsely attributed or distorted in the manipulated video. The victim may be depicted in contexts or behaviors that never occurred — for example, in embarrassing or compromising situations — damaging their dignity, honor, or privacy. Even if such content were immediately recognized as artificial, it could remain highly invasive, amounting to a kind of digital “identity theft”. For this reason, the law provides a range of safeguards in such cases.
Civil and criminal aspects of deepfakes in Italian law
In Italy, deepfakes can have legal implications both in civil and criminal law.
According to Italian civil law, the unlawful use of another person’s image in a deepfake does not require intent to profit or explicit defamation to be considered illicit. Italian law (Article 10 of the Italian Civil Code and Articles 96 and 97 of Law no. 633/1941 on copyright) protects the right to one’s own image and prohibits its publication or display without consent if it harms a person’s honor, reputation, or dignity. Distributing someone’s face without authorization and placing it in a false context is inherently damaging and may give rise to civil liability for non-material damages under Article 2043 of the Italian Civil Code. The judge will assess the amount of damages on the basis of various factors, such as the extent of dissemination, the target audience, and any offensive content involved (which may be absent without diminishing the harm caused by the misappropriation of personal identity).
According to Italian criminal law, Draft Law no. 1146/2024 proposes adding Article 612-quater to the Italian Criminal Code. Its first paragraph states that “Anyone who causes unjust harm to another by sending, delivering, selling, publishing, or otherwise disseminating images or videos of people or things, or voices or sounds, in whole or in part false, generated or manipulated through the use of artificial intelligence systems, capable of misleading as to their authenticity or origin, is punishable by imprisonment from one to five years”. The law is not yet in force (it was approved with amendments by the Chamber of Deputies on June 25 and awaits Senate approval and presidential promulgation), but it clearly shows lawmakers’ intent to actively address the challenges posed by AI. Importantly, the law does not require a specific motive on the part of the offender: the offense is complete if the described conduct, capable of misleading the public as to the authenticity or origin of the shared material, causes unjust harm to the victim (such as reputational damage or other rights violations).
The European Regulatory Framework and supranational protection
At the European level, deepfakes are also specifically regulated to protect affected individuals.
In terms of privacy, the GDPR governs the use of one’s image by third parties: a facial image constitutes biometric data, i.e. personal data “resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person” (Article 4(14) GDPR). Its use without the person’s consent is unlawful, and the injured party is entitled to compensation. In such cases, the individual must always be guaranteed the rights of access, rectification, erasure, and objection to processing. However, these rights are generally exercised ex post, after the deepfake has been disseminated online, making them less effective in repairing reputational harm.
The Artificial Intelligence Act (“AI Act”), aimed precisely at regulating AI use within the EU (including deepfakes), explicitly defines a deepfake as “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful”. The regulation does not prohibit their use but imposes transparency obligations on creators and distributors.
In particular, its Article 50(4) requires deepfake creators to clearly indicate that the content is artificially generated or manipulated, to avoid misleading the public about its non-authentic nature. There is, however, an exception for content that is part of an “evidently artistic, creative, satirical or fictional” work or programme. The risk in this provision is clear: it does not specify who should make this determination, nor the criteria for defining something as evidently artistic (for more, see: From Plagiarism to the Risk of Public Manipulation: “Deepfakes” – Canella Camaiora Law Firm).
Ultimately, the legitimacy of deepfake use depends on the context. Generally, artificial content published as satirical messages or with transparent labeling as artificially generated/manipulated is permissible. Absent these exemptions, the depicted person retains full rights to demand removal and seek compensation for damages.
The European Commission expects the AI Act to be fully applicable by August 2026, but our recommendation is to comply immediately with rules that will likely be enforced with increasing strictness.
© Canella Camaiora S.t.A. S.r.l. - All rights reserved.
Publication date: 14 August 2025
Textual reproduction of the article is permitted, even for commercial purposes, within the limit of 15% of its entirety, provided that the source is clearly indicated. In the case of online reproduction, a link to the original article must be included. Unauthorised reproduction or paraphrasing without indication of source will be prosecuted.

Joel Persico Brito
A graduate of the Università Cattolica del Sacro Cuore in Milan, he is a trainee lawyer with a passion for litigation and arbitration law.