Regulating deepfake: legal pathways for Vietnam
As deepfake technology rapidly evolves, Vietnam faces mounting legal and social risks in the digital space. Without a dedicated framework, current laws struggle to address its complexity. Drawing on international models, this article explores pathways for Vietnam to regulate deepfake through risk-based classification, platform accountability, victim protection, and the building of domestic technological capacity.

Chu Manh Hung[1] and Nguyen Son Ha[2]

Interface of an AI deepfake video creation software

Introduction

Deepfake is a technology that applies artificial intelligence (AI) and deep learning to generate falsified images, audio and video in ways increasingly difficult to distinguish from authentic content. Although originally developed for legitimate purposes, the uncontrolled spread and misuse of deepfake have posed numerous challenges worldwide, and Vietnam is no exception - particularly in the context of rapid digital transformation and the lack of a comprehensive legal framework.

In theory, deepfake technology gives rise to new legal issues such as information verification, protection of personal rights, and restrictions on freedom of expression in the digital environment - issues that have not yet been extensively discussed in Vietnam’s legal scholarship. From a legal standpoint, Vietnam has yet to enact a specialized law governing deepfake. Existing provisions merely address its consequences indirectly, leaving significant gaps in state management. Meanwhile, deepfake is increasingly exploited for financial fraud, dissemination of false information, distortion of facts, infringement of personal privacy, and even threats to national security, while detection and enforcement remain uncertain.

This article employs a legal research approach to analyze issues related to cyber security, handling of fake news, responsibilities of digital platforms as well as provisions of criminal and administrative laws that can be applied to acts of creating and spreading deepfake content. It also assesses the implementation of these regulations, pointing out legal loopholes as well as challenges in controlling deepfake.

Beyond the legal aspects, this article also considers sociological and interdisciplinary aspects such as public awareness about deepfake risks, the ability to identify fake content, and the role of the media in shaping public awareness and opinion. The impacts of deepfake on privacy, personal reputation, information security and social order are also examined, forming a basis for proposing regulatory solutions that ensure a balance between protecting personal rights and promoting innovation in Vietnam.

Vietnam’s current legislation on deepfake control

So far, Vietnam has not enacted a specialized law regulating the creation, use or dissemination of deepfake content. As a result, the handling of deepfake-related acts is largely based on extending and interpreting existing provisions of the legal system.

The 2018 Cyber Security Law does not directly mention deepfake, yet it remains one of the key statutes for controlling such content in Vietnam’s cyberspace. In particular, Article 26 sets out the responsibilities of enterprises providing network services, including:

(i) Removing unlawful content when requested by competent authorities;

(ii) Providing users’ information, personal data and service-use records at the request of competent authorities; and,

(iii) Storing data in Vietnam and establishing a representative office or branch in Vietnam when required, for foreign enterprises.

Although deepfake is not expressly referenced, Article 26 provides a legal basis for requiring network service providers to cooperate in removing illegal, fabricated, misleading, obscene, pornographic or slanderous content generated by deepfake technology, as well as in tracing those who disseminate such content online.

However, as deepfake is a new form of digital simulation, there remains no clear legal definition or classification of its levels of danger. Consequently, the handling of deepfake cases continues to rely on the extended, analogical application of existing provisions, which fails to keep pace with the complexity of AI. Moreover, procedures for handling violations, takedown timeframes, and the extent of cooperation from social media platforms remain riddled with legal and technical gaps.

From the perspective of criminal law, although Vietnam has not yet introduced specific provisions regulating the use of deepfake technology, several articles of the 2015 Penal Code (as revised in 2017) may still be applied to handle related acts, depending on the nature and severity of the violation, such as:

- Offense of humiliating other persons (Article 155): applicable when deepfake content seriously insults an individual’s honor or dignity, such as inserting the victim’s face into pornographic or obscene videos.

- Offense of slander (Article 156): applicable when deepfake is used to fabricate false acts or statements in order to defame or damage another’s reputation, e.g., by depicting leaders or celebrities in sensitive or misleading content.

- Offense of illegally uploading or using information on computer or telecommunications networks (Article 288): applicable when deepfake creators or disseminators use networks to spread harmful information, especially in cases causing serious harm to national security or social order.

- Offense of disseminating debauched cultural products (Article 326): applicable when deepfake contains pornographic or obscene elements, especially if involving minors or distributed on a large scale.

However, limitations remain in the implementation of the above provisions to address deepfake-related acts in reality. Under Vietnam’s criminal law, offenders are subject to prosecution only when subjective elements, such as intent to insult or slander, can be established. Yet much deepfake content is automatically generated by AI, and the person disseminating it may not have directly participated in its creation. In many cases, deepfake content uses images from the internet or fabricates fictional characters, making it unclear who the actual victim is and causing difficulties in identifying the infringed rights. Moreover, deepfake can be produced and spread by anonymous accounts through social media and cross-border platforms, making it difficult to trace offenders and prove actual damage, thereby reducing the effectiveness of criminal law enforcement.

At present, acts of disseminating deepfake content in Vietnam are mainly handled by administrative measures under Decree 15/2020/ND-CP on sanctions for administrative violations in the fields of post, telecommunications, radio frequency, information technology, and e-transactions, as amended by Decree 14/2022/ND-CP. Specifically, Article 101 of Decree 15 provides penalties related to the use of social media services, under which providing or sharing fabricated or false information is subject to an administrative fine of up to VND 30 million, along with a requirement to remove the false, misleading or otherwise unlawful content at the request of competent authorities.

However, this sanctioning mechanism remains a purely administrative measure and does not reflect the severity and complexity of deepfake technology in today’s context. In many cases, deepfake not only causes severe harm to personal honor, dignity and property but also poses significant risks to information security and social order.

Solutions for Vietnam in developing a legal framework to control deepfake

Studying and selectively absorbing models from other countries constitute a necessary step for Vietnam to promptly address the legal and social risks posed by deepfake technology, while ensuring a balance between technological innovation and protection of human rights in the digital space. From international experience, several legal orientations and solutions below can be drawn for Vietnam.

Firstly, Vietnam now has a unique opportunity to establish a dedicated legal framework on deepfake, as it is not yet constrained by outdated regulations that many other countries struggle with. Instead of making piecemeal amendments to existing laws, Vietnam can develop a modern legal system that addresses the essence of the issue and keeps pace with technological realities.

This framework should be built on a risk-based classification of deepfake content, according to purpose and potential impact, similar to the tiered model in the European Union’s AI Act. Content likely to cause serious harm, such as non-consensual pornography, political distortion, impersonation of leaders, or financial fraud, should be categorized as high-risk and subject to stricter control mechanisms.

At the same time, the law should impose mandatory transparency obligations, requiring AI-generated products to be labelled to protect users from subtle manipulation. Mechanisms for verification and traceability must also be improved, enabling competent authorities to promptly order the removal of unlawful content, identify disseminators, and handle violators in a timely manner.

In addition, introducing a clear legal definition of deepfake is an essential first step for Vietnam to effectively regulate this technology.

Secondly, it is necessary to institutionalize platform responsibility and strengthen victim protection.

Drawing from international practice, particularly that of the United States, Vietnam can adopt several practical lessons in controlling deepfake content. A notable example is the takedown mechanism, which requires the removal of unlawful content within 48 hours after the receipt of a valid request from competent authorities. This mechanism not only reduces the immediate harmful effects of fabricated content but also underscores the cooperative responsibility of social media platforms and technology companies.
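At its core, the 48-hour takedown rule described above is a deadline computation that a platform’s compliance tooling would need to track. A minimal Python sketch (all names are hypothetical, not any statute’s or platform’s actual terminology) might look like:

```python
from datetime import datetime, timedelta, timezone

# Assumed removal window, per the 48-hour takedown mechanism described above
TAKEDOWN_WINDOW = timedelta(hours=48)

def takedown_deadline(received_at: datetime) -> datetime:
    """Deadline by which the platform must remove the content,
    counted from receipt of a valid request from competent authorities."""
    return received_at + TAKEDOWN_WINDOW

def is_overdue(received_at: datetime, now: datetime) -> bool:
    """True if the platform has missed the removal deadline."""
    return now > takedown_deadline(received_at)
```

A compliance dashboard could, for instance, flag every pending request for which `is_overdue` returns true.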

Moreover, the introduction of a “social obligation” mechanism for digital platforms (such as Facebook, TikTok, YouTube, etc.), covering content moderation, cooperation in investigation, data protection and victim support, should be formalized in the legal framework. The sense of “co-responsibility” must be emphasized to ensure that platforms cannot shirk their social duties - particularly as their profits derive directly from user data and behaviors.

Thirdly, Vietnam may consider adopting the “mandatory transparency” model. Instead of focusing solely on sanctions after violations occur, Vietnam could follow the European Union’s current approach of requiring mandatory transparency for AI-generated content, particularly deepfake. Regulations could stipulate that products using AI technology to alter images, voices or content involving personal appearance, identity or sound must be labelled and accompanied by a clear notice. This would enable people to distinguish between genuine and fabricated content at an early stage and help prevent the harmful spread of manipulated information.

In practice, such labeling does not require overly complex technology; many platforms such as TikTok, Meta and Google have implemented it successfully.
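As a rough illustration of how lightweight such labeling can be on the platform side, the following Python sketch (field and label names are hypothetical, not any platform’s actual API) attaches a machine-readable label to AI-altered media and renders the user-facing notice from it:

```python
from dataclasses import dataclass, field

AI_LABEL = "ai-generated"  # hypothetical machine-readable tag

@dataclass
class MediaItem:
    title: str
    ai_generated: bool          # declared by the uploader or detected upstream
    labels: list = field(default_factory=list)

def apply_disclosure(item: MediaItem) -> MediaItem:
    """Attach the label so any client can render a clear notice."""
    if item.ai_generated and AI_LABEL not in item.labels:
        item.labels.append(AI_LABEL)
    return item

def render_notice(item: MediaItem) -> str:
    """Display title, appending a notice when the item carries the AI label."""
    if AI_LABEL in item.labels:
        return f"{item.title} [Notice: content altered or generated by AI]"
    return item.title
```

The point of the sketch is that the legal obligation maps onto a single metadata field plus a display rule, not onto any complex detection technology.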

Fourthly, legislative efforts should be linked with the development of domestic technologies for deepfake control.

As Vietnam has not yet established sufficient technical infrastructure to detect and address deepfake content, developing technical standards and encouraging the participation of domestic technology companies are essential solutions. Existing regulations, such as the Cyber Security Law and Decree 14/2022/ND-CP, emphasize the responsibility of enterprises in detecting violations, storing related information, and cooperating in handling them. However, reality shows that Vietnam still lacks dedicated technological tools to analyze AI-generated content.

Vietnam should therefore: (i) establish national technical standards for digital content authentication, thereby providing a legal basis for enterprises to comply with regulations in detecting deepfake; (ii) create legal and financial frameworks to support Vietnamese enterprises in developing AI-based solutions to detect fake content, trace origin or integrate blockchain into information verification; and (iii) combine regulations with policies to boost startups, research institutes and technology companies in cooperating with competent authorities to build an “anti-deepfake technology ecosystem.”

This represents a “law accompanying technology” approach, which not only aligns with the existing legal system but also paves the way for effective risk management in the digital era.

Fifthly, special attention should be paid to victim protection and community education. Practical experience from the Take It Down Act in the United States shows that controlling deepfake content should focus on not only preventing and handling violations, but also protecting victims and educating the community. Vietnam can learn from this by supplementing regulations and implementing specific measures such as:

(i) Establishing victim support obligations: requiring social media platforms and competent authorities to publicize takedown procedures, provide legal counseling, and offer psychological support for victims, particularly women, minors and other groups most vulnerable to pornographic deepfake content; and,

(ii) Building community-based digital communication campaigns: organizing educational activities on AI awareness, providing guidance on how to distinguish between authentic and fabricated information and how to respond when exposed to deepfake, and promoting responsible behavior in cyberspace.

This approach helps prevent violations at their root while restoring justice and dignity for victims, an aspect that purely punitive measures often fail to address.

Sixthly, the law should require data storage and user identification. One notable lesson from China is the establishment of obligations on user identification and data storage when using AI-based content creation tools, particularly deepfake. Under China’s current regulations, individuals and organizations employing content synthesis technologies must verify user identities, retain content creation logs, and report violations upon request.

The aim of this policy is to minimize the anonymous dissemination of deepfake content, which has long been a major obstacle to investigation and enforcement. When users are clearly identified, authorities can more effectively hold them legally accountable in case content is misused for defamation, insult, pornography, or disruption of public order.

Vietnam could adopt a similar model by requiring social media platforms, AI content creation applications, and companies providing content storage services to implement user identity verification mechanisms and retain access records. This would not only serve as an effective management tool but also provide a solid legal foundation for tracing, deterring, and preventing violations at an early stage.
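A minimal sketch, assuming a platform-side service and using only the Python standard library (all names hypothetical), of how retained content-creation logs tied to verified user IDs could support the tracing described above:

```python
import hashlib
import time

class CreationLog:
    """Append-only log tying each AI-generated item to a verified user ID."""

    def __init__(self):
        self.entries = []

    def record(self, user_id: str, content: bytes) -> dict:
        """Store a fingerprint of the generated content at creation time."""
        entry = {
            "user_id": user_id,                                  # from identity verification
            "sha256": hashlib.sha256(content).hexdigest(),       # content fingerprint
            "ts": time.time(),                                   # creation timestamp
        }
        self.entries.append(entry)
        return entry

    def trace(self, content: bytes) -> list:
        """Return the verified IDs of users who created matching content,
        e.g., in response to a lawful request from competent authorities."""
        digest = hashlib.sha256(content).hexdigest()
        return [e["user_id"] for e in self.entries if e["sha256"] == digest]
```

Note that an exact-hash lookup only catches unmodified copies; in practice, perceptual hashing or similar techniques would be needed for re-encoded content, which is precisely why the article calls for dedicated forensic tooling.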

Seventhly, the legal definition of deepfake content and the criteria for assessing “misleading falsification” should be introduced without delay. Until now, Vietnam’s law has yet to provide clear rules for determining the falsity of content generated by deepfake technology. In practice, many deepfake products, though entirely fabricated, are so sophisticated that they cause serious misunderstandings but remain difficult to address under current law. Therefore, there is a need to incorporate the concept of “misleading falsified content generated by technology” into the Cyber Security Law or the Press Law, and to promulgate documents guiding the assessment of generative content (e.g., deepfake, speech synthesis, and synthetic imagery).

Such guidance should clearly stipulate the bases for assessing harmful impacts and material distortions, and should authorize the use of technological verification tools - such as AI forensics, watermarking, and similar methods - as admissible evidence in administrative or criminal proceedings.

Eighthly, binding legal mechanisms should be established for cross-border technology platforms, alongside measures to effectively address anonymous activities. A defining feature of deepfake content is its rapid dissemination through social media and global digital platforms. Most perpetrators rely on anonymous accounts, virtual private networks (VPNs) or overseas infrastructure, posing significant challenges to investigation. Therefore, regulations should require platforms operating in Vietnam to implement user identity verification mechanisms, particularly for accounts posting AI-generated videos or audio.

At the same time, a data-sharing system should be created among the Ministry of Science and Technology, the Ministry of Public Security, telecommunications service providers and digital platforms to enable the prompt tracing of deepfake-related violations. In addition, Vietnam should pursue negotiations to conclude memoranda of understanding with major platforms such as TikTok, Facebook and YouTube to ensure timely removal of unlawful content and sharing of relevant data in response to legal requests.

Ninthly, investment should target developing technical tools and specialized human resources for detecting, forensically examining and handling deepfake content.

As deepfake technology becomes increasingly sophisticated and difficult to detect with the naked eye, advanced technical tools are essential. Yet Vietnam still lacks dedicated centers and a sufficiently capable monitoring workforce. Accordingly, it is necessary to establish a Center for AI Content Detection and Forensics, and to invest in AI video analysis software and systems for detecting watermark, metadata and audio manipulation.

In addition, issues related to technology security and generative AI content detection should be incorporated into training programs on journalism, information technology, cyber security and law, so as to build a competent workforce that meets practical demands.

Lastly, a risk classification system and corresponding legal measures based on usage context should be established. Not all deepfake content bears the same level of danger. Therefore, it is necessary to establish a system for classifying the risk levels of AI content by usage context, with categories including:

(i) High risk: deepfake used in politics, elections or national security, or involving children;

(ii) Medium risk: impersonation of celebrities or state officials, and other sensitive content; and,

(iii) Low risk: entertainment and personal satire.
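The three tiers above amount to a simple classification rule over usage contexts. A hedged Python sketch (the category names are illustrative, not statutory terms) of how such a tiering might be applied mechanically before the corresponding legal measures attach:

```python
# Illustrative context categories, not statutory terms
HIGH_RISK = {"politics", "elections", "national_security", "involves_children"}
MEDIUM_RISK = {"celebrity_impersonation", "official_impersonation", "sensitive"}

def classify(contexts: set) -> str:
    """Map a deepfake item's usage contexts to the three-tier scheme.
    The highest applicable tier wins, since one high-risk context is enough
    to trigger the stricter control mechanisms."""
    if contexts & HIGH_RISK:
        return "high"
    if contexts & MEDIUM_RISK:
        return "medium"
    return "low"   # e.g., entertainment or personal satire
```

The tier returned would then select the legal response: early review and strict penalties for “high”, mandatory labeling for “medium”, and ordinary moderation for “low”.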

At the same time, the responsibilities of content disseminators, storage platforms, and regulatory authorities should be clearly defined, specifying cases that require early censorship, mandatory labeling, emergency removal or strict penalties. These efforts should be combined with community awareness programs on identifying fake information, thereby strengthening “social immunity” against deepfake products.

Conclusion

The promulgation of specific, forward-looking and technically feasible regulations will help Vietnam not only keep pace with global deepfake control trends but also safeguard social security, privacy and public trust in the information environment in the era of AI. This is an urgent requirement within the national digital transformation strategy and for ensuring cyber security.

[1] LL.D., Hanoi Law University.

[2] LL.D., Law School, Hue University.
