Saturday, August 16, 2025

Ms. Justine Gonzales

Intern, Institute of International and Comparative Law

Introduction

Artificial intelligence (AI) is no longer a distant promise of the future — it is a central force shaping our daily lives. From facial recognition and virtual assistants to algorithmic decision-making in recruitment, healthcare, or public services, AI is increasingly influencing the way we work, communicate, and participate in society. These developments offer remarkable opportunities for efficiency and innovation, but they also raise profound legal and ethical questions.

In particular, AI challenges the traditional balance between technological progress and the protection of fundamental rights. Issues such as data privacy, online surveillance, algorithmic bias, and consumer manipulation have prompted urgent debates around the world. How can we ensure that AI remains under human control and respects dignity, autonomy, and equality?

This research focuses on how two very different legal systems — the European Union and Vietnam — are attempting to answer these questions. It compares their emerging frameworks for AI regulation, with particular attention to the right to privacy, the right to disconnect, access to information, and consumer protection. By analysing these issues through a legal lens, this study aims to shed light on the broader challenge of regulating intelligent technologies in a way that protects people and strengthens public trust.

I. The European Union 

The European Union has positioned itself as a global leader in AI regulation. Rather than taking a purely economic or security-based approach, the EU’s strategy is firmly anchored in the protection of fundamental rights and democratic values.

At the heart of this approach is the Artificial Intelligence Act (AI Act), the first comprehensive legal framework of its kind, adopted in March 2024. The AI Act introduces a risk-based classification of AI systems: those posing "unacceptable risks," such as manipulative social scoring or real-time biometric surveillance, are prohibited outright. "High-risk" systems, including AI used in hiring, education, or law enforcement, are subject to strict requirements related to transparency, data quality, human oversight, and cybersecurity. Systems with limited or minimal risk must comply with basic transparency rules, such as disclosing when a user is interacting with an AI.

In parallel, the General Data Protection Regulation (GDPR) continues to play a central role in regulating the use of personal data by AI systems. The GDPR upholds principles such as purpose limitation, data minimization, and user consent, and provides individuals with rights over their personal data, including the right to access, rectify, or delete it. Notably, Article 22 of the GDPR gives individuals the right not to be subject to automated decision-making without meaningful human intervention.

Together, these instruments aim to ensure that AI development in Europe is lawful, ethical, and trustworthy. While the EU does not yet have a legally binding right to disconnect at the Union level, this right is increasingly recognized in national legislation (e.g., France) and supported by the European Parliament and social partners.

The EU model is therefore built on a comprehensive, enforceable legal structure, supported by independent supervisory authorities and judicial review mechanisms. It combines innovation with protection, technical standards with ethical principles.

II. Vietnam

Vietnam has experienced a rapid digital transformation in recent years, positioning itself as a key actor in Southeast Asia’s digital economy. In line with its national strategy for digital development, Vietnam has moved toward creating a more structured legal framework for artificial intelligence. While less mature than the European Union’s regulatory system, the Vietnamese approach is quickly evolving.

The most significant development is the Law on Digital Technology Industry (Digital Technology Law), passed by the National Assembly on June 14, 2025, and set to enter into force on January 1, 2026. With this law, Vietnam becomes the first country in the world to enact a standalone legal instrument exclusively dedicated to the digital technology sector. Drafted by the Ministry of Science and Technology, the law supports the national digital transformation strategy by fostering domestic innovation, boosting competitiveness, and promoting global integration.

As part of the broader "Make in Vietnam" initiative — which aims to establish 150,000 digital technology enterprises by 2035 — this law reflects the government's ambition to make Vietnam a regional leader in digital governance. The legislation consists of 8 chapters and 73 articles, and notably, Section 5 of Chapter IV specifically regulates artificial intelligence.

This section lays the groundwork for ethical, secure, and sustainable AI development in Vietnam. It affirms general principles such as safety, transparency, non-discrimination, and user rights protection. Importantly, the law introduces a risk-based classification of AI systems, mirroring the approach taken in the European Union's AI Act. AI systems are categorized as high-risk, high-impact, or standard systems, with regulatory obligations that increase in proportion to their potential societal impact and level of automation.

While this alignment with European regulatory thinking is a major step forward, the Vietnamese framework remains at an early implementation stage. Key concepts — such as algorithmic explainability, data governance standards, and human oversight — are still being defined through forthcoming decrees, technical guidance, and draft regulations. Additionally, enforcement mechanisms and institutional oversight are not yet fully established, and the law's provisions, though ambitious, remain largely programmatic at this stage.

III. Legal and Ethical Tensions Raised by AI Regulation

1. Privacy and the Right to Disconnect

One of the most pressing issues raised by AI is the erosion of privacy. AI systems often rely on vast quantities of personal data to function — including behavioural patterns, biometric information, and location history. This creates new forms of algorithmic surveillance, particularly in the workplace or in public services, where individuals may be monitored without fully understanding or consenting to the process.

The right to disconnect, which protects individuals from the constant digital presence required by connected devices and platforms, is closely tied to this concern. Without adequate limits on data collection and use, the right to disconnect risks becoming illusory — a right in theory but not in practice.

In the EU, the interplay between data protection and the right to disconnect is becoming more visible in labour law debates. AI monitoring tools used to track productivity or communication habits can interfere with workers’ private lives and generate legal challenges under the GDPR and the Charter of Fundamental Rights.

The right to privacy is enshrined in Article 7 of the Charter, while Article 8 specifically guarantees the right to the protection of personal data. Furthermore, Article 16 of the Treaty on the Functioning of the European Union (TFEU) empowers the EU to adopt legislation on data protection, which has led to the adoption of key regulatory instruments such as the GDPR. Additionally, labour law instruments such as Directive 2003/88/EC on working time provide a legal framework for regulating working hours and rest periods, which are particularly relevant in the context of telework and digital disconnection.

This concern has been echoed by the European Court of Human Rights in Bărbulescu v. Romania (2017, no. 61496/08), where the Court found that an employer's monitoring of an employee’s online communications without proper notice violated the right to privacy under Article 8 of the European Convention on Human Rights. The ruling emphasized that even in the workplace, individuals retain a reasonable expectation of privacy, and any surveillance, including by AI tools, must be proportionate, transparent, and justified.

At the national level, several EU Member States have taken steps to formalize the right to disconnect. In France, for example, Law No. 2016-1088 of August 8, 2016, commonly referred to as the "El Khomri Law," grants employees of companies with more than 50 employees a legally recognized right to disconnect, aimed at preserving work-life balance and protecting workers from constant digital connectivity.

French case law has also reinforced this right. For instance, the Cour de cassation ruled on October 9, 2024 (no. 23-19.063) that an employee could not be sanctioned for failing to respond to work-related messages during paid leave, affirming that constant availability cannot be required.

In Vietnam, the issue is less visible in public debate, but equally present in practice. The widespread use of messaging apps, mobile monitoring, and digital performance tools creates similar risks, especially in urban workplaces. However, in the absence of clear legal standards or collective protections, the right to be offline remains largely undefined and unenforceable.

The right to privacy is formally enshrined in Article 21 of the 2013 Constitution, which guarantees respect for private life. At the legislative level, the Law on Cybersecurity (Law No. 24/2018/QH14) and the Law on Protection of Consumer Rights (Law No. 19/2023/QH15) have reinforced this framework by regulating the processing of personal data, requiring entities handling data to ensure the confidentiality and security of personal information.

Moreover, the right to disconnect is neither mentioned nor protected under Vietnamese law. On the contrary, Article 26 of the Law on Cybersecurity obliges digital service providers to closely cooperate with state authorities in monitoring and censoring online content, thereby expanding state surveillance and narrowing the scope of personal privacy.

2. Access to Information in a Digitally Filtered World

While AI can be used to facilitate access to information — through search engines, translation tools, or content recommendations — it can also restrict it. Algorithmic curation tends to filter and prioritize information in ways that are not always transparent or accountable.

This creates a tension between the desire to protect individuals from digital overexposure (linked to the right to disconnect), and the need to preserve free and equal access to information, especially in democratic and civic contexts.

On the one hand, in the EU, the Digital Services Act addresses some of these concerns by requiring platforms to provide users with information about how content is recommended, ranked, or moderated. Users must be informed of the presence of automated systems and offered alternatives when possible.

The right of access to information is a fundamental principle enshrined in Article 11 of the Charter of Fundamental Rights of the European Union, which guarantees freedom of expression and the right to receive and impart information in a democratic framework. Furthermore, the GDPR explicitly provides in Article 15 that any person has the right to access their personal data. This provision imposes increased transparency on data controllers, especially regarding automated processing and artificial intelligence algorithms, which must be explained in an intelligible manner.

However, this right to information does not justify continuous surveillance or the imposition of a state of permanent connectivity. The European legal framework seeks to preserve a crucial balance between access to information and the protection of privacy. Article 5(1)(c) of the GDPR sets forth the principle of data minimization, meaning that only data strictly necessary for the intended purpose should be collected and processed. This principle helps to prevent excessive data collection and indirectly supports the right to disconnect by limiting the need for constant connectivity and monitoring.

A landmark decision by the German Federal Court of Justice in May 2013 (VI ZR 269/12) illustrates how algorithmic filtering can infringe individual rights. The Court held that Google's autocomplete suggestions could violate a person's right of personality if the automated completions were defamatory or misleading, even though they were based on previous user inputs. Because Google designs and operates the algorithm, it was deemed responsible for its effects. This case highlights the broader risk that algorithmic curation may distort access to information, while also stressing the importance of transparency and accountability in AI-driven recommendation systems. More recently, in the Dun & Bradstreet Austria case of February 27, 2025, the Court of Justice of the European Union held that, in cases of automated decision-making, the data subject must be provided with an explanation of the procedure and principles actually applied, under Article 15(1)(h) GDPR.

On the other hand, Vietnam’s legal framework has recently introduced transparency obligations for algorithmic systems under consumer protection rules. Specifically, Articles 17 to 20 of the Law on Protection of Consumer Rights regulate the collection, processing, and use of consumer data, requiring businesses to ensure transparency and protect consumers’ personal information. However, these obligations remain general and declarative, and there is still no independent mechanism to verify whether they are actually met.

Moreover, state control over information dissemination adds another layer of opacity, which limits citizens’ ability to access diverse, uncensored content online. While the Law on Cybersecurity mandates digital service providers to cooperate with state authorities to monitor and, where necessary, restrict online content, the criteria and procedures lack transparency. This is particularly emphasized in Chapter 2 entitled “Protection of Cybersecurity of Information Systems Critical for National Security”, which prioritizes state security interests. This framework is further supported by Article 6 of the Law on Protection of Consumer Rights, requiring that consumer rights protection activities do not infringe upon the interests of the State and lawful rights of organizations and individuals. As a result, these combined legal provisions prioritize state control and social ethics over full transparency and free access to information, limiting the effective enforcement of consumers’ rights and algorithmic transparency.

3. Consumer Protection in AI-Driven Markets

Consumers are increasingly exposed to AI-powered systems in areas such as online shopping, targeted advertising, and automated customer service. These systems can generate benefits — convenience, speed, personalisation — but also new vulnerabilities, including manipulative design, discriminatory pricing, and information asymmetries.

In the EU, the AI Act and the revised Consumer Rights Directive 2011/83/EU aim to strengthen safeguards in this area. Providers of high-risk AI systems will be required, under Chapter III of the AI Act ("High-risk AI systems"), to conduct conformity assessments, provide clear information to users, and allow for human oversight. Article 16 of the AI Act lists key provider obligations, including compliance checks, documentation, and system registration. Additionally, the Unfair Commercial Practices Directive (Directive 2005/29/EC) may be used to tackle deceptive uses of AI in marketing by prohibiting commercial practices that mislead consumers (Article 5).

In the landmark case of Amazon EU Sarl (C-649/17, 2019), the Court of Justice of the European Union established that online platforms can be held liable as professionals towards consumers, particularly regarding transparency and protection against unfair commercial practices. This ruling supports the application of the Unfair Commercial Practices Directive to AI-powered systems such as personalized advertising and automated customer service, ensuring that consumers receive clear and truthful information, and are protected from manipulative or discriminatory practices.

Vietnam’s new Law on Protection of Consumer Rights addresses similar concerns. It introduces, for the first time, the requirement that digital service providers disclose the use of automated decision-making tools and guarantee access to accurate, comprehensible information. Article 39 specifically mandates that consumers be informed about the use and operation of such systems in a clear and accessible manner. Similarly, the Personal Data Protection Decree (Decree No. 13/2023/ND-CP), also enacted in 2023, establishes general principles of transparency, data minimization, and purpose limitation applicable to data controllers, including those using AI; Article 3 of the Decree outlines these core principles. However, enforcement remains a challenge: consumer education is limited, redress mechanisms are underdeveloped, and many AI providers operate beyond national jurisdiction.

In both systems, the central challenge lies in empowering users while ensuring transparency and accountability in increasingly complex digital ecosystems.

IV. Comparing the Two Legal Models

Both the EU and Vietnam share common objectives in promoting responsible AI development that fosters innovation while addressing ethical, security, and societal risks. Each framework is based on a risk-oriented approach to AI classification, with Vietnam’s new Digital Technology Law introducing a model similar to the one in the EU’s AI Act. However, the EU places stronger emphasis on protecting fundamental rights through binding legal instruments such as the GDPR and AI Act, whereas Vietnam’s approach to user protection lacks comparable enforceability and institutional support. Governance and implementation also differ significantly: the EU benefits from robust enforcement by independent authorities and detailed procedural mechanisms, while Vietnam’s framework remains broad and awaits concrete operational decrees and institutional development. Additionally, EU regulation applies to a wide range of actors across sectors, whereas Vietnam focuses mainly on developers and enterprises under its “Make in Vietnam” strategy. Although Vietnam clearly draws inspiration from the EU, its regulatory framework remains in an early and still vaguely defined phase, particularly regarding definitions, enforcement, and oversight. 

In brief, the EU presents a mature model of procedural rights and risk mitigation, while Vietnam is still shaping its normative landscape, reflecting deeper differences in legal traditions, governance, transparency, and citizen participation.

Conclusion: A Shared Challenge, Divergent Paths

Artificial intelligence is challenging existing legal systems worldwide. Both the European Union and Vietnam acknowledge the necessity of AI regulation, yet their approaches diverge significantly. The EU draws on a longstanding tradition of rights-based governance, establishing a binding, detailed, and extraterritorially effective legal framework. In contrast, Vietnam is crafting its regulatory response amid rapid technological change and institutional transformation, relying for now on a broad framework that, though inspired by the European model, remains largely programmatic.

Both jurisdictions face common challenges: protecting privacy, ensuring a meaningful right to disconnect, safeguarding access to information, and defending consumer interests against automation. However, the tools and principles employed to address these issues differ, with effectiveness hinging on institutional strength, legal clarity, and civil society engagement.

Looking forward, AI regulation will demand not only national efforts but also global coordination. As technologies transcend borders and legal systems increasingly overlap, fostering dialogue, transparency, and mutual learning between legal cultures will be essential.

Ultimately, AI regulation goes beyond machines; it reflects the kind of society we aspire to build — one where innovation respects human dignity and technology empowers rather than dominates.



ABOUT IICL-UEL BLOG

This is an academic blog of the Institute of International and Comparative Law, University of Economics and Law, Vietnam National University, Ho Chi Minh City. In our blog, we analyze contemporary legal issues such as international trade, digital technology, environmental protection, the green economy, and others.
