What is the EU AI Act?

Everything about the EU AI Act

Introduction

At a time when AI is transforming sectors and impacting daily lives, the European Union’s Artificial Intelligence Act (EU AI Act) stands as a pivotal regulation aimed at governing AI’s development and application. This groundbreaking regulation adopts a risk-based approach, mandating that those who design and deploy AI systems in the EU adhere to certain requirements. Its objective is to evaluate the potential threats posed by AI and establish rules for its development and operation. For foundation models like the one behind ChatGPT in particular, there is an emphasis on ensuring that training data does not infringe copyright law.

The AI Act is a cornerstone of the EU’s digital strategy, aiming to harness AI’s potential across healthcare, transport, industry, and energy. Its influence is not confined to the EU: some EU policymakers view the act as setting a worldwide benchmark, describing the initiative as a strategic move in the global race to steer AI’s direction. The EU AI Act is therefore poised to shape AI standards, research, and deployment globally.

This blog will delve deeper into the nuances of the EU AI Act, discussing its breadth, distinctive elements, enforcement challenges, and potential ramifications for entities like ChatGPT.

Why regulate artificial intelligence? 

Artificial intelligence regulation is critical for various reasons:

  1. Ethical Considerations: While AI systems are powerful, they lack innate moral reasoning and critical thinking. This deficiency can lead to judgments with serious ethical consequences; for example, an AI may unknowingly offer radical suggestions on sensitive issues. Robust rules can act as a buffer, preventing AI from developing harmful biases or making unethical decisions.
  2. Clarity and Responsibility: Transparency in AI algorithms may be mandated by regulatory frameworks, demystifying decision-making processes. This openness assists consumers in comprehending and trusting AI results and provides a clear line of responsibility. When we can see how an AI system arrived at its conclusion, we can hold it—and, by extension, its developers—accountable for errors.
  3. Protection and Reliability: Unchecked AI applications, particularly in crucial industries such as healthcare, military, or pharmaceuticals, might pose serious threats. Consider an AI misdiagnosing a patient or a defensive system miscalculating a danger. Regulations guarantee that AI systems are created in accordance with strict safety and security requirements, reducing the possibility of catastrophic failures.
  4. Protection of Personal Information: AI often necessitates massive volumes of data for training and operation, potentially resulting in privacy violations. Regulations may protect people’s data while also ensuring appropriate data management methods.
  5. Preventing Monopolies: The AI sector is currently dominated by a few prominent companies with considerable data and resources. Without controls, these actors could entrench dominant positions, restricting competition and stifling fresh ideas. Rules can provide a level playing field, promoting competition and averting excessive centralization of control.
  6. Security: AI can be weaponized. From autonomous drones in warfare to AI-powered cyber-attacks, the potential for harm is vast. Regulations can impose limits on the weaponization of artificial intelligence, ensuring it is used for the benefit of humanity rather than to its detriment.

AI legislation seeks to balance encouraging innovation with protecting societal values, ensuring that AI technologies benefit humanity.

What is the EU AI Act?

The EU AI Act aims to enhance Europe’s standing in AI excellence, ensure AI aligns with European values, and tap into AI’s industrial potential. Central to the AI Act is a risk-based classification system categorizing AI technologies based on potential health, safety, and rights risks. The framework includes four risk levels: unacceptable, high, limited, and minimal.

AI systems with low risk (limited and minimal), like spam filters or video games, face few requirements beyond transparency obligations. AI systems posing unacceptable risks, such as real-time biometric identification in public spaces, are almost entirely prohibited.

AI systems with high risk, like medical devices and autonomous vehicles, can be used but must adhere to strict regulations, including rigorous testing, robust data documentation, and human oversight.

The legislation also covers general-purpose AI, like ChatGPT, that can serve various purposes with varying risk levels.
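
To make this tiered structure concrete, the following is a minimal illustrative sketch in Python of how the four risk levels might map to obligations. The tier names mirror the act, but the example systems, the obligation lists, and the `obligations_for` helper are simplified assumptions for illustration, not a restatement of the legal text.

```python
# Illustrative sketch only: a toy model of the AI Act's four risk tiers.
# The example systems and obligations below are simplified assumptions.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # allowed under strict obligations
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # essentially unregulated

# Hypothetical mapping from system type to tier, for illustration.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_device": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

# Simplified obligation lists per tier (not the act's full requirements).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["conformity assessment", "data documentation",
                    "human oversight", "EU database registration"],
    RiskTier.LIMITED: ["disclose AI interaction to users"],
    RiskTier.MINIMAL: [],
}


def obligations_for(system_type: str) -> list[str]:
    """Return the simplified obligation list for a system type."""
    tier = EXAMPLE_TIERS.get(system_type, RiskTier.MINIMAL)
    return OBLIGATIONS[tier]


print(obligations_for("medical_device"))
# ['conformity assessment', 'data documentation', 'human oversight',
#  'EU database registration']
```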

To whom does the EU AI Act apply?

The EU AI Act will affect suppliers, importers, distributors, users, and manufacturers that sell AI-related goods. This implies that these laws will affect nearly everyone putting an AI system into the EU market or using its output, regardless of location.

The extent of obligations for providers will differ based on the level of risk inherent in the AI system. The principle followed by the EU AI Act is that higher-risk systems will entail more stringent requirements.

While most rules are directed at high-risk AI systems and their providers, certain provisions also concern those dealing with low- or minimal-risk AI systems. Moreover, the criteria for identifying high-risk AI systems will evolve over time as the Commission is empowered to expand this category through delegated acts, provided specific conditions are met.

Furthermore, users might find themselves subject to provider obligations under specific circumstances.

What is the scope of the EU AI Act? 

The scope of the AI Act is primarily governed by the subject matter to which the regulations apply, focusing on particular uses of AI systems and the hazards connected with them. The AI Act establishes a comprehensive “product safety regime”, categorizing AI applications by risk: those posing an unacceptable danger, high-risk applications, and limited- or minimal-risk applications.

The AI Act would impose obligations on AI system creators and deployers and require high-risk AI systems to conform to specific legal criteria.

The EU’s AI policy is already making ripples throughout the world, and it has the potential to become a worldwide standard for deciding the degree to which AI has a positive rather than harmful impact on people’s lives. The AI Act is still being discussed, and stakeholders and experts have proposed many changes.

In June 2023, the European Parliament adopted amendments broadening the scope of the EU AI Act, which will now also cover AI applications in the public sector and law enforcement.

Things to know about the EU AI Act


The introduction of the EU AI Act is a significant stepping stone in regulating the use of AI within the European Union. The act’s main aim is to strike a balance between managing the risks associated with AI and harnessing the potential benefits of this unique technology. The following pointers cover some key facts about the EU AI Act that you should be aware of:

  • It is a global first. By introducing this act, the EU is officially leading in AI governance: the EU AI Act is the world’s first comprehensive regulatory framework for AI. The act’s development has been an eventful story in itself, as the drafters had to revisit their 2021 plans after the enormous success of ChatGPT and similar platforms, and the law has since returned in improved form. On June 14, 2023, the European Parliament adopted its negotiating position on the draft EU AI Act, and the EU institutions are expected to reach a final agreement by the end of 2023. 
  • Risk-based classification. The act’s distinctive feature is that it classifies AI systems according to the risks they can pose. The higher the risk associated with an AI system, the more requirements it must fulfil. Regulators are particularly interested in privacy, how data is sourced, and whether models are trained free from bias. The main aim of this classification is to promote transparency around AI through oversight of its development and use. 
  • Human Oversight. The European Parliament has repeatedly emphasised that AI systems must be overseen by humans rather than left to automation alone. This human-centric approach is meant to prevent harmful outcomes and encourage the safe use of AI. 
  • Ban on unacceptable risks. The act draws a clear line between AI systems posing ‘acceptable risks’ and those posing ‘unacceptable risks’. ‘Unacceptable risks’ include voice-activated toys capable of encouraging dangerous behaviour in children, social scoring systems, and real-time remote biometric identification systems. 
  • Exceptions to the Rule. The act envisages some exceptions to the ban on unacceptable risks. For example, ‘post’ remote biometric identification systems may be used to prosecute serious crimes, provided the requisite court approval is obtained.
  • High-Risk AI systems. Within the risk-based classification, AI systems that can negatively affect safety or fundamental rights are defined as ‘high risk’. This includes the use of AI in aviation, medical devices, critical infrastructure management, and legal interpretation. 
  • Registration and Assessment. High-risk AI systems must be registered in an EU database. Registration happens only after the systems have undergone rigorous assessment, both before their introduction to the market and throughout their lifecycle. 
  • Transparency for Generative AI. Generative AI systems, such as ChatGPT, must comply with additional transparency requirements. These include disclosing that content is AI-generated and making design changes to prevent the generation of illegal content (a minimal illustrative sketch of such a disclosure follows this list). 
  • Limited risk AI systems. In contrast to high-risk systems, limited-risk AI systems face only light transparency requirements. The main requirement is that users be made aware they are interacting with AI, especially when the system generates or manipulates images, audio, or video. 
  • Role of national authorities. Responsibility for enforcing the act rests with national authorities in the EU member states. These authorities have the power to investigate and sanction organizations that violate the act’s rules. 
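
As a rough idea of what the generative-AI transparency requirement could look like in practice, here is a minimal Python sketch that attaches an AI-generated disclosure to model output. The `GeneratedContent` structure, the `with_disclosure` helper, and the label wording are all hypothetical, invented for illustration; the act does not prescribe any particular format.

```python
# Illustrative sketch only: one way a provider might satisfy the duty
# to disclose that content is AI-generated. The label format and the
# GeneratedContent structure are invented for illustration.
from dataclasses import dataclass


@dataclass
class GeneratedContent:
    text: str
    ai_generated: bool = True  # machine-readable provenance flag


def with_disclosure(text: str) -> GeneratedContent:
    """Attach a human-readable disclosure notice to model output."""
    labeled = f"{text}\n\n[Notice: this content was generated by an AI system.]"
    return GeneratedContent(text=labeled)


output = with_disclosure("Here is a summary of the EU AI Act ...")
print(output.text)
```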

EU AI act and ChatGPT

As discussed above, the risk-based approach is the highlight of the EU AI Act. 

Systems posing unacceptable risks are banned by default and include the following AI forms within their ambit:

  • AI systems using subliminal, manipulative, or deceptive techniques to distort behaviour
  • AI systems exploiting the vulnerabilities of individuals or specific groups
  • Biometric categorization systems based on sensitive attributes or characteristics
  • AI systems used for social scoring or evaluating trustworthiness
  • AI systems used for risk assessments predicting criminal or administrative offences
  • AI systems creating or expanding facial recognition databases through untargeted scraping
  • AI systems inferring emotions in law enforcement, border management, the workplace, and education

Several lawmakers have expressed concern that AI applications like ChatGPT are not part of this list, since they are not automatically considered unacceptable or high-risk. 

However, compliance requirements have been imposed on developers and providers of “foundation models”, the large generative language models behind tools like Microsoft-backed OpenAI’s ChatGPT and Google’s Bard. Developers must complete safety checks, data governance measures, and risk mitigations before their applications are put to public use. Further, they must ensure that the training data used for their systems does not violate copyright law. 

These applications will therefore be subject to data governance requirements that examine the suitability of data sources and the possibility of bias.
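
As a rough illustration of what such a data governance check might involve, here is a minimal Python sketch that screens training sources for known license information before use. The `TrainingSource` schema, the `audit` helper, and the allowed-license set are assumptions invented for illustration; the act does not specify any such mechanism.

```python
# Illustrative sketch only: a toy data-governance screen that flags
# training sources with unknown or disallowed licenses before use.
# Field names and the allowed-license set are assumptions, not
# requirements taken from the act itself.
from dataclasses import dataclass
from typing import Optional

ALLOWED_LICENSES = {"public-domain", "cc-by", "provider-licensed"}


@dataclass
class TrainingSource:
    url: str
    license: Optional[str]  # None means provenance is unknown


def audit(sources: list[TrainingSource]) -> list[str]:
    """Return the URLs of sources that fail the license screen."""
    return [s.url for s in sources if s.license not in ALLOWED_LICENSES]


corpus = [
    TrainingSource("https://example.org/essay", "cc-by"),
    TrainingSource("https://example.org/novel", None),  # unknown provenance
]
print(audit(corpus))  # ['https://example.org/novel']
```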

Concerns have also been raised about the large amount of computing power AI systems use; the physical machines providing this power are often known as “compute”. Large language models like ChatGPT are said to have increased their use of compute exponentially with every new version, leading to improvements in performance and capabilities. However, the EU law does not impose any additional safety burdens on AI systems based on the amount of compute they use.

Interestingly, Stanford University recently concluded that none of the current large language models behind AI tools, including OpenAI’s GPT-4 and Google’s Bard, complies with the EU AI Act. In fact, some providers scored less than 25% against the act’s requirements, with only Hugging Face/BigScience scoring above 75%. Lack of transparency about copyrighted training data, emissions and energy use, and risk-mitigation strategies are among the areas where current models fall short. It will be important to watch how companies adapt to make their models more compliant with the EU AI law. 

Key enforcement issues of the EU AI Act

The implementation and enforcement details of the EU AI Act have not been thoroughly discussed, and they vary significantly in the proposals from the Council, Commission, and Parliament. The Parliament’s proposal suggests centralizing AI oversight at one agency per member state and expanding the role of a coordinating AI Office, which differs from the Commission and Council’s ideas. 

All three proposals aim to create an AI auditing system, but none have fully committed to it, making its success uncertain. Additionally, the role of civil liability is still undecided, and these issues need careful attention and discussion because the success of the EU AI Act relies on a well-designed enforcement structure, regardless of the specific AI systems that are regulated or banned.

Should there be a single national surveillance authority or multiple ones?

The Parliament’s AI Act significantly changes how market surveillance is conducted within the EU and its member states. The Parliament proposes having only one national surveillance authority (NSA) in each member state, unlike the Council and Commission versions, which allow member states to create multiple market surveillance authorities (MSA) as needed. The Parliament’s approach involves selecting specific MSAs for areas like finance and law enforcement, while member states must establish a single NSA for enforcing the AI Act. 

This centralized approach may facilitate talent hiring, expertise building, and effective enforcement, with easier coordination between member states. However, it could create challenges in governing algorithms used for hiring, workplace management, and education, since the authorities overseeing these algorithms would differ from those overseeing human actions in the same areas.

Both approaches have their pros and cons, and it is crucial to weigh the trade-offs carefully. Centralization through a single NSA may enhance coordination, but it could also complicate interpretation and implementation of the AI Act by separating AI experts and subject-matter experts into different agencies. Given the far-reaching impact of government oversight on the AI Act, this issue should be prioritized rather than deferred to the trilogue discussions.

Is the AI Act likely to foster the development of an AI evaluation ecosystem?


The Parliament’s AI Act proposes a two-step mechanism for enforcing the law. The first is government market surveillance, which involves monitoring and enforcing AI regulations. The second is the approval of organizations, known as “notified bodies,” to independently review and certify high-risk AI systems. This is meant to create an ecosystem of independent AI assessment, leading to more transparent and fair AI applications. These independent reviewers would assess whether AI systems meet the requirements set by the AI Act.

This approach aims to create an independent AI assessment ecosystem that promotes transparency, effectiveness, fairness, and risk management in high-risk AI applications. However, it is uncertain whether the current AI Act proposals will fully support this ecosystem. Many high-risk AI providers can self-attest that their systems meet the AI Act’s requirements, which is faster and simpler than going through an independent review. Even for the specific biometric AI systems where independent review is encouraged, it is not strictly required.

The EU should carefully consider whether investing in the notified body ecosystem, with its limited scope, is worth the effort of implementation. Instead, the focus should be on strengthening the oversight powers of the government market surveillance authorities to directly check and evaluate AI systems’ compliance with the AI Act’s regulations, including demanding access to data and trained models. This would ensure more effective enforcement of the AI Act.

How does individual redress affect AI outcomes?

The three AI Act proposals differ significantly in how complaints, redress, and civil liability for harm caused by AI systems are handled. The Commission’s initial proposal lacked a path for individual complaints, while the Council’s version allows individuals and organizations to submit complaints to the relevant market surveillance authority. The Parliament’s proposal introduces new requirements, such as the right to be informed when one is subject to a high-risk AI system and to an explanation if one is negatively affected. Individuals can also complain to their national surveillance authority and seek judicial remedy if their complaints are not addressed.

To address civil liability, a new AI Liability Directive aims to clarify the rules for holding parties accountable for AI-caused damage where no contract exists. Assigning responsibility is challenging, particularly for “black box” AI systems with opaque decision-making. The directive proposes rules for the disclosure of evidence and outlines criteria for proving that a defendant’s non-compliance with the AI Act or other EU rules influenced an AI system’s output and caused harm to the claimant.

However, the effectiveness of individual redress processes is uncertain.

The right to an explanation might lead companies to use simpler AI models to avoid legal issues, and it may be difficult for individuals to identify AI-caused harm in the first place. Limited legal support services could also hinder civil liability cases for AI-related damages. Non-profit organizations and consumer rights groups may assist in enforcing the AI Act, but not all plaintiffs will be able to afford specialized legal help. Policymakers should weigh these factors carefully when deciding how much to rely on individual redress as an enforcement mechanism.

How will the EU plan impact the U.S.?

The EU’s plans to regulate AI and pass the EU AI Act will likely impact the U.S. approach to AI regulation. There is a growing disparity between the U.S. and the EU regarding their regulatory efforts. The EU has proactively implemented laws concerning data privacy, online platforms, e-commerce, and other areas, while the U.S. lacks similar comprehensive legislation.

The passage of the EU AI Act may present challenges for the U.S. in passing its own AI regulations. Companies operating in both markets may find it challenging to comply with two different sets of rules, leading to potential opposition from corporate interests. To avoid this, U.S. lawmakers are advised to familiarize themselves with the EU Act and seek areas of alignment in standards to ensure the smooth implementation of laws.

There is hope for convergence between the U.S. and the EU on AI regulation if they can find common ground on standards and principles. Even if their legislation differs in detail, shared principles can foster alignment and cooperation between the two regions. As interest in AI regulation grows among D.C. policymakers, aligning with the EU’s efforts could lead to greater global coherence in AI regulations.

Can the EU Act bring law and order to AI? 

As governments struggle to cope with the hazards and advantages of AI, the European Union is well ahead, with the first comprehensive rules governing AI. To tackle the significant concern of copyright infringement, the law would require AI chatbot developers to disclose all works by scientists, singers, artists, photographers, and journalists used to train their systems. They must also demonstrate that everything they did to train the machine was legal.

Here, the act focuses on foundation models, which are trained on massive quantities of data and underlie generative AI tools like ChatGPT. Under the European Parliament’s proposal, services such as ChatGPT would be required to declare the origins of any data used to “train” the machine.

As the EU is prominent in global tech regulation, the GDPR and the AI Act will carry weight. However, other nations such as the United States, the United Kingdom, and China are already planning their own measures, which will mean more work for the IT corporations, enterprises, and other organizations within their purview.

There will undoubtedly be much additional and disparate legislation outside the EU bloc. While the EU act will set the bar in many ways, several countries outside the European Union are clearly drafting their own requirements, which companies will also have to deal with.

Conclusion

The current global status of the AI Act suggests that the EU is deliberately extending its influence in the AI race. This legislative effort will drive the worldwide conversation on AI standards and shape AI research and implementation.

The broad scope of the EU AI Act emphasizes balancing AI innovation with data protection and privacy rights. As AI systems become more complex, regulation must address a widening range of issues. The EU AI Act primarily targets high-risk systems, although it touches every tier of AI deployment, and it applies to suppliers, producers, and consumers worldwide. 

The AI Act will also impose compliance requirements on AI applications like ChatGPT to ensure responsible and ethical AI development. This sophisticated and comprehensive governance framework recognizes the complexity of AI deployment.

In conclusion, the EU AI Act shows the EU’s dedication to both harnessing and regulating AI. As the world’s first comprehensive AI legislative framework, it addresses ethical, privacy, and data issues. The EU AI Act promotes responsible AI governance so that AI remains a force for good and growth in our quickly changing technological world.
