
Navigating the EU AI Act: A Comprehensive Analysis and Compliance Guide

By Sebastian Schneider, Francisco Arga e Lima

Last Updated 30/05/2024

At a time when Artificial Intelligence (AI) is becoming increasingly integrated into our daily lives and business operations, understanding and complying with regulatory frameworks is crucial. The EU AI Act represents a pioneering step by the European Union (EU) to set a global benchmark in AI regulation. This comprehensive legislation aims to ensure that AI systems are safe, transparent and operate within strict ethical boundaries. As AI technologies evolve and proliferate, companies, developers and users will need to keep abreast of these regulations, not only to foster innovation but also to protect fundamental human rights and adhere to product safety norms in the digital age.

The EU AI Act is particularly significant as it complements existing laws on data protection, digital services, contracts and intellectual property, with a particular focus on high-risk and safety-critical systems. Understanding this Act is essential for all stakeholders involved in the development and deployment of AI in the EU and beyond, as it shapes the landscape in which they operate.


1 The AI Act: What you need to know

In this section, we dive into the heart of the EU AI Act, exploring its foundational elements. You’ll gain insights into the Act’s definitions and scope, understanding not just whom the Act impacts but how it shapes your obligations. We’ll then take a closer look at the essential obligations and safeguards required to navigate the Act’s complexities and achieve compliance.

1.1 Definition and scope of the EU AI Act

In order to understand the details of the Act, it is important to understand the key concepts it uses. For this reason, we will first guide you through understanding whether and how the AI Act applies to you, to ensure that your AI applications are not only innovative, but also compliant and ethical.

1.1.1 AI Systems

AI systems are the cornerstone of the AI Act. To qualify as an AI system, a system must fulfil five key criteria:

  • They are machine-based systems;
  • They are designed to operate with varying levels of autonomy and may exhibit adaptiveness after deployment;
  • They are designed for explicit or implicit objectives;
  • They infer, from the input they receive, how to generate outputs;
  • They can influence physical or virtual environments.

In practical terms, these systems leverage machine learning to perform tasks with a degree of autonomy. An AI system could range from a simple chatbot on your website to a complex algorithm predicting consumer behaviour. The AI Act regulates these systems with a risk-based approach, where the different players in the supply chain will be subject to different obligations depending on the risk inherent to the system that is made available or used within the territorial scope of the Act. 
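
To make these five criteria easier to operationalize, here is a minimal screening sketch in Python. It is an illustrative internal-triage aid, not a legal test, and the field names are our own shorthand for the criteria above.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Profile of a software system against the AI Act's five criteria."""
    machine_based: bool            # runs as a machine-based system
    operates_with_autonomy: bool   # designed to operate with some level of autonomy
    has_objectives: bool           # designed for explicit or implicit objectives
    infers_outputs: bool           # infers from inputs how to generate outputs
    influences_environment: bool   # outputs can influence physical/virtual environments

def qualifies_as_ai_system(p: SystemProfile) -> bool:
    """Rough screening: all five criteria must hold to fall under the Act's definition."""
    return all([p.machine_based, p.operates_with_autonomy, p.has_objectives,
                p.infers_outputs, p.influences_environment])

# Example: a website chatbot that generates replies from user input
chatbot = SystemProfile(True, True, True, True, True)
print(qualifies_as_ai_system(chatbot))  # True -> likely within the Act's definition
```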

1.1.2 General-purpose AI models

General-purpose AI (GPAI) models are subject to specific regulation under the AI Act. They are defined by the following elements:

  • AI models that display significant generality and are capable of competently performing a wide range of distinct tasks;
  • AI models that can be integrated into a variety of downstream systems or applications;
  • AI models trained with a large amount of data, typically using self-supervision at scale.

In this way, the AI Act targets AI models that can handle multiple tasks, from language processing to image recognition, and that are flexible enough to be used across, and significantly impact, various sectors.

If your business uses such models or integrates them into other systems, you must be proactive in evaluating their impact across their foreseeable applications. This involves conducting impact assessments and ensuring the model’s adaptability does not compromise, for example, user privacy or security, as we will explore below.

1.2 Roles under the AI Act

If the software in question is the first key concept in the AI Act, the other is the role played by the companies using that software or making it available. Providers, deployers, importers, and distributors are all key players under the AI Act, and each role carries specific responsibilities to ensure AI systems are safe and compliant before reaching the market.

This means that, after concluding that you are dealing with an AI system or GPAI model, and in order to determine how the AI Act concretely influences your operations, you need to identify which role you fall into (a rough screening sketch in code follows the list below):

  • Providers develop and offer the software in the EU market. They meet these criteria:
    • They develop an AI system/GPAI model (or have one developed);
    • They place it on the EU market or put it into service under their own name/trademark, or the output produced by the AI system is used in the EU;
    • They can be a natural or legal person, public authority, agency, etc.;
    • They offer the system/model either for payment or free of charge.
  • Deployers use AI systems and/or GPAI models professionally. They meet these criteria:
    • They use an AI system/GPAI model under their own authority, or the output produced by the AI system is used in the EU;
    • They do not use the system for non-professional activities (e.g. private use);
    • They are a natural or legal person, public authority, agency, etc.
  • Importers place third-party AI systems on the EU market. They meet these criteria:
    • They are located or established in the EU;
    • They are a natural or legal person, public authority, agency, etc.;
    • They place an AI system on the EU market;
    • The AI system bears the name/trademark of a person or company established in a third country outside the EU.
  • Distributors make AI systems available throughout the EU market and are not already covered as providers or importers. They meet these criteria:
    • They make the AI system available on the EU market;
    • They are neither the provider of the system nor its importer;
    • They are a natural or legal person, public authority, agency, etc.
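
As a rough illustration of how these four roles interrelate, the sketch below screens a company against simplified versions of the criteria above. The flags and decision order are our own assumptions; one company can hold several roles at once, and an actual classification always requires legal analysis.

```python
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

def determine_roles(develops_or_commissions: bool,
                    places_on_eu_market_own_name: bool,
                    uses_professionally_under_own_authority: bool,
                    established_in_eu: bool,
                    system_bears_third_country_mark: bool,
                    makes_available_on_eu_market: bool) -> set[Role]:
    """Simplified screening of AI Act roles."""
    roles: set[Role] = set()
    if develops_or_commissions and places_on_eu_market_own_name:
        roles.add(Role.PROVIDER)
    if uses_professionally_under_own_authority:
        roles.add(Role.DEPLOYER)
    if (established_in_eu and makes_available_on_eu_market
            and system_bears_third_country_mark):
        roles.add(Role.IMPORTER)
    # Distributors make the system available without being the provider or importer
    if makes_available_on_eu_market and not roles & {Role.PROVIDER, Role.IMPORTER}:
        roles.add(Role.DISTRIBUTOR)
    return roles

# Example: an EU company reselling a third-country-branded system it did not develop
print(determine_roles(False, False, False, True, True, True))  # importer only
```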

2 Understanding the key requirements of the EU AI Act

Having determined that the AI Act applies to you, you need to understand how. The AI Act defines different obligations, moving from a general level (AI literacy) to AI system-specific obligations and then to GPAI model requirements.

2.1 AI Literacy 

Starting at the more general level, AI literacy is the first obligation set out by the AI Act, recognizing the importance of awareness and understanding among those who deploy and interact with AI systems. Companies to which the Act applies are required to ensure their staff and any third-party users are adequately informed about the AI systems they’re using.

This involves clear communication on how the AI works, its limitations, and its intended use cases. Practical steps include developing training programs and providing accessible resources that demystify AI technologies.  

For instance, a company deploying an AI system for credit scoring should educate its staff on how the system assesses creditworthiness and the factors it considers. 

2.2 Categorization of AI systems and risk levels 

AI systems are specifically targeted and categorized under the EU AI Act based on the risks they present. This classification is critical because it directly influences how companies should manage and deploy AI technologies.  

The EU AI Act identifies four main risk categories for AI systems, each with specific regulatory requirements. 
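
Before looking at each category in turn, the four tiers can be pictured as an ordered scale of regulatory burden. The enum below is a purely illustrative summary of the categories described in the following subsections, with example use cases drawn from those subsections.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk categories, ordered by regulatory burden."""
    MINIMAL = 1                # e.g. e-mail spam filters
    SPECIFIC_TRANSPARENCY = 2  # e.g. chatbots, AI-generated content
    HIGH = 3                   # e.g. recruitment software, credit scoring
    PROHIBITED = 4             # e.g. social scoring systems

# Illustrative, non-exhaustive mapping of use cases to tiers
EXAMPLES = {
    "e-mail spam filter": RiskTier.MINIMAL,
    "customer-facing chatbot": RiskTier.SPECIFIC_TRANSPARENCY,
    "CV-screening tool": RiskTier.HIGH,
    "social scoring system": RiskTier.PROHIBITED,
}
```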

2.2.1 Minimal risk AI systems 

Minimal risk AI systems offer the most freedom but still require adherence to existing laws. This category implies a low level of concern from an AI Act perspective, including systems like e-mail spam filters that can be deployed without additional burdens. However, companies still need to ensure these systems do not breach other applicable regulations, such as data protection laws. Regular data protection compliance checks and updates to privacy policies might be necessary to stay aligned with legal standards. 

Examples include automated customer service bots, AI-driven email sorting, and content recommendation algorithms for non-sensitive content. These systems are widespread across various sectors, including retail, hospitality, and general corporate communications. They offer convenience and efficiency without posing significant risks to individual rights or safety. For that reason, the regulatory burden the AI Act imposes on them is limited (e.g. AI literacy obligations).

2.2.2 AI systems subject to specific obligations 

For AI systems that pose specific transparency risks, ensuring clarity towards individuals exposed to them is critical. Such systems can influence decisions or behaviors of individuals, making it crucial to inform users they’re interacting with AI. For instance, if your company employs a chatbot for client interactions, it should be explicitly labelled as AI-driven. This involves updating user interfaces and clear communication strategies. Transparency not only builds trust but also aligns with legal requirements, safeguarding against potential manipulations. By embracing transparency, companies reinforce their commitment to ethical AI use. 

This category includes AI-driven content creation tools (like those generating news articles or creating artwork), and virtual assistants. These applications are found in sectors such as marketing, customer service, media, and entertainment. The key requirement here is for businesses to clearly disclose the use of AI to users, ensuring that people know when they are interacting with a machine rather than a human. 

2.2.3 High-risk AI systems 

High-risk AI systems demand comprehensive measures due to their potential impact on fundamental rights and safety. This category includes AI used in critical infrastructure, employment, and other sensitive areas. Compliance involves detailed risk assessments, documentation, and adherence to strict regulatory standards. For example, an AI system used for screening job applications must be transparent, fair, and must not discriminate against applicants. Companies must establish rigorous testing protocols and maintain detailed records of their AI systems’ development, deployment, and effects. In essence, dealing with high-risk AI requires a proactive approach to ensure that innovation does not come at the expense of ethical considerations or safety.

High-risk systems will also exist across multiple sectors, even if the AI Act gives particular focus to areas such as utilities, HR and recruitment, education, finance, and insurance. They include, for example, AI applications in critical infrastructure monitoring (such as electricity grid management systems), recruitment software analyzing job applications, AI tools used in educational settings to monitor exams or tailor learning paths, and AI systems assessing creditworthiness or determining insurance premiums.

2.2.4 Prohibited AI systems 

Lastly, the AI Act sets non-negotiable boundaries with a set of prohibited AI systems. These include applications considered too harmful, such as some systems of real-time biometric surveillance in public spaces. Companies must review their AI applications to ensure none violate these prohibitions. Eliminating or modifying any such systems is not just about compliance; it’s about aligning with societal values and protecting fundamental rights. The message here is clear: innovation must respect ethical boundaries. 

The AI Act mostly targets core sectors that may be prone to extensive surveillance, manipulation, or invasive profiling, such as law enforcement, marketing, and public administration. The prohibited practices include:

  • Social scoring systems used by public or private entities;
  • AI applications exploiting individuals’ vulnerabilities or employing subliminal techniques to manipulate decisions;
  • Real-time remote biometric identification systems in publicly accessible spaces (with certain exceptions);
  • Systems that categorize individuals based on biometric data to infer sensitive information;
  • Systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

2.3 Detailed overview of the regulation of high-risk AI systems 

Within these categories, high-risk AI systems are the most heavily regulated, as they carry significant implications for individual rights and societal values. AI systems qualify as high-risk if they meet either of the following conditions (sketched in code after the list):

  • Are intended to be used as a safety component of a product, or are themselves the product, covered by specific Union harmonisation legislation (e.g. Directive 2009/48/EC on the safety of toys, Regulation (EU) 2017/745 on medical devices), and are required to undergo a third-party conformity assessment in order to be placed on the market or put into service pursuant to that legislation; 
  • Are intended to be used in specific sectors included in the AI Act (e.g. biometrics, critical infrastructure, education and vocational training, employment), unless they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. 
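
Expressed as code, the two routes into the high-risk category look roughly as follows. The boolean flags are simplified assumptions for screening purposes, not a substitute for a case-by-case legal assessment.

```python
def is_high_risk(safety_component_of_regulated_product: bool,
                 needs_third_party_conformity_assessment: bool,
                 falls_under_annex_iii_area: bool,
                 poses_significant_risk: bool = True) -> bool:
    """Simplified check of the two routes into the high-risk category.

    Route 1: safety component of (or itself) a product covered by Union
             harmonisation legislation requiring third-party assessment.
    Route 2: an Annex III area, unless no significant risk to health,
             safety or fundamental rights is posed.
    """
    route_1 = (safety_component_of_regulated_product
               and needs_third_party_conformity_assessment)
    route_2 = falls_under_annex_iii_area and poses_significant_risk
    return route_1 or route_2

# Example: recruitment software analysing job applications (an Annex III area)
print(is_high_risk(False, False, True))  # True
```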

As we shall see below, providers are responsible for the initial conformity assessments, ensuring their AI systems meet the EU’s stringent standards before entering the market. Deployers, on the other hand, must ensure the AI systems are used in accordance with these standards, taking into account the operational context and user interactions. This includes implementing transparent user interfaces and providing clear information on the AI system’s capabilities and decision-making logic.  

2.3.1 Assessment of high-risk AI systems

Assessments are a particularly key element of the regulation of high-risk AI systems. Through a closer look at these processes, businesses can gain insights into not just fulfilling legal requirements but also embedding ethical and transparent practices into their AI solutions. 

Under the AI Act, high-risk AI systems are subject to different assessments, most of which providers need to carry out. These include:

  • Third-party conformity assessments with a view to placing the AI system on the market or putting it into service pursuant to other EU sectoral legislation, such as Regulation (EC) No. 300/2008 on common rules in the field of civil aviation security and Regulation (EU) 2018/858 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units;
  • Assessment done by the provider concluding that its system, initially considered by the AI Act as high-risk (Annex III to the Act), is actually not a high-risk system, before placing it on the market or putting it into service; 
  • Assessment of the availability, quality and suitability of the data sets that are needed to train the high-risk AI system; 
  • A strategy for regulatory compliance, including compliance with the conformity assessment procedures and procedures for the management of modifications to the high-risk AI system. 

However, there are two assessments that providers and deployers need to pay particular attention to before deploying high-risk AI systems: conformity assessments and fundamental rights impact assessments.

2.3.2 Conformity assessments 

Conformity assessments are essential to ensure that high-risk AI systems meet the strict requirements of the AI Act. This step is crucial as it verifies the system’s adherence to the EU’s strict standards on data quality, transparency, and safety.  

The AI Act provides for two conformity assessment procedures, set out in Annexes VI and VII. The first is based on internal control. Here, the provider has to verify (a checklist sketch follows the list):

  • That the established quality management system is in compliance with the requirements of the AI Act (see article 17); 
  • That the information contained in the technical documentation is compliant with the relevant requirements (see Chapter III, Section 2); 
  • That the design and development process of the AI system and its post-market monitoring is consistent with the technical documentation. 
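
The three internal-control verifications can be tracked as a simple checklist. The sketch below is an illustrative internal aid; the keys are our own shorthand for the requirements just listed.

```python
# Hypothetical record of the three Annex VI (internal control) verifications
internal_control_checks = {
    "quality_management_system_compliant": True,   # Article 17 requirements met
    "technical_documentation_compliant": True,     # Chapter III, Section 2 requirements met
    "design_and_monitoring_match_docs": False,     # post-market monitoring still diverges
}

def internal_control_passed(checks: dict[str, bool]) -> bool:
    """All three verifications must hold before the provider declares conformity."""
    return all(checks.values())

open_items = [name for name, ok in internal_control_checks.items() if not ok]
print(internal_control_passed(internal_control_checks))  # False
print("Open items:", open_items)                         # the checks still to close
```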

The procedure in Annex VII is based on an assessment of the quality management system and of the technical documentation, both of which need to be prepared before high-risk systems are made available or put into use. Under this procedure, both elements must fulfil the requirements enshrined therein and are then assessed by a third party (the “notified body”, in the AI Act’s terms), which reviews whether or not they comply with the standards of the AI Act.

It is also important to bear in mind that high-risk systems that have already been subject to a conformity assessment shall undergo a new one in the event of their substantial modification. 

2.3.3 Fundamental rights impact assessments 

In certain circumstances, companies also need to address the impact of their AI systems on fundamental rights through another critical process: the fundamental rights impact assessment.

This assessment must be carried out, prior to deployment, by deployers that are bodies governed by public law or private entities providing public services, as well as by deployers of the high-risk AI systems referred to in Annex III (points 5(b) and (c)). It needs to include:

  • a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;  
  • a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;  
  • the categories of natural persons and groups likely to be affected by its use in the specific context; 
  • the specific risks of harm likely to have an impact on the categories of persons or groups of persons identified; 
  • a description of the implementation of human oversight measures, according to the instructions for use; 
  • the measures to be taken where those risks materialise, including the arrangements for internal governance and complaint mechanisms. 

In other words, it requires a detailed examination of how the AI system will operate, the frequency of its use, and the individuals it will impact. For example, a facial recognition system used in public spaces by a public authority must assess its potential to infringe on privacy rights and outline measures to mitigate such risks. This includes documenting the technology’s purpose, the data it will process, and the safeguards against misuse. The process emphasizes the need for AI systems to operate within the ethical boundaries set by society, ensuring they do not compromise fundamental human rights. 
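
The six required elements translate naturally into a structured record. The sketch below assumes our own field names; the Act prescribes the content of the assessment, not its format.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative record mirroring the six elements the assessment must contain."""
    process_description: str           # how the system is used, per its intended purpose
    period_and_frequency_of_use: str   # when and how often the system will run
    affected_groups: list[str]         # categories of persons likely to be affected
    specific_risks_of_harm: list[str]  # risks to those groups in the specific context
    human_oversight_measures: str      # oversight per the instructions for use
    mitigation_and_governance: str     # steps if risks materialise, complaint mechanisms

fria = FundamentalRightsImpactAssessment(
    process_description="Facial recognition at station entrances for access control",
    period_and_frequency_of_use="Continuous during opening hours",
    affected_groups=["commuters", "station staff"],
    specific_risks_of_harm=["misidentification", "chilling effect on free movement"],
    human_oversight_measures="An operator reviews every match before action is taken",
    mitigation_and_governance="Suspend matching on error spikes; complaints desk",
)
```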

2.4 General purpose AI models 

When it comes to GPAIs, the AI Act makes a distinction based on their systemic risk.  

Firstly, it sets out an initial set of obligations applicable to providers of GPAI models in general. These providers must supply essential information to ensure safety and compliance with the EU AI Act. In particular, the Act mandates providers to disclose critical information to those building AI systems on top of these models, fostering transparency and understanding. Moreover, model providers are required to implement policies respecting copyright law during the model training phase.

On top of this basic set of obligations, the AI Act further regulates GPAIs with systemic risk, meaning the risk inherent to the high-impact capabilities of general-purpose AI models that significantly affect the EU market, public health, safety, security, fundamental rights, or society at large, with potential for large-scale propagation. 

These models are identified based on two main criteria: high-impact capabilities assessed through technical tools and methodologies, or a decision by the Commission. Additionally, a model is presumed to have high-impact capabilities if the computation used for its training exceeds 10^25 FLOPs, indicating a substantial potential for systemic risk.
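
The compute-based presumption is the one criterion that reduces to simple arithmetic, as the minimal sketch below shows.

```python
FLOP_THRESHOLD = 1e25  # training compute above which systemic risk is presumed

def presumed_systemic_risk(training_flops: float) -> bool:
    """High-impact capabilities are presumed above 10^25 training FLOPs."""
    return training_flops > FLOP_THRESHOLD

# A model trained with 3e25 FLOPs is presumed to carry systemic risk, independently
# of any capability evaluation or separate Commission decision.
print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(8e23))  # False
```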

If you’re not sure whether the AI Act applies to you, check out our AI Act applicability checklist or book a free call with one of our experts to get started on your AI compliance journey!

3 How to comply with the EU AI Act

3.1 Steps for ensuring compliance with the Act’s regulations on high-risk AI systems 

In this section, we dive into the specifics of what it takes to comply with the EU AI Act for companies involved in the supply chain of high-risk AI systems. You’ll learn about the critical responsibilities providers, deployers, importers and distributors must embrace to align their operations with EU regulations. 

3.1.1 AI Act obligations for providers 

Ensuring compliance with the AI Act 

Providers are responsible for ensuring their AI systems comply with the AI Act before placing them on the market or putting them into service. This includes conducting detailed risk assessments, implementing risk mitigation measures, and maintaining comprehensive documentation to demonstrate adherence to the Act. By fulfilling these obligations, providers can guarantee their AI systems meet the EU’s high standards for safety, transparency, and accountability.

Adopting quality management systems 

For providers of high-risk AI systems, it is essential to adopt robust quality management systems. These systems provide continuous oversight and improvement, encompassing procedures for the development, deployment, and maintenance of AI systems. Regular performance reviews and updates are crucial parts of these systems, helping to prevent potential issues and ensuring ongoing compliance with the AI Act. 

Documentation and log-keeping 

The AI Act requires providers to maintain detailed documentation and logs. These cover all phases, from design and development to deployment and post-market monitoring of the AI system. The documentation must be clear and accessible, providing insights into the AI system’s operations and its compliance with legal standards. Logs are likewise vital for maintaining transparency and accountability, serving as important records in case of incidents or disputes.
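
As a purely illustrative example of what structured log-keeping might look like in practice, the sketch below appends timestamped JSON records to a file. The Act requires logs but does not prescribe this format; the event types and fields are our own assumptions.

```python
import datetime
import json

def log_ai_event(log_file: str, event_type: str, details: dict) -> None:
    """Append a timestamped, structured record of an AI system event."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "model_update", "incident"
        "details": details,
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # one JSON record per line

log_ai_event("ai_system_audit.log", "inference",
             {"model_version": "1.4.2", "decision": "application flagged for review"})
```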

Conformity assessments and CE marking 

Conformity assessments are critical for determining if an AI system meets the AI Act’s specific requirements. Successful assessments allow providers to display the CE marking on their products, indicating compliance with EU standards. This not only facilitates market access but also boosts the product’s credibility among users. 

3.1.2 AI Act obligations for deployers 

Aligning AI system use with legal and ethical standards 

Deployers are tasked with ensuring their use of high-risk AI systems adheres to provided instructions and guidelines. This ensures both the safeguarding of user rights and public safety. Critical to this effort is the implementation of appropriate technical and organizational measures that keep the AI system’s usage within its intended legal and ethical boundaries. For example, regular performance reviews and having a dedicated team to monitor compliance are vital for ensuring the system’s efficiency, reliability, and adherence to regulations. 

Ensuring human oversight 

It is fundamental for deployers to assign competent human oversight to the AI operations, particularly in critical scenarios. This oversight ensures that decisions made by AI are always subject to human review. The individuals chosen should be well-trained and authoritative, with ongoing support and updates on the latest AI developments and ethical considerations. This approach guarantees responsible use of AI and alignment with human values. 

Maintaining quality and diversity of input data 

Deployers must also manage the quality and representativeness of the input data for AI systems. Regular checks on the data’s accuracy and diversity are essential for producing reliable and ethical AI outcomes. This helps prevent risks associated with biased or inaccurate AI predictions. 

Continuous monitoring and reporting 

Continuous monitoring of the AI system’s operation is crucial. Deployers should rigorously follow the usage instructions provided by AI system providers and report any deviations or issues promptly to either the provider or the appropriate authority. This includes suspending the AI system’s operation if it poses any potential risk, such as exhibiting unusual behavior that could compromise safety or privacy. 

In summary, deployers play a pivotal role in the safe and responsible operation of high-risk AI systems. By enforcing technical and organizational measures, ensuring human oversight, maintaining data integrity, and conducting diligent monitoring, deployers can effectively meet their obligations under the EU AI Act. 

3.1.3 AI Act obligations for importers 

Importers of high-risk AI systems have specific responsibilities under the EU’s AI Act, which are crucial for ensuring compliance and safety in the market. Below, these responsibilities are detailed for clarity and ease of understanding: 

Verifying conformity and documentation 

  • Conformity assessment: Ensure the AI system has passed the required conformity checks;
  • Documentation standards: Verify that all documentation meets the AI Act’s strict guidelines; 
  • CE marking and EU declaration: Confirm that the AI system is appropriately marked with the CE symbol and is accompanied by the EU declaration of conformity. 

Handling non-compliance and risks 

  • Identifying issues: If you have reason to suspect non-compliance or encounter falsified documentation, refrain from placing the AI system on the market;
  • Reporting: Promptly inform the provider, authorized representatives, and relevant market surveillance authorities about any risks or non-compliance;
  • Impact: This proactive approach not only mitigates risks but also emphasizes the importance of a compliant AI ecosystem. 

Ensuring accessibility of contact information 

  • Visibility: Make sure your contact information is clearly accessible on the AI system’s packaging or in the accompanying documentation;
  • Implementation example: Include a label on the product’s packaging that details your company’s contact information;
  • Purpose: Enhances transparency and builds trust with consumers and authorities, fulfilling a key requirement of the AI Act. 

These steps are designed to ensure that importers actively contribute to the safety and regulatory compliance of AI systems entering the EU market, establishing a trusted environment for all stakeholders. 

3.1.4 Obligations for distributors 

Verifying compliance before market distribution 

Distributors share a critical responsibility, akin to importers, to ensure that only compliant high-risk AI systems are introduced to the market. This involves verifying that each AI system carries the necessary CE marking and is accompanied by the EU declaration of conformity and instructions for use. This step is essential not only for protecting end-users but also for shielding the distributor’s business from legal and reputational risks. 

Responding to non-compliance and risks 

Should a distributor discover that a high-risk AI system may not comply with the AI Act or poses any risk, they are obligated to stop distribution immediately. This action is crucial to prevent potential harm or non-compliance from reaching the market. For example, if a facial recognition system is found lacking proper documentation or is flagged for inaccuracies leading to biased outcomes, distribution must be halted until these issues are rectified. Distributors must then notify the provider or importer, and possibly the competent authorities, about the identified non-compliance or risk, demonstrating a proactive commitment to market safety and regulatory compliance. 

Maintaining conditions for compliance during storage and transport 

Distributors are also responsible for ensuring that the AI systems are stored and transported under conditions that do not compromise their compliance. This includes managing environments to protect the system’s integrity, such as maintaining temperature controls for sensitive components or ensuring secure transport to prevent unauthorized access. These measures are vital to ensure that the AI system remains in a compliant state from receipt to distribution. 



3.2 Steps for ensuring compliance with the Act’s regulations on GPAIs 

3.2.1 General obligations on GPAIs 

Under the EU’s AI Act, companies developing or deploying general-purpose AI (GPAI) models are subject to stringent requirements. To enhance readability and comprehension, this section is organized into clear segments focusing on the various obligations:

Maintaining technical documentation 

  • Purpose: Serves as the foundation for regulatory compliance. 
  • Contents:
    • Design details: Architecture and functionalities of the AI model. 
    • Data sources: Origins and types of data used in training. 
    • Development process: Steps followed from conception to final testing. 
    • Performance metrics: Evaluation of accuracy, handling of edge cases, etc. 
  • Benefits:
    • Transparency: Offers clear insights into operational capabilities and limitations. 
    • Accountability: Builds trust with users and facilitates regulatory reviews. 

Information provision to AI system integrators 

  • Objective: Enable effective and responsible integration of your model into broader AI systems. 
  • Key information:
    • Usage guidelines: Instructions on the responsible use of the model. 
    • Model strengths and constraints: Detailed descriptions to prevent misuse. 
    • Case studies: Examples of successful implementations to guide integrators. 
  • Impact:
    • Enhances the reliability and effectiveness of AI models in diverse applications. 

Adherence to union copyright law 

  • Requirement: Ensure respect for intellectual property rights during model training. 
  • Actions:
    • Policy implementation: Identify and manage copyrighted material. 
    • Technology use: Employ advanced technologies to monitor and prevent infringement. 
  • Outcome:
    • Protects your company legally and supports ethical AI development practices. 

Publishing training content summary 

  • Transparency requirement: A summary prepared according to the AI Office’s template. 
  • Details included:
    • Diversity and sources: Overview of the data variety and origins. 
    • Selection criteria: Methodologies used to select training data. 
  • Purpose:
    • Assists in assessing potential biases and limitations. 
    • Enhances stakeholder confidence in the model’s fairness and objectivity. 

Each of these obligations contributes to ensuring that GPAIs developed or deployed are compliant, responsible, and trustworthy, aligning with both legal requirements and ethical standards. 

3.2.2 Additional obligations on GPAIs with systemic risks 

Providers of general-purpose AI (GPAI) models deemed to have systemic risk face heightened responsibilities. This section outlines the specific steps needed to effectively navigate these obligations:

Conducting state-of-the-art evaluations 

  • Objective: Ensure robustness and operational safety of AI models. 
  • Actions:
    • Adversarial testing: Engage in comprehensive testing to simulate threats and identify vulnerabilities. 
    • Use of latest protocols and tools: Employ cutting-edge methodologies for thorough evaluations. 
    • Documentation: Maintain detailed records of evaluation processes and outcomes. 
  • Purpose: Mitigate potential threats and enhance model reliability. 

Comprehensive risk management 

  • Risk identification: Enumerate all potential risks, including privacy breaches and broader societal impacts. 
  • Mitigation strategies:
    • Algorithm adjustments: Modify algorithms to reduce risks. 
    • Data privacy enhancements: Strengthen data handling procedures. 
  • Regular reviews: Update risk assessments and mitigation measures continually. 
  • Outcome: Ensure the AI model contributes positively within the EU. 

Incident documentation and reporting 

  • Requirement: Fast and transparent reporting of any incidents to the AI Office and national authorities. 
  • Procedure:
    • Incident logs: Document the nature of incidents, resolutions, and preventive measures. 
    • Communication protocol: Establish and follow a standardized process for reporting incidents. 
  • Impact: Builds trust with regulatory bodies and the public by demonstrating a commitment to responsible AI management. 

Enhancing cybersecurity measures 

  • Necessity: Protect AI models and infrastructure from cyber threats due to significant systemic risks. 
  • Actions:
    • Regular cybersecurity assessments: Conduct assessments to identify and address vulnerabilities. 
    • Implementation of robust protections: Apply strong security measures based on assessment findings. 
  • Benefit: A strong cybersecurity posture not only safeguards the model and data but also enhances the provider’s reputation as a trustworthy entity. 

These steps are critical for providers handling GPAIs with systemic risks, ensuring that their operations are not only compliant but also secure and responsible in the broader societal context. 

3.3 Steps for ensuring compliance with the Act’s regulations on AI systems subject to specific obligations 

The EU AI Act sets out clear responsibilities for providers and deployers of certain AI systems. These obligations focus on ensuring safety and transparency, particularly in situations where AI systems may pose unique risks.

3.3.1 For providers

  1. Interactions with natural persons:
    • Requirement: Providers must inform individuals when they interact with an AI system. This is required unless it’s obvious from the context, or the system is being used lawfully for law enforcement with appropriate safeguards.
    • Example: If an AI is used for customer support, it should be clear to users that they are talking to an AI, unless the system is part of a law enforcement operation.
  2. AI systems generating synthetic content:
    • Requirement: Providers must label the output of AI systems (such as synthetic audio, images, video, or text) to indicate that it has been artificially generated or manipulated (see the sketch after this list).
    • Exceptions: This rule does not apply if the AI is only assisting with basic editorial tasks or is being used under legal authorization for criminal investigations.
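
One way to approach the labelling requirement is to attach a machine-readable marking to every generated artifact. The sketch below assumes a hypothetical provenance schema of our own; the Act requires the marking but does not prescribe this particular structure.

```python
from dataclasses import dataclass

@dataclass
class GeneratedContent:
    payload: bytes     # the synthetic audio/image/video/text itself
    media_type: str    # e.g. "image/png"
    provenance: dict   # machine-readable marking travelling with the output

def label_as_ai_generated(payload: bytes, media_type: str,
                          model_id: str) -> GeneratedContent:
    """Attach an 'artificially generated' marking to a model's output."""
    return GeneratedContent(
        payload=payload,
        media_type=media_type,
        provenance={
            "ai_generated": True,
            "generator": model_id,  # hypothetical identifier of the model used
        },
    )

image = label_as_ai_generated(b"...", "image/png", "image-model-v2")
print(image.provenance)  # {'ai_generated': True, 'generator': 'image-model-v2'}
```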

3.3.2 For deployers

  1. Emotion recognition and biometric systems:
    • Requirement: Deployers must inform people when they are subject to emotion recognition or biometric categorization systems.
    • Compliance with data protection laws: It’s important to comply with the GDPR and other data protection laws.
    • Exceptions: No need to inform if systems are being used lawfully for criminal detection or investigation.
  2. AI Systems creating or altering content:
    • Requirement: Deployers must disclose when content (images, audio, video) is artificially created or altered, especially if it is intended to inform the public about important issues.
    • Editorial oversight exception: Disclosure isn’t required if AI-generated content is reviewed or controlled by a person responsible for its publication.

4 Implications of non-compliance and potential fines and penalties

Failing to comply with the EU AI Act can result in severe financial penalties. Here’s what you need to know:

  • Prohibited practices and data non-compliance: Fines can be as high as €35 million or 7% of total annual global turnover, whichever is higher (a worked example follows this list). For example, deploying an AI system without proper data governance or failing to conduct the necessary impact assessments can result in these hefty fines.
  • Other non-compliance penalties: Failure to comply with the rules for general-purpose AI models could result in fines of up to €15 million, or 3% of annual turnover. Providing misleading information to regulators could result in fines of up to €7.5 million or 1.5% of annual turnover. 
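
To make the “whichever is higher” rule concrete, here is a small worked example using a hypothetical turnover figure.

```python
def max_fine(fixed_cap_eur: float, turnover_share: float,
             annual_turnover_eur: float) -> float:
    """The AI Act's 'whichever is higher' rule for administrative fines."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Prohibited-practice tier: EUR 35 million or 7% of global annual turnover
turnover = 1_000_000_000  # hypothetical EUR 1bn global turnover
print(max_fine(35_000_000, 0.07, turnover))  # 70000000.0 -> the 7% share applies
```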

These thresholds highlight the financial risks of non-compliance. Beyond the immediate financial impact, non-compliance can cause long-term reputational damage and erode consumer confidence, which can be even more damaging to businesses. 


5 Key implementation dates for the EU AI Act

When the AI Act is published in the EU’s Official Journal, it will not come into effect immediately. Instead, from that moment, the following timeframe will apply (computed in the sketch after the list):

  • The AI Act will enter into force on the 20th day after its publication (expected in mid-2024);
  • The rules concerning prohibited AI systems will apply 6 months after entry into force;
  • The rules on GPAI models and conformity assessment bodies will apply 12 months after entry into force;
  • All other provisions (except those on high-risk AI systems under Annex I) will apply 24 months after entry into force;
  • The rules on high-risk AI systems under Annex I will apply 36 months after entry into force.
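
Given a publication date, the milestones can be computed mechanically. In the sketch below the date is a hypothetical placeholder, and months are approximated as 30-day blocks for illustration (the Act counts calendar months).

```python
from datetime import date, timedelta

def implementation_timeline(publication: date) -> dict[str, date]:
    """Sketch of the AI Act's staggered applicability."""
    entry_into_force = publication + timedelta(days=20)

    def months(n: int) -> timedelta:
        return timedelta(days=30 * n)  # rough approximation of calendar months

    return {
        "entry_into_force": entry_into_force,
        "prohibited_practices": entry_into_force + months(6),
        "gpai_and_notified_bodies": entry_into_force + months(12),
        "most_other_provisions": entry_into_force + months(24),
        "annex_i_high_risk": entry_into_force + months(36),
    }

for milestone, when in implementation_timeline(date(2024, 6, 1)).items():
    print(f"{milestone}: {when}")
```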

This means that you will have time to take the necessary steps to comply with the AI Act. However, be mindful that even if the timelines seem far away, it takes time to appropriately tackle all the elements of the AI Act, especially those concerning high-risk AI systems and GPAI models, so the earlier your compliance journey starts, the better prepared you will be for the future.


Conclusion

Navigating the complexities of the EU AI Act is crucial for any organization involved in the development, deployment, or distribution of AI systems within the EU. This guide has outlined several key aspects of the Act, emphasizing the importance of compliance for ethical and legal AI deployment. Below is a summary of the key points discussed: 

  1. General obligations: All AI systems must meet basic requirements for safety, transparency and ethical operation. 
  2. High-risk AI systems: These systems require rigorous compliance measures, including comprehensive risk assessments, detailed documentation, and strict adherence to specific regulatory standards to protect fundamental rights and public safety. 
  3. Prohibited AI practices: The law clearly prohibits certain AI applications to prevent practices that could significantly infringe on privacy and fundamental human rights. 
  4. Roles and responsibilities: The Act outlines specific obligations for providers, deployers, importers and distributors to ensure that AI systems are safe and compliant before they are placed on the market.
  5. Compliance steps for GPAIs: Providers of general-purpose AI models are subject to specific obligations to effectively manage systemic risks. 

As the EU continues to refine and implement this comprehensive regulatory framework, it is imperative that companies and developers proactively align their operations with these standards. Not only will compliance mitigate the risk of significant fines, but it will also increase the trust and reliability of AI products offered to consumers. Staying ahead of regulatory changes will position organisations for success in a rapidly evolving digital landscape. Prioritizing compliance is not just a legal obligation, but a commitment to the ethical use of AI that respects user rights and societal norms. 
