    The EU AI Act's Interactions with Cybersecurity Legislation

    The EU AI Act entered into force on August 1, 2024, marking a significant milestone at the end of a lengthy legislative process and delivering the world’s first comprehensive artificial intelligence regulation.

    While the European Union is leading the way in establishing policies to manage the risks associated with AI, the AI Act, much like the GDPR, has implications that extend beyond Europe: it applies to any provider, developer, or deployer of AI systems made available on the EU market. Its enactment is likely to be followed by similar initiatives in other countries and regions, and some such regulations are already being developed globally.

    AI systems present a range of new risks, such as those related to fairness, transparency, and explainability. Other risks are more familiar: like any other IT system or software, AI systems can be vulnerable to cyber threats and security breaches, and with the rapid adoption of AI, cybersecurity concerns are rising accordingly.

    The EU AI Act outlines fundamental cybersecurity considerations and requirements. However, the Act does not cover cybersecurity measures exhaustively, leaving gaps that other legislation and frameworks must fill. This article explores these intersections and their broader implications for stakeholders.

    Cybersecurity within the EU AI Act

    The EU AI Act proposes a risk-based approach, categorizing AI systems based on their risk levels—ranging from minimal to unacceptable risk. The framework imposes strict requirements on high-risk AI systems, which include those used in critical infrastructure, healthcare, education, employment, law enforcement, and more. The Act’s primary goals are to ensure AI systems are safe, respect existing laws, and adhere to EU values. With those goals in mind, it is no surprise that cybersecurity is a fundamental component of the EU AI Act, as the security of AI systems is paramount to their reliability and trustworthiness.

    Key cybersecurity requirements in the EU AI Act are primarily embedded within the risk management obligations for high-risk AI systems (Chapter III) and include:

    1. Robustness and Accuracy: According to Article 15 of the AI Act, high-risk AI systems should ‘meet an appropriate level of accuracy, robustness and cybersecurity’. This provision includes requirements for resilience against unauthorized tampering and for measures to protect AI systems from adversarial attacks that could compromise their functionality, such as data poisoning or model manipulation; a minimal sketch of this kind of adversarial robustness testing follows this list.

    2. Data Governance: Article 10 focuses on data and mandates that high-risk AI systems ensure the integrity of the data used and the confidentiality of any personal data they process. This is reinforced by Recital 69, which emphasizes ‘data protection by design and by default’ principles and states that the ‘right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system’.

    3. Incident Reporting: Article 15 also requires that organizations be able to detect, respond to, and resolve attacks on AI systems. The AI Act further imposes incident reporting obligations on both providers and deployers of AI systems (Articles 26 and 73), including in cases of testing in real-world conditions (Article 60). The Act mandates prompt notification of incidents or malfunctioning of high-risk AI systems that could affect the health and safety of users or cause significant harm to critical infrastructure, property, or the environment (Recital 155).

    4. Technical Documentation: Article 11 and Annex IV of the AI Act outline the technical documentation that must be maintained for high-risk AI systems, including details of the security measures in place.
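
    To make the robustness requirement more concrete, the sketch below shows one common form of adversarial testing: measuring how much a classifier’s accuracy degrades under a simple one-step gradient-sign (FGSM) perturbation. It is a minimal illustration only; the model, data loader, and perturbation budget are hypothetical placeholders, and the AI Act itself does not prescribe any particular test or tooling.

        # Minimal, illustrative adversarial-robustness check using a one-step
        # gradient-sign (FGSM) perturbation. The model, data loader, and epsilon
        # value are hypothetical placeholders, not prescribed by the AI Act.
        import torch
        import torch.nn.functional as F

        def fgsm_accuracy(model, data_loader, epsilon=0.03):
            """Return (clean_accuracy, adversarial_accuracy) for a classifier."""
            model.eval()
            clean_correct = adv_correct = total = 0
            for inputs, labels in data_loader:
                inputs = inputs.detach().clone().requires_grad_(True)
                outputs = model(inputs)
                clean_correct += (outputs.argmax(dim=1) == labels).sum().item()

                # Craft adversarial examples: one signed-gradient step on the loss.
                loss = F.cross_entropy(outputs, labels)
                model.zero_grad()
                loss.backward()
                # Clamp assumes inputs are scaled to [0, 1]; adjust for other ranges.
                adv_inputs = (inputs + epsilon * inputs.grad.sign()).clamp(0.0, 1.0)

                with torch.no_grad():
                    adv_preds = model(adv_inputs).argmax(dim=1)
                adv_correct += (adv_preds == labels).sum().item()
                total += labels.size(0)
            return clean_correct / total, adv_correct / total

    A large gap between the clean and adversarial figures signals low robustness; the results of such tests, together with the mitigations applied in response, are the kind of evidence that can feed into the technical documentation described in point 4 above.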

    Additional considerations and requirements are outlined for General-Purpose AI Models with Systemic Risk (Article 55, Recitals 114-115). These large-scale, adaptable AI models, which could pose significant societal risks, are subject to further cybersecurity obligations, including:

    • Continuous risk assessment and mitigation

    • Adversarial testing for vulnerability identification

    • Adequate level of cybersecurity protection for the system and physical infrastructure

    • Incident monitoring and timely reporting

    These provisions collectively emphasize the importance of integrating cybersecurity considerations throughout the lifecycle of AI systems, from design and development to deployment and ongoing operation.

    While the Act emphasizes the importance of cybersecurity, it does not provide a detailed framework for implementing AI system security. Instead, it refers to the use of established best practices and standards. This approach underscores the need for AI providers and users to look beyond the AI Act for comprehensive cybersecurity requirements, leveraging existing cybersecurity legislation and frameworks to ensure adequate protection.

    Interactions with other Legislation on Cybersecurity

    The EU AI Act interacts with several existing pieces of cybersecurity legislation, notably the Network and Information Systems Directive (NIS2 Directive), the Cybersecurity Act, the Cyber Resilience Act, and the Digital Operational Resilience Act (DORA). Understanding these interactions is vital for organizations to ensure compliance and a robust security posture.

    1. NIS2 Directive:

    Adopted in response to the growing importance of network and information systems, this directive is designed to enhance cybersecurity across the EU. It applies to essential and important entities operating in critical sectors and digital infrastructure, setting out requirements for risk management, incident reporting, and supply chain security. AI developers and deployers may need to align their cybersecurity practices with the NIS2 Directive, particularly when their systems qualify as high-risk or when they operate in sectors covered by NIS2 such as healthcare, energy, transport, and finance.

    2. Cybersecurity Act:

    This regulation establishes an EU-wide cybersecurity certification framework for ICT products, services, and processes. AI systems, particularly those classified as high-risk under the AI Act, may benefit from certification under schemes developed through the Cybersecurity Act. Importantly, Article 42 of the AI Act states that if an AI system obtains a relevant cybersecurity certification under the Cybersecurity Act, it may be presumed to conform with certain cybersecurity requirements of the AI Act. This alignment provides a streamlined path for AI providers to demonstrate compliance and reduces regulatory burden.

    3. Cyber Resilience Act (CRA):

    This regulation sets cybersecurity requirements for products with digital elements (PDE). As many AI systems rely on or are integrated into such products, there will likely be significant overlap between the Cyber Resilience Act and the AI Act's cybersecurity provisions. The CRA was approved by the European Parliament in March 2024 and has yet to be formally adopted by the Council. As with the Cybersecurity Act, fulfillment of the Cyber Resilience Act's requirements may serve as evidence of conformity with the relevant cybersecurity requirements of the AI Act.

    4. Digital Operational Resilience Act (DORA):

    Focused on the financial sector, DORA establishes uniform requirements for the security of network and information systems of financial entities. AI systems used in the financial sector may need to comply with both DORA and the AI Act.

    5. US Executive Order on AI:

    The US Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signed in 2023, demonstrates a global trend towards regulating AI with a focus on security and safety. While not EU legislation, it may influence international standards and practices that could impact EU-based AI developers and users. Key aspects include AI safety standards, cybersecurity measures, privacy protection, and international cooperation.

    The interaction between these various pieces of legislation highlights the complex regulatory landscape surrounding AI and cybersecurity. Organizations developing or deploying AI systems will need to navigate these interconnected requirements to ensure full compliance.

    Additional Frameworks for AI Systems Risk Management and Security

    In addition to formal legislation, several frameworks and standards have emerged to guide organizations in managing risks associated with AI systems and ensuring their security:

    1. ISO Standards:

    One of the key frameworks that can be referenced is ISO/IEC 42001, which outlines requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system (AIMS). This standard will be highly relevant for organizations looking to start aligning their practices with the requirements of the EU AI Act. In addition, ISO/IEC 23894 provides guidance on AI risk management. ISO/IEC 27090, currently under development, will provide guidance on addressing security threats to AI systems, offering further direction on managing AI-related cybersecurity risks.

    2. NIST AI Risk Management Framework (AI RMF):

    Developed by the US National Institute of Standards and Technology, this framework provides a structured approach to managing risks associated with AI systems throughout their lifecycle. NIST has published crosswalks to other standards, including ISO standards, the OECD recommendations on AI, and a draft version of the EU AI Act [3]. In July 2024, additional guidance and tools were released: the AI RMF Generative AI Profile (NIST AI 600-1), ‘Secure Software Development Practices for Generative AI and Dual-Use Foundation Models’, and an open-source tool for testing AI systems against adversarial attacks.

    3. ENISA AI Cybersecurity Resources:

    The European Union Agency for Cybersecurity (ENISA) has published various resources on AI cybersecurity, including the Framework for AI Cybersecurity Practices (FAICP) introduced in 2023. The FAICP aligns with the EU AI Act and offers practical guidance on enhancing AI system security throughout the lifecycle. It covers key areas such as risk management, security controls, incident response, and compliance. By following ENISA's guidance, organizations can improve their AI cybersecurity posture and better position themselves to meet the requirements of the AI Act.

    These frameworks and standards can serve as valuable resources for organizations seeking to implement robust risk management and security practices for their AI systems, going beyond mere compliance with legislative requirements.

    Conclusion

    The EU AI Act represents a significant step towards creating a comprehensive regulatory framework for AI systems, with cybersecurity playing a crucial role in its provisions. By addressing cybersecurity concerns and interacting with existing cybersecurity legislation, the Act aims to ensure that AI systems deployed within the EU are not only innovative but also secure and trustworthy.

    While the AI Act outlines fundamental cybersecurity requirements, it interacts with other cybersecurity legislation, such as the NIS2 Directive and the Cybersecurity Act, to create a comprehensive regulatory framework. Furthermore, the rapidly evolving nature of both AI technology and cyber threats means that legislation alone may not be sufficient to address all security concerns. Organizations should look beyond mere compliance and consider adopting comprehensive risk management frameworks and industry best practices to ensure the security and resilience of their AI systems.

    Next Steps: Preparing for Compliance

    As the EU AI Act's obligations begin to apply, organizations should take proactive steps to ensure compliance. To support these efforts, BSI offers a range of services, including certification audits against ISO 42001 requirements and voluntary, independent assessments of AI algorithms, models, and datasets. These assessments are overseen by BSI's experienced technical and regulatory AI experts, who ensure that AI performance metrics are calculated accurately in accordance with standards covering the bias, robustness, and performance of AI models.

    BSI also provides a range of training and knowledge materials to help clients stay informed on the evolving compliance landscape. BSI recently published a whitepaper on the EU AI Act, which distills the essential takeaways and provides a comprehensible summary of the legislation; the whitepaper is available for download.

    By leveraging BSI's expertise, organizations can work towards EU AI Act compliance while simultaneously enhancing their overall AI governance and cybersecurity posture. This proactive approach not only helps meet regulatory requirements but also builds trust in AI systems and strengthens an organization's competitive position.