    Navigating the EU AI Act: A Healthcare Industry Perspective

    The rise of digital workflows has brought significant improvements over traditional methods in the healthcare industry, and has paved the way for AI tools that further enhance the efficiency and timeliness of patient care.

    One notable field is medical imaging, where specialized images taken from patients serve as evidence for the presence of severe diseases such as cancer. With a growing population and a shortage of medical expertise, the daunting task of manually reviewing vast numbers of samples can be inefficient and fatiguing for a doctor.

    AI enables rapid analysis of samples, highlighting areas of concern with remarkable accuracy and speed, reducing workload, and allowing doctors to focus on complex cases. This efficiency leads to faster and more reliable diagnoses, transforming patient care. In this article we will take AI in medical imaging as an example and focus on how the EU AI Act would impact it.

    Understanding the EU AI Act

    The EU AI Act sets the framework to ensure these AI advancements are implemented safely and effectively. For applications it classifies as high-risk, the Act mandates strict regulatory standards, ensuring that AI tools are safe, reliable, and transparent.

    The EU AI Act classifies AI systems based on their risk levels: unacceptable, high-risk, limited risk, and minimal risk. Unacceptable risk AI systems, such as social scoring systems and manipulative AI, are prohibited. High-risk AI systems, which constitute the bulk of the AI Act's focus, face regulations to ensure their safety and reliability. Limited risk AI systems have lighter transparency obligations, while minimal risk systems, including many current AI applications like AI-enabled video games, remain unregulated.

    For healthcare applications, the high-risk classification is particularly relevant. High-risk AI systems, as defined in the AI Act, include those that have a significant impact on people's safety, health, or fundamental rights. Given the critical impact of AI systems on patient diagnosis, they are likely to be classified as high-risk under the EU AI Act.

    AI in Medical Imaging

    An AI-assisted medical imaging tool typically consists of an image viewer and an AI model that takes the digital image as input and outputs predictions on the image. In the preliminary stages of development, experts annotate the images, marking features such as malignant regions, cancerous cell positions, and volume segmentations.

    These annotations are then fed to the model during a training phase so that the model learns to make the same predictions as the experts. Once a model shows satisfactory performance on a held-out set of data not used during training, it is deployed for real-life use cases.
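As an illustration, the train-then-evaluate workflow described above can be sketched with a deliberately simplified model: a single intensity threshold stands in for a real imaging model, and all names and data are hypothetical.

```python
def train_threshold_model(intensities, labels):
    """Fit a toy 'model': find the intensity threshold that best
    separates expert-labelled suspicious (1) from normal (0) samples."""
    best_t, best_acc = 0.0, 0.0
    for step in range(101):
        t = step / 100
        preds = [1 if x > t else 0 for x in intensities]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def evaluate(threshold, intensities, labels):
    """Accuracy on data the model has never seen -- the gate that
    must be passed before deployment."""
    preds = [1 if x > threshold else 0 for x in intensities]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Expert-annotated training data: mean lesion intensity and label.
train_x, train_y = [0.10, 0.20, 0.25, 0.80, 0.90, 0.75], [0, 0, 0, 1, 1, 1]
# Held-out test data, never seen during training.
test_x, test_y = [0.15, 0.85, 0.20, 0.70], [0, 1, 0, 1]

t = train_threshold_model(train_x, train_y)
print(evaluate(t, test_x, test_y))
```

A real system would of course use a learned imaging model rather than a threshold, but the structure is the same: fit on expert annotations, then gate deployment on held-out performance.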

    For each new case, an expert views the image and leverages the AI predictions to decide the course of action for diagnosis. Let us review some of the requirements of the EU AI Act for an AI system like this and see how it will apply to the various stages of its development.

    Requirements for High-Risk AI Systems in Healthcare

    The requirements of the AI Act for providers of high-risk AI systems are extensive and crucial for ensuring the safety and efficacy of AI tools. They are set out in Chapter III of the AI Act, Articles 9-15, and providers must be able to demonstrate compliance with each of them.

    1. Risk Management System: A risk management system, maintained throughout the AI system's lifecycle, identifies, assesses, and mitigates potential risks. It puts an action plan in place for problems that may arise at any stage of the AI system, from development all the way to use in production.

    2. Data Governance: Data is the cornerstone of AI development. Establishing data governance involves ensuring that the training, validation, and testing datasets are relevant, representative, and free of errors. In healthcare, this is particularly important as the accuracy of AI systems directly impacts patient outcomes. For AI used in medical imaging, diverse and high-quality datasets from various demographics which are carefully annotated by experts are essential to train the AI to perform accurately across all patient groups.

    3. Technical Documentation: AI systems are intricate by nature and are built by a multidisciplinary workforce. Proper technical documentation makes it easier for anyone unfamiliar with a part of the system to understand it. It also supports versioning and change tracking, outlines the methods used, and presents validation results to ensure transparency and accountability.

    4. Record-Keeping and Traceability: The AI development environment is fast-paced, and an AI model will typically go through many versions before it is ready for production. Record-keeping and traceability features are vital for transparency and accountability. This means the AI system should automatically log all relevant events and modifications throughout its lifecycle. For instance, a logging system that records every update ensures that changes can be reviewed and maintains a clear, auditable history, so that when an issue arises its origins are easy to trace.
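A sketch of what such a logging system might look like: an append-only log in which each entry is hash-chained to the previous one, so any after-the-fact modification is detectable. The event names are hypothetical, and a production system would also need persistent, access-controlled storage.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log of model lifecycle events; each entry is
    chained to the previous one via a hash so tampering is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, event, detail):
        prev = self.entries[-1]["hash"] if self.entries else ""
        entry = {"event": event, "detail": detail,
                 "time": time.time(), "prev": prev}
        # Hash the entry contents (including the previous hash) to chain it.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute every hash; any edit to a past entry breaks the chain."""
        prev = ""
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("model_trained", "v1.2 trained on dataset snapshot 2024-03")
log.record("model_deployed", "v1.2 promoted to production")
print(log.verify())
```

Silently editing an earlier entry after the fact would cause `verify()` to return `False`, which is exactly the auditable-history property the requirement asks for.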

    5. Instructions for Use: Providing clear instructions for downstream users is crucial, both so that the system complies with regulatory requirements and so that experts have precise guidelines on how to use it effectively for AI-powered image analysis.

    6. Human Oversight: Although AI-assisted tools are more efficient and cost-effective, it should be noted that AI models make probabilistic predictions, and it is possible for them to make faulty decisions. Thus, human oversight is an indispensable aspect of AI system design, particularly in healthcare. AI systems should allow experts to intervene when necessary and make the final diagnostic decisions. It is therefore essential that experts review AI findings to confirm or adjust the diagnosis, ensuring accuracy and maintaining the human element in patient care.

    7. Accuracy, Robustness, and Cybersecurity: Given what is at stake in a patient's diagnosis, these aspects are paramount in healthcare AI development. AI models need to be tested against different benchmarks to make sure they perform well enough for production use. Furthermore, AI systems need to be resilient against errors, inconsistencies, and attempts at unauthorized access.
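The robustness part of this requirement can be probed with a simple stress test: feed the model slightly perturbed inputs and compare accuracy against the clean baseline. The noise level and the toy threshold model below are assumptions for illustration, not regulatory values.

```python
import random

def accuracy(predict, samples, labels):
    return sum(predict(x) == y for x, y in zip(samples, labels)) / len(labels)

def robustness_check(predict, samples, labels, noise=0.05, trials=50, seed=0):
    """Compare clean accuracy with mean accuracy under small random
    input perturbations; a large gap signals a brittle model."""
    rng = random.Random(seed)
    clean = accuracy(predict, samples, labels)
    perturbed = []
    for _ in range(trials):
        noisy = [x + rng.uniform(-noise, noise) for x in samples]
        perturbed.append(accuracy(predict, noisy, labels))
    return clean, sum(perturbed) / len(perturbed)

# Toy threshold classifier with a comfortable decision margin.
model = lambda x: 1 if x > 0.5 else 0
samples, labels = [0.10, 0.90, 0.20, 0.80], [0, 1, 0, 1]
clean_acc, noisy_acc = robustness_check(model, samples, labels)
print(clean_acc, noisy_acc)
```

For a real imaging model the perturbations would be image-level (noise, contrast shifts, compression artifacts), but the acceptance logic is the same: accuracy under perturbation should stay close to the clean benchmark before the system is cleared for production.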

    AI Compliance with BSI

    BSI is a key player in the regulatory industry, offering a range of services to support enterprises as the AI regulatory landscape evolves. Their recently published EU AI Act whitepaper, ‘What AI providers and deployers need to know’, distills the essential takeaways from an intricate legal document into a comprehensible summary for anyone interested in the subject, and is available for download from BSI's website.

    BSI provides auditing services for companies committed to trustworthy AI development and deployment. These include audits against ISO 42001 requirements for certification, as well as voluntary, independent assessments of technical documentation, AI algorithms, models, and datasets. Overseen by BSI's experienced technical and regulatory AI experts, these assessments ensure that AI performance metrics are accurately calculated in accordance with standards on bias, robustness, and model performance.

    Conclusion

    Healthcare companies are obliged to integrate compliance strategies into their AI development processes. Implementing the recent ISO 42001 standard on AI Management Systems requirements will be one way to ensure that the requirements imposed by the AI Act will be met.

    Adopting such standards will create a rigorous and efficient risk management system, build trust among the parties involved, and turn compliance with the AI Act into a catalyst for innovation and excellence, leading the way in the future of healthcare.