Shaping trust in AI: How regulation can help us unlock the true power of AI
In this essay, Mark Thirlwell, Managing Director, AI Regulatory Services, looks at the medical devices sector and how building trust, including through regulation, can ensure AI is a force for good for patients.
This year alone, the AI medical devices sector is expected to grow to $15.42 billion, a compound annual growth rate of 45% [1]. And BSI’s Trust in AI Poll [2] found:
- 57% of people expect to use AI in their jobs daily by 2030
- On the patient side, 37% expect to see AI used when they are at the doctor’s or in hospital by 2030
Manufacturers are already embracing AI in medical devices[3], from surgical robotics to diagnostics. Remarkable outcomes are being achieved, for example AI tools that can detect breast cancer even before symptoms appear[4]. These are welcome developments that can help ensure AI is a force for good.
Trust is the lynchpin
BSI’s research revealed:
- 52% of people are excited about the potential for AI to improve diagnosis or speed up recovery
- 56% support the use of AI tools if they can improve their condition or speed up recovery
- 55% support the use of AI tools if strict safeguards are in place to ensure the ethical use of patient data.
This is transformational technology, but understandably the public wants reassurance. 57% said the tools should be overseen by a qualified person, and three-quarters (74%) said they need a strong level of trust when AI is used for medical diagnosis or treatment (for example, analysing X-ray images or creating personalized treatment plans).
We have a fantastic opportunity to transform medical treatment and help people not just become healthier but live better lives while they are being treated. If we don’t take steps to build trust in AI in the medical devices space, the risk is that we will lose out. That means taking steps to guarantee that AI does exactly what it says it will do and nothing else, ensuring that it is trustworthy and ethical and that potential bias is addressed. It also means keeping a human in the overall process.
Ultimately, it’s critical that we complement innovation with safe and ethical deployment. This is where regulation could play a crucial role.
The impact of the EU AI Act
Our survey found that 40% of people use AI in their jobs every day. With different industries considering how AI can be used to drive the performance and safety of products, regulation is likely to play an increasingly pivotal role.
The EU AI Act, due to be finalized imminently with a two- to three-year roll-out, is designed to be clear, prescriptive and flexible. It will establish obligations for providers and users depending on the level of risk posed, while also requiring generative AI tools like ChatGPT to comply with transparency rules.
There is an opportunity for other nations around the globe to embrace this approach too. Such oversight is intended to deliver clarity and confidence in AI development, which has the potential to increase the public's willingness to adopt the technology quickly, driving demand and encouraging innovation.
The medical angle
Where there is already legislation in place for medical devices, the EU AI Act will be bolted onto it, ensuring that all elements of a medical device are safely regulated. Any organization wishing to market its medical device in Europe will need to conform to the Act by the end of the transition period, likely in 2026 or 2027. Having this in place could help boost Europe’s enthusiasm for AI.
At BSI we have a strong track record in the safe adoption of medical devices and are one of the largest medical device notified bodies in Europe. The AI Act can work well with current medical device regulations[5] (MDR) to ensure that people can benefit from these technologies.
Time for a global approach
Whilst the AI Act is a significant stake in the ground, global agreement is the next step. Currently, the EU is taking a different approach to AI from the US, China and even the UK[6].
Time will tell how the world responds to AI. Given the amount of innovation we are seeing in the medical devices sector, the potential for AI to improve how we diagnose, treat and care for people is evident. If we want to unlock the true power of AI, the sooner we embed trust, the sooner we can reap the benefits.
This content is from BSI’s Shaping Society 5.0 campaign.
Mark Thirlwell, Managing Director, AI Regulatory Services at BSI
Mark has over 18 years of experience working for Accenture, PwC and, most recently, The Berkeley Partnership. During this time Mark shaped and led the definition and delivery of transformational strategies across a range of industries. He is increasingly focused on how cutting-edge digital innovation can be used to address society's most pressing issues, including climate change. Mark joined BSI in an interim capacity in April 2022, before being appointed as the permanent Managing Director of the AI Regulatory Services team in March 2023.
[1] AI In Medical Devices Global Market Report 2023, Report Linker, July 2023
[2] BSI partnered with Censuswide to survey 10,144 adults across nine markets (Australia, China, France, Germany, India, Japan, Netherlands, UK, and US) between 23rd and 29th August 2023
[3] Medtech’s Generative AI Opportunity, BCG, May 2023
[4] This AI software can tell if you're at risk from cancer before symptoms appear, Wired, August 2023
[5] The Medical Devices Regulations 2002, Legislation.Gov.UK, accessed September 2023
[6] AI regulation around the world, Taylor Wessing, May 2023