    First-of-its-kind global guidance to support responsible AI management published

    • New international standard designed to empower organizations to safely manage AI
    • Safeguards help build trust so organizations and society can benefit from opportunities from AI
    • Publication takes step to close AI confidence gap, after 61% called for global guidelines

    16 January 2024: A first-of-its-kind AI management system standard designed to enable the safe, secure and responsible use of artificial intelligence (AI) across society has been launched by BSI, following research showing 61% of people wanted global guidelines for the technology.

    The international standard (BS ISO/IEC 42001) is intended to help organizations use AI responsibly, addressing considerations such as non-transparent automatic decision-making, the use of machine learning rather than human-coded logic in system design, and continuous learning.

    The guidance, published in the UK by BSI as the UK’s National Standards Body, sets out how to establish, implement, maintain and continually improve an AI management system, with a focus on safeguards. It is an impact-based framework that provides requirements to facilitate context-based AI risk assessments, with details on risk treatments and controls for internal and external AI products and services. It aims to help organizations introduce a quality-centric culture and responsibly play their part in the design, development and provision of AI-enabled products and services that can benefit them and society as a whole. The publication is prominently referenced in the UK Government’s National AI Strategy as a step towards guardrails that ensure AI’s safe, ethical and responsible use.

    Susan Taylor Martin, CEO, BSI said: “AI is a transformational technology. For it to be a powerful force for good, trust is critical. The publication of the first international AI management system standard is an important step in empowering organizations to responsibly manage the technology, which in turn offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world. BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”

    BSI’s recent Trust in AI Poll of 10,000 adults across nine countries found that three-fifths of people globally (61%) and a similar proportion in the UK (62%) wanted international guidelines to enable the safe use of AI. Nearly two-fifths globally (38%) already use AI every day at work, while more than three-fifths (62%) expect their industries to do so by 2030. The research found that closing the ‘AI confidence gap’ and building trust in the technology is key to powering its benefits for society and the planet.

    Scott Steedman, Director General, Standards, BSI said: “AI technologies are being widely used by organizations in the UK despite the lack of an established regulatory framework. While governments consider how to regulate most effectively, people everywhere are calling for guidelines and guardrails to protect them. In this fast-moving space, BSI is pleased to announce the publication of the latest international management standard for industry on the use of AI technologies, which is aimed at helping companies embed safe and responsible use of AI in their products and services.

    “Medical diagnoses, self-driving cars and digital assistants are just a few examples of products that already benefit from AI. Consumers and industry need to be confident that in the race to develop these new technologies we are not embedding discrimination, safety blind spots or loss of privacy. The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”

    BSI represented UK interests in the development of the standard as a participating member in ISO/IEC JTC 1/SC 42.

    Download the standard from the BSI website, or discover more about BSI’s work to shape trust in AI.