
Shaping Trust in AI - Understanding the ISO 42001 Standard

ISO 42001 keeps pace with the advancement of AI technologies while ensuring that companies can leverage these innovations responsibly and ethically.

Artificial intelligence (AI) is now an essential part of our daily lives. As it continues to evolve and reach into more areas, addressing the need for regulation, data privacy, and transparency becomes imperative.

Companies with AI-enabled products, services, or management systems should prepare for upcoming regulations to foster a more secure and beneficial environment. As AI progresses, it's evident that AI management must evolve as well.

According to Yahoo Finance, 62 percent of surveyed IT leaders have increased their investment in emerging applications, and 82 percent say they are prepared to leverage generative AI. But how is that adoption managed in practice, and what concerns does it raise?

Regulation and concerns

Issues surrounding AI regulation need to be taken seriously. Key concerns include data privacy, ethics, legal questions, and transparency. In 2020, the US passed the National Artificial Intelligence Initiative Act of 2020 (H.R. 6216), which launched an American AI initiative and provides guidance for AI research, development, and evaluation activities within federal science agencies.

Concern about misuse and unintended consequences has also led to efforts to develop standards. The US National Institute of Standards and Technology is holding workshops and discussions with the public and private sectors to create federal standards for robust, dependable, and trustworthy AI systems.

In the 2023 legislative session, at least 25 states, Puerto Rico, and the District of Columbia introduced AI bills, with 18 states and Puerto Rico adopting resolutions or enacting legislation.

President Biden issued an executive order to promote AI development while establishing guidelines for federal agencies to follow when designing, acquiring, deploying, and overseeing AI systems. The executive order also seeks to establish testing standards to minimize the risks of AI to infrastructure and cybersecurity. This is where it’s imperative to have an artificial intelligence management system (AIMS).

The world's first international standard for AI management systems, ISO/IEC 42001:2023 (Information technology — Artificial intelligence — Management system), specifies requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system (AIMS) within organizations. It is designed for organizations that provide or use AI-based products or services, promoting the responsible development and use of AI systems.
This standard tackles challenges such as ethics, transparency, and continuous learning, helping organizations navigate AI's risks and rewards with a balanced approach to innovation and governance.

Organizations should look for:

• A contribution to UN Sustainable Development Goal 9 on industry, innovation, and infrastructure
• A framework for managing risks and opportunities
• A way to demonstrate responsible use of AI
• Traceability, transparency, and reliability (see the sketch after this list)
• Cost savings and efficiency gains
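To make the traceability point concrete, the sketch below is a hypothetical, minimal example of how a team might record its AI systems in an internal register, capturing the kind of ownership, data-source, and risk information an AIMS would call for. The structure and field names are assumptions made purely for illustration; ISO/IEC 42001 does not prescribe any particular format or tooling.

```python
# Hypothetical illustration only: a minimal in-memory "AI system register"
# showing the kind of traceability and risk information an AIMS might track.
# Field names and risk levels are invented for this example, not taken from ISO/IEC 42001.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AISystemRecord:
    name: str                    # internal name of the AI-enabled product or service
    owner: str                   # accountable team or role
    intended_use: str            # documented purpose, supporting transparency
    training_data_sources: List[str] = field(default_factory=list)  # supports traceability
    identified_risks: List[str] = field(default_factory=list)       # inputs to risk treatment
    risk_level: str = "unassessed"   # e.g. "low", "medium", "high" after assessment


def systems_needing_review(register: List[AISystemRecord]) -> List[AISystemRecord]:
    """Return systems whose risks have not yet been assessed."""
    return [r for r in register if r.risk_level == "unassessed"]


# Example usage: one registered system that still needs a risk assessment.
register = [
    AISystemRecord(
        name="support-chat-assistant",
        owner="Customer Experience",
        intended_use="Draft replies to routine support tickets",
        training_data_sources=["anonymized historical tickets"],
        identified_risks=["hallucinated policy statements"],
    )
]
print([r.name for r in systems_needing_review(register)])
```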

Another important piece to implement is proper training. This helps ensure that your employees are equipped with the knowledge and skills to adopt, develop, and deploy AI effectively and ethically. Training courses provide interactive and in-depth learning experiences to upskill and reskill people, something organizations should do to stay on top of the ever-changing AI landscape.

Whatever the size of your business, establishing, implementing, maintaining, and continually improving an AIMS based on ISO 42001 can build trust with stakeholders and clients, creating a safer and more beneficial environment.