Throughout history, major technological advancements have reshaped societies, economies, and industries. In relatively recent human history, from the Industrial Revolution to the rise of the internet, these shifts have driven remarkable progress but also invariably had unintended consequences. Amidst the current explosion of artificial intelligence (AI) and the coming disruption of quantum technology, it is critical that we learn valuable lessons from past transformations to ensure new technologies are adopted safely, ethically, and sustainably.
Society has responded to these disruptive revolutions by building, in parallel, an international system of standardization, assurance and regulation (collectively known as the ‘quality infrastructure’), which can continue to perform a critical role in balancing the risks and rewards of these new tools in humanity’s arsenal.
Learning from the past
The Industrial Revolution, which began in the late 18th century, marked a transition from agrarian societies to industrialized economies. The introduction of machinery into manufacturing increased production capabilities and led to rapid urbanization. However, this also brought severe social and environmental consequences, including poor working conditions, child labour, and pollution, underscoring the need for proactive human-centric intervention to protect vulnerable populations.
In response, governments established labour laws and regulatory bodies, such as the UK’s Factory Act of 1833 and the US Interstate Commerce Commission in 1887. These set the precedent for ensuring that technological progress does not come at the cost of human well-being. Similarly, as AI, quantum technology and other emerging digital solutions transform industries, a human-centric approach must be taken, with careful consideration given to regulation that prevents unethical labour practices, data exploitation, and widening economic disparities.
The widespread adoption of electricity in the late 19th and early 20th centuries revolutionized daily life, improving living standards, extending productive hours, and fostering new industries. However, the rapid deployment of electrical infrastructure also introduced safety hazards, including fires and electrocution due to poorly regulated installations.
The establishment of safety standards and regulatory bodies, such as the National Electrical Manufacturers Association (NEMA) and the UK’s Electric Lighting Act of 1882, helped to mitigate these dangers. AI poses similar challenges in terms of reliability, bias, and cybersecurity threats. Just as safety standards were implemented for electrical devices, AI must be developed within a framework of ethical guidelines and technical safeguards to help ensure responsible use.
The automobile industry in the early 20th century demonstrated another example of the risks of unchecked innovation. Mass production of cars led to increased mobility and economic growth, but also introduced traffic accidents, congestion, and environmental pollution.
Over time, regulatory interventions such as mandatory seat belts, vehicle safety standards, and emissions regulations helped to address these concerns. The lesson for the digital revolution is clear: while technologies like AI have the potential to enhance efficiency and decision-making, unchecked deployment could lead to unforeseen consequences. Regulatory bodies can address this by taking an active role in ensuring that AI technologies prioritize environmental sustainability, safety, transparency, and accountability.
The rise of the internet brought unparalleled global connectivity, information sharing, and economic opportunities. However, it also created challenges related to privacy, misinformation, and cybersecurity. Initially, the internet was developed without built-in protections, leading to widespread data breaches and security vulnerabilities. Over time, organizations such as the Internet Corporation for Assigned Names and Numbers (ICANN) and regulatory frameworks like the General Data Protection Regulation (GDPR) emerged to safeguard users and enforce responsible data handling. We can learn from this and integrate ethical and security considerations into technology developments from the outset, rather than retroactively addressing them.
Different approaches to address similar challenges
Historically, markets have approached technological advancements differently. In the UK, a phased and regulated approach was taken with telecommunications, transitioning from a state monopoly to a competitive market overseen by a regulatory body, ensuring innovation while managing risks. The US has generally focused on self-regulation with government oversight when necessary, emphasizing competitive fairness and fostering innovation. The European Union has consistently prioritized harmonization and consumer protection, ensuring unified regulations across member states. Each of these approaches offers insights into how AI governance, and future quantum governance, could be structured, balancing innovation with safety and ethical responsibility.
Utilising tried and tested governance mechanisms
The role of standards bodies and the Testing, Inspection and Certification (TIC) industry has been crucial in ensuring that past technological advancements benefited society while minimizing harm. These organizations have provided testing, certification, and regulatory frameworks that have guided industries towards integrating safer and more sustainable practices. As AI becomes increasingly integrated into daily life, these institutions, with the contributions of groups like the AIQI Consortium, can play a vital role in establishing industry-wide best practices, verifying compliance with ethical guidelines, and maintaining public trust in AI, while shaping an approach that could later extend to quantum.
Adopting lessons learnt to provide a springboard for progress
The overarching lesson is that while technological progress brings immense benefits, it delivers lasting impact only when accompanied by foresight, regulation, and ethical considerations. AI and quantum technologies present unprecedented opportunities. With proactive governance, society can prevent these technologies from exacerbating inequalities, compromising security, or creating ethical dilemmas. By learning from past societal shifts, we can ensure that technological advancements continue to serve humanity responsibly, equitably, and sustainably.
Read about public perceptions on AI and quantum technology advancements in our Innovating for our future report.