The UK government plans to regulate AI through new policies on “responsible use.” It has labeled AI one of the “technologies of tomorrow,” citing the sector’s £3.7bn ($5.6bn) contribution to the UK economy in the previous year.
AI’s rapid growth worries critics, who fear it could threaten jobs or be used for malicious purposes. AI encompasses computer systems that perform tasks normally requiring human intelligence, such as chatbots that understand questions and give human-like answers, and systems that recognize objects in images.
The White Paper
The Department for Science, Innovation and Technology released a white paper proposing how to regulate general-purpose AI systems such as ChatGPT. With AI’s rapid development, concerns have surfaced about potential threats to safety, privacy, and human rights. One worry is that AI systems, trained on large datasets scraped from the internet that can contain biased or offensive content, could reproduce those biases against certain groups.
Misinformation can be created and spread using AI, prompting experts to call for its regulation. However, the government is concerned that a patchwork of legal controls may confuse businesses, hindering AI’s full potential.
Instead of appointing a single new regulator, the government proposes that existing regulators, such as the Equality and Human Rights Commission, the Health and Safety Executive, and the Competition and Markets Authority, develop their own approaches to AI governance. These regulators will rely on existing laws rather than being granted new powers.
Proposed Principles Outlined in the White Paper
The white paper presents five principles that regulators should follow to support the safe and innovative use of AI in their respective industries:
Security, Safety, and Robustness: AI applications must function safely, securely, and robustly, with risks carefully managed.
Transparency: Organizations that develop and deploy AI must communicate when and how they use it and explain its decision-making process, with the level of detail matched to the risks posed by the AI’s use.
Fairness: AI use must comply with existing UK laws, such as those on data protection and equality, and must not discriminate against individuals or create unfair commercial outcomes.
Accountability: Necessary measures must be in place to ensure proper oversight of AI usage and accountability for its outcomes.
Contestability: Clear avenues must exist for people to challenge AI-generated decisions or outcomes that cause harm.
Over the coming year, regulators will issue practical guidance to organizations setting out how to implement these principles in their industries.