Regulation of AI in the UK:
A non-statutory, principles- and outcomes-based framework
The UK government’s approach to regulating AI originated in the Department for Science, Innovation and Technology (DSIT) White Paper consultation entitled “A Pro-Innovation Approach to AI Regulation,” published in March 2023 (the “White Paper”). Following analysis of extensive feedback to the consultation, the government’s regulatory approach took final shape in its written response of February 6, 2024, entitled “A Pro-Innovation Approach to AI Regulation: Government Response” (the “Framework”).
Interestingly, the Framework is a principles-based, non-statutory and cross-sector framework. Its aim is to balance innovation and safety by applying the existing technology-neutral regulatory framework to AI. It is also an outcomes-based framework. What does this mean? First, the UK has taken the view that AI technology is currently too immature for legislation and that legislating now may be counterproductive. The UK government has therefore opted for no new legislation at present, in contrast to the freshly minted EU AI Act. The UK approach is also diametrically opposed to the EU’s in that it is context-specific rather than risk-based: the White Paper states that the UK’s AI regulatory framework will adopt a context-specific approach instead of categorizing AI systems according to risk. In other words, the UK has decided not to assign rules or risk levels across sectors or technologies. The White Paper also notes that it would be neither proportionate nor effective to classify all applications of AI in critical infrastructure as high risk, as some uses of AI in relation to critical infrastructure (e.g., the identification of superficial scratches on machinery) can be relatively low risk.
Five Core Principles
The Framework starts with its five core principles and, under each, the steps regulators are expected to take:
1. Safety, security and robustness
- Providing guidance as to what good cybersecurity and privacy practices look like
- Referring to a risk management framework that AI life cycle actors should apply
- Highlighting the role of available technical standards to clarify regulatory guidance and support the implementation of risk treatment measures
2. Appropriate transparency and explainability
- Setting expectations for AI life cycle actors to provide information relating to: (a) the nature and purpose of the AI system in question; (b) the data being used; (c) the training data used; (d) the logic and process used; and (e) accountability for the AI system and any specific outcomes
- Setting “explainability” requirements, particularly for higher-risk systems, to ensure appropriate balance between information needs for regulatory enforcement and technical trade-offs with system robustness
- Highlighting the role of available technical standards to clarify regulatory guidance and support the implementation of risk treatment measures
3. Fairness
- Interpreting and articulating what “fair” means with reference to their respective sectors
- Deciding in which contexts and instances fairness is important and relevant
- Designing, implementing and enforcing appropriate governance requirements for “fairness” in their respective sectors
- Where a decision involving the use of an AI system has a legal or similarly significant effect on an individual, considering the suitability of requiring AI system operators to provide an appropriate justification for that decision to affected third parties
- Ensuring that AI systems comply with regulatory requirements relating to the vulnerability of individuals within specific regulatory domains
- Considering the role of available technical standards to clarify regulatory guidance and support the implementation of risk treatment measures
4. Accountability and governance
- Determining who is accountable for compliance with existing regulation and the principles, and providing initial guidance on how to demonstrate accountability in relation to AI systems
- Providing guidance on governance mechanisms including, potentially, activities in the scope of appropriate risk management and governance processes (including reporting duties)
- Considering how available technical standards addressing AI governance, risk management, transparency and other issues can support responsible behavior and maintain accountability within an organization
5. Contestability and redress
- Creating or updating guidance with relevant information on where those affected by AI harms should direct their complaint or raise a dispute
- Creating or updating guidance that identifies the “formal” routes of redress offered by regulators in certain scenarios
- Emphasizing the requirements of appropriate transparency and “explainability” in interactions for effective redress and contestability
UK Government’s Strategy for Implementing the Principles
The above five principles will be implemented under the Framework through reliance on three main pillars:
1. Leveraging existing regulatory authorities and frameworks
2. Establishing a central function to facilitate effective risk monitoring and regulatory coordination
3. Supporting innovation by piloting a multi-agency advisory service, the AI and Digital Hub
- Pillar 1 – Leveraging existing regulatory authorities and frameworks
The UK does not plan to introduce a new AI regulator. Instead, existing regulators such as the Information Commissioner’s Office (ICO), Ofcom (the UK communications regulator) and the Financial Conduct Authority (FCA) have been asked to implement the five principles as they regulate AI within their respective domains. The regulators are expected to apply existing laws and regulations while issuing supplementary regulatory guidance.
The Government asked the regulators to publish their strategic plans by April 30, 2024. Those plans had to include all of the following: 1) an outline of the measures taken to align their AI plans with the Framework’s principles; 2) an analysis of AI-related risks within their sectors; 3) an explanation of their existing capacity to manage AI-related risks; and 4) a plan of activities for the next 12 months, including additional AI guidance.
The cross-sector regulators required to report are:
- Information Commissioner’s Office (ICO)
- Competition and Markets Authority (CMA)
- Equality and Human Rights Commission
- Health and Safety Executive
- Office for Product Safety and Standards
The sectoral regulators required to report are:
- Ofcom (the UK communications regulator)
- Financial Conduct Authority (FCA)
- Bank of England
- Medicines and Healthcare products Regulatory Agency (MHRA)
- Ofgem (Office of Gas and Electricity Markets)
- Office for Nuclear Regulation
- Legal Services Board
- Ofsted (Office for Standards in Education, Children’s Services and Skills)
- Ofqual (Office of Qualifications and Examinations Regulation)
The above regulators have 12 months to publish AI regulatory guidance.
While the UK government has rejected the idea of legislating at the moment, it anticipates the need for targeted legislative interventions in the future. These interventions are intended to address gaps in the current regulatory framework, particularly the risks posed by complex general-purpose AI (GPAI) systems. The UK government also anticipates introducing a statutory duty on regulators requiring them to have due regard to the five principles.
- Pillar 2 – Central Function to support regulatory capabilities and coordination
The second pillar designed to implement the five principles is a central function, located within DSIT, to support regulatory capabilities and coordination. The UK government has taken the view that, given the widespread impact of AI, individual regulators cannot fully address the opportunities and risks presented by AI technologies in isolation. To address this issue, the government is setting up a new central function within DSIT to monitor and evaluate AI risks, promote coherence and address regulatory gaps. The central function will, in turn, establish a steering committee.
- Pillar 3 – AI & Digital Hub
The third pillar for implementing the principles is a pilot multi-regulator advisory service called the AI and Digital Hub, launched by the Digital Regulation Cooperation Forum (DRCF). The DRCF comprises four regulators: the ICO, the CMA, Ofcom and the FCA. The AI and Digital Hub is intended not only to facilitate compliance but also to foster cooperation among regulators.
It is intended to help innovators navigate regulatory obligations before product launch and will be open to firms meeting the eligibility criteria. This is reminiscent of the regulatory sandbox proposed by the EU AI Act. However, the AI and Digital Hub has an innovative feature: companies can submit regulatory queries online, and the Hub will respond to them. The Hub will also publish queries and responses, creating a type of regulatory precedent which lawyers will certainly appreciate.
Definition of AI
Unlike the EU AI Act, the Framework does not contain any formal definition of AI. Instead, the White Paper focuses on two defining characteristics of AI: 1) adaptivity and 2) autonomy. “Adaptivity” refers to the ability of AI systems to develop new forms of inference not directly envisioned by their human programmers. “Autonomy,” in turn, refers to the fact that AI systems can make decisions without the express intent or ongoing control of a human. The concept of “autonomy” recognizes a reality that has escaped the drafters of the EU AI Act, who have hinged the risk-based categories on the “intent” of AI developers. In the UK, by contrast, regulators are expected to interpret the concepts of “adaptivity” and “autonomy” to craft domain-specific definitions of AI systems.
The Framework does, however, contain definitions for three types of the most powerful AI systems: 1) highly capable GPAI, 2) highly capable narrow AI and 3) agentic AI.
1) Highly Capable GPAI
Foundation models that can perform a wide variety of tasks (e.g., large language models (LLMs)).
Their capabilities can match or exceed those present in today’s most advanced models.
Such models will span from novice through to expert capabilities, with some even showing superhuman performance across a range of tasks.
2) Highly Capable Narrow AI
Foundation models that can perform a narrow set of tasks, normally within a specific field like biology.
Their capabilities can match or exceed those present in today’s most advanced models.
Generally, such models will demonstrate superhuman abilities on these narrow tasks or domains.
3) Agentic AI
An emerging subset of AI technologies that can competently complete multiple sequential steps over long time frames (e.g., sending email or instructions to physical equipment) to accomplish a high-level task or goal.
These systems can use tools such as coding environments, the internet, and narrow AI models to complete tasks.
Conclusion
In sum, the UK government considers that a non-statutory approach offers “critical adaptability” that keeps pace with rapid and uncertain advances in AI technology. It maintains that legislating now would be premature because the risks and challenges associated with AI, the gaps in the current regulatory framework, and the ways to address them must first be better understood.
However, the UK government’s non-statutory approach is not a complete move away from regulation. On the contrary, the government has shifted from a sole focus on voluntary measures toward future targeted legislative and regulatory intervention. These interventions will be aimed at gaps in the existing regulatory framework and may include imposing a statutory duty on regulators to have “due regard” to the five principles.
Noticeably absent from the Framework, however, is any present approach to regulating GPAI. The anticipated legislative intervention concerns GPAI in particular and will likely be aimed at a select group of developers of the most powerful GPAI models. Nevertheless, the UK’s insistence on applying existing regulations and leaving the regulation of GPAI to a later date may mean that the proposed Framework is already outdated.
Aparna Viswanathan
Barrister-at-Law (of Lincoln’s Inn), Attorney (admitted in NY, DC, CA)
APARNA VISWANATHAN received her Bachelor of Arts (A.B.) degree from Harvard University and her Juris Doctor (J.D.) from the University of Michigan Law School. She is called to the Bar of England and Wales (Lincoln’s Inn) and is admitted in New York, Washington, D.C., California and India.
In 1995, Ms. Viswanathan founded Viswanathan & Co, Advocates, a firm based in New Delhi. Since then, she has advised over 100 major multinational companies doing business in India and argues cases before the Delhi High Court and the Bombay High Court.