In a white paper titled “A pro-innovation approach to AI regulation,” published on 29 March, the UK’s Department for Science, Innovation and Technology (DSIT) outlines the nation’s objectives for regulating AI systems. By concentrating on the use of AI rather than on the technology itself, the paper sidesteps the difficulties of a horizontal, one-size-fits-all regulatory framework.
The paper introduces the concept of a regulatory sandbox in which foundational AI companies can test products against regulatory requirements before going to market, and it empowers domain-specific regulators to take the lead in tailoring how flexible rules are applied and how public expectations are met.
This context-driven approach is important, but the paper also emphasises that “tools for trustworthy AI including assurance techniques and technical standards will play a critical role in enabling the responsible adoption of AI and supporting the proposed regulatory framework.” The RAI Institute is actively monitoring this emerging area and is prepared to provide a variety of AI assurance services and expertise to promote responsible innovation. We regularly track evolving AI regulations and guidance in our Regulatory Tracker in order to align our evaluations and certification programme with regulatory goals. The UK’s efforts to emphasise tools for trustworthy AI and to chart a practical path forward are admirable.
Among the measures described in the paper is a set of five guiding, non-statutory principles for AI regulators: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles closely mirror the RAI Institute’s own implementation methodology, with responsible AI as the cornerstone of innovation, and deepen our current cooperation to support interoperability measures.
Alongside the principles, the paper announced several key deliverables and investments:
a public consultation, open from 29 March 2023 to 21 June 2023, for individuals and businesses to provide feedback on the white paper;
a roadmap for AI regulation, to be released alongside the UK’s response to the consultation, detailing how the principles will be put into practice as well as the budget for a £2 million regulatory sandbox;
practical guidance on AI regulation, including risk assessment templates, to be released over the next 12 months; and
a portfolio of AI assurance techniques, showing how these techniques are already being applied in real-world use cases, to be released in spring 2023.
Of particular relevance to the RAI Institute is Part Four of the paper, which discusses tools for trustworthy AI that support compliance within business and civil society and connects directly to our standards development and conformity assessments. In addition to existing technical standards, the paper defines assurance techniques as “impact assessment, audit, and performance testing along with formal verification methods,” which aid regulators in creating “sector-specific approaches to AI regulation by providing common benchmarks and practical guidance to organisations.” We are eagerly awaiting the publication of the assurance techniques and assessment templates, and we are updating our Regulatory Tracker to reflect changes to the UK’s policy.
We are excited to see assessment tools for responsible AI brought to the forefront of governance efforts, along with the outline of a three-tiered, “layered” strategy to promote the sustainable adoption of responsible practices: sector-agnostic standards to support cross-sectoral principles, followed by more tailored standards addressing contextual issues such as bias and transparency (e.g., ISO/IEC TR 24027:2021), and ultimately strengthened by sector-specific standards.
According to the paper, existing regulatory agencies such as the UK’s Financial Conduct Authority (FCA) should encourage the deployment of responsible AI tools within their own sectors. Policymakers, standard-setters, business professionals, and researchers from the UK and Canada recently met at the RAI Institute to discuss how each country currently governs AI. This meeting, organised as a regulatory roundtable, was held at the FCA with support from the UK’s Foreign, Commonwealth and Development Office. We exchanged ideas, agreed on areas for development, and made plans for future meetings, with the aim of developing a coordinated strategy for AI in financial services and across industries. To put AI principles into practice effectively and methodically, we will keep working with experts across fields.
The paper lays out precise next steps. Within six to twelve months, DSIT hopes to “publish proposals for the design of a central M&E framework including identified metrics, data sources, and any identified thresholds or triggers for further intervention or iteration of the framework.” Organisations that are buying, selling, building, or collaborating on AI systems can begin exploring this initial layer of AI governance and applying the general principles.