AI Safety Institute to set global standards for testing AI models

The UK has established the world’s first AI Safety Institute (AISI) to lead the global effort to ensure the safety of artificial intelligence (AI) models, the technology that powers chatbots like ChatGPT and other applications.

The AI Safety Institute was announced by Rishi Sunak last year, ahead of the global AI safety summit, where the UK played a prominent role in securing a commitment from big tech companies, the EU, and 10 other countries to cooperate on testing advanced AI models before and after their deployment.

The institute’s main goal is to set test standards for the wider world, rather than trying to do all the vetting itself, according to Marc Warner, the chief executive of Faculty AI, a London-based company that assists the AISI in testing AI models.

Why test standards are important for the AI Safety Institute

Warner said that testing AI models is crucial to ensure that they do not violate their safety guidelines or cause harm to users or society. For example, he said that his company helps the AISI to conduct “red teaming”, where specialists simulate misuse of an AI model, such as prompting it to generate harmful or misleading content.

However, he also said that the AI Safety Institute cannot test “all released models” and will focus on the most advanced systems only, due to the limited bandwidth and the fast pace of technology development. Therefore, he said that the institute should put in place standards that other governments and companies can follow, rather than take on all the work itself.

“They can set really brilliant standards such that other governments, other companies … can red team to those standards. So it’s a much more scalable, long-term vision for how to keep these things safe,” he said.

How the AISI is progressing in its mission

Warner praised the AI Safety Institute for making a “really great start” and said that he had never seen anything in government move as fast as this. He also said that his company, which also works for the NHS on Covid and the Home Office on combating extremism, is proud to support the institute’s work.

The AI Safety Institute released an update on its testing program last week, which showed that it had evaluated several AI models from different domains, such as natural language processing, computer vision, and reinforcement learning. The institute also said that it had conducted research and information sharing, and raised the collective understanding of AI safety around the world.

The AI Safety Institute is not the only institute dedicated to AI safety, however. The US has also announced an AI safety institute, which will join the testing program initiated at the summit in Bletchley Park. The US institute will be supported by a consortium of big tech companies, such as Meta, Google, Apple, and OpenAI, which will help the White House meet the goals of its October executive order on AI safety, which include developing guidelines for watermarking AI-generated content.

The UK’s Department for Science, innovation, and Technology said that testing AI models is a key responsibility for governments around the world and that the UK is driving forward that effort through the AI Safety Institute.

“The institute’s work will continue to help inform policymakers across the globe on AI safety,” a spokesperson said.

Strategic Initiatives for Responsible AI Governance in the UK’s 2024 Plans

The UK government has outlined its plans for 2024, detailing a series of actions it intends to take. These include:

  • Further developing the UK’s domestic policy on AI regulation by engaging with a diverse group of experts. This will help in identifying effective interventions for highly capable AI systems.
  • Taking steps to promote AI opportunities and address the associated risks. One such action is establishing a new international dialogue to tackle shared risks related to electoral interference posed by AI. This initiative will be undertaken ahead of the next AI Safety Summit.
  • Strengthening the central function and providing support to regulators. Key regulators will be requested to publish updates on their strategic approach to AI by April 2024. This will ensure transparency and accountability in the regulatory process.
  • Encouraging the adoption of AI and providing support to industry, innovators, and employees. In spring 2024, the government plans to publish an Introduction to AI Assurance and updated guidance on the use of AI within HR and recruitment. This will help organizations effectively utilize AI while ensuring ethical practices.
  • Promoting international collaboration on AI governance. The UK government will support the Republic of Korea and France in organizing the next AI Safety Summits. Additionally, they will continue to foster bilateral and multilateral partnerships on AI with organizations such as the G7, G20, Council of Europe, OECD, United Nations, and GPAI.

These actions demonstrate the UK government’s commitment to harnessing the potential of AI while addressing the associated challenges. By engaging with experts, promoting dialogue, and supporting industry, the government aims to create a conducive environment for responsible AI development and adoption.

Share: