Inspiring Innovation: Foundation Models, Empowered and Monitored by the US Government for Accelerated AI Development

The US government is taking steps to regulate the development of a new type of artificial intelligence (AI) that could pose a serious threat to the nation’s security, economy, and health. These AI systems, known as foundation models, are capable of generating text, images, audio, and other content based on large amounts of data.

According to Wired, US Secretary of Commerce Gina Raimondo revealed new details about the government’s plan to monitor the development of foundation models at an event hosted by Stanford University’s Hoover Institution last Friday.

She said that the government will use the Defense Production Act, a law that gives the president the authority to direct the production and distribution of essential goods and services, to survey AI companies.

Raimondo said the Defense Production Act is being used to conduct a survey that requires companies to notify the government every time they train a new large language model, and to share the results, specifically the safety data, for review.

What are foundation models and why are they risky?

Foundation models are AI systems that can learn from large amounts of data and generate various kinds of outputs, such as text, images, audio, and code. Some examples of foundation models are OpenAI’s GPT-4 and Google’s Gemini, which can power generative AI chatbots and other applications.

However, foundation models can also pose serious risks to national security, national economic security, or national public health and safety, the categories set out in President Biden’s sweeping AI executive order issued last October. The order requires companies developing any foundation model that falls under these categories to notify the federal government and share the results of their safety testing.

Some of the risks associated with foundation models include:

  • Bias and misinformation: Foundation models can inherit and amplify the biases and errors present in the data they are trained on, leading to unfair or inaccurate outputs. For example, a foundation model that generates text based on news articles could produce false or misleading information that could influence public opinion or decision-making.
  • Malicious use: Foundation models can be used for malicious purposes, such as creating fake or harmful content, impersonating or manipulating people, or launching cyberattacks. For example, a foundation model that generates images from text could create realistic-looking but fabricated photos that could damage the reputation or credibility of individuals or organizations.
  • Unpredictability and lack of transparency: Foundation models can behave in unexpected or undesirable ways, especially when they encounter novel or complex inputs or situations. Moreover, it is often difficult to understand how or why foundation models produce certain outputs, making it hard to verify, explain, or control their behavior. For example, a foundation model that generates code based on natural language could produce buggy or insecure code that could compromise the functionality or security of software systems.

How will the government regulate foundation models?

The government’s plan to regulate foundation models is part of a broader effort to ensure the responsible and ethical development and use of AI in the US. The plan involves using the Defense Production Act, which was last invoked in 2021 by President Biden to increase the production of pandemic-related protective equipment and supplies, to survey and review the development of foundation models by AI companies.

The survey will require AI companies to notify the government every time they train a new large language model, a type of foundation model that generates natural language, and to share the results of their safety testing. The government will then review the data and assess the potential risks and benefits of those models.

Raimondo also mentioned another aspect of the executive order that would require US cloud computing providers, such as Amazon, Google, and Microsoft, to disclose foreign use of their services. This is to prevent the misuse or exploitation of US-based AI resources by foreign actors, especially those that pose a threat to the US or its allies.

The government’s plan to regulate foundation models is expected to be implemented soon, as Raimondo said that the survey will be sent out “in the next couple of weeks.” The plan is likely to face some challenges and criticisms from the AI industry and community, as it could affect the innovation and competitiveness of US-based AI companies and researchers.

However, the plan also reflects the government’s recognition of the immense power and potential of foundation models, and its responsibility to balance them with the public interest and safety.
