Open Letter Urges Halt on AI

Elon Musk, Steve Wozniak and over 1,400 others call for six-month pause on AI development beyond GPT-4.

On March 29, over 1,400 AI industry leaders and researchers signed an open letter calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

Published by the Future of Life Institute, a Pennsylvania-based nonprofit whose mission is to steer transformative technologies away from extreme, large-scale risks, the open letter was endorsed by signatories including:

  • Elon Musk, CEO of Tesla, SpaceX and Twitter
  • Steve Wozniak, co-founder of Apple
  • Yoshua Bengio, AI researcher at the Université de Montréal and founder and scientific director of Mila, the Quebec AI Institute
  • Mark Nitzberg, executive director of the Center for Human-Compatible AI at UC Berkeley
  • Andrew Yang, co-chair of the Forward Party and former U.S. presidential candidate (2020)
  • Yuval Noah Harari, author and professor at Hebrew University of Jerusalem

The open letter is a response to the recent development of generative AI models by OpenAI, Microsoft, and Google. This technology consists of algorithms capable of producing content such as text, images, and video. Recent announcements include Google’s mid-March unveiling of generative AI features for several Workspace apps, such as automatically generating drafts of documents in Docs and Gmail; OpenAI’s mid-March release of GPT-4, which the company described as safer, more aligned, and 40 percent more likely to produce factual responses than GPT-3.5; and Microsoft’s late-March introduction of Microsoft Security Copilot, a cybersecurity tool that uses generative AI to help defenders respond to attacks faster.

The open letter describes contemporary AI systems as becoming human-competitive at general tasks, and warns that more advanced AI risks flooding information channels with propaganda and untruth and automating away many jobs. Signatories request that the six-month pause be public and verifiable and include all key actors, and they call on governments to institute a moratorium if such a pause cannot be enacted quickly.

Further, signatories advocate that AI labs and independent experts use the pause to jointly develop and implement shared safety protocols, overseen by independent outside experts. They also request that developers work with policymakers to accelerate the development of robust AI governance systems, including new and capable regulatory authorities dedicated to AI.

Currently, many Western governments further their understanding of AI through national committees. These include the U.S.’s National AI Advisory Committee (NAIAC), launched in April 2022; Canada’s Advisory Council on AI, created in 2019; and the European Parliament’s Special Committee on Artificial Intelligence in a Digital Age (AIDA), which began work in 2020.

The open letter has sparked numerous online discussions about the potential of halting advanced AI research. One statement of opposition came from Andrew Ng, founder of DeepLearning.AI and Landing AI. Ng tweeted, “The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I’m seeing many new applications in education, healthcare, food…that’ll help many people. Improving GPT-4 will help. Let’s balance the huge value AI is creating vs. realistic risks.”

Emad Mostaque, CEO of the open-source generative AI company Stability AI, signed the letter but tweeted that he did not think a six-month pause was the best idea.

He also tweeted that “Nobody is launching runs bigger than GPT-4 for 6-9 months anyway… because it needs new H100/TPU-v5 clusters anyway to get scale above that.”

The H100 is an NVIDIA data center GPU built on the company’s latest Hopper microarchitecture. TPU-v5 refers to the anticipated next generation of the Tensor Processing Unit, an application-specific integrated circuit (ASIC) developed by Google to accelerate machine learning workloads.

Yann LeCun, chief AI scientist for Meta, tweeted that he did not sign the letter because he disagreed with its premise. LeCun later deleted the tweet.

David Deutsch, a visiting professor of physics at the Centre for Quantum Computation at the University of Oxford, tweeted that he would not sign the letter because “it reads to me like ‘we shouldn’t make anything whose effects we can’t prophesy, so let’s begin by giving the world’s totalitarian governments and organised criminals a chance to catch up.’” Deutsch added that he thought the letter confuses AI with Artificial General Intelligence (AGI), the hypothetical ability of an agent to understand or learn any intellectual task that a human can.

The Future of Life Institute has now taken measures to verify signatories of the letter. Before these measures were put in place, several names were added to the list as a joke, including that of OpenAI CEO Sam Altman.