September 12, 2024

AI safety showdown: Yann LeCun criticizes California’s SB 1047, while Geoffrey Hinton supports new regulations



Yann LeCun, chief AI scientist at Meta, publicly rebuked supporters of California’s controversial AI safety bill SB 1047 on Wednesday. His criticism came just a day after Geoffrey Hinton, often called the “godfather of AI,” endorsed the bill. The stark disagreement between two pioneers of artificial intelligence highlights the deep divisions within the AI community over the future of regulation.

The California legislature has passed SB 1047, which now awaits Governor Gavin Newsom’s signature, and the bill has become the focal point of the debate over AI regulation. It would hold developers of large-scale AI models liable for catastrophic harm if they fail to take appropriate safety measures. The legislation applies only to models that cost at least $100 million to train and that operate in California, the world’s fifth-largest economy.

The battle of the AI titans: LeCun vs. Hinton on SB 1047

LeCun, known for his pioneering work in deep learning, argued that many of the bill’s supporters hold a “distorted view” of AI’s near-term capabilities. “The distortion is due to their inexperience, their naivety about how difficult the next steps in AI will be, a vast overestimation of their employers’ lead and their ability to make rapid progress,” he wrote on X (formerly Twitter).

His comments were a direct response to Hinton’s support of an open letter signed by over 100 current and former employees of leading AI companies, including OpenAI, Google DeepMind and Anthropic. The letter, delivered to Governor Newsom on September 9, urged him to sign SB 1047 into law, citing potential “grave risks” posed by powerful AI models, such as expanded access to biological weapons and cyberattacks on critical infrastructure.

This public dispute between two AI pioneers underscores the complexity of regulating a rapidly evolving technology. Hinton, who left Google last year to speak more freely about AI risks, represents a growing group of researchers who believe that AI systems could soon pose an existential threat to humanity. LeCun, on the other hand, consistently argues that such fears are premature and potentially damaging to open research.

Inside SB 1047: The controversial bill to reshape AI regulation

The debate over SB 1047 has shaken traditional political alliances. Supporters include Elon Musk, despite his previous criticism of the bill’s author, Senator Scott Wiener. Opponents include Speaker Emeritus Nancy Pelosi and San Francisco Mayor London Breed, as well as several major technology companies and venture capitalists.

Anthropic, an AI company that initially opposed the bill, changed its stance after several amendments, stating that the bill’s “benefits likely outweigh the costs.” This change of course underscores the evolving nature of the legislation and the ongoing negotiations between lawmakers and the technology industry.

Critics of SB 1047 argue that it could stifle innovation and disadvantage smaller companies and open source projects. Andrew Ng, founder of DeepLearning.AI, wrote in TIME magazine that the bill “makes the fundamental mistake of regulating a general-purpose technology rather than the applications of that technology.”

But supporters insist that the potential risks of unregulated AI development far outweigh these concerns. They argue that the bill’s focus on models with budgets above $100 million ensures that it primarily affects large, well-resourced companies that are able to implement robust security measures.

A divided Silicon Valley: How SB 1047 is splitting the tech world

The involvement of employees from companies that oppose the bill adds further complexity to the debate, suggesting internal disagreements within these companies about the right balance between innovation and safety.

As Governor Newsom considers signing SB 1047, he faces a decision that could shape the future of AI development not only in California, but potentially across the United States. With the European Union already moving forward with its own AI law, California’s decision could influence whether the U.S. takes a more proactive or more hands-off approach to AI regulation at the federal level.

The conflict between LeCun and Hinton is a microcosm of the larger debate over AI safety and regulation, highlighting the challenge policymakers face in crafting laws that address legitimate safety concerns without unduly hindering technological progress.

As the AI field continues to advance at a rapid pace, the outcome of this legislative battle in California could set a crucial precedent for how societies navigate the promises and dangers of increasingly powerful artificial intelligence systems. The technology world, policymakers and the public will be watching closely as Governor Newsom weighs his decision in the coming weeks.