
California passes controversial bill regulating AI model training

As the world debates what is right and wrong about generative AI, the California State Assembly and Senate have just passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), one of the first significant pieces of AI regulation in the United States.

California wants to regulate AI with new bill

The bill, which passed on Thursday (via The Verge), has been the subject of debate in Silicon Valley, as it essentially mandates that AI companies operating in California implement a series of precautions before training a “sophisticated foundation model.”

Under the bill, developers would have to ensure they can quickly and completely shut down an AI model if it is deemed unsafe. Language models would also need to be protected against “unsafe post-training modifications” or anything that could cause “critical harm.” Senators describe the bill as providing “safeguards to protect society” from the misuse of AI.

Professor Geoffrey Hinton, former AI lead at Google, praised the bill for recognizing that the risks of powerful AI systems are “very real and should be taken extremely seriously.”

However, companies like OpenAI, as well as smaller developers, have criticized the AI safety bill because it establishes potential criminal penalties for those who don’t comply. Some argue that the bill will harm indie developers, who would need to hire lawyers and navigate new bureaucracy when working with AI models.

Governor Gavin Newsom now has until the end of September to decide whether to approve or veto the bill.

Apple and other companies commit to AI safety rules


Earlier this year, Apple and other tech companies such as Amazon, Google, Meta, and OpenAI agreed to a set of voluntary AI safety rules established by the Biden administration. The rules outline commitments to test the behavior of AI systems, ensuring they do not exhibit discriminatory tendencies or pose security risks.

The results of these tests must be shared with governments and academia for peer review. At least for now, the White House AI guidelines are not legally enforceable.

Apple, of course, has a keen interest in such regulations as the company has been working on Apple Intelligence features, which will be released to the public later this year with iOS 18.1 and macOS Sequoia 15.1.

It’s worth noting that Apple Intelligence features require an iPhone 15 Pro or later, or iPads and Macs with the M1 chip or later.


You’re reading 9to5Mac — experts who break news about Apple and its surrounding ecosystem, day after day.


Author

Filipe Espósito

Filipe Espósito is a Brazilian tech journalist who started covering Apple news on iHelp BR with some exclusive scoops — including the reveal of the new Apple Watch Series 5 models in titanium and ceramic. He joined 9to5Mac to share even more tech news around the world.
