New AI Bill Probably Headed for Veto

California’s SB 1047, also called the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” has made its way through both the state’s Assembly and Senate. Now, it’s up to Governor Gavin Newsom to decide whether to sign it into law or veto it. He has until September 30 to make his decision.

So, what’s this bill all about, and why does it matter for workers, especially those in the tech sector?

The Goal of SB 1047

SB 1047 is designed to regulate large-scale artificial intelligence (AI) models that require massive amounts of computing power and resources to develop. The idea is that AI developers should be held accountable for the way their models are used, especially if they have the potential to cause harm. The bill targets AI models trained above certain thresholds, both in terms of computing power and cost.

In practical terms, developers would have to show that their models wouldn’t give rise to “hazardous capabilities.” That could mean anything from misuse in cyberattacks to the creation of security vulnerabilities, or worse: the use of AI for unethical or dangerous purposes. The bill introduces a series of safeguards and testing requirements to ensure that these AI models are safe and used responsibly.

Whistleblower Protections: A Big Win for Employees

One piece of SB 1047 that really stands out, especially for employees, is its whistleblower protections. These protections are essential for anyone who might see something problematic happening at their workplace, whether that’s an unsafe AI model being developed or a company cutting corners on safety standards.

Here’s how the whistleblower protections work:

  • Freedom to Speak Up: If you’re an employee working on one of these massive AI projects and you notice something that could cause serious harm, you have the right to report it. You can take your concerns directly to the California Attorney General or the Labor Commissioner. Your employer can’t stop you or retaliate against you for doing so.

  • Protection from Retaliation: If your company tries to fire you, demote you, or take any action against you because you raised a concern, you can petition a court for temporary relief. This means you could get a quick ruling to protect your job and prevent the company from punishing you while the issue is being sorted out.

  • Internal Reporting Systems: Companies will be required to set up reasonable, anonymous ways for employees to report concerns internally. If someone sees something that could be a violation of the law, misleading statements about the AI model’s safety, or a failure to disclose potential risks, they can report it without fear.

Employers also have to inform employees about these rights and responsibilities. This information has to be posted in the workplace, and employees must acknowledge it every year in writing. So, it’s not something that can just be hidden in the fine print. Workers will be reminded, regularly, of their ability to speak up if something’s wrong.

What Kinds of AI Models Are We Talking About?

The bill covers large AI models that require massive resources to develop. Specifically, it applies to models trained using more than $100 million worth of computing power, or models fine-tuned with more than $10 million worth of resources.

To put it simply, we’re talking about AI on a scale that only a few companies are building (think Google DeepMind, Meta, Anthropic, and OpenAI). These are the frontier AI models that could have significant impacts on everything from cybersecurity to healthcare to public infrastructure.

And here’s the thing: these companies are under increasing pressure to get these models out fast, sometimes at the expense of safety. SB 1047 aims to make sure that they take a step back and make safety and responsibility a priority.

Who’s Backing the Bill?

It’s interesting to see that over 120 current and former employees from top AI companies like OpenAI, Anthropic, Google DeepMind, and Meta have come out in support of SB 1047. These are the people who know firsthand the risks of developing these cutting-edge technologies. They’ve said they believe the most powerful AI models could pose severe risks, like giving people access to biological weapons or enabling large-scale cyberattacks.

In their view, it’s completely reasonable to require companies to test whether their AI models could cause serious harm and to implement safeguards to prevent those risks from becoming reality. Many of these employees know that companies are often laser-focused on innovation, sometimes to the detriment of safety, and they believe SB 1047 is a step in the right direction.

A group of respected academics in the field has also come forward in support of the bill. While they admit that SB 1047 doesn’t address every possible risk, they see it as a “solid step forward.”

Who’s Against It?

Unsurprisingly, not everyone is thrilled about SB 1047. Some major players in the tech industry, like OpenAI, have opposed the bill, saying it could stifle innovation. Their argument is that placing too many regulations on AI development could slow down progress and make it harder for companies to compete in a rapidly evolving global market.

But let’s be real: when companies talk about “stifling innovation,” what they often mean is that regulations might make it harder for them to cut corners or push out new products as quickly as they’d like. The reality is that without proper safeguards, these AI models could cause real harm, and it’s often employees—the ones on the ground, building and testing these technologies—who are the first to see those risks.

Other opponents of the bill include big names like San Francisco’s own Representative Nancy Pelosi and Mayor London Breed, as well as the U.S. Chamber of Commerce. Tech advocacy groups and trade associations like the Software & Information Industry Association have also voiced opposition.

What’s Next?

Now, it’s up to Governor Newsom to decide. If he signs SB 1047 into law, it will set new standards for AI development in California and, likely, across the country. Workers in the tech industry will have stronger protections if they see something unsafe happening with AI models and want to speak up.

The bill could also lead to better safety standards overall, forcing companies to take a closer look at how they’re building and deploying AI systems. And while some might argue that it will slow innovation, the reality is that a safer, more responsible approach to AI development will ultimately benefit everyone—especially the employees who are often the first to face the risks.

In the end, SB 1047 is about balance. It’s about making sure that innovation doesn’t come at the expense of safety or workers’ rights. And for employees in the tech industry, especially those working on the front lines of AI development, that’s a big win.
