California Lawmaker Reignites Push for Mandatory AI Safety Reports Under SB 1047
The architect of California's SB 1047 is reviving efforts to require AI companies to submit safety assessments, citing growing risks from unchecked AI development and the need for transparent oversight in the absence of federal regulation.
California State Senator Scott Wiener, the driving force behind Senate Bill 1047, has reignited legislative efforts to mandate AI safety reporting from companies developing large-scale artificial intelligence systems. This comes amid mounting global concern about the societal risks of powerful AI models and a continued lack of comprehensive federal oversight. The proposed legislation would require AI developers—particularly those working on models exceeding certain capability thresholds—to submit regular safety assessments, risk disclosures, and red-teaming summaries to state regulators.
Why It Matters
SB 1047 originally made waves earlier this year for proposing one of the most detailed and enforceable frameworks for state-level AI governance in the United States. While the initial bill stalled during committee deliberations, growing public discourse around generative AI risks, hallucination errors, and the lack of transparency from leading labs has breathed new life into the effort. Senator Wiener’s office says the revised push will include bipartisan amendments and greater clarity on enforcement mechanisms.
Key Provisions of the Renewed SB 1047
- Mandatory AI safety reporting for developers of large-scale foundation models (e.g., those trained with more than 10^26 FLOPs; a back-of-the-envelope compute check appears after this list).
- Requirements to disclose model capabilities, limitations, and alignment techniques.
- Obligations to submit summaries of adversarial testing (red teaming) focused on risks like autonomous weaponization, deception, or disinformation.
- A registry of high-capability AI systems maintained by a new state-designated AI oversight office.
- Civil penalties for non-compliance and misrepresentation.
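To make the compute trigger concrete, here is a minimal sketch of how a developer might estimate whether a training run crosses the 10^26 FLOP line. It assumes the widely used rule of thumb that dense-transformer training compute is roughly 6 × parameters × training tokens; the function names and the example model size are illustrative assumptions, and nothing below comes from the bill text beyond the threshold figure itself.

```python
# Back-of-the-envelope check against SB 1047's proposed 10^26 FLOP threshold.
# The "6 * params * tokens" approximation for dense-transformer training
# compute is a common community rule of thumb, not language from the bill.

SB1047_FLOP_THRESHOLD = 1e26  # training-compute trigger cited in the bill


def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer (~6 * N * D)."""
    return 6.0 * num_parameters * num_training_tokens


def triggers_reporting(num_parameters: float, num_training_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the bill's threshold."""
    return estimated_training_flops(num_parameters, num_training_tokens) >= SB1047_FLOP_THRESHOLD


if __name__ == "__main__":
    # Hypothetical model: 1 trillion parameters trained on 20 trillion tokens.
    flops = estimated_training_flops(1e12, 2e13)
    print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~1.20e+26
    print(f"Triggers SB 1047 reporting: {triggers_reporting(1e12, 2e13)}")  # True
```

An actual compliance determination would follow whatever counting methodology regulators ultimately specify; this approximation only illustrates the order of magnitude involved.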
“We can’t afford to wait for Washington”
“California is home to the world’s most powerful AI companies, and we have both the responsibility and the opportunity to lead,” said Senator Wiener in a statement. “If we don’t establish basic safety reporting standards now, we risk letting unchecked AI development run ahead of our ability to govern it.”
While federal agencies like the FTC and NIST are evaluating AI frameworks, no binding national safety disclosure mandate currently exists—making state-level action crucial in the short term.
Support from Academics and Civil Society
The bill has received backing from major AI ethics scholars, digital rights groups, and public interest organizations. “Transparency is the bare minimum,” said Dr. Safiya Noble, a professor at UCLA and MacArthur Fellow. “Requiring companies to document what their systems can do—and how they fail—is essential to democratic accountability.”
Groups like the Center for Humane Technology and the Electronic Frontier Foundation (EFF) have voiced tentative support, provided that reporting requirements do not infringe on open-source or academic research.
Pushback from Industry Players
Predictably, some of the state’s biggest tech firms have pushed back. Industry associations argue that the bill risks stifling innovation and placing California at a competitive disadvantage. A spokesperson for a major AI lab, speaking anonymously, said: “We support safety, but we don’t need another reporting bureaucracy layered on top of our already extensive internal risk protocols.”
Others caution that poorly defined thresholds or overbroad reporting could create legal uncertainty or burden smaller startups.
Amendments Expected to Address Concerns
Sources close to the bill say Wiener is working with legislators and industry advisors to clarify:
- Which models trigger mandatory compliance
- How data is protected (especially trade secrets)
- What qualifies as sufficient safety documentation
New language may also carve out academic and non-commercial AI projects, or offer tiered requirements based on deployment risk.
Setting a Precedent Nationwide
If passed, SB 1047 could become a template for AI regulation at the state level, just as California’s CCPA (California Consumer Privacy Act) set the tone for privacy legislation across the U.S. Several other states—such as New York, Massachusetts, and Illinois—are watching closely and may introduce parallel measures depending on how SB 1047 evolves.
Conclusion: AI Governance Enters a New Phase
As AI systems grow in scale, complexity, and impact, the absence of formal guardrails has become increasingly untenable.
California’s renewed push under SB 1047 signals a shift from broad ethical principles to concrete regulatory mechanics. While industry pushback is inevitable, the stakes are rising—and with leading labs headquartered in the state, California is poised to host the world’s first meaningful attempt to standardize AI safety reporting at scale.