California’s AI bill threatens open source innovation

This month, the California State Assembly will vote on Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. While proponents note that amendments were made “in direct response to” the concerns of the “open source community,” critics of the law argue that it would bring the development of open source AI models to a halt.

In its current form, SB 1047 would discourage developers from making their models open source by mandating complex security protocols and making developers liable for harmful modifications and misuse by malicious actors. The bill offers no concrete guidance on the types of features or guardrails developers could build in to avoid liability. As a result, developers would likely prefer to keep their models closed rather than risk being sued, hampering startups and slowing domestic AI development.

SB 1047 defines open source AI models as “artificial intelligence models that are made freely available and can be freely modified and redistributed.” The bill requires developers who make their models available for public use (that is, open source) to implement safeguards to manage the risks posed by “causing or enabling the creation of derivatives of covered models.”

The California bill would make developers liable for harm caused by “derivatives” of their models, including unmodified copies, copies “modified after training,” and copies “combined with other software.” In other words, the bill would require developers to have superhuman foresight to anticipate and prevent malicious actors from modifying or using their models to cause a wide range of harm.

The bill is more specific in its demands. It would require developers of open source models to take appropriate security precautions to prevent the “production or use of chemical, biological, radiological or nuclear weapons,” “mass casualties or damages amounting to at least five hundred million dollars (US$500,000,000) from cyberattacks,” and comparable “threats to public safety and order.”

The bill also requires developers to take steps to prevent “critical harm”—a vague catch-all term that courts could interpret broadly to hold developers liable unless they build countless, undefined guardrails into their models.

In addition, SB 1047 would impose extensive reporting and auditing obligations on open source developers. Developers would be required to identify the “specific tests and test results” used to prevent critical harm. The bill would also require developers to provide an annual “certification, under penalty of perjury, of compliance” and self-report “any artificial intelligence security incident” within 72 hours. Starting in 2028, developers of open source models would be required to “annually engage a third-party auditor” to confirm compliance. Developers would then be required to annually reassess the “procedures, policies, safeguards, capabilities, and security measures” put in place under the bill.

In recent weeks, politicians and technologists have publicly denounced SB 1047 because of its threat to open source models. Representative Zoe Lofgren (D-Calif.), the ranking Democrat on the House Science, Space, and Technology Committee, explained: “SB 1047 would have unintended consequences because of its treatment of open source models… This bill would restrict this practice by making the original developer of a model liable for any party’s misuse of its technology. The natural reaction of developers will be to stop releasing open source models.”

Fei-Fei Li, a computer scientist widely regarded as the “godmother of AI,” said that “SB-1047 will hinder open source development” by making AI developers “much more reluctant to write code and collaborate” and “crippling AI research in the public and academic sectors.” A group of students and faculty from the University of California, joined by researchers from more than 20 other institutions, elaborated on this point, arguing that SB 1047 would deter “the publication of open source models to the detriment of our research” because the law’s “onerous restrictions” would force AI developers to publish their models under more restrictive licenses.

Meanwhile, the federal government continues to extol the value of open-source artificial intelligence. On July 30, the Commerce Department’s National Telecommunications and Information Administration released a report that advises the government to “refrain from immediately restricting the broad availability of open model weights in the largest AI systems” and stresses that the openness of the largest and most powerful AI systems will affect competition, innovation, and the risks of these revolutionary tools. In a recent blog post, the Federal Trade Commission similarly wrote that “open-weights models have the potential to drive innovation, reduce costs, increase consumer choice, and generally benefit the public – as we have seen with open source software.”

The provisions of SB 1047 are lengthy and ambiguous, creating a labyrinth of regulations that requires developers to anticipate and mitigate future harms they neither intended nor caused. Developers would then have to submit detailed reports to the California government explaining and certifying their efforts. Even developers who attempt to comply in good faith could still face liability, given the vague nature and scope of the bill.

The bill presents itself as a panacea for the harms posed by advanced AI models. Instead, it would be a death sentence for open source AI development.
