Some closely watched tech bills took the crucial step of getting out of Senate Appropriations yesterday and onto the Senate floor, but not before coming out the other end with a few less teeth. Senate Bill 1047, the high-profile bill aimed at regulating future AI systems, was amended following suggestions from the AI company Anthropic. AB 2930, a bill concerning discriminatory algorithms, was also defanged. A third bill, mandating that the DMV disclose more autonomous vehicle data, moved forward as well. Here’s what happened:
Future AI Systems
SB 1047’s aim is to put safeguards and legal liability around the biggest AI systems—ones that haven’t been invented yet—in order to prevent Terminator 2 from coming true. The bill seeks to prevent large AI systems from creating mass casualty events, or cybersecurity events that cost over $500 million in damages, by holding developers liable.
That framework is still intact following amendments to the bill, according to TechCrunch, though a number of changes recommended by Anthropic were incorporated, including loosening legal liability. Before, the California Attorney General could sue for negligence before a catastrophic event occurred; now it can only sue for injunctive relief after an event. Further, companies will no longer have to submit safety reports under “penalty of perjury.”
The bill also now requires developers to exercise “reasonable care” to ensure AI models do not pose a significant risk of causing catastrophe, instead of the “reasonable assurance” the bill required before. There is also more protection for open-source models: anyone who spends less than $10 million fine-tuning a covered model is not considered a developer under the law. The responsibility will instead rest with the original model’s developer.
The changes come after months of rare, intense pushback from Google and Amazon themselves, which usually hide behind trade groups to do their bidding. We saw a lot of scare tactics by tech billionaires repeated in the media like a game of telephone: claims that the bill would divert innovation to China instead of California, hurt startups, and even send developers to jail.
But if you look at California’s regulatory history in relation to tech, nothing has stopped companies such as Facebook, Twitter, or Google from becoming the most powerful companies in the world, ones that have fundamentally transformed society. And as important as this bill is, it’s not going to stop AI from being the next game-changer.
That’s because the bill won’t stop companies from making money. It won’t stop companies from innovating. There is no computational limit placed on models. Open-source models will not be subjected to an emergency shutdown. And developers will not be going to jail. The bill’s author, Senator Scott Wiener, pointed out that developers can already be sued under common tort law, and that the bill could actually help companies avoid getting sued by making sure their systems are safe.
On the other side, Wiener is seen as a pro-growth, pro-tech legislator who has faced criticism from the Left as being too cozy with tech business interests. Part of that is because the bill largely preserves a framework that allows businesses to self-report their safety practices.
In addition, many companies won’t even be affected by the bill. Only the biggest AI models, those costing $100 million or more to train and with computational power on the order of a ChatGPT, will have to establish a safe method for training their AI. Those companies must then submit risk assessments of their systems detailing how catastrophic harm will be avoided. The bill will also establish a body overseeing the development of AI in the state, much as other industries have. And the bill will add important whistleblower protections.
You can read Consumer Watchdog’s support letter here.
Algorithms
Another controversial bill, which would force companies to show that their automated decisionmaking technology (ADMT) is not discriminatory or biased, also passed out of Appropriations. Assembly Bill 2930 (Bauer-Kahan) authorizes the Civil Rights Department to investigate reports of algorithmic discrimination. However, the bill has been amended to cover only employment matters, and no longer areas like financial services, health care, or housing. It also no longer covers state and local agencies, and it won’t be enforced by the Attorney General.
The bill was modeled after the Biden Administration’s AI Bill of Rights and overlaps with the California Consumer Privacy Act. Companies must submit a risk assessment when their ADMT makes a significant decision concerning a consumer.
But it has faced criticism within the privacy world and has been amended heavily. It appears to have been initially drafted with input from tech companies such as Workday, which circulated a confidential model bill among state lawmakers across the country.
The California Privacy Protection Agency hesitated to support the bill earlier this summer given its shaky future. The agency sought enforcement authority as well as stronger language around risk assessments and the definition of an opt-out.
Other Bills That Advanced
Autonomous Vehicles
Consumer Watchdog has been calling out the California DMV’s lack of transparency regarding autonomous vehicles for years. Now things are finally happening. Currently, the law doesn’t mandate that the DMV report collision and disengagement data for autonomous vehicle (AV) deployment permits. But under Assembly Bill 3061 (Haney), an AV maker will have to report to the DMV any accident, traffic violation, disengagement, or harassment of any passenger in the state of California. The DMV will publish such reports online within 30 days. Companies that fail to report will face fines or suspension of a testing or deployment permit.
The amended bill had a difficult journey making it off the suspense file, and now, like many of these bills, may find its most difficult hurdle in Governor Gavin Newsom, who has evangelized the need for AI to do its thing.
Protecting Kids’ Data
Assembly Bill 1949 (Wicks), which is aimed at expanding protections for minors whose data is collected by businesses, also made it out of Senate Appropriations. A key portion of the bill would have eliminated the knowledge standard, under which businesses don’t have to actually do any work to find out whether collected data belongs to a minor. However, the bill has been amended to reinstate that standard.
Still, some, including the privacy agency, supported the bill even after arguing it should maintain the standard, or adopt something in the alternative. One contention was that eliminating the standard would be bad for privacy, because businesses would collect more data in order to verify that a consumer is a minor. Others have contended that it doesn’t really matter, because companies already suck up tons of data.
Under the bill, businesses will have to obtain authorization from minors between the ages of 13 and 18, or from the parents of those under 13, before collecting any of the minor’s personal information. The use and disclosure of a minor’s personal information is also prohibited unless authorization is given.