By Titus Wu, BLOOMBERG LAW
California lawmakers have encountered roadblocks this year in their efforts to put guardrails around artificial intelligence and automation, thanks to a combination of a poor fiscal climate and pushback from business interests.
The Golden State, known for enacting first-in-the-nation laws on tech policy and home to Silicon Valley, has been closely watched by observers in the AI space. The state seemed poised for action at the beginning of the year, but any significant AI policy initiative will probably have to wait until 2024, at the earliest.
Legislators had proposed measures to fight algorithmic discrimination and to give residents privacy protections, including the right to opt out of AI tools.
California’s experience serves as a cautionary tale for other states, some of which have proposed their own AI measures. This state-level activity comes as the federal government lags behind on regulating a technology that’s grown in prominence while also raising significant concerns. Senate Majority Leader Chuck Schumer (D-N.Y.) has said he hopes a comprehensive federal AI bill will appear “in the coming months.”
“We should have moved forward this year to start the process of reining in some of this AI and how it’s affecting Californians,” said Assemblymember Rebecca Bauer-Kahan (D), who authored the state’s algorithmic discrimination bill. “And waiting a whole another year, especially at the pace we see AI coming to market and changing the way people do business, is potentially detrimental.”
Despite this year’s inaction, privacy advocates still say California is a state to watch, especially as the California Privacy Protection Agency slowly begins drafting regulations around automated tools. Lawmakers in Sacramento are signaling they’re ready to tackle the topic again next year, too.
“One of the things that is cool about our agency, and probably many people don’t know this in the country, but we’re essentially the de facto AI regulator for the country,” said Alastair Mactaggart, a board member on the state privacy agency.
Setbacks for Legislation
Initiatives to set standards on AI and automation were dealt a significant setback during the May budget hearing process, when top lawmakers swiftly and secretly decided which bills carrying costs to the state would advance or die. A $32 billion state budget deficit exacerbated the challenge.
That process killed the most-watched piece of AI legislation, Bauer-Kahan’s bill (A.B. 331), which would have put California at the forefront of regulating businesses’ use of AI technology. Her measure would have required anyone using an automated tool to conduct impact assessments. It would also have required advance notice when a tool was used to make a “consequential decision” in important areas such as hiring or education.
Tech lobbyists who opposed the measure pinned the blame on the bill itself, calling it poorly drafted and arguing it would put California in the back seat on AI innovation. They said its definitions were too broad, and some lawmakers shared the tech industry’s concerns about whether companies could comply with the bill given the rapidly evolving technology, said Peter Leroe-Muñoz, who oversees tech policy for the Silicon Valley Leadership Group.
“I think this bill was clearly not ready for prime time when it was introduced,” said Carl Szabo, vice president at NetChoice. “You could see from the outset, just in the definition of AI, that it is written so broadly, that it could essentially be applied to something as simple as an Excel spreadsheet.”
Additionally, there were concerns over the measure’s private right of action, which would have let California residents sue over algorithmic discrimination, Bauer-Kahan acknowledged. But she said her bill struck the right balance between fighting harmful behavior and supporting innovation.
She chalked up the bill’s failure to the budget deficit, as a legislative analysis put its cost to the state at upwards of $20 million. She noted that the measure enjoyed strong Democratic support and argued that it would have passed in a better fiscal year.
“We were asking [the state Civil Rights Department] to come in and to really oversee the use of these automated decision tools,” she said. “And that came with an increased need for staffing and expertise for the agency that would have fiscal ramifications.”
Another unsuccessful bill (S.B. 313) would have established an Office of Artificial Intelligence and set rules on how state agencies use the technology. Its author, Sen. Bill Dodd (D), also pointed to the substantial financial resources that establishing the office would have required. Only one AI-focused bill, A.B. 302, remains viable; it would direct the state to take an inventory of high-risk automated systems used by state agencies.
The California Privacy Protection Agency will likely take up the state’s main AI work this year as it figures out how to detail opt-out rights for automation as directed by the state’s updated privacy law, such as when a job applicant wants to opt out of resume screening tools.
The agency also faced a setback after a state judge ruled that any regulations it put forth could only be enforced after a one-year delay—meaning even when the agency finishes rulemaking on automation, those privacy rights won’t take immediate effect.
“It just kicks the ball down the road, and it gives companies more time to traffic in your data,” Justin Kloczko, who follows tech policy for Consumer Watchdog, said of the court ruling. “It’s unfortunate that technology moves so fast, especially regarding data and AI, but consumers can’t really have the tools to keep up with that right now.”
The agency on Friday previewed a possible framework for its rules around automation, including defining “automated decision-making technology” as a system, software, or process that takes personal data and uses computation, in whole or in part, to make or help make decisions.
Privacy advocates praised that definition for covering partially automated systems, because companies could otherwise insert a human to rubber-stamp automated decisions and avoid regulation. Some tech groups, however, have called the definition too broad, echoing their concerns over the Bauer-Kahan bill.
The agency board on Friday also discussed what uses of AI software would require offering consumers a right to opt out or require an explanation. Board members recommended those rights start when AI is used for a decision that would significantly impact everyday living—such as in finance, housing, or employment—and for tracking or monitoring consumers and employees.
The agency may also target the more novel forms of generative AI—which produces original content such as text and images—that have generated public buzz, such as ChatGPT, as it debates whether using personal data to train AI is appropriate.
Both Dodd and Bauer-Kahan say they will tackle AI again in next year’s bills. Bauer-Kahan said she plans to reintroduce her bill, likely with changes, now that there’s more time to get things right. She said she’s looking at ways to reduce costs for the Civil Rights Department and other state agencies, particularly through more efficient enforcement mechanisms.
Business groups are also taking action. Leroe-Muñoz said his organization has created an AI working group that hopes to engage with federal and state policymakers on the topic. The goal is for any future legislation to be precise and easy to comply with, he said.
Even though California is pressing pause on AI this year, lawmakers said the state is eager to continue its work given that Congress is unlikely to act.
“I am not wanting to believe that Congress will break their deadlock to actually do something meaningful to protect the people of the United States,” Bauer-Kahan said. “And so, California has to lead the way.”
To contact the reporter on this story: Titus Wu in Sacramento, Calif. at [email protected]
To contact the editor responsible for this story: Bill Swindell at [email protected]