California’s top privacy agency today approved a package of gutted regulations governing businesses’ use of personal information and automated decisionmaking technology (ADMT).
However, this isn’t what Californians voted for. They voted for meaningful privacy protections that have not been watered down by large companies.
The regulations unanimously approved by the five-member California Privacy Protection Agency still give Californians more protections than most states. Californians are on track to have protections over their personal information and over how algorithms use data in the approval or denial of financial lending, employment, housing, education, and healthcare decisions.
But the statutory language of the voter-approved California Privacy Rights Act (Prop 24) gave the agency broad authority for “issuing regulations governing access and opt-out rights with respect to a business’ use of automated decisionmaking technology.”
The regulations have since been significantly narrowed to simplify rules and reduce business costs, the agency explained. Under the initial draft regulations, consumers were granted broader protections from automated decisions, including, for example, any decision that involved a person’s geolocation or biometric data.
The regulations now go before the state’s Office of Administrative Law, which has a month to make them final.
Following the most recent 24-day comment period, the privacy agency said it received 575 pages of comments from 70 entities. During Wednesday’s meeting, privacy agency attorney Philip Laid said the agency “does not believe additional changes are necessary at this time.”
But changes have been made since the last comment period. Specifically, the regulations surrounding automated decisionmaking technology have been narrowed even further. By striking “facilitate” and replacing it with “replace” in the definition of automated decisionmaking technology, the opt-out right now appears to apply only to decisions that fully replace a human’s judgment. Companies may argue that significant decisions driven by ADMT are monitored by humans, even when that oversight exists in name only, and are therefore out of the law’s scope. Consumer Watchdog fears that these regulations will actually formalize a future where humans are no longer in the loop.
The road to finalizing the latest regulations began in 2021, when the agency received 900 pages of public comment from industry and privacy-focused groups. But over the course of the last year, the privacy agency started to get cold feet. Fears began to grow, starting with board member Alastair Mactaggart, who said the board was exceeding its statutory authority and drafting regulations too burdensome for businesses. Unrelenting industry opposition and a letter from the governor and concerned legislators followed, causing the board to slash pro-consumer protections. Among those opposing the regs were the California Chamber of Commerce and the trade group TechNet, which count Google and Amazon as members.

The changes came amid a shakeup at the nation’s first privacy agency: its first director left the position, and its most privacy-focused board member, Vinhcent Le, was removed earlier this year.
As a result, the privacy agency has drifted from its pro-consumer origins to cave to the demands of large companies. It didn’t have to be this way.
Opt-out, transparency, and appeal rights for automated decisions now pertain only to what the board defines as “significant decisions”: those involving financial lending, employment, housing, education, and healthcare.
But what the board considers a “significant decision” was also narrowed: insurance, criminal justice, and essential goods were deleted from the scope of the law.
Sensitive personal information such as race, immigration status, financial status, or location doesn’t trigger any opt-out rights unless it is used in one of those significant decisions. Instead, such data falls under the risk assessments that companies will have to perform. However, risk assessments will not be publicly disclosed, and will be shared with state regulators only if they ask for them.
In addition, the agency loosened consumer data protections surrounding behaviorally targeted advertising, and artificial intelligence has been scrubbed from the regs altogether. Previously, workers had opt-out rights if they were profiled by AI.
The regulations come as the Trump Administration this week announced what amounts to a virtual free pass for companies developing AI technology, in the name of winning the so-called AI race.
Board member Drew Liebert described the agency’s regulations as “thoughtful” and “balanced.”
“We’re not done,” said Liebert. “To privacy friends: This is the beginning of the effort. To the business community: we have listened to you.”
Privacy Board Chair Jennifer Urban, who has been at the agency from inception, offered her take on the regulations.
“They are strong. They are reasonable. They are clear.”
What little pushback there was from the board came over relatively minor points. The board’s newest member, Brandie Nonnecke, voiced concern over the three-year timeline for businesses to submit audits to show compliance.
“Three years seems like a pretty long time,” said Nonnecke. “You might see decisions made in ways you didn’t want to see them made.”
Many industry groups, which had opposed various drafts of the regulations over the years, spoke in support of the rollback. A number of gig economy workers spoke against the regulations, saying they do not go far enough.
TechEquity, a nonprofit that pushes for economic fairness, said the regulations have gotten “weaker and weaker.”
It said companies that use ADMT to analyze insurance claims could fall outside the scope of the law so long as they hire doctors to nominally approve or deny payment for care. According to a news report, Cigna deployed such a method: on average, its doctors spent 1.2 seconds reviewing each claim and denied over 300,000 claims.
“The concerns are not hypothetical,” said TechEquity.