Bloomberg Law – California Privacy Agency Faces Lobbying in Writing AI Rules

By Titus Wu, BLOOMBERG LAW

https://news.bloomberglaw.com/privacy-and-data-security/california-privacy-agency-faces-lobbying-in-writing-ai-rules

Tech companies and privacy advocates are lobbying the California Privacy Protection Agency as it prepares to write new rules to guard against abuses caused by automation and artificial intelligence-powered decisions.

Officials at the newly formed agency are reviewing feedback from a comment period on how to write regulations around automated decision-making, technology that applies predetermined rules on its own. The category includes AI, software in which a machine learns tasks and adapts to new ones.

The recently released comments illustrate the competing requests the agency board will have to tackle, particularly over how broad or narrow the scope of future rules will be. Industry groups have argued for more flexibility and fewer requirements. Privacy advocates insist protections must be strong as the technology becomes prevalent in employment, housing and other aspects of life.

The task is part of California’s rush to take the lead on crafting laws around artificial intelligence as federal action lags. State lawmakers are also advancing legislation (A.B. 331) that would set the bar for how businesses use automation, which could conflict with the agency’s work.

“They’re (the privacy agency) going to keep it in mind when they’re writing regulations,” said Justin Kloczko, privacy advocate for Consumer Watchdog. “That being said, A.B. 331, is it on the books? The (2018) California Consumer Privacy Act is. The agency should still do its due diligence and draft regulations strongly.”

A Right to Regulate?

The state’s 2018 comprehensive privacy law is one of the few, if not the only, provisions in California statute to touch upon automation tools, after voters approved updates to the law in 2020 at the same time they created the agency.

The language is brief and just directs the agency to issue regulations “governing access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling.” The law also asks for regulations around transparency in how the technology is used and impacts outcomes.

But some business groups question whether the agency’s attempt to regulate AI is legal. The CTIA, which represents the wireless communications industry, argued to the agency that a paragraph asking for regulations does not translate to consumer rights around automated decisions. The CTIA noted that rights around the sale of personal information, in contrast, are explicitly mentioned in the 2018 law. 

“Relying on this grant to creating ADM (automated decision making) access and opt-out rights would be an unconstitutional delegation of authority,” CTIA wrote in its public comment. “The CCPA itself does not create ADM rights, but rather obliquely references them.”

Consumer organizations dismiss that interpretation and contend the text plainly establishes such rights. Still, CTIA and others argued the agency would be on firmer legal footing if it waited for the pending state legislation to be enacted first.

Defining Automation

The biggest debate in the rule writing will likely be determining which automated processes are subject to regulation. Both sides agree the rules should mainly apply to high-risk decisions with “legal or significant effects,” not lower-stakes tools like spellcheck or GPS navigation.

Those are decisions that can impact a person in the areas of housing, financial services, education and more. The pending AI bill similarly defines “consequential decisions” as such.

The issue is still tricky because some systems are fully automated, while others are only partially automated, with some human involvement.

Industry trade associations want the rules to apply only to fully automated decisions. They argue that human oversight is already part of the safety net to ensure discrimination and other social harms don’t occur. That flexibility would also make it easier for businesses to comply, given that other jurisdictions, such as the European Union, define automated decision making that way. The pending AI bill in the state legislature would give opt-out rights only for solely automated decisions.

Privacy advocates say that limited scope doesn’t go far enough because companies could use a human as a loophole, inserting someone to rubber-stamp a fully automated decision without doing the necessary work. Partially automated systems have to be included, they added.

‘Pipeline’ vs. Final Decisions

Both sides also disagree over how often companies would have to honor consumer requests for explanations of decisions or for opting out. Business groups made it clear that such requests should apply only to a final decision, not to every step of the process.

Applying those requirements to everyday functions would be unworkable, said Dylan Hoffman, TechNet’s executive director for California. For example, a person applying for a loan may need to pass through multiple automated checks for fraud or other risks. Reviews and explanations at every step would significantly slow that activity, and once a final decision is made, the applicant could still request human review and an explanation.

Intermediary or “pipeline” decisions are just as important in determining outcomes, advocates countered, and they contend AI regulations should apply there as well.

“We think that as this process is happening, it all should be transparent to the consumer. And they should know, before a decision is made, what is happening, how are they being profiled, or are they being categorized,” said Kloczko. “I think after (a decision), it’s kind of too late.”

The two sides also tangled over how explanations should be given when a consumer requests information on how AI is being used on them. Companies suggested that general descriptions of what an automation tool collects and how it works would suffice, arguing that getting into specifics would come off as too much jargon for the average person. Advocates disagree, contending that people deserve case-specific explanations of why they were accepted or rejected.

“If these algorithms can instantaneously make decisions, then they should instantaneously tell you how they’re doing that,” said Kloczko.

To contact the reporter on this story: Titus Wu in Sacramento, Calif. at [email protected]

To contact the editors responsible for this story: Bill Swindell at [email protected]; Stephanie Gleason at [email protected]
