We’re getting our first peek at how the California Privacy Protection Agency (CPPA) is approaching its next round of rulemaking surrounding what control we have over our personal information collected by businesses. Although the agency said rulemaking hasn’t officially begun, a subcommittee has released preliminary language for business compliance regarding automated decisionmaking, risk assessments, and cybersecurity audits. This round is important because the regulations deal with algorithms and artificial intelligence, which instantly make decisions for us in crucial areas and are increasingly everywhere. A few takeaways:
A Broad Definition for Automated Decisionmaking
The California Consumer Privacy Act (CCPA) is in a unique position to bring transparency and give consumers more control over rogue algorithms. Now we have some guidance, and that starts with the agency’s working definition of automated decisionmaking technology (emphasis mine):
“Automated Decisionmaking Technology means any system, software, or process—including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques—that process personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking. ADMT includes profiling.”
Translation: Pretty much any algorithm that processes personal information is considered an automated decision under the law. The working definition is a good start and broader than the definition enshrined by its European predecessor, the General Data Protection Regulation (GDPR). The agency is sticking to simply “a decision” instead of the narrower language of a decision that “legally” or “significantly” affects a person under Europe’s law.
But Then it Gets Specific
The agency has identified specific categories of decisions that a business will have to provide information about, as well as an opt out for. They include decisions that involve “financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment or contracting opportunities or compensation, healthcare services, or access to essential goods, services, or opportunities.”
It’s good the agency is focusing on areas in which algorithmic discrimination has taken place. So if a potential employer is using AI during the hiring process, it will be within your rights to know more about the AI’s logic and to opt out of that hiring algorithm. Or what if you are seeking financial help from an advisor who is using a bank’s own version of ChatGPT? Without the law, the average person wouldn’t know that AI is guiding the advice. Now a consumer will be able to learn about its logic and be able to say no.
So how will the agency square the broad definition with specific thresholds? During the privacy agency’s July 14 meeting, CPPA board member Lydia de la Torre said automated decisionmaking rights “will only be triggered by specific thresholds.”
“This is a very complex area. This language is not final,” stressed de la Torre.
Further, if you’re an employee, freelancer, job applicant or student who is being surveilled, or if your behavior, location, movements, or actions in public places are being tracked, you would also be able to access information or opt out of automated decisionmaking, per the agency’s working language.
The previously mentioned thresholds are being recommended by the subcommittee for implementation. The following are being recommended for discussion and appear to be a lower priority:
Rights would also be granted if a business processes the personal information of consumers it knows are younger than 16, or if the business processes consumers’ personal information to train automated decisionmaking technology.
The agency so far is declining to side with business groups who requested the agency narrow opt-out rights to only fully automated decisions. It appears that by covering computation used as “whole or part of a system to make or execute a decision or facilitate human decisionmaking,” the draft ensures businesses can’t use a human sign-off as a loophole to greenlight a decision without doing the appropriate work. Including partially automated systems closes that potential loophole. The GDPR, for example, did not cover “partial” automated decisions.
But How Will We Know?
Having strong rules is one thing, but what if no one knows about them? The draft rules are silent on any sort of notice mechanism that entities must use to alert consumers about their automated decision rights. Whereas the sale or sharing of personal data by first parties must be flagged through a “Do Not Share/Sell My Information” button on a homepage, that so far appears not to be the case for automated decisionmaking. The agency should work to make it so that users know of their rights before engaging in the decisionmaking process.
How Effectively Will it Address Generative AI?
It’s been an open question whether the agency will seek to tackle generative artificial intelligence like ChatGPT more explicitly. There are some references to it in the board’s working language. Under automated decisionmaking, any artificial intelligence that processes personal information is subject to the law. That would appear to apply to the training data used by ChatGPT, but so far ChatGPT’s parent company won’t reveal what data the program is trained on. We don’t know if any of it is based on personal information. But under these risk assessment and cybersecurity regulations, the public could find out more about the data ChatGPT is using.
Sorry, We’re Definitely Going to Draft Rules
Some industry groups argued the agency’s regulation of automated decisionmaking is illegal because the CCPA only vaguely references opt-out rights for algorithms. That’s not true: the law plainly directs the agency to issue regulations “governing access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling.” By merely going forward with working language for automated decisionmaking, the agency has dismissed industry’s argument.