Responding to Clearview AI’s Comments Concerning Consumer Watchdog’s Report on Clearview AI’s Abuses of Our Privacy Rights

In the wake of the publication of Consumer Watchdog’s report on Clearview AI, both the company’s CEO, Hoan Ton-That, and its General Counsel, Jack Mulcaire, objected to the report. In this blog post, we respond to Mr. Ton-That’s and Mr. Mulcaire’s statements, identify questions that remain unanswered, and highlight the lack of principles underpinning Clearview AI’s entire endeavor. All of this underscores the importance of an investigation by the Attorney General and the California Privacy Protection Agency.

Mr. Ton-That and Mr. Mulcaire each argued that “images of Californians who have exercised their right to opt-out of our data processing are blocked from any subsequent collection.”[1] This claim goes to the heart of why government regulators need to investigate Clearview AI: there is no other way to accurately assess its veracity. As stated in the report, it is entirely unclear how Clearview AI would be able to block the images of Californians who opted out from subsequent collection.

To wit: Clearview AI’s opt-out/deletion form states that “[t]o find any Clearview AI search results that pertain to you (if any), we cannot search by name or any method other than image–so we need an image of you,” and that “[w]hen we are done processing your request, the photos you shared to facilitate processing will be deleted.”[2] How, then, can Clearview AI block the images of Californians who opted out if it has no way to determine whose information it is collecting, and deletes the very photos it used to identify the data to delete? Clearview AI cannot simultaneously claim that it is unable to identify consumers without a picture and that it can ensure it does not collect information from consumers who opted out, despite having no apparent way to identify who those individuals are. In other words, Clearview AI operates by automatically scraping any public images from the web. How can it ensure it does not re-scrape previously deleted images (following an opt-out) if, as Clearview AI claims, it has no way to identify who is in the images? Indeed, even if Clearview AI scraped only new images, it would ultimately create the same biometric scan as from the older images—the exact data that individuals sought to delete and preclude the sale of.

Mr. Ton-That also objected to the report’s “assertions about facial recognition accuracy” because “Clearview AI’s algorithm has been assessed by the National Institute of Standards and Technology, a U.S. government office, and found to be highly accurate across all demographics.”[3] But as was recently pointed out by Senator Ed Markey, “that testing is not designed to replicate real-world conditions,”[4] and as stated in the report, “the accuracy of Clearview depends on the quality of the image that is fed into it – something Mr. Ton-That accepts.” 

Additionally, Mr. Ton-That’s statement that Clearview AI takes steps to “enforce responsible usage of facial recognition”[5] is entirely unpersuasive. Mr. Ton-That is essentially arguing that his own private business should be the arbiter and regulator of whether law enforcement is engaging in “responsible usage.” Letting a business regulate its own product is never a good idea, but it is particularly problematic when the product is millions of people’s biometric information and the customer is law enforcement. Indeed, Clearview AI’s “promises to be a more careful curator of its huge image repository have largely been hollow and unfulfilled[.]”[6] A recent Government Accountability Office report on the use of facial recognition technology found that multiple agencies using Clearview AI, including the FBI, ATF, DEA, and Secret Service, did not require any training on the use of facial recognition technology.[7] Lawmakers and regulators need to step in and act on the public’s behalf, not leave it in the hands of Clearview AI.

Additionally, Mr. Ton-That stated to Politico: “It is important to note that Clearview AI does not possess any information regarding the age of any people who appear in any public online photo that we have collected.”[8] At this point, it is important to distinguish between technical legal arguments and plain common sense, because from a pure common-sense standpoint, that is clearly a bogus argument. After all, what other information is needed to determine whether someone in a photo is a child apart from the photo itself? We all know what an eight-year-old, or a six-month-old, looks like—you don’t need to go check their birth certificates to confirm. (Of course, there are edge cases, particularly with teenagers, but by and large, a photo is sufficient evidence of age.) 

So what exactly is Clearview AI trying to argue? Notably, Mr. Ton-That did not deny that Clearview AI collects and sells images/biometric information of children. Rather, he made a purely technical legal argument—that Clearview AI lacks “actual knowledge”[9] that it sells children’s information because it “does not possess any information regarding the age of any people” appearing in the photos it collects. In other words, Mr. Ton-That is essentially arguing that a photo alone (even if linked to a social media account) is not sufficient to allow Clearview AI to determine the age of persons in the photo—they need the birth certificate too. We believe that, at best, this constitutes a “willful disregard” of the ages of many of the children whose images/biometric information Clearview AI shares and sells, which is sufficient to constitute a legal violation.[10]

But arguing over technical legal issues obscures the more important big picture—while Clearview AI may not know the exact date of birth of every person whose information it collects, Clearview AI indisputably knows that its database is filled with the information of minors, and it has made that a central component of its marketing strategy. While Consumer Watchdog believes that Clearview AI has violated the letter of the law, it is undeniable that Clearview AI has run roughshod over the spirit of the law. 

The failure to appreciate broader principles reflects an issue that was not addressed in the Consumer Watchdog report, because it goes beyond legal issues, but that is nonetheless central to the problem with Clearview AI—a concerning lack of moral underpinnings. If you read Clearview AI’s marketing today, you might think the company always intended to cater to law enforcement and government agencies—as Mr. Mulcaire was sure to note: “all our clients are from government or law enforcement.”[11] You might also believe that Clearview AI was designed primarily to help fight child exploitation—as Mr. Ton-That emphasized: “Our mission is to protect children.”[12]

You might believe those things because that’s what Clearview AI wants you to believe. But Clearview AI did not enter the marketplace and stake out its position as a tool for law enforcement to fight child exploitation—it was backed into that corner after massive public outcry and legal setbacks. In fact, before Clearview AI was publicly exposed, it “courted a range of clients including real estate firms, banks and retailers.”[13] Notably, “some very wealthy and powerful people were among the first to know [Clearview AI] existed,” including “[b]illionaires, Silicon Valley investors, and a few high-wattage celebrities.”[14] Clearview AI entered the market for one purpose—not to save children or aid government, but simply, purely, to make money. 

And it is not as if Clearview AI being used by only law enforcement and government agencies is a palliative that relieves all concerns about the company. Clearview AI has done something not even our own government would do—assemble a comprehensive database of Americans’ biometric information. What we have now is the worst of both worlds—our government has access to a massive biometric database that it could not have legally assembled itself, but it is not directly in control of or responsible for any of the data. Thus, the government can use Clearview AI while avoiding the need to justify its own collection and processing of its citizens’ data. If this sort of biometric database is to exist, it should be wholly subject to and the responsibility of our duly elected government, not a private business.

Additionally, while it is apparently true that, as Mr. Mulcaire stated, “all our clients are from government or law enforcement,” this is only because of scathing public criticism and multiple lawsuits, including one by the American Civil Liberties Union that Clearview AI settled by agreeing to limit the sale of its product within the USA to only government or law enforcement agencies.[15] Why is Mr. Mulcaire’s statement only “apparently” true? Because as noted, the limitation on sales to private entities or individuals applies only within the United States. Although Clearview AI claims it does not sell its product to private entities or individuals outside the United States, neither Consumer Watchdog nor any government agency knows the extent to which Clearview AI is telling the truth. This is precisely why government regulators need to investigate what Clearview AI is doing with the biometric information of the millions of Americans in its database.

Further, we have no idea how many other countries’ governments might be using Clearview AI. We do know that Clearview AI is being used by Ukraine in its war against Russia. While many Americans likely see no issue with that, we should all be concerned about the potential of our own government using Clearview AI in similar ways against us. Ukraine began using Clearview AI after Mr. Ton-That sent them a letter advertising Clearview AI’s services; the first scenario in which he suggested Clearview AI could help was “Identifying Infiltrators”[16]—something that should frighten everyone who believes in civil liberties. A recent report stated that Clearview AI has been used “to identify more than 230,000 Russians on [Ukrainian] soil as well as Ukrainian collaborators.”[17] It is but a small step from offering to “identify infiltrators” for Ukraine to offering to “identify dissidents” for a re-elected Donald Trump.

Indeed, the absence of principles underlying Clearview AI’s business is perhaps most evident from its apparent willingness to shift with the political climate. In responding to the report, Mr. Ton-That stated: “I was appalled by the tragic events on January 6th and the attack on the Capitol and our democracy,” and championed the use of Clearview AI to “identify the Capitol rioters.” So, Mr. Ton-That must be a liberal then? As photographs of Mr. Ton-That celebrating Donald Trump’s election should make clear, not quite.

As the Huffington Post reported, “[b]y 2015, [Ton-That] had joined forces with far-right subversives working to install Trump as president.”[18] Clearview AI came into being during Donald Trump’s 2016 presidential campaign, with Ton-That and cofounder Charles Johnson attending the 2016 Republican National Convention to see Trump’s nomination, “where Johnson introduced Ton-That to the billionaire tech investor Peter Thiel, who later provided seed money for the company that became Clearview.”[19] Mr. Ton-That continued to operate in far-right circles throughout the 2010s.

So, is Mr. Ton-That a conservative? If so, he has admittedly not proven to be a very committed one. Most likely, Mr. Ton-That is just like his company—lacking in overarching guiding principles apart from “what will benefit us the most.” However, one thing he has proven consistent on is his willingness to violate people’s privacy—as far back as 2009, he had created a phishing website to trick users into providing access to their Gmail accounts.[20] While Clearview AI may be helping to identify January 6th rioters now, who knows what they could be doing in one year, or five?

At the end of the day, we cannot allow Clearview AI to pull the wool over all our eyes and have us believe that Clearview AI is just out there performing a public service. This is a company built and predicated on one of the most grievous invasions of personal privacy in this country’s history. We cannot allow our law enforcement to be reliant on a private company with a proprietary secret algorithm when trying to solve crimes. Most importantly, we cannot allow our children to lose control of their biometric information before they even know what the word biometric means. It is time for California’s regulators to act to prevent this continuing and unconstitutional invasion of our personal privacy.


[1] Alfred Ng, Call to Investigate Clearview AI, Politico, Dec. 5, 2023, https://consumerwatchdog.org/in-the-news/politico-call-to-investigate-clearview-ai/; California: Consumer Watchdog requests AG and CPPA to take action against Clearview AI, DataGuidance, Dec. 8, 2023, https://www.dataguidance.com/news/california-consumer-watchdog-requests-ag-and-cppa-take

[2] Privacy Web Form, Clearview AI, accessed Feb. 26, 2024, https://privacyportal.onetrust.com/webform/1fdd17ee-bd10-4813-a254-de7d5c09360a/2a09e1a7-f09f-4e0c-91a2-5818abe414d5

[3] Stephanie Sierra, AG Bonta called to investigate Clearview AI for allegedly selling images to police without consent, ABC 7 News, Dec. 28, 2023, https://abc7news.com/clearview-ai-california-attorney-general-rob-bonta-consumer-watchdog/14231840/

[4] Senator Ed Markey, Letter to Clearview AI, Nov. 20, 2023, https://www.markey.senate.gov/imo/media/doc/senator_markey_letter_to_clearview_ai_-_112023pdf.pdf; see also Senator Raphael Warnock, Letter to Attorney General Garland, Jan. 18, 2024, https://www.warnock.senate.gov/wp-content/uploads/2024/01/1.18.24-Letter-to-DOJ-re-Facial-Recognition-and-Title-VI.pdf (noting that facial recognition “technologies can be unreliable and inaccurate, especially with respect to race and ethnicity,” and raising the concern that the use of facial recognition technology may violate Title VI of the Civil Rights Act).

[5] See fn. 3.

[6] David Strom, The rise and fall of Clearview.AI and the evolution of facial recognition, Silicon Angle, Oct. 2, 2023, https://siliconangle.com/2023/10/02/rise-fall-clearview-ai-evolution-facial-recognition/

[7] U.S. Government Accountability Office, Facial Recognition Services, GAO-23-105607, Sept. 2023, https://www.gao.gov/assets/gao-23-105607.pdf. According to the report, the ATF, DEA, and Secret Service “halted their use of [Clearview AI]” as of April 2023.

[8] See fn. 1, Call to Investigate.

[9] Civ. Code § 1798.120, subd. (c).

[10] Ibid.

[11] See fn. 1, California: Consumer Watchdog.

[12] Joel R. McConvey, California watchdog has strong words for Clearview, but CEO says they’re mistaken, Biometric Update, Dec. 6, 2023, https://www.biometricupdate.com/202312/california-watchdog-has-strong-words-for-clearview-but-ceo-says-theyre-mistaken

[13] Kashmir Hill, Before Clearview Became a Police Tool, It Was a Secret Plaything of the Rich, New York Times, Mar. 5, 2020, updated Mar. 6, 2020, https://www.nytimes.com/2020/03/05/technology/clearview-investors.html

[14] Kashmir Hill, A Shazam for People: Clearview’s AI App Was a Hit Among the Rich and Powerful, Rolling Stone, Sept. 25, 2023, https://www.rollingstone.com/culture/culture-features/clearview-ai-app-privacy-your-face-belongs-to-us-excerpt-1234829211/

[15] ACLU v. Clearview AI, Inc., 2020 CH 04353 (Cir. Ct. Cook Cty., Ill.), Signed Settlement Agreement, May 5, 2022, https://www.aclu.org/cases/aclu-v-clearview-ai?document=Exhibit-2-Signed-Settlement-Agreement

[16] Hoan Ton-That, Offer to Assist Ukraine with Facial Recognition, Mar. 1, 2022, https://app.hubspot.com/documents/6595819/view/443117283?accessId=f27bac

[17] Vera Bergengruen, How Tech Giants Turned Ukraine Into an AI War Lab, Time Magazine, Feb. 8, 2024, https://time.com/6691662/ai-ukraine-war-palantir/

[18] Luke O’Brien, The Far-Right Helped Create The World’s Most Powerful Facial Recognition Technology, Huffington Post, Apr. 7, 2020, https://www.huffpost.com/entry/clearview-ai-facial-recognition-alt-right_n_5e7d028bc5b6cb08a92a5c48

[19] Kashmir Hill, What We Learned About Clearview AI and Its Secret “Co-Founder,” New York Times, Mar. 18, 2021, updated Oct. 28, 2021, https://www.nytimes.com/2021/03/18/technology/clearview-facial-recognition-ai.html

[20] Owen Thomas, The person behind a privacy nightmare has a familiar face, San Francisco Chronicle, Jan. 22, 2020, https://www.sfchronicle.com/business/article/The-person-behind-a-privacy-nightmare-has-a-14993625.php
