The recent voluntary commitments by tech companies deploying advanced artificial intelligence are just that: voluntary. And that’s the problem.
While companies such as OpenAI, Meta and Google have agreed with the Biden administration to promote “safety, security and transparency” in their AI technology, the agreement reads more like a virtue-signaling public relations move than meaningful action. Much of what the companies have agreed to, they were already doing, and many of the promises are vague.
Take the agreement on security testing, for example. Companies commit to “internal” and “external” testing before models are released. But many companies already do their own testing, and the agreement says nothing about who will conduct the testing or what it must entail. The same goes for the commitment to “information sharing” between companies and governments. What kind of information will have to be shared? There is no specificity.
Another commitment, to “watermark” AI-generated content, appears to benefit consumers. However, the agreement states that “AI assistants” such as chatbots are “outside the scope of this commitment.” Moreover, AI detection tools remain unreliable so far, and the concern is that each advance in generative AI will outpace detection capabilities.
Another commitment, to make public the uses and risks of AI models, is also open to interpretation. Who are these reports for? The commitment doesn’t say how specific companies must be, or to whom, if anyone, they must answer when disclosing such risks.
The commitments also include a vow to use AI to help combat society’s greatest existential problems, such as disease and climate change. But of course that is already the stated intention of these companies. That is how AI has been sold to us in the first place: as a tool to end poverty or advance science. The problem is that once this technology is out in the world, it is no longer under the companies’ control, or society’s. The commitment comes off as hollow.
But we already have an idea of what to do about AI. The federal government has released its AI Risk Management Framework as well as a Blueprint for an AI Bill of Rights. These documents detail how to monitor AI systems at each stage of development, including by involving external parties outside the development team. Companies should show evidence that their AI is transparent, safe and secure through thorough, results-driven documentation. The AI Risk Management Framework defines “trustworthy AI” by seven characteristics: “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful biases managed.” Companies should also test their algorithms for discrimination and allow people to opt out of decisions made by an AI.
The White House just has to make good on its promise of an executive order so that companies are required to comply with the guidelines the government has already outlined. Without legislation or an executive order, this pledge amounts to telling powerful companies, “Yeah, we trust you.” And that hasn’t worked well in the past.