
Opinion: The risks of AI could be catastrophic. We should empower company workers to warn us


Editor’s Note: Lawrence Lessig is the Roy L. Furman Professor of Law and Leadership at Harvard Law School and the author of the book “They Don’t Represent Us: Reclaiming Our Democracy.” The views expressed in this commentary are his own. Read more opinion at CNN.

In April, Daniel Kokotajlo resigned his position as a researcher at OpenAI, the company behind ChatGPT. He wrote in a statement that he disagreed with the way the company was handling issues related to security as it continued to develop the revolutionary but still not fully understood technology of artificial intelligence.

On his profile page on the online forum “LessWrong,” Kokotajlo — who had worked in policy and governance research at OpenAI — expanded on those thoughts, writing that he quit his job after “losing confidence that it would behave responsibly” in safeguarding against the potentially dire risks associated with AI.

And in a statement issued around the time of his resignation, he blamed the culture of the company for forging ahead without heeding warnings about the dangers it might be unleashing.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” Kokotajlo wrote.

OpenAI pressed him to sign an agreement promising not to disparage the company, telling him that if he refused, he would lose his vested equity in the company. The New York Times has reported that the equity was worth $1.7 million. Nevertheless, he declined, apparently choosing to reserve his right to publicly voice his concerns about AI.

When news broke about Kokotajlo’s departure from OpenAI and the alleged pressure from the company to get him to sign the non-disparagement agreement, the company’s CEO Sam Altman quickly apologized.

“This is on me,” Altman wrote on X (formerly known as Twitter), “and one of the few times I’ve been genuinely embarrassed running openai; I did not know this was happening and I should have.” What Altman didn’t reveal is how many other company employees and executives might have been forced to sign similar agreements in the past. In fact, according to former employees, the company had for many years threatened to cancel employees’ vested equity if they didn’t promise to play nice.

Altman’s apology was effective, however, in tamping down attention to OpenAI’s legal blunder of requiring these agreements. The company was eager to move on and most in the press were happy to oblige. Few news outlets reported the obvious legal truth that such agreements were plainly illegal under California law. Employees had for years thought themselves silenced by the promise they felt compelled to sign, but a self-effacing apology by a CEO was enough for the media, and the general public, to move along.

We should pause to consider just what it means when someone is willing to give up perhaps millions of dollars to preserve the freedom to speak. What, exactly, does he have to say? And not just Kokotajlo, but the many other OpenAI employees who have recently resigned, many now pointing to serious concerns about the dangers inherent in the company’s technology.

I knew Kokotajlo and reached out to him after he quit; I’m now representing him and 10 other current and former OpenAI employees on a pro bono basis. But the facts I relate here come only from public sources.

Many people refer to concerns about the technology as a question of “AI safety.” That’s a terrible term to describe the risks that many people in the field are deeply concerned about. Some of the leading AI researchers, including Turing Award winners Yoshua Bengio and Geoffrey Hinton, the computer scientist sometimes referred to as “the godfather of AI,” fear the possibility of runaway systems creating not just “safety risks,” but catastrophic harm.


And while the average person can’t imagine how anyone could lose control of a computer (“just unplug the damn thing!”), we should also recognize that we don’t actually understand the systems that these experts fear.

Companies operating in the field of AGI — artificial general intelligence, which broadly speaking refers to theoretical AI research attempting to create software with human-like intelligence, including the ability to perform tasks for which it was not trained or developed — are among the least regulated, inherently dangerous companies in America today. No agency has legal authority to monitor how these companies develop their technology or what precautions they are taking.

Instead, we rely upon the good judgment of these corporations to ensure that risks are adequately policed. Thus, as a handful of companies race to achieve AGI, the most important technology of the century, we are trusting them and their boards to keep the public’s interest first. What could possibly go wrong?

This oversight gap has now led a number of current and former employees at OpenAI to formally ask AI companies to pledge to encourage an environment in which employees are free to criticize their safety precautions.

Their “Right to Warn” pledge asks companies:

First, it asks companies to commit to revoking any “non-disparagement” agreements. (OpenAI has already promised to do as much; reports indicate that other companies may have similar language in their agreements that they have not yet acknowledged.)

Second, it asks companies to pledge to create an anonymous mechanism that gives current and former employees a way to raise safety concerns with the board, with regulators and with an independent AI safety organization.

Third, it asks companies to support a “culture of open criticism,” to encourage employees and former employees to speak about safety concerns so long as they protect the corporation’s intellectual property.

Finally — perhaps most interestingly — it asks companies to promise not to retaliate against employees who share confidential information when raising risk-related concerns, provided that those employees first channel their concerns through a confidential and anonymous process, if and when the company creates one. This is designed to create an incentive for companies to build a mechanism that protects confidential information while enabling warnings.


Such a “Right to Warn” would be unique in the regulation of American corporations. It is justified by the absence of effective regulation, a condition that could well change if Congress got around to addressing the risks that so many have described. And it is necessary because ordinary whistleblower protections don’t cover conduct that is not itself regulated.

The law — especially California law — would give employees a wide berth to report illegal activities; but when little is regulated, little is illegal. Thus, so long as there is no effective regulation of these companies, it is only the employees who can identify the risks that the companies are ignoring.

Even if the AI companies endorsed a “Right to Warn,” no one should imagine that it would be easy for any current or former employee to call out an AI company. Whistleblowers are not favorite co-workers, even if they are respected by some. And even with formal protections, the choice to speak out inevitably has consequences for their future employment opportunities — and friendships.

Obviously, it is not fair that we rely upon self-sacrifice to ensure that private corporations are not putting profit above catastrophic risks. This is the job of regulation. But if these former employees are willing to lose millions for the freedom to say what they know, maybe it is time that our representatives built the structures of oversight that would make such sacrifices unnecessary.
