If you believe artificial intelligence poses significant dangers to humankind, then a Carnegie Mellon University professor holds one of the most crucial positions in the tech industry today.
TL;DR
- Zico Kolter, a Carnegie Mellon professor, leads OpenAI's safety panel with power to halt AI releases.
- The panel's oversight was a key condition of California and Delaware regulators' approval of OpenAI's new business structure.
- Kolter's committee addresses a broad range of AI risks, from weapons of mass destruction to mental health harms.
- OpenAI's restructuring aims to balance profit with its foundational mission of AI safety.
At OpenAI, Zico Kolter heads a four-member panel with the authority to halt the release of new AI systems it deems unsafe. That could mean technology powerful enough to let malicious actors build weapons of mass destruction, or a new chatbot so poorly designed that it harms people's mental health.
“Very much we’re not just talking about existential concerns here,” Kolter said in an interview with The Associated Press. “We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems.”
OpenAI appointed the computer scientist to lead its Safety and Security Committee over a year ago. The role took on far greater importance last week, however, when regulators in California and Delaware made Kolter's oversight a central condition of their approval of OpenAI's new business structure, which makes it easier for the company to raise capital and generate profit.
Safety has been a core tenet of OpenAI since its founding a decade ago as a nonprofit research lab aiming to build AI that surpasses humans and benefits humanity. But after ChatGPT's release set off a commercial AI boom, the company faced accusations of rushing products to market before they were fully safe in order to stay at the front of the race. The internal turmoil that led to CEO Sam Altman's brief ouster in 2023 amplified worries that the company had strayed from its founding mission.
The San Francisco-based company faced opposition, including a lawsuit from co-founder Elon Musk, as it took steps to convert itself into a more conventional for-profit business to keep advancing its technology.
Agreements announced last week by OpenAI along with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings aimed to assuage some of those concerns.
Central to the formal commitments is an assurance that safety and security decisions will take precedence over financial considerations as OpenAI forms a new public benefit corporation, which remains effectively under the control of its nonprofit OpenAI Foundation.
Kolter will be a member of the nonprofit’s board but not on the for-profit board. But he will have “full observation rights” to attend all for-profit board meetings and have access to information it gets about AI safety decisions, according to Bonta’s memorandum of understanding with OpenAI. Kolter is the only person, besides Bonta, named in the lengthy document.
Kolter said the agreements largely confirm that his safety committee, formed last year, will retain the authority it already had. The three other members also sit on OpenAI's board; one is former U.S. Army General Paul Nakasone, who previously led U.S. Cyber Command. Altman's departure from the safety panel last year was seen as giving it greater independence.
“We have the ability to do things like request delays of model releases until certain mitigations are met,” Kolter said. He declined to say if the safety panel has ever had to halt or mitigate a release, citing the confidentiality of its proceedings.
Kolter indicated that numerous issues concerning AI agents will require attention in the upcoming months and years, spanning from cybersecurity – “Could an agent that encounters some malicious text on the internet accidentally exfiltrate data?” – to the security implications related to AI model weights, which are numerical parameters that shape an AI system's functionality.
“But there’s also topics that are either emerging or really specific to this new class of AI model that have no real analogues in traditional security,” he said. “Do models enable malicious users to have much higher capabilities when it comes to things like designing bioweapons or performing malicious cyberattacks?”
“And then finally, there’s just the impact of AI models on people,” he said. “The impact to people’s mental health, the effects of people interacting with these models and what that can cause. All of these things, I think, need to be addressed from a safety standpoint.”
OpenAI has already faced criticism this year over the behavior of its flagship chatbot, including a wrongful-death lawsuit from California parents whose son died by suicide in April after lengthy interactions with ChatGPT.
Kolter, who heads Carnegie Mellon's machine learning department, began studying AI as a freshman at Georgetown University in the early 2000s, long before it was in vogue.
“When I started working in machine learning, this was an esoteric, niche area,” he said. “We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered.”
Kolter, 42, has followed OpenAI for years and knew its founders, even attending its 2015 launch party at an AI conference. Still, he didn't expect AI to advance as rapidly as it has.
“I think very few people, even people working in machine learning deeply, really anticipated the current state we are in, the explosion of capabilities, the explosion of risks that are emerging right now,” he said.
AI safety advocates will be closely watching OpenAI’s restructuring and Kolter’s work. One of the company’s sharpest critics says he’s “cautiously optimistic,” particularly if Kolter’s group “is actually able to hire staff and play a robust role.”
“I think he has the sort of background that makes sense for this role. He seems like a good choice to be running this,” said Nathan Calvin, general counsel at the small AI policy nonprofit Encode. Calvin, who was served a subpoena at his home by OpenAI as part of its fact-finding in defending against the Musk lawsuit, said he wants OpenAI to stay true to its original mission.
“Some of these commitments could be a really big deal if the board members take them seriously,” Calvin said. “They also could just be the words on paper and pretty divorced from anything that actually happens. I think we don’t know which one of those we’re in yet.”
