OpenAI had a dedicated team studying the catastrophic risk of models becoming too powerful to be supervised. Not anymore: Team lead Jan Leike quits, slamming the firm's AI safety culture
Digital giants are packaging up responsible AI as if it's just another product feature, but Gen AI leader OpenAI no longer has a dedicated team studying what its CTO Mira Murati admits are "catastrophic risks". Former Superalignment team lead Jan Leike quit last week, slamming the leadership's lack of commitment to AI safety. "Nothing to see here," say founders Sam Altman and Greg Brockman, previously ousted by their own hand-picked board allegedly over concerns about their commitment to OpenAI's altruistic mission. Investors subsequently fired the board and reinstated the pair. Everything is totally normal.
What you need to know
- ChatGPT developer OpenAI believes AI models will become so powerful that they will be beyond our ability to supervise.
- While the risks are catastrophic, according to OpenAI CTO Mira Murati in a recent speech, that didn't stop OpenAI from disbanding, last week, the team whose job was to study them.
- It happened after Team Lead Jan Leike quit, citing the firm's lack of commitment to AI safety.
- Leike, in a post explaining his departure, said: "Building smarter-than-human machines is an inherently dangerous endeavour."
- OpenAI founders Sam Altman and Greg Brockman played down the risks, saying: "We take our role here very seriously and carefully weigh feedback on our actions."
- Murati also revealed three things about the emergence of generative AI that surprised her.
I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point... Building smarter-than-human machines is an inherently dangerous endeavour.
ChatGPT inventor OpenAI has a team dedicated to understanding the potentially catastrophic risks that could emerge once models become so powerful we can no longer supervise them, according to CTO Mira Murati.
Or at least it did when Murati gave the speech last month. That team has now been disbanded, less than a year after it was founded, following the resignation of the executive in charge, Jan Leike, who has slammed OpenAI's safety culture.
Speaking at the recent Qualtrics X4 event in Salt Lake City, Murati told delegates: "We need to recognise, however, that OpenAI's goal of developing [this technology] goes hand-in-hand with ensuring the safety of these models and making them more aligned with human values."
"The things we're trying to avoid are harmful misuses," Murati continued. "And even worse, the kind of catastrophic risks where the models get so powerful, we lose the ability to understand what's going on. And the models are no longer aligned with our values and making sure they're making life great for us instead of the opposite."
She referenced Leike's team as focused on the kinds of catastrophic risks that could emerge once the models become so powerful we can no longer supervise them. "This is called alignment research. And it's the study of aligning the models with user behaviour, user guidance, and user values. That's actually incredibly complex, not just from a technical standpoint, but also governance," Murati said.
Cue Karma
Unfortunately for Murati - and maybe the rest of humanity - the people charged with keeping the dystopian future at bay keep resigning, citing a lack of faith in OpenAI's leaders' and investors' commitment to AI safety. Leike, the firm's head of alignment and superalignment, did just that last week.
He didn't mince words. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Leike wrote in a post. "I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.
"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there. Over the past few months, my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."
Building "smarter-than-human machines is an inherently dangerous endeavour", according to Leike, "OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products."
He ended by saying: "We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence]. We must prioritise preparing for them as best we can. Only then can we ensure AGI benefits all of humanity. OpenAI must become a safety-first AGI company."
OpenAI founders Altman and Brockman - both ousted by the board over concerns about Altman's commitment to OpenAI's altruistic mission, then reinstated after investors fired the board - issued a "nothing to see here" statement.
"We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities. We will keep doing safety research targeting different timescales. We are also continuing to collaborate with governments and many stakeholders on safety," the statement read.
"There's no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward. We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions."
Expect the unexpected
As if to reinforce the sense that OpenAI's leaders are - to an extent - making this up as they go along, Murati also identified, in her Qualtrics X4 speech, the three things that have most surprised her about the emergence of generative AI: the utility of the models, the speed with which they are being integrated into the economy, and how rapidly the technology has burst into public consciousness, demanding a response from regulators.
Murati said OpenAI bet big on the scaling paradigm. "That's the idea that you throw a tonne of compute and data at these large language models and there will be these emerging capabilities; the models will become more powerful and be able to do more things across different domains," she explained.
But while it's one thing to predict such an outcome, it was another thing entirely to see it happen. "When you test them across different domains, you see that the models can rhyme," Murati said. "They can do extremely well in biology tests, or math tests, a lot of tests we use ourselves to test people in colleges. There's always some magic and something surprising when you see the scaling paradigm work out in reality."
Murati described the difference in capability between GPT-3.5 and GPT-4 as a step change, ushering in the emergence of reasoning capabilities in different domains. "And we should probably expect another step change as we go into the next models and scale up more, from a sort of social perspective."
She was also surprised at how rapidly generative AI has been integrated into the economy and how much impact it is already having. "As long as you have access to energy and the internet, you can use some version of GPT, either for free or for a small subscription fee. And that's really different from how we've developed technology before."
Normally, Murati said, it takes a lot more time for a new technology to penetrate society. "The third thing I would say is how much and how quickly it has burst into the public consciousness, and how that is affecting the regulatory framework."
As to what the most significant transformations will initially be, Murati said there is a very big opportunity for our relationship with knowledge and creativity to be transformed, and that this can be applied to every domain.
"But I'm personally very excited about the potential in education as well as healthcare. Because, you know, the opportunities here to really change the quality of life for people all over the world are huge. And we have this sort of big ideal, but we're still quite far from it," she added.