The soap opera around OpenAI continues apace. Shortly after yet another round of executive departures, former board member Helen Toner is speaking out about the mess the AI company brought on itself.
Helen Toner was one of the board members who pushed for the firing of CEO Sam Altman in late 2023. After a brief power struggle, Altman returned, and Toner and other directors left the company soon after.
Disclosed on podcast
However, the turmoil within OpenAI had been going on for much longer. Even the board of directors was kept in the dark about ChatGPT’s sudden launch. “When ChatGPT came out in November 2022, the board was not informed in advance about that,” Toner tells The TED AI Show podcast.
The chatbot proved to be a hit, demonstrating to a global audience what generative AI has to offer. ChatGPT was the fastest-growing app of all time and now offers the ability to “naturally” have a conversation with the chatbot, discuss images and sift through documents. Its core technology fuels Microsoft’s gargantuan Copilot efforts that aim to bring AI functionality to every Windows machine.
Tip: ChatGPT now talks in real-time, new GPT-4o model available for free
While Altman and his inner circle naturally could not have known just how successful ChatGPT would be, Toner’s revelation is telling. OpenAI constantly stresses the importance of safe AI development and calls for clear legislation to make it happen, yet it is itself contributing to a climate in which AI applications are shipped at lightning speed.
Security initiative
Two weeks ago, OpenAI co-founder Ilya Sutskever also left. A pioneer of generative AI, he had served as Chief Scientist since 2018, putting him in charge of developing new AI models for OpenAI.
Both the exodus of higher-ups and criticism from former employees seem to have forced OpenAI into action. Yesterday, the company announced that it has established a “Safety and Security Committee”. Within the next 90 days, this committee will develop recommendations regarding AI safety. It’s unlikely it will criticize the decision to leave the OpenAI board in the dark about ChatGPT, however.
After all, this committee consists only of OpenAI insiders who survived the power struggle of late 2023. Sam Altman is joined by chairman Bret Taylor and fellow board members Adam D’Angelo and Nicole Seligman.
In passing, OpenAI confirmed that it has begun training its newest frontier model, presumably GPT-5. This continues the path toward “AGI,” or artificial general intelligence: OpenAI’s hope is that, in time, it will develop an intelligence that surpasses the capabilities of the human brain. All the more reason to look closely at AI safety, although the new committee is simply a self-policing body without any outside influence.