A recent article by The Washington Post highlights increasing concerns about OpenAI’s safety protocols.
Despite being one of the leaders in AI development and striving to create human-level intelligence, OpenAI faces mounting criticism from its employees regarding safety measures. The Washington Post’s latest piece, based on information from an anonymous source, alleges that OpenAI prioritised product celebration over thorough safety testing.
An anonymous employee shared with The Washington Post that “they planned the launch after-party prior to knowing if it was safe to launch,” and that “we basically failed at the process.”
Safety issues at OpenAI appear persistent and troubling. A group of current and former employees recently signed an open letter urging the startup to improve its safety and transparency practices. This comes in the wake of the dissolution of OpenAI’s safety team, following the departure of cofounder Ilya Sutskever. Shortly after, Jan Leike, a prominent OpenAI researcher, resigned, saying the company’s focus on safety had been overshadowed by a push for shiny new products.
OpenAI’s charter emphasises safety and pledges to assist other organisations on safety if another entity achieves artificial general intelligence (AGI) first. The company also claims dedication to addressing the safety challenges inherent in complex systems. However, its decision to keep proprietary models private, justified on safety grounds, has drawn criticism and legal challenges, and there are signs that safety may not be as high a priority as the company asserts.
The stakes surrounding AI safety are enormous, according to OpenAI and other experts. A report commissioned by the US State Department in March warned, “current frontier AI development poses urgent and growing risks to national security.” It compared the potential destabilising effects of advanced AI and AGI to those of nuclear weapons.
These safety concerns follow last year’s boardroom upheaval, which briefly removed CEO Sam Altman. The board cited a lack of transparency on his part, and the investigation that followed did little to alleviate staff concerns.
OpenAI spokesperson Lindsey Held told The Washington Post that the GPT-4o launch did not compromise safety, though another company representative admitted the safety review process was hurried and even condensed into a single week. “We are rethinking our whole way of doing it,” the anonymous representative told the Post. “This [was] just not the best way to do it.”
Amidst these ongoing controversies, OpenAI has attempted to reassure the public with strategic announcements. This week, it revealed a collaboration with Los Alamos National Laboratory to explore the safe use of advanced AI models like GPT-4o in bioscientific research, highlighting Los Alamos’s strong safety record. Additionally, a company spokesperson, speaking anonymously, told Bloomberg that OpenAI has developed an internal scale to measure its large language models’ progress towards AGI.
These safety-focused announcements seem like a defensive response to growing criticism. OpenAI finds itself under intense scrutiny, and public relations efforts alone won’t ensure societal safety. The potential impact on the broader public if OpenAI fails to implement stringent safety protocols remains a pressing concern. Most people have no influence over the development of private AGI, and no say in how well they will be protected from its consequences.
“AI tools can be revolutionary,” FTC chair Lina Khan told Bloomberg in November. However, she expressed concerns that “the critical inputs of these tools are controlled by a relatively small number of companies.”
If the numerous criticisms of its safety protocols hold true, serious questions arise about OpenAI’s fitness to steward AGI, a role the company has assumed for itself. Allowing a single group in San Francisco to control a potentially transformative technology is cause for concern in itself, and it makes the need for transparency and stringent safety measures all the more urgent.