- OpenAI’s Superalignment team, crucial for AI safety, faced resource constraints.
- Key members resigned, citing shifting priorities.
- The departures raise questions about OpenAI’s commitment to responsible AI development.
OpenAI’s Superalignment team, tasked with governing and steering “superintelligent” AI systems, was promised 20% of the company’s compute resources, according to a team member.
In practice, however, requests for even a fraction of that compute were frequently denied, hindering the team’s ability to do its work effectively.
Resignations rock the boat
Frustrated by the lack of resources and disagreements with OpenAI leadership, several Superalignment team members, including co-lead Jan Leike, resigned this week.
Leike, a former DeepMind researcher who at OpenAI helped develop ChatGPT, GPT-4, and InstructGPT, expressed concerns about the company’s trajectory and its ability to address critical issues such as security, monitoring, and societal impact.
Shifting focus raises concerns
Even as the Superalignment team published safety research and funneled grants to outside researchers, product launches increasingly consumed OpenAI leadership’s attention.
The team struggled to secure the upfront investment it considered essential to OpenAI’s stated mission of developing superintelligent AI for the benefit of all humanity.
As Leike noted, “Building smarter-than-human machines is an inherently dangerous endeavor.” The apparent shift in priorities casts doubt on the company’s commitment to safety and responsible AI development.