- OpenAI co-founder launches safety-focused AI firm.
- Safe Superintelligence prioritizes secure systems over rapid advancement.
- This contrasts with OpenAI’s potential shift toward profit-driven development.
Ilya Sutskever, a co-founder of OpenAI, has launched a new AI company called Safe Superintelligence.
The firm’s mission is to develop advanced AI systems that prioritize safety over rapid advancement.
Safety first, profits second
Sutskever has partnered with Daniel Gross, Apple’s former AI lead, and Daniel Levy, an ex-OpenAI researcher, to lead this ambitious project.
Safe Superintelligence aims to differentiate itself by focusing solely on creating secure AI systems without the distractions of management overhead or product cycles.
This approach stands in stark contrast to recent reports suggesting OpenAI may be considering a shift towards a purely for-profit model.
A tale of two AI philosophies
The timing of Sutskever’s new venture is noteworthy, coming shortly after his departure from OpenAI in May.
It follows a tumultuous period at his former company, including his involvement in the brief ousting of CEO Sam Altman in late 2023.
This new chapter in Sutskever’s career highlights the ongoing debate within the AI community about balancing innovation and safety.
To read the original article: https://www.techinasia.com/openai-cofounder-ilya-sutskever-launches-safe-ai-firm