Departure of Co-Founder Leads to Dissolution of AI Safety Team at OpenAI
The exit of Ilya Sutskever, co-founder of OpenAI, has contributed to the disbandment of the company's team for managing the long-term risks of deploying artificial intelligence (AI), CNBC reports. Established to safeguard humanity against hazards arising from the spread of AI, the unit lasted less than a year at OpenAI.
According to CNBC, Ilya Sutskever, who led scientific research at OpenAI, had long expressed concern about such risks. He co-led the dedicated risk management team alongside Jan Leike, who also left the company later in the week. The team, established last year to minimize risks related to the spread of AI technologies, was promised 20% of OpenAI's computational resources over the next four years. After the leadership's departure, the remaining team members were reassigned within the company.
In a recent statement on his departure from OpenAI, Jan Leike said that the company's "safety culture and processes have given way to shiny products." OpenAI's CEO, Sam Altman, expressed regret over Leike's resignation. Leike stated that he had long disagreed with management's priorities and that these disagreements had intensified. He believes OpenAI should pay more attention to the safety and societal impact of the technologies it develops. According to Leike, his team had been "sailing against the wind" for the past few months, finding it increasingly difficult to achieve its objectives with limited resources. Leike remains convinced that OpenAI should prioritize safety in the development of generative AI, stating that "creating machines that outperform humans is a very dangerous endeavor."
As Bloomberg reports, OpenAI intends to retain the specialists who oversee AI safety, distributing them across various departments. Safety-related duties will also be assigned to separate divisions within the company, ensuring that AI development does not proceed entirely unchecked under the new arrangement.