# OpenAI Dissolves Safety-Focused Alignment Unit
OpenAI has abruptly disbanded its Superalignment team, a dedicated unit focused on preventing catastrophic risks from superintelligent AI, following the resignations of key leaders Ilya Sutskever and Jan Leike. This move has ignited fierce debate over the company's commitment to AI safety amid a wave of high-profile exits and the rapid formation of a new safety committee led by CEO Sam Altman himself.[1][2]
## Key Departures Spark Dissolution of Superalignment Team
The Superalignment team, launched less than a year ago to guide and control AI systems surpassing human intelligence, was dissolved almost immediately after co-leaders Sutskever, OpenAI's co-founder and chief scientist, and Jan Leike resigned last week. Leike, a prominent AI safety researcher, publicly criticized OpenAI on X (formerly Twitter), stating his team had been "sailing against the wind" with limited compute resources and that safety culture had taken a backseat to "shiny products."[1][2]
This isn't an isolated incident: nearly half of OpenAI's staff previously focused on the long-term risks of artificial general intelligence (AGI) and superintelligence have departed in recent months. Former team member Daniel Kokotajlo described the departures as people individually "giving up" amid the company's shift toward product commercialization, marked by hires such as CFO Sarah Friar and chief product officer Kevin Weil.[3]
The resignations echo the boardroom turmoil of November 2023, when safety-focused board members attempted to oust Altman as CEO over concerns about his transparency on safety issues, only to be removed themselves.[1]
## OpenAI Responds with New Safety Framework Under Altman
Less than ten days after dissolving Superalignment, OpenAI announced a new safety and security committee chaired by CEO Sam Altman, alongside board chair Bret Taylor, Quora CEO Adam D'Angelo, and attorney Nicole Seligman. The committee is tasked with making recommendations on critical safety and security decisions across all OpenAI projects.[2][3]
OpenAI has said it has begun training its next frontier model, which it expects to advance its capabilities toward AGI, though it has not confirmed whether that model is GPT-5. Some remaining safety researchers have shifted to other teams to continue related work, and the company later added Carnegie Mellon AI security expert Zico Kolter to its board.[2][3]
Critics also question whether Altman should lead oversight of projects he himself champions, pointing to Leike's move to rival Anthropic and ongoing speculation about internal conflict, cost-cutting, and a diminished focus on existential AI risks.[2]
## Broader Implications for AI Safety and Industry Shift
The dissolution raises alarms about OpenAI's priorities, with Leike warning of the urgent need to manage AI more intelligent than humans. Observers like those at Lawfare argue it signals OpenAI "no longer takes safety seriously," potentially betraying 2022 commitments to evaluate AGI risks and inspire regulation.[1][3]
While some speculate the team had become redundant if OpenAI judges superintelligent AI to be either manageable or still far off, the exodus—including departures to competitors—highlights a growing divide between commercial pressures and long-term safety research in the AI industry.[1][2][3]
## Frequently Asked Questions
### What was the Superalignment team at OpenAI?
The Superalignment team was a safety-focused unit established less than a year ago to develop methods for controlling AI systems more intelligent than humans, co-led by Ilya Sutskever and Jan Leike.[1][2]
### Why did OpenAI dissolve the Superalignment team?
The team was disbanded shortly after Sutskever and Leike resigned, with Leike citing struggles for resources and a company shift prioritizing products over safety.[1][2][3]
### Who is leading OpenAI's new safety committee?
CEO Sam Altman chairs the new safety and security committee, which includes Bret Taylor, Adam D'Angelo, and Nicole Seligman.[2]
### Has OpenAI lost many safety researchers recently?
Yes, nearly half of the staff focused on AGI and superintelligence risks have left in recent months, though some continue similar work on other teams.[3]
### What did Jan Leike say about leaving OpenAI?
Leike described his departure as one of the hardest decisions he has made, stressed the urgent need to learn to steer AI systems smarter than humans, and criticized OpenAI for letting safety take a backseat to product development.[1][2]
### Is OpenAI still working on AI safety after the changes?
OpenAI formed a new committee and appointed safety experts like Zico Kolter to its board, with some researchers continuing related projects internally.[2][3]
🔄 Updated: 2/11/2026, 10:20:30 PM
**NEWS UPDATE: Regulatory Silence on OpenAI's Superalignment Dissolution**
No concrete regulatory or government responses have emerged to OpenAI's abrupt dissolution of its Superalignment team last week, following the resignations of co-leads Ilya Sutskever and Jan Leike, despite Leike's public warning that "safety culture and processes have taken a backseat to shiny products."[1][2] OpenAI quickly formed a new safety committee chaired by CEO Sam Altman—comprising Bret Taylor, Adam D'Angelo, and Nicole Seligman—while former researcher Daniel Kokotajlo has called the company's shift a "betrayal" of its 2022 plans for voluntary AGI risk commitments meant to inspire legislation.[3]
🔄 Updated: 2/11/2026, 10:30:31 PM
Public outcry has intensified over OpenAI's dissolution of its Superalignment team, with former co-lead Jan Leike tweeting that his team was "sailing against the wind … struggling for compute" and that "safety culture and processes have taken a backseat to shiny products."[1][2] Nearly half of the AGI safety researchers have now departed, prompting ex-employee Daniel Kokotajlo to describe it to *Fortune* as people "individually giving up" amid the company's commercial pivot, calling it "a betrayal of the plan that we had as of 2022."[3] Leike, who has since joined rival Anthropic, described leaving as one of the most challenging decisions he has faced.
🔄 Updated: 2/11/2026, 10:50:29 PM
**NEWS UPDATE: OpenAI Dissolves Safety-Focused Alignment Unit – Markets Shrug Off Safety Concerns**
OpenAI's disbandment of its Superalignment team, following the May 2024 resignations of co-leads Ilya Sutskever and Jan Leike—who criticized the firm's shift to "shiny products" over safety—triggered no immediate stock volatility for Microsoft (MSFT), OpenAI's key backer, with shares holding steady at $415.23 in after-hours trading amid broader tech sector gains.[1][2][3] Investors appear focused on OpenAI's product momentum, including the GPT-4o launch; MSFT stock rose 1.2% intraday.
🔄 Updated: 2/11/2026, 11:00:35 PM
**NEWS UPDATE: OpenAI Safety Developments Escalate Post-Superalignment Dissolution**
Nearly half of OpenAI's AGI safety researchers have departed in recent months, following the May 2024 disbanding of the Superalignment team—launched just one year prior—after leaders Ilya Sutskever and Jan Leike resigned, with Leike stating on X, "safety culture and processes have taken a backseat to shiny products."[1][2][3][4] OpenAI responded by forming a Safety and Security Committee led by CEO Sam Altman, alongside directors Bret Taylor, Adam D'Angelo, and Nicole Seligman, tasked with evaluating safeguards over 90 days and reporting its recommendations publicly.
🔄 Updated: 2/11/2026, 11:50:39 PM
**NEWS UPDATE: OpenAI's Dissolution of Superalignment Boosts Rival Anthropic in Safety Talent War**
OpenAI's disbanding of its Superalignment team—co-led by Ilya Sutskever and Jan Leike—has triggered a talent exodus, with Leike joining safety-focused rival Anthropic shortly after a departure in which he criticized OpenAI for prioritizing "shiny products" over safety culture.[1][3] Nearly half of OpenAI's AGI safety researchers have since left, shifting competitive dynamics as Anthropic bolsters its roster while OpenAI pivots to a new Safety and Security Committee under CEO Sam Altman.[3][4]
🔄 Updated: 2/12/2026, 12:00:39 AM
**Breaking: OpenAI Safety Developments Escalate Post-Superalignment Dissolution.** Nearly half of OpenAI's AGI safety researchers have departed in recent months, following the May 2024 disbanding of the Superalignment team—launched just one year prior—after former co-lead Jan Leike accused the company of prioritizing "shiny products" over safety and said he had disagreed with leadership on "core priorities" until reaching a "breaking point."[1][2][4] OpenAI responded by forming a new Safety and Security Committee led by CEO Sam Altman, alongside Bret Taylor, Adam D'Angelo, and Nicole Seligman, tasked with evaluating safeguards over 90 days while the company trains its "next frontier model."
🔄 Updated: 2/12/2026, 12:10:39 AM
**NEWS UPDATE: OpenAI Dissolves Safety-Focused Alignment Unit**
The dissolution of OpenAI's mission alignment unit—formed in September 2024 with **7 members** and led by Josh Achiam, who has since moved into a "chief futurist" role—has reignited global AI safety concerns, following the 2024 disbanding of its Superalignment team amid criticism from ex-lead Jan Leike that OpenAI prioritizes "**shiny products**" over safety culture.[1][2][3] Internationally, Leike's move to rival Anthropic has amplified calls for stricter oversight, with Lawfare warning the trend signals OpenAI "**no longer takes safety seriously**," potentially weakening industry standards as competition toward AGI intensifies.[3][4]