TL;DR
- Team Disbanded: OpenAI confirmed it disbanded its Mission Alignment team this week after just 16 months of operation.
- Leadership Change: Team leader Joshua Achiam will transition to a newly created chief futurist role with undefined responsibilities.
- Safety Exodus: The dissolution continues a pattern of safety-focused departures including the Superalignment team in May 2024.
- New Structure: OpenAI now employs a distributed safety model with specialists embedded within product teams rather than centralized oversight.
OpenAI confirmed it disbanded its Mission Alignment team this week, just 16 months after launching the group to ensure artificial general intelligence benefits all of humanity.
The San Francisco AI company transferred the group’s seven employees to other teams and projects within OpenAI, according to a report by Platformer.
Achiam’s Pivot
Joshua Achiam, who led the Mission Alignment team, will transition to a newly created role as chief futurist. The position’s exact responsibilities remain undefined. Achiam was one of the authors of OpenAI’s charter and has long been a key voice on safety issues inside the company.
He established himself as a prominent figure in OpenAI’s technical leadership, contributing to foundational research on reinforcement learning and AI alignment.
Creating a chief futurist role while dismantling the team Achiam led shifts safety oversight from operational authority to advisory influence.
This structural change may reduce the institutional weight of safety concerns in product decisions.
Born During Turmoil
Yet the Mission Alignment team’s origins were themselves entangled in organizational upheaval. The team’s creation on September 25, 2024, coincided with then-CTO Mira Murati’s surprise departure announcement. Chief Research Officer Bob McGrew and VP of Research Barret Zoph also left that same day, creating a sudden vacuum in technical leadership.
The timing suggested the team would serve as a stabilizing mechanism during the transition. Zoph later returned to OpenAI after several months away.
The clustering of the team’s creation with executive departures, followed 16 months later by its dissolution, indicates that safety governance structures at OpenAI remain reactive rather than institutionalized.
A Pattern of Safety Exits
OpenAI dissolved its Superalignment team in May 2024, about four months before creating Mission Alignment. Co-lead Jan Leike resigned with a pointed departure statement.
“Safety culture and processes have taken a backseat to shiny products,” Leike wrote at the time, articulating concerns that would echo through subsequent departures.
Co-lead Ilya Sutskever, OpenAI’s co-founder and former chief scientist, departed around the same time after years at the company’s technical helm.
Mounting Departures
Leike’s warning proved prescient as additional safety-focused personnel followed. Miles Brundage, who led the AGI Readiness team, departed in October 2024 with his own assessment.
“Neither OpenAI nor any other frontier lab is ready,” Brundage concluded in his parting statement. Daniel Kokotajlo, another former researcher, later testified to Congress about his reasons for leaving.
“I resigned from OpenAI after losing confidence that the company would behave responsibly in its attempt to build artificial general intelligence — ‘AI systems that are generally smarter than humans.’”
Daniel Kokotajlo, former AI researcher
The succession of safety-focused departures establishes a clear pattern. Researchers have been willing to sacrifice equity to voice safety concerns publicly.
The consistency of their messaging indicates structural tensions rather than isolated disputes. Multiple technical leaders concluded that commercial imperatives were systematically overriding safety protocols.
Commerce vs Caution
After leaving OpenAI, co-founder and former chief scientist Ilya Sutskever went on to found Safe Superintelligence, a new AI safety company. In a deposition taken in October 2025, Sutskever explained: “Ultimately, I had a big new vision. And it felt more suitable for a new company.”
Sutskever had previously authored detailed memos to OpenAI’s board alleging a pattern of lying by CEO Sam Altman. He recommended Altman’s termination in late 2023.
Steven Adler, another safety researcher, departed in mid-November 2024 and later said he was “pretty terrified by the pace of AI development,” characterizing the AGI race as a “risky gamble.”
The exodus from safety-focused roles paints a picture of internal tension between commercial velocity and technical caution. Mission Alignment’s brief 16-month existence suggests that institutionalizing safety governance within a rapidly scaling commercial organization creates structural challenges. And public commitments alone cannot resolve these tensions.
A Distributed Safety Model
OpenAI has adopted what it calls a distributed safety model, embedding safety considerations within product teams rather than maintaining a separate oversight group.
This approach aims to make safety thinking routine rather than exceptional. Critics question whether distributed responsibility dilutes accountability.
The company maintains pre-deployment testing protocols designed to catch potential harms before public release. The distributed model means each product team has embedded safety specialists who participate in design decisions from the earliest stages.
The transition from centralized safety teams to embedded specialists fundamentally alters the power dynamics of safety decisions.
Where Mission Alignment operated as a semi-independent authority, distributed safety specialists report to product leaders. Their performance metrics center on shipping velocity.
Industry Approaches Vary
This shift toward distributed responsibility contrasts sharply with approaches at competing labs. Unlike OpenAI, which dismantled its dedicated Mission Alignment team, Google DeepMind has integrated safety researchers directly into core development teams from project inception.
Anthropic maintains a dedicated Constitutional AI safety team with explicit authority to influence product decisions. Both companies position their models as built with safety constraints from the ground up.
Industry standards for AI safety governance remain fluid as companies experiment with different organizational models. No consensus has emerged on whether centralized safety teams or distributed responsibility produces stronger outcomes.
Regulatory pressure is mounting globally for more formal accountability structures, though specific requirements vary by jurisdiction.
External Scrutiny Intensifies
As internal structures evolve, OpenAI faces multiple investigations from regulatory bodies examining its data practices and safety protocols.
The company is defending legal challenges over AI training data, with several cases alleging copyright infringement. Congressional committees have requested testimony on AI safety measures.
At the same time, international AI governance frameworks are being developed and rolled out. The European Union’s AI Act serves as one template for such regulatory oversight.
OpenAI must now navigate an increasingly complex set of national and international requirements while maintaining its rapid development pace.
The convergence of the Mission Alignment team’s dissolution with intensifying external scrutiny creates a perception gap. As regulators demand greater safety accountability, OpenAI is dismantling the organizational structures specifically designed to provide that oversight.
Nonprofit Origins Meet Commercial Reality
These structural tensions trace back to OpenAI’s evolving corporate identity. OpenAI announced plans in May 2025 to transition from its original nonprofit structure to a public benefit corporation.
The shift sparked debate about whether it would undermine the company’s founding mission. The company’s valuation has climbed into the hundreds of billions of dollars as commercial partnerships with Microsoft and other technology giants have expanded.
“We want to deliver beneficial AGI. This includes contributing to the shape of safety and alignment.”
Sam Altman, CEO (via OpenAI)
Altman’s May 2025 statement on structural evolution emphasized continued commitment to safety. He positioned the transition as enabling greater impact rather than compromising principles. The tension between OpenAI’s stated mission to ensure AGI benefits all of humanity and its increasingly commercial operations remains a subject of intense scrutiny.
The nonprofit-to-PBC transition fundamentally changes the calculus governing safety decisions. Where the original nonprofit structure subordinated commercial considerations to mission objectives, a public benefit corporation must balance stakeholder returns with public benefit.
What’s at Stake
The implications extend far beyond organizational charts. OpenAI is widely regarded as a leader in AGI development, with capabilities advancing rapidly across language, vision, and reasoning domains. The dissolution of its Mission Alignment team comes as the company faces mounting regulatory scrutiny worldwide.
Congressional hearings on AI safety are scheduled for the coming months. OpenAI executives are expected to testify about governance structures. The European Union’s AI Act implementation will require demonstrable safety oversight mechanisms by late 2026, forcing frontier labs to formalize their approaches.
For former safety researchers who departed OpenAI, the dissolution validates warnings that safety infrastructure was being subordinated to commercial imperatives.
These researchers now face a consequential choice: remain silent and preserve equity packages worth millions, or continue speaking publicly about safety concerns at personal financial cost.
As OpenAI’s systems approach capabilities that may soon rival human performance across widening cognitive domains, the Congressional hearings in coming months will test whether the company’s distributed safety model provides adequate accountability.
The ultimate measure will be whether OpenAI’s deployed AI systems cause preventable harm that dedicated safety teams might have caught before release.

