On November 17, 2023, Sam Altman joined a video call with OpenAI's board and learned he was fired. The directors of the nonprofit that governed the most valuable AI company on Earth had voted to remove him — no press release, no transition plan, no warning. Within 72 hours, roughly 95 percent of OpenAI's 770 employees had signed a letter threatening to follow him out the door. Five days later, he was back as CEO. The board that fired him was, with a single exception, gone.[1]
It was the most dramatic corporate power struggle in Silicon Valley since Steve Jobs got bounced from Apple. But underneath the palace intrigue was a question nobody in the industry wanted to answer: what happens when the company building the most powerful technology in human history is accountable to no one?
The Charter
OpenAI was founded in December 2015 as a nonprofit research laboratory. The founding donors — Elon Musk, Sam Altman, Peter Thiel, Reid Hoffman, and others — pledged $1 billion to build artificial general intelligence “in a way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”[2] The charter, published in 2018, was explicit: if another organization got close to AGI first, OpenAI would assist it rather than compete. The mission was safety, not market share.
That idealism lasted about three years. By 2019, the economics of large-scale AI research had made the nonprofit model untenable. Training GPT-2 cost an estimated $50,000. GPT-3, released the following year, cost around $4.6 million. GPT-4, by most credible estimates, ran north of $100 million.[3] Nonprofits don't raise that kind of capital. So OpenAI created a “capped profit” subsidiary — a for-profit shell that could attract investors while the nonprofit board retained ultimate control. The cap was set at 100x returns for first-round investors, a number that sounded modest until you did the math: on the initial $1 billion alone, the cap permitted up to $100 billion in returns.
Microsoft Writes the Check
In 2019, Microsoft invested $1 billion in OpenAI's new for-profit arm. In January 2023, it came back with $10 billion more, in what Bloomberg called “the largest investment in AI history.”[4] The deal gave Microsoft exclusive cloud-computing rights, a 49 percent profit share, and deep integration of OpenAI's models into every Microsoft product from Office to Azure to Bing. In exchange, OpenAI got the computing power it needed — and a partner whose commercial interests would become inseparable from its own.
“The mission of OpenAI is to ensure that artificial general intelligence benefits all of humanity.”
— OpenAI Charter, 2018
By the time ChatGPT launched in November 2022, the tension between that charter and the commercial reality was already grotesque. OpenAI was racing to ship products, signing enterprise deals, and building a consumer subscription business — all while nominally governed by a nonprofit board whose job was to slow things down if safety demanded it. The board had one real power: it could fire the CEO. In November 2023, it tried.
The Coup and the Counter-Coup
The board's stated reason for removing Altman was that he had not been “consistently candid in his communications.” The real dynamics were murkier. Board member Helen Toner had co-authored an academic paper that appeared to praise a competitor's safety practices over OpenAI's — and Altman had tried to get her removed for it. Chief Scientist Ilya Sutskever, who initially sided with the board, reversed himself within days and signed the employee letter demanding Altman's return.[5]
Microsoft CEO Satya Nadella made the stakes clear in a television interview the same weekend: if Altman didn't come back, Microsoft was prepared to hire him and anyone who wanted to follow.[6] The nonprofit board, in theory the guardian of humanity's interests, had no leverage against a trillion-dollar company that had already written a $13 billion check. Altman returned. The board was reconstituted with members more aligned with the company's commercial direction — including Bret Taylor, the former Salesforce co-CEO, and Larry Summers, the former Treasury Secretary.
The Safety Exodus
The firing and rehiring of Altman accelerated a departure of safety-focused researchers that had been building for months. Ilya Sutskever, OpenAI's co-founder and chief scientist, left in May 2024 to start a new company focused on “safe superintelligence.”[7] Jan Leike, co-lead of OpenAI's Superalignment team — the group tasked with ensuring future AI systems remain under human control — resigned the same month. His parting statement on X was blunt: “Over the past years, safety culture and processes have taken a backseat to shiny products.”[8]
The Superalignment team had been promised 20 percent of OpenAI's compute resources. According to Leike, it never received them. He and Sutskever had been the internal voices arguing that the company needed to slow down, that alignment research should precede capability work. Within days of their departures, OpenAI dissolved the Superalignment team entirely, and the company's internal brake system effectively ceased to exist.
Musk v. Altman
In February 2024, Elon Musk filed a lawsuit against OpenAI and Sam Altman, alleging that the company had abandoned its founding nonprofit mission and become a “closed-source de facto subsidiary of the largest technology company in the world.”[9] The complaint cited internal emails in which Altman and other founders discussed keeping OpenAI's research open and independent from corporate influence. Musk withdrew the suit in June, then refiled an expanded version in August, adding claims of fraud and racketeering.
OpenAI responded by publishing its own batch of Musk's emails, which showed him proposing in 2018 that OpenAI merge with Tesla — with himself as CEO.[10] The sideshow of two billionaires litigating their egos obscured the substance of Musk's complaint, which was genuine: OpenAI had promised to be an open, nonprofit research lab, and it had become a closed, for-profit company valued at over $150 billion.
The For-Profit Conversion
In late 2024, OpenAI announced plans to convert fully to a for-profit public benefit corporation, eliminating the nonprofit's control over the company entirely.[11] The nonprofit would remain as a separate entity, retaining a minority stake, but would lose its veto power over the company's direction. The capped-profit structure — the original compromise that was supposed to prevent investor interests from overriding safety — would be scrapped. Investors could now expect uncapped returns.
The California Attorney General opened an inquiry into whether the conversion violated charitable trust law. Legal scholars questioned whether billions in donated computing resources and tax-advantaged research could simply be folded into a private company.[12] OpenAI argued that the new structure was necessary to compete with Google, Anthropic, and xAI — the same argument it had made in 2019 when it created the capped-profit subsidiary, and would presumably make again whenever the next structural concession became convenient.
The Question Nobody Wants to Answer
Here is the trajectory: a nonprofit research lab, founded to ensure AGI benefits all of humanity, has become a company that answers to Microsoft, sovereign wealth funds, and venture capitalists expecting generational returns. Its safety team has been gutted. Its co-founders have scattered. Its charter is a historical document, not an operating principle.
OpenAI is now building systems that it claims are on the path to artificial general intelligence — technology that, by its own admission, could pose existential risks. And the entity making the deployment decisions is no longer a nonprofit with a mandate to be cautious. It's a company with a fiduciary duty to be profitable.
The nonprofit didn't fail because its mission was wrong. It failed because it was standing between enormously powerful technology and enormously powerful money, and no charter on Earth is strong enough to hold that line. OpenAI is proof of concept — not for artificial intelligence, but for the idea that safety and profit cannot coexist in the same corporate structure when the stakes are high enough. The nonprofit ate the world. Then the world ate the nonprofit.
Sources
- Cade Metz and Mike Isaac, OpenAI's Board Fired Sam Altman. Then Came the Chaos., The New York Times (Nov. 2023). nytimes.com
- OpenAI, Introducing OpenAI (Dec. 2015). openai.com
- Ars Technica, GPT-4 Details Revealed: Training Cost Exceeds $100 Million (Jul. 2023). arstechnica.com
- Dina Bass, Microsoft Invests $10 Billion in ChatGPT Maker OpenAI, Bloomberg (Jan. 2023). bloomberg.com
- Cade Metz et al., Inside the Crisis at OpenAI, The New York Times (Nov. 2023). nytimes.com
- CNBC, Satya Nadella Says Microsoft Is Committed to OpenAI Partnership (Nov. 2023). cnbc.com
- Cade Metz, Ilya Sutskever, OpenAI's Co-Founder, Leaves to Start New A.I. Company, The New York Times (Jun. 2024). nytimes.com
- Jan Leike, Post on X (formerly Twitter) (May 2024). x.com
- Elon Musk v. Sam Altman et al., Complaint, Superior Court of California, San Francisco (Feb. 2024). courthousenews.com
- OpenAI, OpenAI and Elon Musk (Mar. 2024). openai.com
- Erin Woo and Katie Roof, OpenAI Plans to Convert to For-Profit Company, The Information (Sep. 2024). theinformation.com
- Brian Fung, California AG Reviews OpenAI's Transition From Nonprofit, CNN (Oct. 2024). cnn.com