OpenAI’s Mission Shift: Safety Concerns Rise as For-Profit Structure Takes Hold
OpenAI, the company behind the groundbreaking ChatGPT, has significantly reframed its core objectives. Following its recent restructuring into a for-profit entity, the word “safety” is notably absent from its mission statement. This change, coupled with the inclusion of investors who stand to gain directly from the company’s profits, is raising eyebrows and prompting concerns that the pursuit of financial gain may now overshadow the critical imperative of AI safety.
The latest IRS disclosure form for OpenAI, filed in November 2025 and covering the 2024 financial year, reveals a revised mission statement: “OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.” This contrasts sharply with previous iterations, in which the word “safely” was a consistent and prominent element. The alteration comes at a pivotal moment: OpenAI’s nonprofit has ceded nearly three-quarters of its control to private investors and employees, and this marks the company’s last filing under tax-exempt status.
This removal of safety language has not gone unnoticed by watchdog groups and academic scholars. Alnoor Ebrahim, a professor at Tufts University’s Fletcher School and an expert in nonprofit accountability, was among the first to highlight this shift. He has voiced significant concerns, warning of a potentially precarious future for a company already grappling with escalating safety-related issues.
OpenAI and its CEO, Sam Altman, have faced considerable scrutiny and legal challenges, including lawsuits alleging negligence, assisted suicide, involuntary manslaughter, wrongful death, and various other product liability claims. For Ebrahim, the revised mission statement serves as clear evidence of a strategic decision to potentially deprioritise safety in favour of boosting the company’s financial performance.
“I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm,” Ebrahim remarked.
The Evolution of OpenAI: From Nonprofit Idealism to For-Profit Realities
OpenAI’s journey began with a unique model: a nonprofit entity overseeing a for-profit subsidiary. However, this structure evolved dramatically in late 2024 when the company announced a substantial new funding round of $6.6 billion from investors. This influx of capital came with a critical condition: it would convert to debt unless OpenAI transitioned into a more conventional for-profit technology company, managed by investors who could hold uncapped profit shares.
“In my view, these changes explicitly signal that OpenAI is making its profits a higher priority than the safety of its products,” Ebrahim elaborated.
Navigating the Conflict: Profits vs. Nonprofit Principles
The core difference between traditional charitable nonprofits and for-profit entities lies in the distribution of earnings. Board members of tax-exempt nonprofits are prohibited from receiving a share of the organisation’s profits. The rules become more complex when a nonprofit owns a for-profit arm, as is the case with OpenAI, but investors who profit from the for-profit side typically do not sit on the nonprofit’s board or influence board member elections, owing to the potential conflicts of interest.
In OpenAI’s recent restructuring, the OpenAI Foundation, the nonprofit arm, now holds only a 26% stake in the OpenAI Group, having relinquished 74% control. Following a $13.8 billion investment, Microsoft has emerged as a major shareholder with a 27% ownership stake, while OpenAI employees and other investors hold the remaining shares.
A History of Shifting Missions
Since its inception as a nonprofit scientific research lab in 2015, OpenAI has filed its Form 990 nine times. During this period, the company has modified its mission statement on six separate occasions. The most recent filing, in 2025, marks the first time all explicit references to safety have been removed. OpenAI has previously addressed these mission statement adjustments.
When announcing the restructuring, OpenAI stated: “We rephrased our mission to ‘ensure that artificial general intelligence benefits all of humanity’ and planned to achieve it ‘primarily by attempting to build safe AGI and share the benefits with the world.’ The words and approach changed to serve the same goal—benefiting humanity.”
Despite this, OpenAI continues to employ safety-related language in its online communications. For instance, their website states: “We view this mission as the most important challenge of our time. It requires simultaneously advancing AI’s capability, safety, and positive impact in the world.”
However, Ebrahim remains unconvinced, noting that the continued omission of safety from the core mission statements of both the foundation and the OpenAI group makes accountability challenging. “Given that neither the mission of the foundation nor of the OpenAI group explicitly alludes to safety, it will be hard to hold their boards accountable for it,” he explained.
The Evolving Language of OpenAI’s Mission Statements: A Timeline
A review of OpenAI’s past IRS filings reveals a consistent evolution in their stated mission:
2016 and 2017: “OpenAI’s goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI’s benefits are as widely and evenly distributed as possible. We’re trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way.”
2018 and 2019: “OpenAI’s goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI’s benefits are as widely and evenly distributed as possible.”
2020: “OpenAI’s goal is to advance digital intelligence in the way that is most likely to benefit humanity, unconstrained by a need to generate financial return. OpenAI believes that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI’s benefits are as widely and evenly distributed as possible.”
2021: “OpenAI’s mission is to build general-purpose artificial intelligence that benefits humanity, unconstrained by a need to generate financial return. OpenAI believes that artificial intelligence technology has the potential to have a profound, positive impact on the world, so the company’s goal is to develop and responsibly deploy safe AI technology, ensuring that its benefits are as widely and evenly distributed as possible.”
2022 and 2023: “OpenAI’s mission is to build general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return. OpenAI believes that artificial intelligence technology has the potential to have a profound, positive impact on the world, so our goal is to develop and responsibly deploy safe AI technology, ensuring that its benefits are as widely and evenly distributed as possible.”
2024: “OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.”