
As Sam Altman faces a high-profile legal battle with Elon Musk that could have serious implications for OpenAI’s future, he’s also trying to steer the company back to its original purpose — building artificial intelligence that benefits everyone, not just a select few.
In a recent blog post, Altman laid out an ambitious vision. He described a future where artificial intelligence unlocks human potential on a scale that is hard to imagine today, giving people more agency and opportunity and allowing them to lead more meaningful lives. Ideas that once belonged to science fiction, he says, may soon become reality.
“We envision a world marked by widespread flourishing on a scale that is difficult to fully grasp today, a world where individual potential, agency, and fulfillment are greatly enhanced. Many of the ideas we have only explored in science fiction could become reality, and most people could lead more meaningful lives than is currently possible,” Altman wrote.
Today’s large language models (LLMs), including those behind ChatGPT and Grok, remain largely limited to narrower tasks or rely on separate models tailored to specific use cases. AGI, by contrast, is generally understood as artificial intelligence that can perform a wide range of cognitive tasks at or beyond the level of human capability. Although OpenAI has been promoting AGI since its 2018 charter, the precise definition of the term has become increasingly fluid over time.
OpenAI’s main principles for AGI
OpenAI has outlined the following five principles to guide society on its journey to AGI:
– Democratization: To resist the consolidation of AI power in the hands of a few companies, OpenAI said it will work to ensure that key decisions about AI are made through democratic processes and egalitarian principles, not just inside AI labs.
– Authorization: OpenAI said it will work to ensure users can reliably use its AI products and tools for increasingly valuable tasks. It also emphasized the need to build and deploy its products in ways that minimize catastrophic and localized harms, as well as “potentially corrosive societal effects,” even if that means erring on the side of caution and easing restrictions only after sufficient evidence has been gathered.
– Universal Prosperity: While OpenAI said it wants to put easy-to-use artificial intelligence systems with significant computing power in everyone’s hands, the company noted that governments need to “consider new economic models to ensure everyone can participate in value creation.” It also suggested that its belief in universal prosperity justifies its push to build AI infrastructure and invest heavily in computing despite relatively modest revenues.
– Resistance: OpenAI said it will work with other companies, governments and civil society to address new risks posed by AI, such as systems that could facilitate the creation of pathogens or wield advanced cybersecurity capabilities. “We expect that there will be times when we will need to work with governments, international agencies and other AGI efforts to ensure that we have adequately addressed serious compliance, security or societal issues before continuing our work,” the company said.
– Adaptability: OpenAI has pledged to be more transparent about when, how and why its operating principles change, pointing to GPT-2 as an example: the company now considers its initial reluctance to release the model’s weights misplaced, an episode that led to its iterative deployment strategy.
Is AGI Losing Its Meaning?
It is increasingly easy to discuss the controversies surrounding AGI, and increasingly hard to define what the term actually means. OpenAI’s interpretation of AGI, for example, is at the heart of the allegations Elon Musk leveled against the company in his lawsuit. He claims OpenAI and its leadership have strayed from the organization’s original non-profit mission, which he says he helped fund: ensuring AGI benefits humanity as a whole.
The closely watched trial is now underway, with opening arguments beginning Tuesday, April 28, in U.S. District Court in Oakland.
At the same time, OpenAI’s relationship with Microsoft, one of its earliest backers, appears to be evolving. Recent changes to their agreement removed a clause that previously gave the Windows maker exclusive access to OpenAI’s models. The updated agreement also drops the earlier AGI clause, which defined AGI as “a highly autonomous system that outperforms humans at most economically valuable work.”
Previously, OpenAI said it would set up an independent panel of experts to formally declare when AGI had been achieved, at which point Microsoft’s special access would end. The revised terms indicate that Microsoft will retain its stake in OpenAI’s business even if AGI is declared before 2030.
Speaking on the sidelines of a summit on the impact of artificial intelligence in New Delhi earlier this year, Altman suggested that the goalposts themselves are shifting. “AGI feels pretty close at this point. If you had asked most people six years ago, systems that could independently do research or write code would already have sounded highly intelligent and broadly capable,” he said. He added that ASI, or artificial superintelligence, may be just a few years away.
Taken together, these shifts, whether legal, commercial, or technological, underscore a broader reality: AGI is no longer a fixed milestone with a stable definition. Instead, it is increasingly shaped by context, incentives, and rapid advances, making the term more fluid and arguably more ambiguous than ever before.
