
Artificial intelligence company Anthropic said this month that it will share its latest AI technology with only a small number of partners because of cybersecurity concerns.
On Thursday, Anthropic’s main rival, OpenAI, took a different approach. The company unveiled a new flagship AI model, GPT-5.5, and began sharing the technology with the hundreds of millions of people who use ChatGPT, its online chatbot.
The companies’ contrasting strategies make it clear that Anthropic and OpenAI disagree on how they should handle a technology that is increasingly useful to people trying to defend computer networks, as well as those trying to break into those networks.
But OpenAI is not throwing caution to the wind. The company said it is not yet releasing the technology as an application programming interface, or API, that would allow companies and individuals to fold the technology into their own software applications and other tools. This will give OpenAI more time to study security issues in the new system.
In a blog post, OpenAI described the new model as a significant upgrade over the systems that previously powered ChatGPT, adding that the new technology was better at writing computer code and handling other office tasks.
Code generation is becoming an increasingly important skill for AI systems, including technologies from giants like Google and smaller companies like OpenAI and Anthropic.
AI code generation can accelerate software development. It also enables systems like GPT-5.5 to act as AI agents – personal digital assistants that can use other software applications on behalf of office workers, including spreadsheets, online calendars and email services.
As artificial intelligence systems have gotten better at writing computer code, they’ve also gotten better at identifying security vulnerabilities in software—a skill that is fundamentally changing cybersecurity.
This month, Anthropic limited the release of its latest Claude Mythos technology to around 40 companies and organizations that maintain critical infrastructure, including Apple, Amazon, Microsoft and Google. Anthropic said this approach will allow these organizations to fix security holes before malicious hackers can exploit them.
Some cybersecurity experts have questioned that approach, saying that by withholding the technology, Anthropic prevents companies, government agencies and other organizations from understanding what it can do and immediately using it to protect their computer networks.
These critics argue that if the technology is not widely distributed from the start, it will ultimately pose a greater security risk, because fewer organizations will be able to defend themselves with the most powerful systems.
About a week after Anthropic revealed Claude Mythos, OpenAI also said it would share a new AI system only with a group of trusted partners. But OpenAI shared the technology, GPT-5.4-Cyber, with a much larger group than Anthropic did, one that included independent cybersecurity professionals and other experts.
OpenAI said it will distribute the technology to hundreds of organizations before expanding the release to thousands more partners in the coming weeks. It also said it would work to verify the identity of users to prevent abuse.
Now OpenAI has publicly released the more powerful GPT-5.5. However, it added guardrails to GPT-5.5 aimed at preventing people from using the technology for cybersecurity tasks. With GPT-5.4-Cyber, it dropped those guardrails so that trusted cybersecurity professionals could work with the entire system.
However, the latest OpenAI technology is not as powerful as Anthropic’s Claude Mythos, according to benchmarks conducted by Vals AI, a company that tracks the performance of the latest AI technologies.
(The New York Times sued OpenAI and its partner, Microsoft, accusing them of copyright infringement involving news content related to AI systems. OpenAI and Microsoft have denied the claims.)