Artificial intelligence firm OpenAI has announced that it will release its latest cybersecurity-focused model to a limited number of partners, following similar restrictions by rival Anthropic on its newly developed system.
The controlled rollout reflects growing concerns in the tech industry about an emerging AI-driven “arms race” between cybersecurity defenders and potential attackers capable of using advanced tools to identify and exploit software vulnerabilities.
In a blog post, OpenAI said, “Our goal is to make these tools as widely available as possible while preventing misuse.”
The company confirmed that its new model, GPT-5.4-Cyber, will be accessible only to “the highest tiers” of users and organisations under its Trusted Access for Cyber (TAC) programme. The scheme reportedly covers thousands of verified cybersecurity professionals and hundreds of defence teams responsible for protecting critical software systems, though no specific partners were named.
Meanwhile, Anthropic recently limited access to its Claude Mythos model, offering it to just 40 major technology organisations under an initiative known as Project Glasswing. Despite not being specifically designed for cybersecurity, the model has reportedly impressed experts by uncovering long-standing vulnerabilities in widely used software systems—some of which had gone undetected for years.
The developments have drawn attention from financial and policy circles, with reports indicating that major U.S. bank executives recently met officials, including U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell, to discuss potential risks posed by such systems to the financial sector.
According to OpenAI, its GPT-5.4-Cyber model is “trained to be cyber-permissive,” allowing defenders to test systems for vulnerabilities more effectively without excessive safeguards blocking legitimate security work.
For its part, Anthropic said the strict access limits for Claude Mythos are intended to give defenders a head start in identifying and fixing vulnerabilities before malicious actors can exploit them.
OpenAI added that instead of centrally deciding who can access such tools, it aims to expand access to “legitimate defenders” through more automated and objective verification systems designed to reduce misuse risks while supporting cybersecurity innovation.