Thursday, April 16, 2026
av1tvnews@gmail.com
AITech

OpenAI, Anthropic Restrict Release of Cybersecurity AI Models

New AI systems capable of uncovering software vulnerabilities are being limited to trusted partners amid growing concerns over cyber risks.

Telling African Stories One Voice at a time!

Artificial intelligence firm OpenAI has announced that it will release its latest cybersecurity-focused model to a limited number of partners, following similar restrictions by rival Anthropic on its newly developed system.

The controlled rollout reflects growing concerns in the tech industry about an emerging AI-driven “arms race” between cybersecurity defenders and potential attackers capable of using advanced tools to identify and exploit software vulnerabilities.

In a blog post, OpenAI said, “Our goal is to make these tools as widely available as possible while preventing misuse.”

The company confirmed that its new model, GPT-5.4-Cyber, will only be accessible to “the highest tiers” of users and organisations under its Trusted Access for Cyber (TAC) programme. The scheme reportedly includes thousands of verified cybersecurity professionals and hundreds of defence teams responsible for protecting critical software systems, though no specific partners were named.

Meanwhile, Anthropic recently limited access to its Claude Mythos model, offering it to just 40 major technology organisations under an initiative known as Project Glasswing. Despite not being specifically designed for cybersecurity, the model has reportedly impressed experts by uncovering long-standing vulnerabilities in widely used software systems—some of which had gone undetected for years.

The developments have drawn attention from financial and policy circles, with reports indicating that major U.S. bank executives recently met officials, including U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell, to discuss potential risks posed by such systems to the financial sector.

According to OpenAI, its GPT-5.4-Cyber model is “trained to be cyber-permissive,” allowing defenders to test systems for vulnerabilities more effectively without excessive safeguards blocking legitimate security work.

Anthropic, for its part, said the strict access limits for Claude Mythos are intended to give defenders a head start in identifying and fixing vulnerabilities before malicious actors can exploit them.

OpenAI added that instead of centrally deciding who can access such tools, it aims to expand access to “legitimate defenders” through more automated and objective verification systems designed to reduce misuse risks while supporting cybersecurity innovation.

Victoria Emeto
A bright and self-driven graduate trainee at AV1 News, she brings fresh energy and curiosity to her role. With a strong academic background in Mass Communication, she has a solid foundation in storytelling, audience engagement, and media ethics. Her passion lies in the evolving media landscape, particularly how emerging technologies are reshaping content creation and distribution. She is already carving a niche for herself as a skilled journalist, honing her reporting, writing, and research abilities through hands-on experience. She actively explores the intersection of digital innovation and traditional journalism.