U.S.-based artificial intelligence company Anthropic is in discussions with the European Commission regarding its various AI models, including cybersecurity-focused systems that are not yet available in the European Union.
The Commission confirmed the talks on Friday, noting that Anthropic has already committed to comply with the EU’s general-purpose artificial intelligence code of practice as regulatory frameworks continue to evolve across the bloc.
European Commission spokesperson Thomas Regnier said the engagement forms part of ongoing efforts to ensure AI developers adhere to risk assessment and mitigation obligations, even for services that may not yet be deployed in the European market.
“In this framework, there is an obligation to assess and mitigate risks that could come from a service that may or may not be offered in Europe,” Regnier told reporters in Brussels.
The discussions highlight growing coordination between major AI developers and European regulators as the EU advances its approach to overseeing general-purpose AI systems and high-risk applications, particularly in areas such as cybersecurity.
Anthropic, known for its Claude AI models, has positioned itself as one of the leading U.S. AI firms focused on safety-oriented development and regulatory cooperation.
The engagement with EU authorities comes amid increasing scrutiny of advanced AI systems, with regulators seeking to balance innovation with safeguards against potential misuse, particularly in sensitive domains like cyber defense and data security.
Further details on timelines for approval or deployment of Anthropic’s cybersecurity-related models in the EU have not yet been disclosed.