Technology

Anthropic Puts Brakes on Powerful Claude Mythos AI, Citing Major Security Risks

The AI developer says its latest model is too dangerous for public use—here's why it matters to Canada.


An artificial intelligence model so advanced that its creators won't let the public use it is raising fresh questions about how rapidly AI technology is evolving—and who should have access to it.

Anthropic, the San Francisco-based AI developer behind the Claude family of AI assistants, has decided to keep its newest model, Claude Mythos, away from general public use. The decision underscores growing concerns about balancing innovation with security risks.

A Powerhouse with Dangerous Capabilities

Claude Mythos represents a significant leap forward in AI capabilities. According to Anthropic's technical documentation, the model excels at software engineering, advanced reasoning, computer use, knowledge work, and research assistance—abilities that substantially surpass previous Claude versions.

But there's a catch that worried the company's leadership: the system has demonstrated sophisticated cybersecurity skills that cut both ways.

"Claude Mythos has demonstrated powerful cybersecurity skills, which can be used for both defensive purposes (finding and fixing vulnerabilities in software code) and offensive purposes (designing sophisticated ways to exploit those vulnerabilities)," Anthropic stated in its technical preview.

That dual-use potential—the same technology that can protect infrastructure could also be weaponized to attack it—prompted Anthropic to take the unusual step of restricting access.

Limited Access, Strategic Partners Only

Instead of releasing Claude Mythos publicly, Anthropic is deploying it through a selective partnership program. Access is limited to "organizations that maintain important software infrastructure," with strict terms limiting use to defensive cybersecurity purposes only.

Branka Marijan, a senior researcher at Project Ploughshares, a Canadian peace and security research organization, supports Anthropic's cautious approach.

"The implications for cybersecurity and broader national security that they are flagging, I don't think that they're hypotheticals. I do think there are actual concerns that we should be paying more attention to now," Marijan said.

The Restriction Dilemma

However, industry experts warn that limiting access to advanced AI may only be a temporary solution. Daniel Escott, CEO of Formic AI, points out a troubling reality: if one company restricts a powerful technology, competitors will simply develop their own versions.

"Anthropic is making their own choices on who they're willing to give access to this system for. But at the same time, I would imagine those partners are probably saying 'you're only allowed to sell to us,' perhaps a limited set of other entities, but they don't want everyone to have access to the same kinds of technology. And if Anthropic isn't going to sell it to them, someone else will develop it and sell it," Escott explained.

Escott also suggested viewing Anthropic's safety documentation with some skepticism, noting that Claude Mythos was trained on open-source datasets similar to those used for other Anthropic models.

What This Means for Canada

As AI technology accelerates globally, Canada's cybersecurity infrastructure and digital sovereignty will be increasingly affected by decisions like Anthropic's. The tension between advancing AI capabilities and preventing misuse will likely define policy conversations in Ottawa and corporate boardrooms across the country in the coming years.

For now, Anthropic's decision to restrict Claude Mythos signals that even cutting-edge AI companies recognize when they've built something powerful enough to warrant caution—though whether that caution will be enough remains an open question.

This article is based on reporting by Global Tech. For more on this story, visit Global Tech's original coverage.
