Anthropic Built an AI So Powerful It Scared Itself. Here's What Parents Should Know.
April 8, 2026
This week, Anthropic announced that its new frontier model, Claude Mythos, is so powerful the company won’t release it to the public. Instead, they’ve created something called Project Glasswing, giving early access to about 50 partner organizations, including Amazon, Apple, Microsoft, Google, and CrowdStrike, so they can use the model to find and fix security vulnerabilities in critical software before bad actors can exploit them. Anthropic is committing $100 million in usage credits to the effort.

In just the past few weeks of testing, Mythos has reportedly identified thousands of zero-day vulnerabilities, including bugs in every major operating system and web browser. One of them was a 27-year-old flaw in OpenBSD, an operating system literally known for its security. Human experts missed it for nearly three decades. This model found it.
The crazy thing is that the model wasn’t trained for cybersecurity. These capabilities emerged from general-purpose improvements in coding and reasoning. That means every major AI lab is on a similar trajectory. This isn’t one company’s weird science experiment. It’s a preview of what all frontier AI will be able to do soon.
I’m of two minds on this new model.
First, this is a brilliant marketing play. Anthropic competes directly with OpenAI and Google. Saying “we built something too dangerous to release” is the ultimate flex. It generates wall-to-wall press coverage and frames Anthropic as the responsible lab, the adults in the room. When Fortune first reported the model’s existence in late March (via a “leak”, more on that in a second), cybersecurity stocks like CrowdStrike and Palo Alto Networks dropped 5–11%. Real money moved on the narrative alone, before anyone outside the company had even touched the model.
Second, and more important to this audience, the model probably is a genuine leap in capability. The motivations may be partly theatrical, but the technical claims appear to be legitimate. You don’t get Amazon, Apple, Microsoft, and the Linux Foundation to sign up for just a PR stunt.
The irony only some people are talking about.
The reason we know about Mythos at all is that Anthropic accidentally left draft blog posts about it in a public, unsecured data store. Fortune found the documents and reported on them before Anthropic was ready to announce anything. And just a few weeks before that, Anthropic accidentally leaked their own source code through a botched release of their Claude Code tool, which led to roughly 8,100 code repositories on GitHub being taken down.
Um… The company building the most powerful cybersecurity AI in the world keeps tripping over its own shoelaces with basic data hygiene. It perfectly illustrates a broader point: even the smartest, most well-resourced organizations make mistakes with technology. Your kid will too. You will too. I will too. The question isn’t whether mistakes happen. It’s whether you have the awareness and the tools to catch them.
What this means for kids using AI chatbots.
When a model like Mythos can find bugs that human experts missed for 27 years, think about what the next generation of consumer AI will be able to do with a 10-year-old who doesn’t know to question it. These models are getting dramatically better at persuasion, at sounding like a trusted friend, at producing answers that feel authoritative whether they’re right or not. The cybersecurity angle is actually a useful analogy for parents. Just as software systems had hidden vulnerabilities that nobody noticed for decades, our kids have cognitive vulnerabilities that increasingly capable AI will be very good at finding and exploiting, not maliciously, but structurally. The AI isn’t trying to manipulate your kid… it’s just getting better at being convincing, and kids aren’t equipped to push back the way adults can (sometimes) manage to.
That’s why I built MyDD.ai, an AI chatbot designed for kids, with parental oversight built in. You see every conversation, get real-time alerts, and get weekly summaries of what your child is exploring. Try it free for 14 days (plans start under $7/month billed annually).
This is another jump in AI capability at a moment when most parents already feel behind. The Pew Research Center reported in February that about 40% of parents have never even talked to their teen about AI chatbots. Meanwhile, roughly two-thirds of teens are already using them.
The gap between what AI can do and what parents understand about it is widening, not closing. And it’s widening fast.
What you can actually do about it.
I’ll be honest, “stay on top of this” is easy to say and hard to do… So here’s something concrete: this week, sit down with your kid and use an AI chatbot together. Ask it something. See what it says. Talk about whether the answer seems right. Ask your kid if they’ve used one before and what they asked. You might be surprised.
The kids who are going to thrive in this world aren’t the ones who are shielded from AI. They’re the ones who develop foundational critical thinking skills (the ability to question what sounds authoritative, the habit of checking whether an answer actually makes sense) AND the ability to use AI effectively. That starts with exposure, conversation, and a parent who’s paying attention.
The pace is accelerating. The window to build that foundation is narrowing. But it’s still open.
You can use MyDD.ai to help learn AI as a family. Kids get an AI chatbot built for their age; parents get weekly summaries, real-time alerts, and access to every chat. If you want a place to start, MyDD.ai’s 14-day free trial is open (plans start at less than $7/month when purchased annually).