Safe AI for Kids Doesn't Have to Be All or Nothing
March 18, 2026
I have been figuratively booed out of conversations after introducing myself as the co-founder of MyDD. I have been cursed at by strangers on Facebook for advertising MyDD. I have also heard podcasts where the guest allows his children unlimited access to all technology and literally never tells them no.
AI for kids doesn’t have to be all or nothing. It shouldn’t be all or nothing. Nothing should be all or nothing (wait…).

Why do I say this? As I spend more time talking to parents and educators, I keep running into vocal minorities at the tails of the distribution of opinions on kids' AI use. On one side, there are the parents who think that all technology, including (and maybe especially) AI, is bad. On the other side, there are parents who think that AI and technology are the future and that their children need to be immersed in them.
I think the reasonable answer lies in the middle. Let’s use the technology and AI responsibly, think critically about it, and update our opinions as more evidence emerges. Not all technology is bad and not all technology is good. The right answer is harder and comes when we dive into the nuance.
Most families don’t have enough bandwidth to come up with a personalized view on a brand new technology. They probably aren’t at either extreme; they’re just not engaging yet. Pew Research data from February suggests that about 42% of parents of teens have never talked to their children about AI chatbots.
Not sure how to start that conversation? Here’s a guide.
That is why we need to come together and establish reasonable rules that the majority of us can agree to, with options to opt out or add extra safeguards if that is what your family desires.
Here’s where I’d start. Three principles most families could agree on:
- AI should be age-appropriate. A chatbot shouldn't give the same answer to a 10-year-old, a 14-year-old, and a 40-year-old. We don't speak to children the way we speak to adults, and neither should chatbots. If you're wondering whether ChatGPT itself is safe, here are 5 questions to ask first.
- AI should be a guide, not a crutch. Just as we don’t answer every question for our children (sometimes we say, “what do you think it is?” or “how would you answer that?”), the chatbot should do the same. It needs to push them to continue to think critically, not think critically for them. That’s a skill we can’t afford to outsource.
- Parents need to know what kids are saying to the chatbot. I'm not talking about surveillance. Think of it like school. We expect to know what our kids are learning at school — not every day, moment-by-moment updates, but measured, proactive updates on their progress. And when things go wrong, you find out immediately. We should expect the same from the AI our children are using: a dashboard to see conversation topics, weekly summaries of conversations, and real-time alerts when necessary. That's exactly what MyDD.ai does. Try it free for 14 days.
We need to accept, as a society, that each family should do what is right for them. I can empathize with and understand why a family wouldn't want their child to use technology or AI — these are new things and there is a lot to be concerned about. It is easy to slip from safe to unsafe very quickly, and not every family is equipped to deal with that. On the other end, I understand how powerful these technologies are. A child who understands and is comfortable with the evolving landscape will have a head start on their contemporaries.
If you’re not sure where to start, start with visibility. Know what your children are asking. Come up with boundaries together.
That’s why I built MyDD.ai. So parents can see what their kids are actually saying to AI. Try it free for 14 days.