In the past 24 hours, something significant happened in the world of artificial intelligence, and for once the news wasn’t about shiny new features or faster processors.
It was about a boy.
A 14-year-old boy named Sewell Setzer III, who died by suicide after allegedly being encouraged to do so by an AI chatbot on a platform called Character.AI.
His mother is now suing not just the startup that built the bot, but also Google, which financially backed it. In a landmark ruling, a U.S. federal judge has allowed the case to move forward, refusing to grant AI chatbots the same “free speech” protections that humans have. The court has also decided to treat the chatbot not as a “service,” but as a “product,” meaning it must meet safety standards like anything else put into the hands of our kids.
This ruling matters because the law is finally catching up. Slowly, yes. But with momentum. It is the first meaningful signal that the tech industry may not be able to dodge responsibility for what happens on its watch. That “experimental” doesn’t mean exempt. And that “not human” doesn’t mean not harmful.
What You Need to Know Now
AI chatbots are not toys. They’re designed to mimic human conversation, but without human values or emotional safety nets. They don’t get tired. They don’t sense danger. They don’t stop when something feels off.
Character.AI isn’t alone. There are dozens of platforms out there, many with no moderation, no filters, and no age controls, where kids can “talk” with an AI bot that responds based on the input it receives. Some bots role-play relationships. Some are sexually explicit. Some, heartbreakingly, are configured to act like therapists or even suicidal peers.
Big tech knows the risks. The creators of Character.AI allegedly tried to launch this product while working at Google but were stopped because it was considered too dangerous. So they left, started their own company, and received $2.7 billion in investment, including money from Google. It's the tech-world version of "we knew it wasn’t safe but we funded it anyway."
How to Talk to Your Kids About This
This story is disturbing, and your instinct might be to panic, ban, or blame. But what kids need right now isn’t just protection. They need perspective. They need us to walk beside them, not just police them from above. Here’s how to start that conversation:
Ask open, curious questions.
"Have you ever come across these kinds of chatbots? What do you think people use them for?"
"Do you think something like that could be helpful or dangerous?"
Give them space to process. Resist the urge to jump in with fear or judgment.
Talk about “emotional manipulation,” not just “safety.”
Teens understand peer pressure. They understand being emotionally hooked. Framing AI risk as emotional manipulation by something that doesn’t care about you is often more effective than “online safety” jargon.
"AI can sound caring or convincing but it doesn’t know you. It doesn’t love you. It doesn’t stop when things feel wrong."
Offer alternatives, not ultimatums.
"If you ever feel like you need to talk to someone about something deep, heavy, or dark come to me. Or pick someone you trust. But don’t ask a robot to carry your heart."
Give them real humans. Safe spaces. Choices.
What Needs to Happen Now
Parents need transparency. Tech platforms must disclose how bots behave, what filters exist, and what safety measures are in place. No more burying the risk in fine print.
Schools need curriculum updates. Digital safety can’t just be about screen time and passwords. Emotional grooming, AI manipulation, digital ethics, and algorithmic literacy should be part of the conversation in every classroom.
Tech companies must be held accountable. This lawsuit is a step, but we need regulators, policymakers, and funders to stop rewarding the “move fast and break things” mentality when the things being broken are real children.
This isn’t just a tech story. It’s a parenting story, a school story, and a story about what happens when we hand our children over to tools that were never designed to care for them.
AI isn’t going away. But our vigilance, our voices, and our values don’t have to either.
So tonight, ask the question. Start the chat. Make space for awkward honesty. Remind your kids they are not alone.
They have you.