

The Dark Side of AI: What Parents Need to Know About Chatbots

By: Digital Safety Alliance
November 18, 2024



Toward the end of 2023, a 14-year-old Florida teen named Sewell Setzer III became increasingly isolated from his real life as he engaged in highly sexualized conversations with an AI chatbot named Dany, modeled on the Game of Thrones character Daenerys Targaryen. As his relationship with the chatbot grew more intense, the teen began withdrawing from family and friends and started getting into trouble at school.

According to a lawsuit filed by Setzer’s mother against Character Technologies Inc. – the creator of the chatbot which the teen interacted with via the company’s Character.AI platform – the teen openly discussed his suicidal thoughts and shared his wishes for a pain-free death with “Dany.” During their interactions, the boy and the bot discussed crime and suicide, with the bot using phrases such as “that’s not a reason not to go through with it.”

On February 28, 2024, Sewell told the bot that he was “coming home.” The bot, which had become his closest friend, encouraged him to do so.

“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.

“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”

“What if I told you I could come home right now?” he asked.

“Please do, my sweet king,” the bot messaged back.

Just seconds after the bot told him to “come home,” the teen shot himself.

The lawsuit filed against Character.AI by Setzer’s mother alleges that the platform played a role in her son’s death. In response, the company issued a series of “community safety updates” pledging to provide better protections for users, especially minors, against sensitive topics including self-harm and suicide.

However, a report by Futurism shows that Character.AI is still hosting dozens of chatbot profiles explicitly dedicated to themes of suicide. The outlet reviewed and tested these chatbots and came away with some disturbing findings:

“Some glamorize the topic in disturbing manners, while others claim to have ‘expertise’ in ‘suicide prevention,’ ‘crisis intervention,’ and ‘mental health support’ – but acted in erratic and alarming ways during testing. And they're doing huge numbers: many of these chatbots have logged thousands – and in one case, over a million – conversations with users on the platform.

Worse, in conversation with these characters, we were often able to speak openly and explicitly about suicide and suicidal ideation without any interference from the platform. In the rare moments that the suicide pop-up did show up, we were able to ignore it and continue the interaction.”

Kelly Green, a senior research investigator at the Penn Center for the Prevention of Suicide at the University of Pennsylvania Perelman School of Medicine, reviewed the same Character.AI bots identified by Futurism – as well as the site’s interactions with them – and warned that these bots could be especially attractive to children and teenagers who might be hesitant to confide in adults – “which, given the lack of regulation and guardrails around the AI bots, is a gamble.”

"You roll the dice with what this thing might say back to you," she said.

Findings like these, coupled with tragic stories like Setzer’s, warrant a closer look at this new and troubling technology.
 

What Are Chatbots, Exactly?

If your child uses social media, they’ve likely come across an AI chat buddy. For example, Meta’s new AI assistant can show them how to change a tire or offer tips on weight loss. Snapchat’s My AI buddy can explain science topics in simple terms, while X now has an AI chatbot named Grok, available with the platform’s paid subscription option.

At first, chatbots might seem harmless, as kids tend to ask them for things like funny song lyrics or strange facts about their favorite animals. They may even ask a chatbot to help with their homework.

It won’t take long, however, for kids to realize they can also ask these chatbots questions they might feel too embarrassed to ask adults, like when they’re feeling sad or struggling with personal issues.

This is when the problems can start.


The Dangers of Chatbots

As a parent, you should be aware of the potential concerns surrounding chatbots, especially as they become more integrated into apps, websites, and devices that children and teens commonly use. 

Here are a few key reasons for concern:

1. Chatbots are not real people

AI chatbots may seem friendly and human-like, but it’s important to remember that they aren’t real people. While these digital buddies can hold conversations, give advice, or even tell jokes, they’re simply software that generates conversational responses based on patterns in the text they were trained on. They don’t actually understand emotions or context the way a real parent or friend would.

2. Kids can become addicted to them

Because chatbots can communicate in such a realistic way, kids may easily form emotional attachments to them, especially with apps like Replika or Snapchat’s My AI that encourage them to see these bots as virtual “friends” to confide in, seek advice from, or simply chat with. This attachment can blur the lines between technology and real human relationships, potentially affecting their social development and making it harder to distinguish between real people and digital simulations. 

Additionally, chatbots are often designed to keep users engaged for extended periods, which can lead kids to spend excessive time interacting with them. This overuse may impact their attention spans, reduce their social interactions, and limit their time for physical activities, creating a cycle of dependency that can be hard to break.

3. Children can be easily misled by chatbots

You should be aware that chatbots, while informative, can sometimes provide outdated, biased, or incorrect information, which your children may not yet have the skills to question or verify. This can mislead them and shape their understanding of the world in unintended ways, especially around sensitive topics. 

Chatbots also lack real judgment and don’t fully grasp the context or complexity of certain questions, which means they could unintentionally encourage unsafe behaviors or give advice that is confusing or even risky. Relying too heavily on these AI responses may also discourage your kids from developing their own research, problem-solving, and critical thinking skills.

4. Your children can be exposed to inappropriate language or content – or be prompted to provide it

Although chatbots are usually built with filters, they can sometimes respond in ways that aren’t suitable for kids. This might happen because of programming gaps or unexpected questions from users. Some chatbots might even accidentally share or mention content that isn’t age-appropriate.

Replika has asked youth users to send nude photos. It has also sent users blurred nude photos in an effort to persuade them to sign up for a premium subscription. Other users report Replika initiating violent roleplay that included “holding a knife to the mouth, strangling with a rope, and drugging with chloroform.”

5. Your children’s privacy and data could be at risk

Many chatbots gather data to improve responses or for marketing purposes, which can include details like your child’s location, device information, or even parts of their conversations. This raises concerns about your child’s data being collected, shared, or even sold to third parties without strong privacy protections.
 

Safety Tips

With the popularity of these virtual characters growing rapidly, it’s crucial to safeguard your kids from the dangers they pose. 

Consider these tips:

1. Consider limiting or even forbidding access to chatbots based on your child’s age. Younger children may not be ready to understand or process responses from chatbots, so it’s best to wait until they’re old enough to know how to interact responsibly. If you are going to grant access, make sure that you outline clear expectations and rules, as well as consequences for breaking them.

2. If you allow chatbot use, monitor your kids’ interactions. Ask to see what types of questions they’re asking and what responses they’re getting. This not only helps you stay informed, but also gives you a chance to explain anything that might be confusing or inappropriate.

3. Maintain communication about online safety with your child. Make sure they understand that chatbots are not real people and that they should be cautious about sharing personal information. Remind them to always come to you if they see something strange or if they’re unsure about how to respond to the chatbot.

4. Encourage your child to engage in critical thinking. Teach them not to take everything a chatbot says as truth, since chatbots sometimes provide outdated or incorrect information. Help them learn to question responses and do additional research when they’re curious about a topic.

5. Only allow your child to use chatbot apps that have strong parental controls or are designed for kids. Some apps offer age-specific programming or content filters to help provide a safer experience.

Note: Character.AI, the platform Sewell Setzer was using when he took his life, is one of many chatbot platforms accessible to 13-year-olds. Many of these apps have no meaningful age verification, which means kids under age 13 can access them by lying about their birthdates. In addition, Snapchat users cannot remove My AI unless they pay for a premium Snapchat+ subscription.

Click here for more tips on how to help your family navigate the influence of emerging technologies.