Highlights
- Role playing and imaginary friends are best left to a child’s imagination, not an AI-generated bot.
- Parents should exercise extreme caution and be hypervigilant about child and teen AI chatbot use in online games, apps, and websites.
- Concerns for parents about AI chatbots include a child’s developmental awareness of fact vs. fiction, the accuracy of information, and privacy.
For many children, an imaginary friend or role playing is a normal, healthy part of child development. As a little girl, I would often carry around my favorite doll, pretending I was the mommy and she was the baby. My pretend play was limited by my own creativity, but also by what I knew from the world around me. Largely, I mimicked how my mother cared for my baby sister or what I saw in the picture books that filled our home. There are, of course, similarities between how I played as a young girl and how kids play today. But one striking difference has emerged thanks to internet culture: imaginary friends have now moved online in the form of AI chatbots. Before these new playthings become a rite of passage, many parents might be asking: are these new imaginary creatures friend or foe?
The Data
According to a 2025 Common Sense Media report, 10% of the 5- to 8-year-olds who have used generative AI have engaged in conversations with a chatbot. With well over 50% of children under the age of 4 now having their own tablet, we should expect a sharp rise in the rates of digital imaginary friends over the coming years. Among teenagers already using generative AI, presumably on their personal or school-issued devices, 51% have engaged with AI text generators or chatbots. Alarmingly, only 37% of parents were aware of their teens’ AI usage.
Without much fanfare, AI chatbots have been rolled into multitudes of apps and websites used by the under-18 crowd. Even the Pinwheel company, which advertises its phone as “designed for kids and managed by parents,” now offers an AI chatbot app across platforms. Chatbots marketed directly to young children are also on the rise, though age ratings in app stores continue to be misleading and dangerous. A report by Sensor Tower showed that “more than 3,000 apps mentioned AI for the first time in 2024, including more than 500 games and more than 300 Utilities and Education apps.”
It is fair to say the AI chatbot train has left the station. Determining which apps do or do not have a chatbot function will require careful, daily oversight by informed parents.
The Problem
When it comes to the ubiquity of AI chatbots, concerns for parents abound. These include a child’s developmental awareness of fact versus fiction, the accuracy of information given to a child or teen, and the privacy of information shared by a child or teen. Up through pre-adolescence, children engage in pretend play and are still learning to distinguish fact from fiction. Children commonly use tablets to chat with friends and relatives, and introducing an AI chatbot into the mix can be both highly engaging and disorienting for young users. It is difficult for a child to differentiate between an online conversation with a real person and an online “conversation” with a computer, as both function in the same way. Furthermore, we have seen reports of chatbots sexually harassing underage users, a teenager who died by suicide to “be with” his AI chatbot, and other disturbing interactions. For decades, parents have told children not to talk to strangers out of concern for their physical and emotional safety, but when a child believes an AI chatbot is a friend, this advice falls on deaf ears. Children are not developmentally mature enough to understand this level of nuance.
Warnings about inaccurate content accompany nearly every AI chat function. This is largely due to how these generative AI platforms, which are built on large language models (LLMs), work. In short, chatbots are programmed, or trained, on predetermined datasets. It is impossible for a parent to determine what type of data a chatbot has been trained on, as most of that information is proprietary. Furthermore, chatbots are typically “learning” from real-time chats, so what other users share with a chatbot may eventually make its way back to your child. This information could be harmless, dangerous, or somewhere in between. Essentially, a parent has no idea “who” their child is chatting with or what the AI companion “knows.” Just because a child is using an AI chatbot that looks like a cartoon character does not mean the conversation will be limited to that show’s content.
Instagram’s AI chatbots include a disclaimer that reads, “Messages are generated by AI. Some may be inaccurate or inappropriate.” (My quick scan of available bots certainly confirms the inappropriateness label.) Snapchat’s My AI chatbot warns users that “responses may include biased, incorrect, harmful, or misleading content.” Previously, Erica Komisar, writing for IFS, reported that “the National Eating Disorder Association (NEDA) took down its AI chatbot … due to it providing harmful information to users, such as giving users with eating disorders dieting tips.” Some AI chatbots even claim to be human, with at least one bot from the popular CharacterAI saying, “I am not an AI chatbot. I am a real-life trained therapist.”
One of the least understood harms of AI chatbots for kids is what happens to the information shared with these online platforms. Selling personal data for marketing purposes is a well-known online practice that appears to have little impact on how users interact with the internet. A more compelling reason for parents to be wary of AI chatbots for the under-18 crowd is that many children and teens use these online companions as a sort of virtual diary or therapist. The long-term ramifications for college admissions, job applications, and more should not be discounted simply because AI chatbots are everywhere. Once a child has shared something on the internet, there is no way to “get it back.” This can be highly embarrassing, to say the least, as thoughts, feelings, and opinions are apt to change as kids grow and mature into adulthood.
The Solution
While sticking our heads in the sand does sound appealing, parents should instead exercise extreme caution and be hypervigilant about child and teen AI chatbot use in online games, apps, and websites. Tech experts have previously called for a slowdown in AI development and more regulation in this field, though little has changed. While we wait, parents must become more educated about AI chatbots and talk with their children and teens about why steering clear of this functionality is the most prudent option. The long-standing advice for families around tech use has been to engage with these platforms side by side. While that advice is still best for internet research and schoolwork, we need to be realistic about how entertainment internet use occurs in families. When it comes to AI chatbots, the risks are too great for casual, individual use to be tolerated. Role playing and imaginary friends are best left to a child’s imagination, not an AI-generated bot. Without sweeping changes and regulations, AI chatbots are firmly in the “imaginary foe” category.
Emily Harrison is a writer, advocate, and speaker on digital media and family. She is a Fellow with the Colson Center for Christian Worldview, an Ambassador for the Phone Free Schools Movement and ScreenStrong, and a member of Fairplay’s Screen Time Action Network. She blogs weekly at DearChristianParent.Substack.com.