Highlights
- How can we equip families to approach the next stages of AI with confidence and clarity?
- From meal planning to tracking busy family schedules, AI is already in our kitchens, living rooms, and even bedrooms.
- Most of us didn’t (and possibly still don’t) realize that through social media, we’ve been playing with Artificial Intelligence (AI) the whole time.
It has been more than a decade since social media became commonplace. This once-novel way to connect with faraway friends quickly became a five-hour daily habit for teenagers. The word ‘algorithm’ has moved out of upper-level math classes and into our daily vocabulary, though many of us still struggle to define it. Parents, politicians, health professionals, and theologians are now grappling with how, when, and if children and teens should be allowed any social media access, as blame for the teen mental health crisis has been laid squarely at the feet of many social media platforms.
Most of us didn’t (and possibly still don’t) realize that through social media, we’ve been playing with Artificial Intelligence (AI) the whole time. To say that we are living in the midst of a digital revolution may prove to be the understatement of the century.
But before culture could catch up to the new terminology and determine the next best steps for children and teens in regard to social media, AI morphed. From ChatGPT to Tesla’s Optimus 2 robot to AI pornography, what will come next is anyone’s guess.
It must be said that not all AI is bad, immoral, or to be feared. But for those platforms that are dangerous to human flourishing, how will the average family recognize them as such? Despite social media’s ubiquity, it has taken close to 15 years to recognize its dangers. How can we equip families to approach the next stages of AI with confidence and clarity?
Recently, IFS reported that 1 in 4 young adults believe AI partners could replace real-life romance. In Switzerland, a Catholic chapel has been running an experiment using an “AI Jesus” who dispenses moral advice. It should be no surprise, then, that at the October 2024 “We, Robot” event, Tesla’s Optimus robot said the hardest thing about being a robot is “trying to learn how to be as human as you guys are.” The prevailing goal of these AI bots (should they hit the marketplace in the coming years) is not to assist humans, but rather to be regarded as equal to humans.
Though these outlandish uses of AI are becoming more commonplace (and therefore less outlandish to our ears), there remains a great deal of AI being used each day by families across the developed world, much of it without us even noticing. From meal planning to tracking busy family schedules, AI is already in our kitchens, living rooms, and even bedrooms. For example, after an exhausting workday, a father can outsource bedtime storytelling to AI while his children long for cuddles and connection. For the mother who has just heard, “I hate you!” for the first time from her teenage daughter, the blow can be softened by an AI chatbot offering advice and words of encouragement that in generations past would have come from a mentor or friend.
With AI reaching into almost every nook and cranny of our lives, it is difficult for parents to know how to make sense of it all. We haven’t even touched on deepfakes, plagiarism, or AI-generated art.
The type of AI we use today is known as Narrow AI. This type of AI can accomplish one, or a small number, of narrow tasks. Narrow AI has been used by scientists to diagnose disease or illness far faster than a human can process the same information. In the mundane, we use Narrow AI when we say, “Siri, what’s the weather like today?” Our phones search a predetermined set of databases and inform us that there is a 60% chance of afternoon showers. Whether or not we grab a raincoat on the way out the door is still up to us. When Netflix recommends a new show to watch, we decide whether or not to instead turn off the television and take the dog for a walk. Narrow AI runs on algorithms. The weather app will not start recommending television shows based upon your likes and dislikes, and Netflix won’t be used in a science lab to identify pre-cancerous cells. The scope of these AI platforms is narrow in that they are not capable of making moral or ethical judgment calls.
Parents should take time to educate children on how to make informed decisions, ask good questions, and maintain autonomy apart from Narrow AI. The human element of decision-making should not be divorced from Narrow AI. Meaning, children and teenagers need to first learn good decision-making skills, which stem from maturity, before they are given independent access to Narrow AI tools. Human development occurs over months and years, while algorithmic recommendations happen in fractions of a second, based upon the data the platforms hold, not the ethical standards or morals of the user. It is outside the scope of Narrow AI, such as social media platforms or ChatGPT, to respond to a parent’s best wishes or hopes for their child. (This is, in part, why Australia recently raised the minimum age for social media access and why bills like KOSA in the U.S. Congress are receiving broad bipartisan support.)
The second type of AI is known as Artificial General Intelligence, AGI, or General AI. For this type of AI, think of C-3PO from the Star Wars films. This type of AI is still theoretical. When considering the future implications of AGI, parents should take time to train their children on what it means to be human. Answering this basic question is no small feat, but it is vital for a future with AGI. In 2084: Artificial Intelligence and the Future of Humanity, John Lennox writes:
It is, after all, easy to make the assumption that AI will improve human beings—but that may not necessarily be the case… It is surely important that those with transcendent ethical convictions should have a seat at the ethics table when discussing the potential problems of AI.
In 2024, the questions around AI are firmly in the Narrow AI camp, though they are starting to feel more like General AI. With the rise of LLMs, or Large Language Models, such as OpenAI’s ChatGPT and Google’s Gemini, “AI girlfriends” have settled into our world with some raised eyebrows and little knowledge of the long-term outcomes (though the initial reporting is terrifying). A Jesus chatbot might cause many of us to bristle, but will our children see this Jesus as an extension of religious wisdom or as the Narrow AI that it is?
From romantic relationships to religion to parenting, AI has slowly crept into every area of our lives, and many of us have given it very little thought. In part, this is because many of us don’t really know what AI is. But we can no longer look away or roll our eyes at the outlandish possibilities. Families need to be equipped to address these topics head-on and from a place of confidence, which can’t be programmed into us by an algorithm. At least, not yet.
Emily Harrison is a writer, advocate, and speaker on digital media and family. She is a Fellow with the Colson Center for Christian Worldview, an Ambassador for the Phone Free Schools Movement and ScreenStrong, and a member of Fairplay’s Screen Time Action Network. She blogs weekly at DearChristianParent.Substack.com.