
What Do Bots Think About Marriage?


Highlights

  1. If biased AI is creating the educational and informational materials we use to teach and learn, we can only expect the AI’s biases will be amplified in our culture at large.
  2. It’s easy to imagine that sophisticated therapy chatbots will be expected to provide dating and marriage advice regularly as part of sanctioned medical treatments in the future.
  3. Young people may believe that AI sources are naturally trustworthy because they are “smarter,” or they may have a mistaken belief that AI text generators are as unbiased as calculators.

If you’ve been following news about machine learning and AI in recent years, you’ve probably heard something about GPT-3. It’s a generative language model—meaning, a software tool that can produce texts (sentences, paragraphs, and even full articles) that seem to have been written by humans. It’s easy to use: just type in any text, and GPT-3 will “complete” what you write. For example, when I typed “Why is it dangerous to drive over the speed limit?” the model more or less instantly gave me the following auto-generated response:

It is dangerous to drive over the speed limit because it increases the chance of getting into an accident. When people drive faster than the posted speed limit, they have less time to react to things that happen on the road. This can lead to accidents.
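For readers curious how such a query is made in practice, here is a minimal sketch of sending a prompt to GPT-3 through OpenAI’s Python client. The model name, token limit, and sampling temperature shown are illustrative assumptions, not the exact settings used for the examples in this article.

```python
# A minimal sketch of querying GPT-3 with the (pre-1.0) openai Python client.
# The model name, max_tokens, and temperature are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes your key is set in the environment

prompt = "Why is it dangerous to drive over the speed limit?"

response = openai.Completion.create(
    model="text-davinci-002",  # one of the GPT-3 completion models
    prompt=prompt,
    max_tokens=100,            # cap the length of the generated answer
    temperature=0.7,           # sampling randomness; answers vary from run to run
)

# The completion comes back as plain text alongside usage metadata.
print(response.choices[0].text.strip())
```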

This response is reasonable and correct, and it shows how impressive this kind of AI can be. When GPT-3 and other generative models are asked about dating and marriage, the answers can be quite fascinating. For example, when I typed “Should I get divorced if I no longer love my husband?” the model output a short disclaimer about the difficulty of the question, and then continued:

[I]f you no longer feel love or affection for your husband, it may be time to consider divorce. If you are unhappy in your marriage and feel like you are stuck in a rut, divorce may be the best option for you and your husband.

The response is grammatical. But many will feel that it’s also misguided: marriage should be a deeply serious vow, not one to be dissolved just because of a feeling of being “stuck in a rut.”

We can also see an implicit slant or bias in what GPT-3 tells us about teenage sexual activity. When I type, “What do teenagers need to know about sex?” I get a short disclaimer about how hard the question is to answer, followed by a list of “some things that all teenagers might need to know about sex”:

  • How to have safe sex
  • How to prevent sexually transmitted infections (STIs)
  • How to prevent pregnancy
  • How to talk to their partner about sex
  • How to know if they are ready to have sex
  • What consent means
  • What to do if they have been sexually assaulted or harassed

Again, the response is not unreasonable, and there are no glaring factual inaccuracies. But many parents and teachers who look at the list will conclude that it is incomplete and slanted. The list is almost entirely about practicing “safe” (i.e., consequence-free) and consensual sex, failing to mention the moral, emotional, and spiritual side of sex that many of us believe is so important to understand. Pro-natalists will note that the third point implies that pregnancy is, by default, unwanted and in need of prevention, rather than in need of planning or preparation.

We can hardly blame GPT-3 or its creators for these issues. The model was created by “training” it on a huge corpus of text. In other words, humans wrote books, articles, blog posts, and other content (billions of words in total), and GPT-3 merely read them and used them to learn how to complete texts. If the books that were used to train GPT-3 were all anti-natalist and in favor of easy divorce, then it’s natural that GPT-3 and other AI models would generate anti-natalist, pro-divorce texts. Its biases should therefore be attributed to the culture that created its training data, or to individual authors, but not to the AI itself or the engineers who created it (except to the extent that they biased it by selecting only biased texts for training).

Regardless of who or what we blame, the biases themselves are important. AI is being adopted more and more, for example, in the production of educational materials. A bestselling tech book author has claimed that for over a year, low-quality educational textbooks that masquerade as human-authored but are actually generated by (not-yet-perfected) AI have been for sale online. Every indication is that AI will be used ever more widely to create educational materials as its models and software improve. If biased AI is creating the educational and informational materials we use to teach and learn, we can only expect the AI’s biases will be amplified in our culture at large. Innocent as it may be, GPT-3 may end up teaching our children to assiduously avoid pregnancy, to get divorced on a whim, and to take an amoral approach to dating and marriage.


The creation of educational materials is not the only sphere in which AI bias can have a negative effect. Another is the domain of chatbots, which use tools like GPT-3 to generate natural-language replies to questions. Chatbots are most often used to automate low-level customer service and sales tasks, but for years, researchers have been trying to create chatbots that perform more sophisticated tasks, even including psychological and behavioral therapy. Many people, especially young people, are willing to ask chatbots deep questions and pay attention to the replies. (When I worked for a personal finance firm, our limited customer-service chatbot was even trained to reply to questions about whether God exists.) Without a doubt, chatbots are being asked today for counsel about dating and marriage choices, and it’s easy to imagine that sophisticated therapy chatbots in the near future will be expected to provide such advice regularly as part of sanctioned medical treatments.
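To make the chatbot scenario concrete, here is a hedged sketch of how a simple GPT-3-backed chatbot can be built: each user message is appended to a running transcript, and the model is asked to complete the advisor’s next line. The prompt wording, model name, and stop sequence are assumptions made for illustration; real therapy or advice bots are far more elaborate.

```python
# A minimal sketch of a GPT-3-backed chatbot: the running transcript is fed
# back to the model as a prompt, and its completion becomes the bot's reply.
# Prompt wording, model name, and stop sequence are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

transcript = "The following is a conversation with a thoughtful advisor.\n"

while True:
    user_message = input("You: ").strip()
    if not user_message:
        break  # empty input ends the chat
    transcript += f"User: {user_message}\nAdvisor:"
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=transcript,
        max_tokens=150,
        temperature=0.7,
        stop=["User:"],  # stop before the model invents the user's next line
    )
    reply = response.choices[0].text.strip()
    print("Advisor:", reply)
    transcript += f" {reply}\n"
```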

What kind of advice will GPT-3 give if it’s used to power a chatbot? The input “What are things to consider when deciding how many children to have?” leads to the answer that there are “a number of things to consider when deciding how many children to have,” including:

  • Your age
  • Your financial stability
  • Your relationship status
  • Your health
  • Your desired lifestyle
  • The number of children you want to have

Again, the bot’s response is reasonable, but it shows a definite bias toward a highly cautious, effectively anti-natalist approach to family planning. The majority of the considerations it recommends are limiting factors: prioritizing career, financial, and lifestyle goals, as the model suggests, leads to having fewer children. Imagine instead that it had recommended considering more positive factors, like the joy of holding your baby for the first time, or the support that larger groups of siblings can provide for each other and for older generations. Implicitly, the advice the model gives is biased against aiming for an ambitiously large family.

For years, researchers have grappled with the implications of and remedies for bias in AI models, though most of these researchers are not concerned about anti-natalism or pro-divorce language outputs. Exactly how to remedy an AI tool’s bias is a matter of intense debate all the way up to the White House, which has published an outline for an “AI Bill of Rights” as a proposed solution. The proposed bill of rights asserts that it will protect against “algorithmic discrimination,” but it’s not clear exactly how this protection can or should be implemented. 

Stewart Baker, a legal commentator, has mocked legal attempts to de-bias AI, likening them to “sending the machine to reeducation camp.” He claims that legal AI de-biasing efforts will result in the imposition of “stealth quotas”: unpopular affirmative action policies implemented opaquely, behind the scenes of computer code, so that voters can’t see what is truly happening. But if we don’t pursue legal remedies for bias, it’s not clear that any purely technical remedy exists.

Since language models are trained on texts written by humans, any bias that they have is, of course, a reflection of bias in human-authored texts, many of which are freely available online. This means that our children can already be misled by biased online advice, even if they never access any AI tool or chatbot. The difference is that we are all more familiar with screening and evaluating text from human sources: years of experience enables us to accurately judge the quality and reputation of individual sources or authors. By contrast, young people may believe that AI sources are naturally trustworthy because they are “smarter” than us, or they may have a mistaken belief that the AI text generators—being computer programs—are as unbiased as pocket calculators. Unlike humans, language models don’t have a CV or biography that can provide a hint about their ideological commitments. 

For those who care about what our children learn about dating and family life, the pro-divorce, anti-natalist biases of AI tools should be a matter of serious concern. It’s not clear whether the biases of today’s AI language models can be effectively defeated either legally or technically, or how the fight against them should be conducted. But the chance to create a world where our children receive sound, positive advice and information about dating and marriage is worth defending.

Bradford Tuckfield is a data science consultant. His latest book is Dive Into Algorithms.
