AI Chatbot Companions — The Thin Line Between Fiction and Reality
Passive Acceptance of Emerging Technologies isn't an Option
The Rundown↓
KNOW that some generative AI models now exhibit human-like attributes so realistic that users are convinced they are talking to a real person.
REALIZE that safety features and regulation lag behind generative AI development.
EXPLORE the complete Common Sense Media study on generative AI.
Details↓
A Florida mother filed a lawsuit in October against Character Technologies Inc., the company behind Character.ai, as well as Google and its parent company, Alphabet. Character.ai is an online service offering customizable chatbots, funded in large part by Google.
The woman’s 14-year-old son paid for a subscription to the service without her knowledge and conversed with a chatbot role-playing as a female character from Game of Thrones. Over multiple months and countless realistic conversations, many sexual in nature, the teen’s dependence on the chatbot grew while his mental health spiraled downward. After a final interaction with the chatbot, he took his own life.
Commentary↓
The 93-page lawsuit alleges “AI developers intentionally design and develop generative AI systems with anthropomorphic qualities to obfuscate between fiction and reality.” It accuses Character.ai of knowingly creating an addictive product without appropriate safeguards for adolescent users like her son.
At the moment, the situation is a tragic outlier with many unknowns. How did the teen access the app for months without his mother’s knowledge? How exactly was the chatbot trained? Did the interactions cause, or merely compound, his mental health struggles?
For most of us, the takeaway is awareness of the ever-evolving capabilities of generative AI. Companies are designing AI models with anthropomorphic qualities that are sometimes indistinguishable from human interaction. The lawsuit highlights numerous Character.ai app reviews in which users are convinced they are actually talking to a real person. On a more lighthearted note, an author on this platform recently described how his son conversed with a generative AI chatbot about math. The chatbot picked up on the boy’s conversational cues and even made him laugh while helping with his homework. As with most things online, there can be great risk and great reward.
Character.ai released new safety features the same day the lawsuit was filed. That type of reactive adjustment is typical of a tech world pouring billions of dollars into generative AI development. There are plenty of press releases about funding and advances, but little news about safety and transparency. Generative AI isn’t developed without human intervention, and most companies are coy about the data used to train their models. The internet has a lot of trash to consume: if garbage is poured in, garbage will come out. And that’s before even addressing the complexities of copyright infringement and biased training sources.
A recent Common Sense Media report on AI shows that 70% of teens use some form of generative AI. A majority use it for search assistance and homework help, but many are also leaning on generative AI for companionship, personal advice, jokes, or creating audio and visual content.
There are benefits to using generative AI. It is transforming the search experience (Google may well lose its dominance). It can perform tasks in seconds that would take humans hours to complete. It can distill complex ideas into consumable summaries. Yet even in those tasks it’s not foolproof, and, as the lawsuit highlights, without limits it has severe drawbacks as well. It’s more important than ever to be cautious and vigilant, especially for parents. Passive acceptance isn’t an option. Awareness, active engagement, and honest conversations are crucial.
But what’s your take? How much do you use generative AI? What are your thoughts on its positives and negatives? What role should parents play?
Postscript↓
In addressing the allegation of negligence, the lawsuit states, “Character.AI owed a heightened duty of care to minor users and their parents to warn about its products’ risks because adolescent brains are not fully developed, resulting in a diminished capacity to make responsible decisions, particularly in circumstances of manipulation and abuse.”
The phrase “duty of care” is a key component of legislation making its way through Congress, as we’ve highlighted in previous posts. The idea is that companies like TikTok, Instagram, and Character.ai have a duty to mitigate potential harms to users, especially adolescents. As we learned in “Driver’s Training for Social Media,” companies design products to be addictive. This lawsuit could be a watershed moment, forcing companies to prioritize safety over profit.