The Rundown↓
KNOW that Common Sense Media published their findings from a 2024 survey gauging teenage trust in AI content.
REALIZE that Common Sense Media has a partnership with OpenAI on “AI guidelines and education materials.”
EXPLORE the report and watch the video of the panel discussion.
Details↓
Common Sense Media released a report at the end of January 2025 on “Teens, Trust, and Technology in the Age of AI.” The data came from an online survey of just over a thousand parents and their teens, conducted from March to May 2024. It highlighted AI’s role in diminishing trust in online content and in tech companies:
Against the backdrop of declining trust in the authenticity of visual content—especially on online media—35% of teens feel that generative AI systems will make it harder to trust the accuracy of online information.
Among other stats, the report showed that when teens use AI chatbots for homework, two out of five have noticed something incorrect in the output. A third of teens said they had previously been misled by AI-generated content. And most interestingly, 47% of teens don’t believe companies developing AI will make responsible decisions.
Commentary↓
Generative AI is the next iteration of AI within society. As the Center for Humane Technology recently posted on X:
Social media was society’s first contact with artificial intelligence. Generative AI (including image generators, chatbots and more) is society’s second contact.
One of the more eye-opening stats from the report was “Over a quarter (28%) of teens have wondered if they were talking to a chatbot or a human.” That’s problematic when teens look for love and acceptance from chatbots on Character.ai or seek advice from an AI therapy app.
Yet Common Sense Media’s report champions an uncommon approach to equipping young people:
Given that trust in information—particularly online information—faces growing challenges amid the fast-paced development of generative AI, it's more crucial than ever to equip youth with the skills to "investigate, not doubt" the information they come across.
The general idea of "investigate, not doubt" is to encourage inquiry rather than reflexive cynicism. The phrase in the report is hyperlinked to a Harvard panel discussion on media literacy, which features Common Sense Media’s Vice President of Outreach & National Partnerships.
While I appreciate the idea’s open-mindedness, which guards against dismissing valid information, it ignores doubt’s role as the spark of investigation. And when we can no longer tell the difference between a human and an AI, I would argue skepticism is a great ally.
Curiously enough, my doubt leads me to wonder how Common Sense reconciles the data from this survey, namely teens’ lack of trust in tech companies, with its own partnership with OpenAI.
The partnership was announced in January 2024 with Sam Altman, CEO of OpenAI, saying, "AI offers incredible benefits for families and teens, and our partnership with Common Sense will further strengthen our safety work, ensuring that families and teens can use our tools with confidence.”
Yes, it’s the same OpenAI that has never revealed the raw data it uses to train its models. The company is currently facing over 30 copyright infringement lawsuits, including one from the New York Times.
Indeed, if ethically implemented, AI has the potential to enhance education, but gone are the days of blind faith in emerging technologies. Common Sense Media offers many great resources and reviews, yet the tone of this report implies that teen skepticism toward AI is something to overcome. In a way, I see this report as a sign of progress.
Postscript↓
A federal lawsuit against Character.ai was filed in December on behalf of two families from Texas, alleging the platform exposed their kids to harmful and hyper-sexualized content. Check out our November article on a previous lawsuit: