Social Media May Soon Come with Warning Labels

Written by PTC | Published June 21, 2024

There were two important developments this week in the fight to protect kids online.

First, in an Op-Ed published Monday in the New York Times, U.S. Surgeon General Vivek Murthy called for warning labels on social media platforms, stating that social media is associated with significant mental health harms for adolescents.

We are all familiar with the Surgeon General’s warning that began appearing on tobacco products in 1965. Many people questioned then, and still question now, whether such a warning would have any impact on public health or stop teens from smoking; but consider how much public attitudes about smoking have changed over the decades. Did the warning have an immediate impact on cigarette sales? Probably not. But it unquestionably had a real and lasting impact on awareness of the significant health risks associated with smoking and tobacco use.

Will a warning on social media stop teens from using these platforms? Maybe not. But hopefully it will help raise awareness of the significant threat social media poses to youth mental health, and that awareness will encourage more parents to look closely at what their children are doing online.

The second significant development was Tuesday’s introduction of the “TAKE IT DOWN Act” by Senators Manchin (I-WV), Cruz (R-TX), and Klobuchar (D-MN), among others. This bipartisan effort would criminalize the publication of non-consensual intimate imagery (NCII), including AI-generated NCII, and require social media and similar websites to remove such content upon notification from a victim.

This legislation is especially timely in light of news reports about real students’ faces being placed on AI-generated nude bodies.

Children and teens are becoming the targets of deepfake pornography. Right now, tech and social media platforms aren’t prioritizing taking down this kind of content, leaving families nowhere to turn for justice. It is appalling that children are being subjected to this kind of abuse, which is magnified as it spreads on the internet.

It’s unfortunate that once again Congress will be forced to act in the absence of common sense and basic decency from social media companies, which should be more proactive in detecting and removing these images. These companies already use AI tools; they know what the dangers are and how easily those tools can be exploited by bad actors.

These are both welcome and, sadly, necessary developments in the face of the significant challenges and risks posed by social media, especially in the lives of vulnerable children.

I believe we can make a difference in the lives of America’s children and teens – but it’s clear that these companies simply cannot be trusted to take necessary preventative measures to keep kids safe online. That’s why we must act to protect them.

Because Our Children Are Watching.

Take Action. Stay Informed.