(Bloomberg) -- Young people often spend long hours on social media despite mounting evidence that it’s not good for them. Health experts list harms as varied as lost sleep, eating disorders and suicide.
In one of the boldest steps yet to address the problem, Australian lawmakers passed a bill on Nov. 29 to bar children under 16 from setting up accounts on social media sites including Facebook, Instagram, Snapchat and TikTok.
In France, Norway and the UK, similar bans have been proposed or discussed. In the US, a growing stack of lawsuits accuses the social media giants of knowingly hooking kids before they reach their teens by borrowing behavioral techniques from the gambling and cigarette industries. If enough of those suits succeed, they could force the companies to change how they engage with younger audiences.
How will Australia’s social media ban work?
The legislation passed by Australia’s parliament will take effect in about a year, barring children under the age of 16 from setting up accounts on popular social media sites. Google’s YouTube got an exemption because of its widespread use in schools. Online messaging and gaming services such as WhatsApp and Discord are also excluded.
The owners of the banned platforms will be responsible for enforcing the age limit, with penalties of as much as A$50 million ($33 million) for breaches. However, it’s unclear how the platforms will verify ages: The Australian government has banned the uploading of official documents such as passports, citing privacy concerns. Kids who find a way past verification controls won’t be fined, nor will their parents.
How did social media companies react to the Australian decision?
A spokesperson for Meta Platforms Inc., which owns Facebook, Instagram and Threads, said the company is “concerned about the process which rushed the legislation through while failing to properly consider the evidence, what industry already does to ensure age-appropriate experiences, and the voices of young people.”
TikTok, owned by China’s ByteDance Ltd., said the legislation was “rushed” and “unworkable,” and riddled with “unanswered questions and unresolved concerns.” Snapchat owner Snap Inc. said previous international attempts at broad and mandatory age verification have failed.
X, known as Twitter before it was bought by Elon Musk, said it had “serious concerns as to the lawfulness of the bill,” suggesting a possible court challenge.
Are they likely to respect a ban?
The companies included in the ban will be required to comply with the law once it goes into effect, or accept a fine. Meta said in a statement that it will respect the ban, and Snapchat pledged to cooperate with government regulators, NPR reported.
But it’s unclear how the platforms will verify ages if they’re not allowed to rely on government-issued documents. Meta, for its part, told Bloomberg before the Australian social media ban passed that it plans to use artificial intelligence to catch teens lying about their age.
What are other countries doing?
France enacted a law in 2023 requiring parental authorization for children under 15 to open a social media account. French Education Minister Anne Genetet has suggested the EU follow Australia’s example and enforce a minimum age for using social media, Politico reported. Norway wants to impose a minimum age of 15 after data showed many children under 13, the current age limit, still use popular platforms, the Guardian and other publications reported in October. And a possible social media ban for children under 16 is “on the table” in the UK, Technology Secretary Peter Kyle told the BBC in November.
In the US, state lawmakers have enacted a flurry of laws requiring platforms to consider children’s privacy and safety when designing services children are likely to use, along with measures governing the terms on which minors can access social media. In several states, courts have struck down such laws as unconstitutional.
What are the concerns with the Australian approach?
One limitation of simply banning young users is that it does nothing to curb the harmful content itself, Lisa Given, a professor of information sciences at RMIT University in Melbourne, told Bloomberg.
“This legislation is really ill-conceived,” she said of the Australian ban. “It’s a simple proposed fix for something that’s actually really complicated. And where did 16 come from? It seems like it was pulled out of the air.”
Unicef, the United Nations agency for children, said Australia’s ban would push young people into darker, unregulated places online. “Instead of banning children, we should hold social media companies accountable for providing age-appropriate, secure, and supportive online environments,” Katie Maskiell, head of child rights policy and advocacy at Unicef Australia, said in a submission to the Australian parliament.
What’s happening in the US lawsuits?
Children, adolescents and young adults — sometimes via their parents, siblings or other family members — have filed hundreds of personal injury lawsuits against Meta, ByteDance, Alphabet Inc. (Google’s parent and owner of YouTube) and Snap over claims of psychological distress, physical impairment and death. Separately, public school districts have brought hundreds of cases seeking to have the platforms declared a public nuisance for disrupting learning. And dozens of state attorneys general have targeted Meta and ByteDance with suits accusing the companies of using harmful features to keep children on the platform longer to maximize profits. The companies have denied wrongdoing and Meta has appealed some early rulings that have gone against it — which will likely delay trials that a federal judge in Oakland, California, was aiming to schedule for late 2025.
What’s the legal foundation for the lawsuits?
The lawsuits accuse the platform owners of designing their services to hook children, an audience that academic and medical studies show is particularly vulnerable to addiction while their bodies and minds are still developing. The personal injury cases revolve around claims of product liability, similar to those that drove decades of litigation over cigarettes, asbestos, faulty medical devices and harmful prescription drugs, with mixed success.
The school districts’ public nuisance suits rest on a theory similar to the one behind lawsuits over the opioid addiction crisis, which have so far resulted in drug makers, distributors and retailers agreeing to pay almost $50 billion in settlements, and behind litigation blaming e-cigarette maker Juul Labs Inc. for a youth vaping epidemic across the US.
In general, the lawsuits allege that the social media giants, borrowing behavioral and neurobiological techniques from the gambling and cigarette industries, design endless, algorithm-generated feeds to induce young users into a so-called flow state. In that state, users react to incessant notifications that manipulate dopamine levels, encourage repetitive account checking and reward round-the-clock use. Addictive use of social media results in an array of psychological disorders, and in extreme cases self-harm and suicide, according to the lawsuits. And addictive use delivers the most valuable prize: troves of data about young users’ preferences, habits and behaviors that are sold to advertisers.
What do the companies say about the lawsuits?
They say they offer ample resources to keep children safe online and argue that the lawsuits improperly seek to regulate content. In the past, the first line of defense for social media companies has been Section 230 of the Communications Decency Act, the 1996 federal statute that shields companies from liability for comments, ads, pictures and videos posted on their platforms. The companies have persuaded judges handling the addiction cases to dismiss some claims based on Section 230. But negligence and public nuisance claims have been allowed to move forward.
--With assistance from Kurt Wagner.
©2024 Bloomberg L.P.