Tech Companies Face Child User Lawsuits

By John Lister

Tech giants including Facebook and Google must go to court to fight claims they cause physical and mental harm to children. The companies failed in a bid to have more than 140 cases dismissed.

The lawsuits involve Alphabet (parent company of Google and YouTube), Bytedance (TikTok), Meta (Facebook and Instagram) and Snap (Snapchat).

They are facing cases from 140 school districts, while Meta faces a separate case brought by 42 states and the District of Columbia.

It's a particularly complicated set of cases as it involves multiple elements of the online services. The claims come under the general umbrella of negligence that harmed children, both by designing unsafe products and by failing to warn of defects. Many of the claims involve children becoming addicted to using the sites and "suffering anxiety and depression."

First Amendment Bid Fails

The tech companies had sought to have the cases dismissed entirely, arguing that they were protected by the First Amendment. They also pointed to Section 230 of the Communications Decency Act, which gives online platforms immunity for what users post.

The judge said the First Amendment made little difference to the case as many of the changes the plaintiffs are asking for "would not require that defendants change how or what speech they disseminate."

However, she said Section 230 did offer protection in some aspects, meaning the relevant claims must be dropped from the lawsuits. These include claims that the companies failed to limit the time children can spend on a platform and that they recommended a child's account to an adult (for example, as a potential online friend).

Age Checks Inadequate

The cases can still proceed with the remaining claims, meaning courts will have to rule on whether the tech companies took certain actions and whether those actions were lawful. These include not letting parents know about or control how long children spent online, not adequately verifying user ages, and making it too difficult to delete an account.

At the time of writing, Alphabet had said the allegations were "simply not true" while Bytedance said it had "robust safety policies and parental controls." Meta had not publicly commented, while Snap declined to comment to The Verge.

What's Your Opinion?

Do you think the case will succeed? What legal limits should apply to how tech platforms offer and run accounts for children? Do you agree with the judge's interpretation of how free speech protection applies in this case?
