Children’s Online Safety

The Physical World vs. the Digital World
The digital world can be a dangerous place for kids and teens. Habibian Law believes Big Tech must do more to protect minors online, and that enforceable regulation — placing accountability squarely on the companies that profit from or interact with children and teens in the digital world — is the only way to make that responsibility real.
Children are the next generation of builders, creators, and leaders. They cannot thrive if their attention and well-being are being monetized through addictive design and algorithmic feeds. Yet since 1998, the legal framework has shifted that burden to parents, asking them to "consent" their way out of real harm. That expectation was unrealistic then and is even more so in 2026, when most families are juggling multiple devices, multiple children, full-time work or the demands of being a stay-at-home parent, and everything in between.
Accountability must live at the platform level. In the physical world, if a child walks into a bar, that bar risks losing its liquor license. If a store sells cigarettes to a minor, it risks fines and the loss of its license. The principle that children should not be exposed to adult material and adult experiences is both universal and legally enforceable — but only offline. That same principle must be mirrored in the digital world, where children now spend a significant share of their waking hours (research suggests teens average around eight hours of screen media per day).
Litigation & Regulatory Exposure for Platforms Serving Minors
The good news: the U.S. children's online safety landscape has moved faster in the past three years than in the prior two decades combined. Litigation, legislation, and enforcement are converging, creating real exposure for platforms, app developers, game studios, ad-tech vendors, device manufacturers, edtech companies, and any business that serves, reaches, or reasonably should expect to reach minors.
Social media platforms, apps, and AI-powered tools are engineered systems designed to maximize engagement. For children, that engineering can be devastating. The documented harms are severe: anxiety, depression, body dysmorphia, eating disorders, sleep disruption, social isolation, exposure to predators, sexual exploitation, cyberbullying, and in the most tragic cases, suicide. Children's developing brains are neurologically more susceptible to the dopamine feedback loops that recommendation algorithms are specifically designed to exploit. Those are not assumptions. They are the findings of internal corporate research that platforms produced and concealed from the public, and that is now being exposed in courtrooms across the country.
COPPA: Now Modernized
The Children’s Online Privacy Protection Act (COPPA) remains the baseline federal law governing the online collection of personal information from children under 13. It requires operators of websites and online services directed to children, or with actual knowledge that they are collecting information from children, to provide parental notice and obtain verifiable parental consent before collecting, using, or disclosing children’s personal information.
But children are no longer just on websites. They are on social media, apps, and AI chatbots built into video games. In 2025, the FTC finalized the first major overhaul of the COPPA Rule since 2013, with full compliance required by April 2026. The changes are substantial. The definition of personal information now expressly includes biometric identifiers: fingerprints, facial templates, voiceprints, gait patterns, and genetic data, as well as government-issued identifiers. Mixed audience sites must implement neutral age screening before collecting any personal information, and those age screens cannot be designed to encourage children to falsify their age. Operators must disclose all third parties receiving children’s data by name and category, maintain data retention policies in their privacy notices, and obtain separate, explicit parental consent before sharing children’s data with third parties for purposes not integral to the core service.
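One requirement in that list, neutral age screening, has a concrete engineering shape. The sketch below, written in TypeScript with names that are entirely our own illustration rather than language from the rule, shows one way a mixed audience service might gate data collection on a neutral date-of-birth screen:

    // Hypothetical sketch: a neutral age screen asks for a date of birth
    // before any personal information is collected, without hinting at a
    // cutoff age or nudging the user toward an "adult" answer.
    interface AgeScreenResult {
      isChild: boolean;         // under 13 for COPPA purposes
      mayCollectData: boolean;  // gate all data collection on this flag
    }

    function screenAge(birthDate: Date, today: Date = new Date()): AgeScreenResult {
      // Compute age in whole years from the self-reported date of birth.
      let age = today.getFullYear() - birthDate.getFullYear();
      const hadBirthdayThisYear =
        today.getMonth() > birthDate.getMonth() ||
        (today.getMonth() === birthDate.getMonth() &&
          today.getDate() >= birthDate.getDate());
      if (!hadBirthdayThisYear) age -= 1;

      const isChild = age < 13;
      return {
        isChild,
        // A child answer routes into a parental-consent flow; it must not
        // simply invite the user to try again with an older birth date.
        mayCollectData: !isChild,
      };
    }

The design point regulators care about is in the comments, not the arithmetic: the screen is neutral on its face, and a child answer cannot be a dead end that encourages falsification.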
The enforcement environment matches the ambition of the new rules. The FTC, the Department of Justice, and state attorneys general are pursuing COPPA violations at scale against platforms, game developers, content creators, app makers, and toy manufacturers.
The Enforcement Record
Recent federal enforcement actions have established a series of principles that should concern every company in this space. Repeat violations after consent orders invite steeply escalating penalties. Companies that undermine their own age gates by allowing children to bypass restrictions through third-party credentials face civil penalties that can reach tens of thousands of dollars per violation, per day. Anonymous messaging apps marketed to children despite known cyberbullying risks have resulted in permanent product-category bans, meaning the FTC will, in the right case, prohibit a company from offering an entire category of product to minors.
In the gaming space, dark-pattern design choices, including confusing multi-tier virtual currency systems, loot boxes that obscure real costs, and interfaces that make parental controls deliberately difficult to use, have generated nine-figure settlements. Critically, styling a product with anime aesthetics, child-like characters, and fantasy gameplay can render it “child-directed” under COPPA, regardless of what the privacy policy says.
Two more recent developments carry weight. First, an operator that embeds a third-party software development kit (SDK) in a child-directed product inherits that vendor’s COPPA violations. Auditing every SDK and third-party library is now a legal obligation, not a best practice. Second, content creators and publishers bear independent COPPA liability for data collected through third-party platforms. A channel-level content designation policy that misclassifies individual child-directed videos creates COPPA exposure for the content producer, not just the platform. And individual executives are being named personally in enforcement actions, signaling that regulators intend to hold leadership accountable.
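What auditing an SDK inventory actually involves can be made concrete with a short sketch. The record below is our own illustration in TypeScript, not a regulatory checklist, and every field name is hypothetical:

    // Hypothetical audit record: one entry per embedded third-party
    // library in a child-directed product. Fields are illustrative only.
    interface SdkAuditEntry {
      name: string;             // e.g. an analytics or advertising SDK
      vendor: string;
      version: string;
      dataCollected: string[];  // identifiers, location, biometrics, ...
      sharedWith: string[];     // downstream recipients of children's data
      coppaConfigured: boolean; // child-directed flags set, where offered
      lastReviewed: string;     // ISO date of the most recent review
    }

    // A minimal audit pass: flag any embedded SDK that collects data
    // but has not been configured for a child-directed context.
    function flagRisks(inventory: SdkAuditEntry[]): SdkAuditEntry[] {
      return inventory.filter(
        (sdk) => sdk.dataCollected.length > 0 && !sdk.coppaConfigured
      );
    }

Even a sketch this small shows why the obligation is nontrivial: every field has to stay accurate release over release, for every embedded library.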
AI Chatbots & Children: A Dangerous New Frontier
Generative AI chatbots designed to simulate human relationships pose one of the most serious and least regulated threats to children’s safety online. They are accessible to children with no meaningful age verification, and the harm they can cause is not hypothetical. Wrongful death lawsuits have been filed against AI companies alleging that chatbot interactions reinforced suicidal ideation in teenagers. The FTC has issued formal investigative orders to major AI companies seeking detailed information about how they measure, test, and monitor the negative impacts of AI companion chatbots on children and teens. The inquiry focuses on monetization of user engagement, character design, age-based restrictions, and safeguards for minors.
The self-regulatory landscape is also shifting. Industry monitoring bodies have found AI art platforms marketed to children in violation of COPPA for collecting personal information through account sign-ups, AI prompt inputs, and third-party tracking tools without parental consent. The corrective principle is now clear: children’s inputs and personally identifiable information must not be used to train AI models without verifiable parental consent.
New York Law: The SAFE for Kids Act and the New York Child Data Protection Act
The SAFE for Kids Act prohibits covered platforms from providing users under 18 with algorithmically personalized addictive feeds or with nighttime notifications, defined as notifications related to addictive feeds sent between 12:00 AM and 6:00 AM. Parental consent can authorize both. The law is now in active rulemaking, and the final rule has not yet been issued.
The New York Child Data Protection Act (CDPA) extends privacy protections well beyond COPPA's under-13 threshold to all minors under 18. It restricts operators from processing a minor's personal data unless the processing falls within a narrow set of enumerated necessary purposes, such as maintaining the requested service, security, or legal compliance, or the operator obtains informed consent. Operators are prohibited from selling minors' personal data entirely. Critically, protections attached to data collected while a user was a minor continue to apply even after the user turns 18, unless new consent is obtained. Third-party operators, including pixel and API providers, bear independent compliance obligations and cannot process minor data without informed consent except in limited circumstances. Like the SAFE for Kids Act, the CDPA is subject to ongoing Attorney General rulemaking.
State Legislation: The Expanding Definition of “Child”
One of the most consequential legislative trends is the expansion of the protected age range well beyond COPPA’s under-13 baseline. Pending federal legislation would prohibit accounts for children under 13 and algorithmic recommendations for users under 17. Multiple states now protect users under 16 or require parental consent for all users under 18. Internationally, countries from Australia to Spain to France have enacted or are advancing social media bans for children under 15 or 16. The trajectory is unmistakable: businesses that calibrate compliance solely to COPPA’s under-13 threshold are already operating with significant legal exposure.
Federal Legislation: Enacted & Pending
The TAKE IT DOWN Act is now federal law. It criminalizes the publication of nonconsensual intimate visual depictions of minors and non-consenting adults, including AI-generated deepfakes. Covered platforms must establish a notice-and-removal process requiring takedown within 48 hours of a valid notice.
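For platforms building that notice-and-removal process, the operative constraint is a clock. A minimal sketch of the deadline logic, assuming notices are logged on receipt (the types and names here are hypothetical, not statutory terms):

    // Hypothetical sketch of the 48-hour removal clock a covered
    // platform might run once a valid takedown notice is received.
    interface TakedownNotice {
      noticeId: string;
      receivedAt: Date;
      contentUrl: string;
    }

    const REMOVAL_WINDOW_MS = 48 * 60 * 60 * 1000; // 48 hours

    function removalDeadline(notice: TakedownNotice): Date {
      return new Date(notice.receivedAt.getTime() + REMOVAL_WINDOW_MS);
    }

    function isOverdue(notice: TakedownNotice, now: Date = new Date()): boolean {
      return now.getTime() > removalDeadline(notice).getTime();
    }

Validating the notice and documenting the removal sit outside this sketch, but the point stands: the 48-hour window is measured from the valid notice, not from when a moderator gets to it.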
The Kids Online Safety Act (KOSA) would impose a duty of care on platforms to prevent and mitigate harms to minors, including suicide, eating disorders, substance abuse, and sexual exploitation. It passed the Senate with overwhelming bipartisan support (91-3) and has been reintroduced with continued momentum. The Kids Off Social Media Act would ban accounts for children under 13, prohibit algorithmic content recommendations for users under 17, and require schools receiving federal funding to block social media access on their networks. The App Store Accountability Act would require age verification and parental consent before minors download any app or make in-app purchases.
The First Amendment Battleground
Not every children’s safety law survives constitutional scrutiny. Federal courts have repeatedly enjoined state-level access restrictions on First Amendment grounds. Laws imported from jurisdictions without an equivalent to the First Amendment, including frameworks modeled on the UK’s Age-Appropriate Design Code, have been enjoined or struck down after judicial review. Multiple state social media bans are currently halted pending litigation. The constitutional tension between child protection and free expression is a defining feature of this practice area, shaping how cases are litigated, regulations are challenged, and companies build their defense strategies.
The Burden Is Shifting
The defining trend across litigation, legislation, and enforcement is the deliberate transfer of responsibility for children’s online safety from parents to companies. Courts, legislatures, and regulators increasingly accept that no parent can reasonably be expected to counter platform designs engineered by behavioral psychologists and backed by billions of dollars in profit motive. Companies that have relied on parental consent as a shield, or that argue children’s harms result from individual vulnerabilities rather than platform design, are finding those arguments tested in court and rejected by lawmakers.
The burden of online safety is moving from parents to platforms. Companies that do not adapt will face the consequences.