The FTC’s New Age Verification Policy: Safe Passage Through the COPPA Catch-22—But Biometric, State, and Global Questions Remain
For years, digital platforms have faced a regulatory paradox at the heart of children’s privacy compliance. Under the Children’s Online Privacy Protection Act (COPPA), obligations turn on several factors: whether a service is primarily directed to children, whether it is a mixed-audience platform that opts to implement age-screening, and which categories of personal information are collected and for what purposes. For mixed-audience operators that choose to identify and filter out users under 13, the paradox is acute: the most effective technologies for making that determination often require collecting the very “personal information” the statute is designed to protect.
Two converging forces have made this paradox impossible to ignore any longer. First, age verification technology has matured significantly. Methods once considered futuristic, such as facial age estimation, document-based ID verification, and statistical inference from behavioral signals, are now commercially viable and increasingly accurate. Second, a wave of laws across the U.S. and abroad has begun mandating age verification outright: state social media access restrictions, adult content platform laws, and international frameworks like the EU’s Digital Services Act (DSA) and Australia’s enacted social media ban for users under 16 have collectively made “we don’t know how old our users are” an untenable legal posture. The FTC’s aggressive enforcement action regarding children’s privacy compliance reflects this new reality.
On February 25, 2026, the Commission formally addressed the COPPA catch-22 with a landmark policy statement, signaling a new era of enforcement discretion and creating meaningful incentives for companies to move beyond easily bypassed “honor system” age gates toward robust, high-assurance verification technologies.
This development is a double-edged sword. While it offers a provisional safe passage for qualifying businesses, it resolves neither the friction with state and other biometric privacy laws nor the expanding regulatory gap created by state children’s codes and international frameworks. The technology may be ready. The legal landscape is not.
The Core Shift: From Age Gating to Age Verification
Historically, many websites relied on a “neutral age gate,” typically a simple date-of-birth field. Because a birthdate alone was generally not treated as “personal information” under COPPA unless combined with other identifiers, this approach avoided triggering the statute’s parental consent requirements. Critically, it also avoided collecting data that could be used to protect children in any meaningful way.
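To make the weakness of this approach concrete, here is a minimal sketch of a “neutral age gate” of the kind described above. The function name and structure are illustrative only, not drawn from any actual platform’s implementation; the key point is that the gate simply trusts whatever birthdate the user types, so an underage user can pass it by entering any sufficiently early date.

```python
from datetime import date

def neutral_age_gate(claimed_birthdate: date, threshold: int = 13) -> bool:
    """Classic 'honor system' age gate: computes age from a self-reported
    date of birth and compares it to a threshold. Nothing here verifies
    that the birthdate is truthful, which is why regulators no longer
    treat this as meaningful age verification."""
    today = date.today()
    age = today.year - claimed_birthdate.year - (
        (today.month, today.day) < (claimed_birthdate.month, claimed_birthdate.day)
    )
    return age >= threshold
```

Because the input is entirely user-supplied, the gate filters only honest children; a birthdate alone was also generally not “personal information” under COPPA unless combined with other identifiers, which is exactly why this design was historically attractive.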
As the legislative and regulatory environment has intensified around child safety, with state privacy laws proliferating, federal pressure mounting, and platforms facing both legislative mandates and litigation risk, the FTC is now actively encouraging “age verification”: technologies such as facial age estimation, uploads of government IDs, and behavioral inference. These methods are more reliable, but they raise their own data collection concerns.
What changed? Under the new policy statement, the FTC will not pursue enforcement against qualifying operators that collect personal information, including biometric data, from a child before obtaining parental consent, provided that the data is used strictly and exclusively to determine the user’s age. As Ogletree Deakins notes, this represents a meaningful departure from prior FTC guidance, which left companies in a difficult position when trying to use high-assurance methods without first triggering COPPA’s consent requirements.
The Six Pillars of FTC Enforcement Discretion
This is not a blanket immunity. To qualify, businesses must satisfy six criteria. As Cooley LLP’s analysis makes clear, failure to satisfy even one pillar puts an operator outside the scope of enforcement discretion and back into ordinary COPPA exposure:
- Purpose Limitation. Verification data may only be used to determine age. It cannot be repurposed for marketing, behavioral profiling, or AI/algorithm training.
- Data Minimization. Information must be deleted immediately after the age determination is made—no retention, no secondary storage.
- Third-Party Vendor Oversight. If using an external verification vendor, the operator must obtain written assurances regarding data security and confidentiality, which is generally accomplished via a robust data processing agreement (DPA).
- Clear and Conspicuous Notice. Privacy policies must explicitly disclose the collection and use of verification data in terms accessible to both parents and children.
- Reasonable Security. Appropriate technical and organizational safeguards must protect sensitive verification data throughout the process.
- Accuracy Standards. The verification technology itself must be “reasonably accurate,” a standard the FTC has left deliberately flexible for now, but which will likely be defined more precisely in forthcoming rulemaking.
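The data-handling discipline behind the first three pillars can be sketched in code. This is a purely illustrative sketch under assumed names (`verify_age`, `estimate_age` are hypothetical, and the vendor call is a stand-in for a real facial-age-estimation API used under a DPA); a production system would also need the notice, security, and accuracy pillars, none of which reduce to a code snippet.

```python
def verify_age(selfie: bytearray, estimate_age, threshold: int = 13) -> bool:
    """Hypothetical purpose-limited age check.

    `estimate_age` stands in for a vendor's facial-age-estimation call,
    made under written confidentiality and security assurances (Pillar 3).
    The image is used for exactly one purpose, determining age (Pillar 1),
    and is wiped immediately after the determination is made (Pillar 2):
    no retention, no secondary storage, no reuse for profiling or training.
    """
    try:
        return estimate_age(bytes(selfie)) >= threshold
    finally:
        # Data minimization: overwrite the buffer so no copy of the
        # biometric sample survives the age determination.
        for i in range(len(selfie)):
            selfie[i] = 0
```

The design choice worth noting is that deletion happens in a `finally` block: the biometric sample is destroyed whether the check succeeds, fails, or raises, which mirrors the policy statement’s requirement that the data exist only for the duration of the determination itself.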
The “Primarily Child-Directed” Exclusion
One of the most consequential nuances in this policy statement is its explicit carve-out for services primarily directed to children.
Mixed-audience platforms—think YouTube, TikTok, or a general social media app—can use this leniency to screen and filter users before data collection occurs. But if your service is designed for children, the FTC still expects you to treat every user as a child by default. You cannot verify your way out of COPPA obligations on a child-directed site. Verifiable Parental Consent (VPC) must be obtained before any data collection begins, full stop.
This distinction matters enormously at the product design stage. A “mixed audience” framing that is not credibly supported by your actual user base, marketing, or content strategy will not survive regulatory scrutiny.
The State Law Problem: BIPA and Beyond
Federal enforcement discretion travels only so far. Companies that implement biometric age verification remain fully exposed to state law claims. Our biometric privacy compliance practice regularly sees this tension, with some of the most significant risks coming from Illinois.
The BIPA Trap. Under the Illinois Biometric Information Privacy Act, collecting a minor’s biometric identifier requires prior written parental consent. The FTC’s policy statement does not provide protection against a BIPA class action. If you scan a child’s face to verify their age, even in full compliance with the FTC’s six pillars, you may be simultaneously violating BIPA and exposed to statutory damages of $1,000 to $5,000 per violation, per person.
State “Kids Code” Laws. As we have written previously, “Under 18 Is the New Under 13” when it comes to the expanding scope of children’s privacy protections. California, New York, and a growing list of states have adopted or are advancing age-appropriate design frameworks that extend meaningful protections to users under 18, not just under 13. The FTC statement is narrowly tailored to COPPA’s under-13 threshold, leaving a significant compliance gap for the 13–17 cohort that state privacy laws are increasingly covering. Any verification strategy calibrated only to the federal standard will be underinclusive.
Global Implications
The international landscape is moving faster and further than U.S. federal law, driven by many of the same forces that prompted the FTC’s statement. Governments have increasingly concluded that voluntary age-gating does not work, and are now legislating affirmative verification mandates:
- Australia has enacted a social media ban for users under 16, setting an age threshold well above COPPA’s under-13 line and imposing a correspondingly more demanding verification obligation.
- The EU’s Digital Services Act (DSA) requires large platforms to take systemic measures to protect minors, with age verification as an increasingly expected component of compliance.
- The UK’s Age Appropriate Design Code (Children’s Code) mandates that services likely accessed by children default to high-privacy and protection settings, with verification increasingly expected to operationalize those defaults.
A U.S. company implementing the FTC’s “safe” verification framework must assess whether that technology is robust enough to satisfy more demanding international obligations, including the GDPR and other laws, and build that flexibility into its vendor contracts and system architecture now, rather than retrofitting compliance later.
Bottom Line: A Meaningful Step, Not a Final Answer
The FTC’s February 2026 policy statement is a pragmatic, if overdue, response to a genuine regulatory deadlock. But it is a policy statement, not a statutory change or a final rule. With formal COPPA rulemaking on the horizon, the compliance landscape will continue to shift, likely before 2027.
What is clear is this: the era of plausible deniability around underage users is over. Regulators, plaintiffs’ attorneys, and state Attorneys General are no longer willing to accept the premise that a date-of-birth field constitutes meaningful age verification. The convergence of technology maturity and legal mandates, from social media laws to adult content restrictions to the DSA, has permanently changed the calculus.
Adopting high-assurance verification technology is no longer just a best practice; it is increasingly a business necessity. But doing it right requires more than following the FTC’s six pillars. It requires a multi-jurisdictional privacy compliance strategy that accounts for state biometric and other privacy laws, state children’s codes, and the international frameworks that your users’ locations may trigger.