In the movie “Dazed and Confused,” Matthew McConaughey’s character remarks, “That’s what I love about high school girls, I keep getting older and they stay the same age.” In the real world, there are plenty of good reasons to do age verification and validation. However, doing so violates some of the most basic principles of data security and data privacy.
There is a long-standing principle in privacy engineering that lawyers and security architects alike have embraced: If you don’t need the data, don’t collect it. Data minimization is not merely good hygiene—it is risk management. The less you collect, the less you have to protect, the less you can lose, and the less you can be compelled to disclose.
Modern age-verification regimes are turning that principle on its head.
They require companies to collect precisely the kinds of sensitive personal data they neither need nor want—date of birth, age bands, geolocation, device identifiers—and then impose affirmative obligations to maintain, secure, and in some cases transmit that data. In effect, the law is mandating the creation of new honeypots of sensitive information under the banner of protecting minors.
The result is not simply a compliance problem. It is a structural inversion of privacy and cybersecurity norms.
From “Don’t Know” to “Must Know”
For decades, online services operated under a deliberate ambiguity with respect to user age. Under the Children’s Online Privacy Protection Act (COPPA), operators of online services directed to children under 13, or those with “actual knowledge” that they are collecting personal information from such children, must comply with stringent parental consent and data handling requirements. 15 U.S.C. §§ 6501–6506 (2018).
The key term—“actual knowledge”—became the fulcrum of compliance strategy. If a service was not directed to children and did not ask for age, it could avoid triggering COPPA obligations. Courts and regulators largely accepted this framework. See, e.g., FTC v. Accusearch Inc., 570 F.3d 1187, 1197 (10th Cir. 2009) (recognizing knowledge-based liability structures in privacy enforcement).
State-level innovation—particularly in California—is dismantling that equilibrium.
The California Digital Age Assurance Act (and related legislative proposals) contemplates a system in which operating systems transmit age signals to application developers at the moment of download or account creation. Once that signal is received, the developer is no longer willfully blind. It has “actual knowledge” of the user’s age. And with that knowledge comes a cascade of federal and state obligations.
This is not merely a change in compliance posture. It is a forced re-architecture of systems designed specifically to avoid collecting this data in the first place.
The Creation of Unwanted Data
The paradox is stark. Age verification regimes compel entities to collect:
– Date of birth or age band
– Identity credentials (in some implementations)
– Location data (to determine jurisdictional rules)
– Device-level identifiers
These are precisely the categories of data that privacy professionals have spent decades trying to avoid collecting absent necessity.
From a cybersecurity perspective, this creates new attack surfaces. The Federal Trade Commission has repeatedly emphasized that the aggregation of sensitive personal information increases both breach risk and liability exposure. See FTC, Protecting Consumer Privacy in an Era of Rapid Change (2012).
And yet, under these regimes, companies must not only collect the data but also retain it long enough to demonstrate compliance—often without any independent business justification for doing so. This is data collection as regulatory compulsion, not operational necessity.
Age Verification as Dual-Use Surveillance
The policy justification for age verification is straightforward: protect minors from harmful content and predatory data practices. But the implementation reveals a more complex—and troubling—reality. In some contexts, age verification operates as a shield. It prevents third parties from contacting minors or collecting their data without parental consent. This aligns with COPPA’s original intent: to limit the commercial exploitation of children’s information.
In other contexts, however, age verification becomes a tool of monitoring and enforcement. It is used to:
– Restrict access to lawful but age-limited goods (e.g., alcohol, lottery tickets)
– Track and potentially sanction user behavior
– Create auditable records of attempted access
This raises a fundamental question: Is the system designed to protect minors, or to surveil them?
The distinction matters. The Supreme Court has repeatedly warned against regulatory regimes that burden access to lawful speech under the guise of protecting children. In Reno v. ACLU, 521 U.S. 844, 874 (1997), the Court invalidated provisions of the Communications Decency Act, noting that “[t]he Government may not ‘reduce the adult population … to … only what is fit for children.’” Likewise, in Ashcroft v. ACLU, 542 U.S. 656, 667 (2004), the Court emphasized that less restrictive alternatives—such as user-side filtering—must be considered before imposing broad content restrictions.
Age verification regimes that require universal identification or age disclosure risk running afoul of these principles by imposing burdens on all users, not just minors.
Knowledge as Liability
What makes the California model particularly significant is its explicit coupling of technical architecture with legal consequence. By requiring operating systems to transmit age signals, the law eliminates plausible deniability. Once a developer receives the signal, it cannot claim ignorance. It must comply with COPPA, state privacy laws such as the California Consumer Privacy Act (Cal. Civ. Code §§ 1798.100–1798.199), and potentially a growing patchwork of state-level youth privacy statutes.
This is a knowledge-forcing regime. And knowledge, in this context, is not power—it is liability.
Cascading Compliance and System Design
The deeper implication is that compliance is no longer primarily a function of policy. It is a function of architecture. Engineers are now being asked to build systems that:
– Collect age-related data at the operating system level
– Transmit that data to downstream applications
– Trigger automated compliance workflows based on that data
– Retain records sufficient to demonstrate compliance
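The knowledge-forcing dynamic described above can be made concrete in a short sketch. The following Python is purely illustrative: the signal fields, age bands, and obligation names are hypothetical (no statute prescribes these exact categories), but the structure shows how receiving an OS-level age signal simultaneously creates "actual knowledge," triggers downstream obligations, and generates a retention record that exists only because compliance must be demonstrable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical age bands an operating system might transmit;
# real statutes and implementations may differ.
UNDER_13, TEEN_13_17, ADULT_18_PLUS = "under_13", "13_17", "18_plus"

@dataclass
class AgeSignal:
    user_ref: str      # pseudonymous account reference
    age_band: str      # one of the bands above
    received_at: str   # timestamp: the moment "actual knowledge" attaches

@dataclass
class ComplianceRecord:
    signal: AgeSignal
    obligations: list = field(default_factory=list)

def ingest_age_signal(user_ref: str, age_band: str) -> ComplianceRecord:
    """Receiving the signal creates both knowledge and the duty to act on it."""
    signal = AgeSignal(user_ref, age_band,
                       datetime.now(timezone.utc).isoformat())
    record = ComplianceRecord(signal)
    if age_band == UNDER_13:
        # COPPA: verifiable parental consent before collecting personal info
        record.obligations.append("obtain_verifiable_parental_consent")
    if age_band in (UNDER_13, TEEN_13_17):
        # State youth-privacy statutes: e.g., restrict profiling and targeted ads
        record.obligations.append("disable_targeted_advertising")
    # The record itself must be retained to demonstrate compliance --
    # new sensitive data that exists only because the law requires it.
    return record
```

Note that the developer in this sketch cannot decline to create the record: the audit trail is itself a compliance obligation, which is exactly the "data collection as regulatory compulsion" problem described earlier.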
Each of these design decisions carries legal consequences. This is a marked departure from traditional regulatory models, where obligations attach to conduct rather than code. Here, the code is the conduct.
And once embedded, these design choices are difficult to unwind. They create path dependencies that shape not only compliance strategies but also user experience, data flows, and security posture.
The Inevitable Tradeoff
At a policy level, age verification regimes force a tradeoff that is rarely acknowledged explicitly.
To protect minors, we must identify them.
To identify them, we must collect their data.
To collect their data, we must create risks that the data will be misused, breached, or repurposed.
In other words, the act of protecting privacy can itself erode privacy. This is not an argument against protecting minors. It is an argument for recognizing that the mechanisms we choose matter. Systems that rely on centralized collection of sensitive data may solve one problem while creating several others.
Privacy-enhancing technologies—such as zero-knowledge proofs or on-device age estimation—offer potential alternatives. But they are not yet widely deployed, and many statutory frameworks do not explicitly accommodate them.
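The minimal-disclosure idea behind these technologies can be sketched in a few lines. This is not a real zero-knowledge proof: it is a simplified stand-in using a symmetric key (real deployments would use asymmetric device-attestation keys or actual ZK protocols), and the key and function names are assumptions for illustration. The point it demonstrates is architectural: the device, which already holds the birth date, discloses only a signed boolean, so the date of birth never reaches the service.

```python
import hmac, hashlib, json
from datetime import date

# Hypothetical key provisioned to the device by the OS vendor.
# A symmetric key is a simplification; it is NOT how real attestation works.
DEVICE_KEY = b"hypothetical-device-attestation-key"

def attest_over_age(dob: date, threshold_years: int, today: date) -> dict:
    """Runs on-device: derives the claim locally, discloses only the boolean."""
    years = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    claim = {"over_threshold": years >= threshold_years,
             "threshold": threshold_years}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}  # note: no DOB in the output

def verify_attestation(att: dict) -> bool:
    """Runs server-side: checks integrity; learns only the yes/no answer."""
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"])
```

Under this pattern the service holds no honeypot of birth dates at all, which is precisely why statutory frameworks that mandate collection and retention of the underlying data sit so awkwardly beside it.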
Conclusion: Designing for Ignorance
The most striking feature of these emerging regimes is that they eliminate the option of strategic ignorance.
For years, companies could say, truthfully, “We do not know the age of our users.” That statement carried both legal and technical significance. It limited liability and reduced risk.
Under knowledge-forcing architectures, that option disappears. Systems are designed to know whether the operator wants to or not.
The law is no longer asking what you know. It is telling you what you must know—and what you must do once you know it.
That shift has profound implications for cybersecurity, privacy, and the future of system design. Because in a world where knowledge equals liability, the safest system may be the one that knows the least.
And increasingly, that is precisely the system the law will not allow you to build. Alright, alright, alright.
Mark Rasch
