Meta’s next move: Creepy AI facial recognition glasses that scan kids in public

Meta CEO Mark Zuckerberg is all-in on the company’s surveillance glasses. The company has plans to embed AI facial recognition into the next version of its Ray-Bans, above, which could run into legal trouble due to COPPA, the federal law protecting kids’ data.

Feb. 19, 2026 — Meta, the company that brought addictive algorithms to social media, is apparently doubling down on its record of bad decisions.

Even as Meta executives face a widely watched courtroom reckoning over the company's campaign to hook kids on Instagram, they are making plans to add AI-driven facial recognition capabilities to the company's smart glasses, which are manufactured under the Ray-Ban and Oakley brands.

The New York Times last weekend reported: “The feature, internally called ‘Name Tag,’ would let wearers of smart glasses identify people and get information about them via Meta’s artificial intelligence assistant.”

Exactly why safeguards are needed

In other words, anybody wearing Meta's spy glasses could scan a child's face and call up a trove of data about them. Or post their face online. Or use it to create deepfakes.

Gross. Outrageous. Creepy. Dangerous.

The spy glasses plan comes as Meta CEO Mark Zuckerberg made global headlines this week on the witness stand.

As reported by Wired, Zuckerberg showed up in court on Wednesday “to answer questions as to whether Meta products such as Facebook and Instagram were intentionally engineered to be addictive—as well as allegations that the tech giant had deliberately targeted tweens and teens with engagement-boosting strategies that led to mental health crises.”

His testimony comes as part of a monumental social media addiction lawsuit filed by parents of kids harmed by the addictive algorithms designed by YouTube and Meta’s Instagram.

In 2024, Zuckerberg claimed, “Our job is to make sure that we build tools to help keep people safe,” and insisted, “We are on the side of parents everywhere working hard to raise their kids.” The evidence tells a different story: Meta targeted kids and tweens to “hook them young,” pushed them toward harmful content to maximize engagement, and knew the risks all along, with internal studies warning of the damage, even comparing the company’s practices to Big Tobacco.

Old COPPA law desperately needs updating

Children’s health advocates are raising alarming questions about the potentially harmful effects of facial recognition surveillance on kids.

One immediate concern: Facial recognition glasses could violate the Children’s Online Privacy Protection Act (COPPA) of 1998. That federal law requires operators of online services to obtain parental consent before collecting, using, or disclosing the personal information of any user under 13 years of age. COPPA mandates strict privacy policies and data security, but it was written nearly 30 years ago, before the rise of social media and artificial intelligence.

New products raising deep concerns

It seems clear that the collection of the most personal information—a child’s own face, linked to data about their identity—could qualify as a violation of COPPA. What’s unclear is whether Meta officials are willing to risk that legal exposure in the name of enhanced sales.

There are, of course, deeper issues beyond the immediate legal concerns. Exposing kids, both under and over 13, to constant digital video surveillance, and enabling anyone to call up a trove of data about a person from a passing glance, raises questions about a child’s freedom to develop, learn, change, stumble, fail, succeed, and engage in the awkward work of growing up while under the watch of their peers, passing strangers, and the global digital world.

Lawmakers at the state and federal levels have introduced COPPA updates, but most have stalled. In Congress, the leading bill is the Kids Online Safety Act (KOSA). That bipartisan measure would protect minors up to age 18 by requiring digital platforms to implement strict safety measures, such as default privacy settings, parental controls, and mitigation of risks like cyberbullying and self-harm. It introduces a "duty of care" for tech companies and allows state attorneys general and the Federal Trade Commission to enforce compliance.

Other bills have had a measure of success at the state level. Last year Arkansas adopted a state version of COPPA 2.0. That law provides online privacy protections for minors through age 16 by limiting the information that companies can collect without permission.

Political tumult as cover for a product release

The Times noted that “Meta’s plans could change. The Silicon Valley company has been conferring since early last year about how to release a feature that carries ‘safety and privacy risks,’ according to an internal document.”

The memo from Meta’s Reality Labs added: “We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

Ray-Ban and Oakley entering uncharted waters

The addition of AI facial recognition could also bring unwanted legal and reputational risk to Ray-Ban and Oakley, eyewear companies that have spent decades establishing positive brand reputations with consumers. Both companies are owned by EssilorLuxottica, a global corporation that accounts for roughly one-quarter of all eyewear sales.

Manufacturing a product that potentially violates federal COPPA standards could open both companies to new lawsuits, as well as a backlash from consumers.

A history of privacy intrusions

Meta’s history of data and personal privacy intrusions is well known.

In 2024, the company paid $1.4 billion to settle a lawsuit in Texas over the company’s practice of capturing and using the personal biometric data of millions of the state’s residents without the authorization required by law.

One year earlier the company paid $68.5 million to settle a lawsuit in Illinois for alleged violations of the state’s biometric privacy law.

In 2019, the company paid $5 billion to the Federal Trade Commission to settle a lawsuit that accused it of violating user privacy, including the use of facial recognition software.
