A broad coalition of more than 70 civil liberties, domestic violence advocacy, reproductive rights, and LGBTQ+ organizations has issued a formal demand to Meta Platforms Inc., urging the tech giant to abandon its reported plans to integrate facial recognition technology into its Ray-Ban smart glasses. The coalition, which includes high-profile groups such as the American Civil Liberties Union (ACLU), Fight for the Future, and Access Now, sent an open letter to Meta CEO Mark Zuckerberg expressing grave concerns over the privacy and safety implications of such a feature.
The controversy centers on a rumored feature internally referred to as “Name Tag.” According to reports, this technology would allow a wearer of Meta’s smart glasses to point the device’s camera at a stranger and instantly retrieve identifying information about them using Meta’s integrated artificial intelligence assistant. Sources familiar with the project suggest that Meta engineers have explored two primary iterations of the software: one designed to identify individuals with whom the user is already connected on Meta’s social media platforms, and a significantly more invasive version capable of identifying anyone with a public Facebook or Instagram profile.
The pushback from civil rights organizations highlights a growing tension between the rapid advancement of wearable "ambient computing" and the fundamental right to anonymity in public spaces. The coalition argues that the introduction of such a feature would effectively end the concept of public privacy, turning every wearer of the glasses into a mobile surveillance node capable of unmasking strangers without their knowledge or consent.

The Technical Mechanics of Name Tag
The "Name Tag" feature represents a significant escalation in the capabilities of consumer-grade wearables. While current smart glasses are primarily marketed as tools for hands-free photography, video recording, and audio streaming, the integration of real-time facial recognition would transform them into sophisticated identification tools.
The hardware currently powering the Meta Ray-Ban glasses includes a 12MP ultra-wide camera and the Qualcomm Snapdragon AR1 Gen 1 platform, which is specifically designed to handle on-device AI processing. By pairing this hardware with Meta’s massive database of billions of user-uploaded images, the company is uniquely positioned to deploy facial recognition at a scale that few other entities could match.
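Under the hood, systems of this kind typically reduce identification to a nearest-neighbor search over face embeddings: a neural network converts each face image into a fixed-length vector, and a probe vector is then matched against a gallery of known vectors. The sketch below illustrates only that matching step; the names, dimensions, and threshold are hypothetical, and this is not a description of Meta's actual pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, gallery, threshold=0.8):
    """Return the name of the most similar gallery entry,
    or None if no entry clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy 3-dimensional embeddings stand in for the vectors a real
# face-recognition model would produce (typically 128-512 dims).
gallery = {
    "alice": [0.9, 0.1, 0.2],
    "bob": [0.1, 0.8, 0.5],
}
print(identify([0.88, 0.12, 0.21], gallery))  # matches "alice"
```

In a real deployment the gallery would be indexed for approximate search rather than scanned linearly, and the threshold would set the trade-off between missed matches and false identifications.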
The coalition’s letter emphasizes that no design adjustment—such as a blinking recording light or an opt-out mechanism—would be sufficient to protect the public. Unlike social media platforms, where users can choose what to share, bystanders in the physical world have no way to "opt out" of being scanned by a stranger’s eyewear. This creates a power imbalance in which the wearer can retrieve the digital identity of anyone they encounter, potentially exposing sensitive information such as a person’s name, workplace, or social history in seconds.
A Chronology of Surveillance Concerns
The development of the Name Tag feature follows a period of heightened scrutiny of Meta’s privacy practices. In May 2025, a leaked internal memo revealed a strategic plan to launch controversial features during what the company described as a “dynamic political environment.” According to reporting by The New York Times, the memo suggested that the company could take advantage of periods when civil society groups and regulators were distracted by major political events to roll out sensitive technologies with less pushback.

The coalition of rights groups has characterized this strategy as "vile behavior," arguing that it demonstrates a calculated attempt to bypass public debate and democratic oversight. This is not the first time Meta has faced backlash over biometric data. In 2021, following years of legal challenges and a $650 million settlement in Illinois over biometric privacy violations, Meta (then Facebook) announced it would shut down its Face Recognition system and delete the "faceprints" of more than one billion people. The potential revival of this technology in a wearable format is seen by critics as a reversal of that commitment.
Furthermore, an investigation conducted earlier this year revealed that Meta’s smart glasses were already transmitting video recordings of users’ private moments to the company’s servers to train its AI models. Workers involved in the training process reportedly viewed footage of intimate settings, raising alarms about the extent to which the glasses are constantly monitoring their environment.
Risks to Vulnerable Populations
The open letter to Mark Zuckerberg highlights specific dangers posed to marginalized and vulnerable communities. Domestic violence organizations point out that facial recognition glasses could be a "dream tool" for stalkers and abusers, allowing them to track victims or identify people in safe houses. Reproductive rights groups expressed concern that the technology could be used to identify and harass individuals entering healthcare clinics, while LGBTQ+ advocates warned of the potential for "deadnaming" or outing individuals against their will in public spaces.
The coalition also raised the specter of government and law enforcement abuse. If the technology becomes ubiquitous, there are fears that federal agencies could compel Meta to provide access to the real-time data streams from millions of pairs of glasses, effectively creating a decentralized, crowdsourced surveillance network that covers every corner of urban life.

Official Responses and Corporate Strategy
In response to the growing outcry, a spokesperson for Meta stated that the company does not currently offer a facial recognition feature on its smart glasses and that it would take a “very thoughtful approach” before implementing any such technology. The company has frequently emphasized its commitment to "privacy by design," pointing to the small LED light on the glasses that illuminates when the camera is active.
However, privacy experts argue that the LED is an insufficient safeguard. In 2024, researchers at Harvard University demonstrated how Meta’s smart glasses could be modified to run facial recognition software developed by third parties, successfully identifying strangers on a college campus and retrieving their home addresses and phone numbers. The researchers noted that bystanders often ignored the small light, and even those who noticed it had no way of knowing their biometric data was being processed in real-time.
While Meta hesitates, its competitors are also moving into the smart eyewear space. Google has recently announced a partnership with the luxury brand Gucci to launch AI-powered glasses in 2027, and Apple is rumored to be exploring similar technology for future iterations of its wearable lineup. The industry is racing toward a future where "ambient AI" is the standard, yet the regulatory framework governing how these devices interact with the public remains largely non-existent.
Supporting Data on Biometric Privacy
The debate over Meta’s glasses takes place against a backdrop of increasing regulation regarding biometric data. According to data from the International Association of Privacy Professionals (IAPP), more than a dozen U.S. states have introduced or passed legislation regulating the collection of biometric identifiers. The Biometric Information Privacy Act (BIPA) in Illinois remains the gold standard, requiring companies to obtain written consent before collecting a person’s facial geometry.

Globally, the European Union’s AI Act has established strict rules for the use of facial recognition in public spaces, categorizing it as a "high-risk" application. If Meta were to roll out Name Tag globally, it would likely face significant legal hurdles in jurisdictions with strong privacy protections, potentially leading to a fragmented user experience where features are enabled in some regions but disabled in others.
Studies on facial recognition accuracy also provide reason for caution. Data from the National Institute of Standards and Technology (NIST) has consistently shown that facial recognition algorithms often exhibit higher error rates when identifying people of color, women, and the elderly. The deployment of such technology in a consumer product could lead to frequent misidentifications, resulting in social friction or even dangerous confrontations in real-world settings.
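These per-comparison error rates compound with gallery size: even a very small false-match rate makes at least one false match likely when a single probe face is checked against millions of profiles. A back-of-the-envelope illustration (the rate used here is hypothetical, not a NIST figure):

```python
def p_false_match(fmr, gallery_size):
    """Probability of at least one false match when one probe face
    is compared independently against every entry in a gallery."""
    return 1 - (1 - fmr) ** gallery_size

# Hypothetical false-match rate of one in a million per comparison.
fmr = 1e-6
for size in (1_000, 1_000_000, 100_000_000):
    print(f"{size:>11,} profiles -> {p_false_match(fmr, size):.4f}")
```

At the scale of a platform with billions of public profiles, an algorithm that errs only once per million comparisons would still produce false identifications on essentially every scan, which is why accuracy figures quoted for one-to-one verification do not translate directly to one-to-many identification.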
Broader Impact and the Future of Public Anonymity
The outcome of this standoff between Meta and civil rights groups will likely set a precedent for the entire wearable technology industry. If Meta successfully integrates facial recognition into its glasses, it will signal the beginning of an era where public anonymity is no longer the default state of human interaction.
Analysts suggest that the long-term goal for tech companies is to replace the smartphone with "head-worn" computers that overlay digital information onto the physical world. While this offers conveniences—such as instant translations, navigation, and hands-free communication—it also requires the device to be "always-on" and "always-seeing."

The coalition’s demand represents a fundamental question: Is the convenience of an AI-powered "Name Tag" worth the loss of public privacy? For the more than 70 organizations that signed the letter, the answer is a resounding no. They argue that some technologies are inherently incompatible with a free society, and that facial recognition in a wearable format poses a "unique and irreparable threat" to the fabric of social life.
As Meta weighs its next move, the company faces a choice between pushing the boundaries of its AI capabilities and respecting the mounting concerns of privacy advocates. With the "dynamic political environment" of 2025 unfolding, the world will be watching to see if the tech giant chooses to prioritize its technological ambitions or the safety and privacy of the global public.



