Smart glasses or spy glasses: Meta may let people see too much
Imagine: you are sitting at a café or waiting for a bus. A person comes up to you, addresses you by your name, shakes your hand, and excitedly tells you that they recognized you from the work you do, or from your involvement with some other activity or a hobby of yours. Wouldn’t that put a smile on your face? Who doesn’t want to feel like a celebrity, even if for a brief moment? But wait — that person is wearing glasses, and that changes everything.
This is exactly what happened to Khasif Hoda, who unknowingly became the star of a viral experiment in which he, along with many others, was recorded and identified in real time with the help of Ray-Ban Meta smart glasses. The man wearing the glasses was AnhPhu Nguyen, one of the creators of I-XRAY, the system behind the experiment. When the glasses detected a face, the footage was immediately fed into an AI program that scoured the internet for more pictures of that person. The program then drew on data sources such as online articles and voter registration databases to determine personal details: the person's name, phone number, and even home address and relatives' names. This information was then sent back to an app on Nguyen's phone, all within seconds.
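The flow described above can be sketched in a few lines of Python. Everything below is a hypothetical stand-in: the function names, the three-step decomposition, and the "Jane Doe" placeholder data are illustrative only, since the actual I-XRAY implementation was never released.

```python
# Purely illustrative sketch of the described pipeline:
# face capture -> face search -> public-records lookup -> phone app.
# All functions are hypothetical stand-ins, not real I-XRAY code.

def detect_face(frame: str) -> str:
    """Stand-in for face detection on the glasses' video stream."""
    return f"face-crop-of-{frame}"

def search_face_online(face_crop: str) -> list[str]:
    """Stand-in for a face search engine finding pages showing this face."""
    return [f"https://example.com/page-matching-{face_crop}"]

def lookup_personal_details(urls: list[str]) -> dict:
    """Stand-in for mining articles and voter-registration records."""
    return {"name": "Jane Doe", "phone": "555-0100",
            "home_address": "123 Main St", "sources": urls}

def ixray_pipeline(frame: str) -> dict:
    """Chain the three steps; the result is what the phone app would show."""
    face_crop = detect_face(frame)
    matching_pages = search_face_online(face_crop)
    return lookup_personal_details(matching_pages)

print(ixray_pipeline("frame_001")["name"])
```

The point of the sketch is how little glue is needed: each stage already exists as an off-the-shelf service, and the "system" is mostly plumbing between them.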

Photo credit: Josh Edelson
The developers of the I-XRAY system did not create it to stalk people. Quite the opposite: their goal was to raise awareness by demonstrating how smart glasses, combined with LLMs, public databases, and face search engines, can be misused in malicious ways. In fact, they even provide a guide for removing your information from the data sources that power this technology.
Smart glasses are not facial recognition tools… yet
It’s worth noting that while smart glasses can be used to perform facial analysis, as in the experiment above, they currently lack the processing power to conduct that analysis in real time on their own. But experts believe it is just a matter of time until they can, and that this will have far-reaching consequences. According to some reports, Meta already has plans to integrate real-time facial recognition technology into its smart glasses. Privacy advocates are sounding the alarm: on April 13th, over 75 advocacy organizations published an open letter addressed to Mark Zuckerberg warning about the dangers of building facial recognition into common consumer items like glasses. Their concern largely revolves around the potential misuse of the technology and how it opens the door to harassment, stalking, and fraud, especially against marginalized and vulnerable groups such as girls and women, immigrants, and political activists. But the experts emphasize that anyone can be at risk. For example, real-time facial recognition could be weaponized by scammers to identify and track their victims in various scam schemes.
Legal gray area
While Meta glasses cannot identify faces in real time, they still draw criticism for allowing the owner to record people without their knowledge. This is a tricky matter: in many cases it is indeed legal to record people in public places without their consent. However, the legality depends heavily on the country, on whether audio is recorded, and on what is done with the footage, which leaves plenty of room for using smart glasses maliciously. Responding to critics, Meta points to its older statements, saying that per the terms of service, “users are responsible for complying with all applicable laws and for using Ray-Ban Meta glasses in a safe, respectful manner.” This sounds great on paper, but paper promises like this hardly stop anyone who doesn’t mean well in the first place.
It’s also worth noting that there has been some progress in developing new laws for these scenarios. In February, a bill was introduced in California that specifically prohibits secret recordings with wearable devices in business spaces; it has already passed two hearings, with the next one set for May 4. In certain locations, like courtrooms in Philadelphia, smart glasses are already outright banned. The legislation around wearable smart tech is still in its infancy, though, and technological advancement so far seems to outpace the legal work that should accompany it.
Meta also emphasizes that their glasses have a built-in LED light that indicates whenever the device is recording, and that they are designed to detect and prevent any attempts to tamper with it. However, in practice, people have already successfully managed to cover, remove, or otherwise render the LED indicator useless, so it can’t be considered a good enough safety measure.
In the end, the conversation should not center on the question “Is it okay to use smart tech to record and surveil other people?” Many, including us, will agree that the answer is “no,” and also that ethics alone have never stopped would-be perpetrators. The much more important question is whether this tech should exist at all, at least in the form of a consumer product. It seems clear that the existing legislative framework is not yet suited to deal with the onslaught of privacy violations that will likely come with the advancement of instant facial recognition technology.