Mike Elgan
Contributing Columnist

Why you should want face-recognition glasses

Opinion
Oct 11, 2024 · 7 mins

Never mind that recent project by two Harvard students showing how face-recognition glasses are a huge invasion of privacy. The glasses aren’t where the danger lies!

[Image: facial recognition / biometric security identification illustration. Credit: Thinkstock]

You start a conversation with someone, wondering if you’ve met them before. But then, your smart glasses tell you their name and where, when, and in what context you actually did meet them.

Is this a helpful, powerful way to avoid offense, reconnect, and gain context about the people you encounter? 

Or is it an outrageous or dangerous invasion of the other person’s privacy? 

The Privacy Project

Two Harvard engineering students, AnhPhu Nguyen and Caine Ardayfio, recently published a paper and a YouTube video demonstrating an experimental project they call I-XRAY.

The I-XRAY “system” is a kludge that starts with Ray-Ban Meta glasses streaming live video to Instagram, a standard, out-of-the-box feature of that product.

A computer watches the Instagram stream remotely. When specially written software detects a person’s face in the stream for a few seconds, the face is captured as a screenshot. 

The software automatically uploads the screenshot of the face to a site called PimEyes, which I wrote about in 2017. PimEyes is a face-recognition service that recognizes a face and shows you where else the same person’s face is posted on the internet. 

The software captures and opens the URLs provided by PimEyes and then scrapes the textual data on those sites. 

The scraped data is processed with a large language model (LLM), which sifts through the text to extract the person’s name, company name, or any other personally identifiable information (it discards non-personal information). 

Then, the I-XRAY software uses that personal data to query people-search sites. By entering any basic personal data point about someone (their name, address, phone number, or email address), the people-search sites return the other data points, plus age, relatives, work history, and more.

The personal information gathered is then sent in text form back to the smartphone of the person wearing the glasses, giving them knowledge about a stranger without that person’s permission.

The experiment’s creators presented it as a warning, a cautionary demonstration showing the future danger to privacy posed by AI glasses.

Commentators are also sounding alarms about the demonstration videos posted by the I-XRAY creators, saying that they reveal AI glasses to be creepy, dystopian, and dangerous.

But this take by the I-XRAY creators and commentators is simplistic, misleading, and wrong. 

Why I-XRAY isn’t about AI glasses

The I-XRAY paper is headlined with a patently false claim: that the researchers demonstrated “AI glasses that can reveal anyone’s personal details.”

It certainly “feels” like that in their video. In the recorded demonstration, one of the creators walks up to a stranger and, just by looking at them, learns their name, plus other personal information such as their occupation or relatives.

What’s actually true is that the “AI glasses” serve no purpose in this scheme other than to take a picture using the embedded camera. 

In fact, without any other modifications to the I-XRAY backend system, the students could stream live video to Instagram with their smartphone rather than the Ray-Ban Meta glasses wirelessly connected to their smartphone, and the result would be the same. (Actually, the smartphone would work better because the video would have a higher resolution.)

So why imply the glasses pose some special risk?

If anything, the glasses are less covert than the alternatives: at least they have a bright white light that shows others a video is being captured. Anyone could use a telephoto lens, or even the zoom feature on a smartphone, to photograph someone from far away without their knowledge.

To be clear, the online services they used to enable this outcome are clear and present dangers to your privacy. Specifically, PimEyes and Facecheck ID can find all the places your picture is posted online and hand the results to anyone who can upload a photo, whether downloaded or captured with a camera.

The database sites they used, including FastPeopleSearch and Instant Checkmate, are also a clear and present danger to your privacy. Your personal information is already in their databases, waiting for anyone with an internet connection to submit one bit of your personal data and receive back a lot more of it.

Anyone can upload any photo of their face to PimEyes, for example, at any time. The site will return all the URLs where your picture is also available. The person can then grab the other data on those pages, especially your name. 

With your name in hand, they can get a massive amount of personal and financial information about you from the other sites. 

But here’s the thing: Using a camera that happens to be built into glasses isn’t a special privacy risk compared with any other camera. It’s just a camera. 

If you want to protect your privacy, you need to visit each of the websites above one by one and use their tools for opting out. 

Opposition to AI glasses will have zero effect. To protect your privacy on the photo-capture side, you would have to advocate for the abolition of not just AI glasses but also smartphones, DSLRs, Polaroids, webcams, doorbell cams, and every other device capable of capturing a photo or video.

I object to how the I-XRAY project has been presented by the creators and received by the public. The risk lies not with the glasses but with the face recognition sites and the public personal data sites. Glasses have nothing to do with it.

Instead, we should demand face-recognition features in our AI glasses. 

Why face-recognition glasses are good

Business cards originated in 15th-century China as social calling cards for the aristocracy. They spread to Europe in the 18th century as “visiting cards.” But when the Industrial Revolution reached full steam, business cards became the standard tool for exchanging contact information.

Now, we exchange that kind of information electronically. For example, we can share business cards from Apple Wallet or Google Wallet as passes, which (thanks to the vCard format) can go right into our Contacts app to live among the business contacts we’ve been collecting throughout our careers.

Under the right circumstances (both parties using iPhones running iOS 17 or later), we can easily exchange business card-type information by simply holding the two phones near each other. 

To give a business card is to grant permission for the receiver to possess the personal information thereon.

It would be trivial to extend this exchange with a face-recognition permission. Each user could grant it with a checkbox in the contacts app; granting it would automatically share both the permission flag and a profile photo.

Face-recognition permission should be grantable and revokable at any time on a person-by-person basis. 
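To make the idea concrete, here is a minimal sketch in Python of how such a permission could piggyback on the vCard format already used for electronic business cards. The `X-FACE-RECOGNITION` property and the `build_vcard` helper are my own hypothetical illustrations, not anything a contacts app supports today; the vCard standard does, however, reserve the `X-` prefix for exactly this kind of non-standard extension.

```python
# Hypothetical sketch: a vCard carrying an opt-in face-recognition flag.
# The X-FACE-RECOGNITION property is invented for illustration; PHOTO and
# FN/ORG are standard vCard 4.0 properties.

def build_vcard(name: str, org: str, photo_url: str,
                face_recognition: bool) -> str:
    """Assemble a vCard 4.0 string with a face-recognition permission flag."""
    lines = [
        "BEGIN:VCARD",
        "VERSION:4.0",
        f"FN:{name}",
        f"ORG:{org}",
        # Standard PHOTO property carries the profile image that the
        # permission would share alongside the flag.
        f"PHOTO;MEDIATYPE=image/jpeg:{photo_url}",
        # Invented extension property: GRANTED means the recipient's
        # glasses may match this photo against faces they see.
        f"X-FACE-RECOGNITION:{'GRANTED' if face_recognition else 'REVOKED'}",
        "END:VCARD",
    ]
    return "\r\n".join(lines)


card = build_vcard("Jane Doe", "Example Corp",
                   "https://example.com/jane.jpg", face_recognition=True)
print(card)
```

Revoking the permission for one person would then be as simple as re-sharing the card with the flag set to `REVOKED`, which matches the person-by-person grant-and-revoke model described above.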

Ten years from now (when nearly everyone will be wearing AI glasses), you could be alerted at conferences and other business events about everyone you’ve met before, complete with their name, occupation, and history of interaction.

Collecting such data on family and friends throughout one’s life would also be a huge benefit to older people suffering from age-related dementia or simply from a naturally failing memory.

Shaming AI glasses as a face-recognition privacy risk is the wrong tactic, especially when the glasses are being used only as a camera. Instead, we should recognize that permission-based face-recognition features in AI glasses would radically improve our careers and lives.
