According to a journalistic investigation published in early March 2026, AI smart glasses from Meta (Ray-Ban and Oakley) transmitted sensitive video recordings to human data annotators in Nairobi, Kenya, for review. Workers reported that the material they reviewed and labeled for AI training included footage from bathrooms, nude individuals, and intimate scenes from users' private lives. Faces in the recordings were supposed to be blurred automatically, but according to the annotators the system did not always work, sometimes leaving visible not only faces but also details such as the glasses owners' bank cards.
This incident exposed a fundamental privacy problem in devices with constant access to a camera and microphone. The glasses, built in partnership with EssilorLuxottica, come with a built-in AI assistant that can answer questions about what the user sees. Sales grew rapidly: over 7 million pairs were sold in 2025, more than triple the combined sales for 2023 and 2024. The device's popularity, however, collided with growing criticism over surveillance and privacy.
Technically, the process worked as follows: when a user said "Hey Meta" or asked the assistant a question, the glasses could take a photo or record a short video for analysis. Some of this media, along with transcripts of voice queries, was sent to annotators in Kenya, whose task was to label the content to improve the accuracy of the AI's responses. In 2025, Meta changed its privacy policy, giving the AI camera access by default (unless "Hey Meta" is turned off) and removing users' ability to opt out of having their voice queries stored in the cloud. This increased the volume of data potentially available for human review.
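To make the reported flow concrete, the sketch below models it as three steps: capture gated by the wake word, an automatic blurring pass that can fail, and a queue for human annotation. It is a minimal illustration of the article's description only; every name, function, and the failure rate are hypothetical and do not reflect Meta's actual systems.

```python
# Purely illustrative sketch of the data flow described above; all names
# and the failure rate are hypothetical, not Meta's actual code or APIs.
import random
from dataclasses import dataclass


@dataclass
class Capture:
    """Media captured when the assistant is invoked."""
    media: bytes           # photo or short video clip
    transcript: str        # transcript of the spoken query
    faces_blurred: bool = False


def on_wake_word(query: str, camera_enabled: bool = True) -> Capture | None:
    """Capture media on "Hey Meta"; per the article, camera access for
    the AI became enabled by default in 2025."""
    if not camera_enabled:
        return None
    return Capture(media=b"<frame bytes>", transcript=query)


def blur_faces(capture: Capture, failure_rate: float = 0.1) -> Capture:
    """Automatic face blurring that, per the annotators, does not always
    succeed; failure_rate is an illustrative guess, not a real figure."""
    capture.faces_blurred = random.random() > failure_rate
    return capture


def queue_for_annotation(capture: Capture) -> None:
    """Forward media and transcript to human annotators for labeling;
    with the cloud-storage opt-out removed, every capture is eligible."""
    status = "blurred" if capture.faces_blurred else "UNBLURRED"
    print(f"queued for review: {capture.transcript!r}, faces {status}")


if __name__ == "__main__":
    capture = on_wake_word("Hey Meta, what am I looking at?")
    if capture is not None:
        queue_for_annotation(blur_faces(capture))
```

What the sketch makes explicit is structural: the blurring step is the only safeguard between capture and human review, so any failure there flows straight through to the annotators.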
The reaction to the investigation was immediate. At least one class-action lawsuit has already been filed, accusing Meta of false advertising and of violating privacy laws. The lawsuit points to the contradiction between Meta's claim that the glasses are "designed to protect privacy" and the actual practice of sending intimate recordings to third-party reviewers. Meta spokesperson Tracy Clayton told a tech publication that media captured by the glasses is used to improve AI features and that the company employs "a range of measures to protect people's privacy," including automatic face blurring. The annotators' accounts, however, cast doubt on how effective those measures are.
For the AI wearables industry, this case is a warning signal. It demonstrates the risks inherent in the "train AI on user data" model, especially when the device sits on a person all day and can be activated inconspicuously. For users, it is a direct reminder that their most private moments can end up in a training dataset reviewed by low-paid workers in other parts of the world, and that the privacy guarantees in the advertising may not reflect actual practice.
What happens next depends on the outcome of the legal proceedings and on regulators' response. Meta may be forced to revise its annotation processes, implement stricter technical limits on data collection, or give users more transparent and granular controls. A key question remains open: is it even possible in principle to build a convenient, responsive device with "sight" and "hearing" that unconditionally respects its owner's privacy, or are these goals incompatible with business models built on data collection?