
Meta Ray-Ban AI Glasses under scrutiny as Kenyan workers admit reviewing intimate user footage, including sex & undressing

Employees at the Kenyan tech firm Sama expose the hidden human workforce behind Meta's AI, where contractors are forced to review deeply private moments recorded by users who thought their cameras were off.

A sweeping joint investigation published late last month by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten has exposed a massive, hidden privacy scandal within Meta’s smart glasses ecosystem. Kenyan data annotators, employed as subcontractors in Nairobi, have revealed they are routinely required to manually review extremely intimate, unanonymized footage captured by unsuspecting users of the Meta Ray-Ban AI glasses.

The disturbing testimonies describe a workforce uneasy about peering into the most private moments of users’ lives, including bathroom visits, people undressing, watching pornography, and explicitly filmed sex acts.

When users stop wearing the glasses, the glasses keep recording

The core of the problem, according to the contractors, is a catastrophic misunderstanding of how the product functions. Users often do not realize that the AI assistant and the associated cameras remain active even when the glasses are removed from the face.

“In some videos, you can see someone going to the toilet or getting undressed. I don’t think they know, because if they knew they wouldn’t be recording,” one worker told the Swedish investigators, speaking on condition of anonymity due to strict confidentiality agreements.

The report highlights a scenario where a user might finish their day, set the glasses down on a nightstand, bathroom counter, or living room table, and proceed to engage in private activities while the device continues to capture audio and video. One annotator recounted seeing a man leave a room, only for his wife to enter moments later and change clothes, completely unaware she was being filmed.

Nairobi is the engine room of AI

The investigation traces this hidden stream of privacy-sensitive data straight from Western homes to a nondescript hotel in Nairobi. There, the Swedish journalists met with some of the thousands of data annotators employed by Sama, a major subcontractor to Meta and a familiar name in Kenyan tech labour disputes.

While these workers are tasked with training the AI by labeling everyday objects such as flower pots, traffic signs, and cars, they are also forced to review the human side of the data collection. They described watching private videos where bank cards were visible by mistake and translating texts where users described graphic sexual desires.

“We see everything – from living rooms to naked bodies. Meta has that type of content in its databases,” an employee said.


They described the workplace as intensely monitored, with cameras everywhere and a total ban on personal smartphones, likely to prevent these explosive clips from leaking. The workers feel trapped, balancing their moral discomfort against the need for employment. “If you start asking questions, you are gone,” one said.

Burying the human element in the ToS

A critical revelation in the report is how Meta allegedly obscures the reality of human review from its millions of customers. The investigators bought a pair of glasses in Sweden and found that salespeople consistently misinformed them about data privacy, often telling customers that everything stays locally in the app and nothing is shared with Meta.

This is fundamentally incorrect. The Swedish investigation confirmed that the glasses require data, including voice recordings, images, and video, to be processed via Meta’s servers. Users cannot utilize the AI assistant features without agreeing to this mandatory data harvesting.

Furthermore, while Meta’s privacy policy emphasizes user control, its separate Terms of Use for Meta’s AI Services, a document few read, explicitly states that “in some cases, Meta will review your interactions with AIs… and this review can be automated or manual (human).”

The report indicates that automated anonymization tools frequently fail. Contractors in Nairobi told the Swedish papers that the faces of people other than the wearer are sometimes visible, particularly in difficult lighting conditions.

The rise of surveillance tech

This new report comes as sales of Meta Ray-Ban glasses, manufactured in collaboration with the eyewear giant EssilorLuxottica, are booming. After a combined two million smart glasses sold across 2023 and 2024, sales reportedly more than tripled to seven million units in 2025 alone.

This staggering increase in adoption, combined with the mandatory data harvesting, has raised alarms among privacy advocates. Data protection lawyer Kleanthi Sardeli, from the non-profit None Of Your Business (NOYB), said there is a “clear transparency problem” regarding the AI assistant’s recording triggers. She argues that GDPR rules would require explicit consent if such data is used for training.

Petter Flink, a security specialist at the Swedish Authority for Privacy Protection, added that users have "really no idea what is happening behind the scenes." He argued that the data Meta collects, which captures the precise details of users' daily lives, is more valuable to the company than the profit from selling the glasses themselves.

Big Tech’s checkered history in Kenya

This is not the first time Meta’s operations in Kenya have faced scrutiny. We have all read about the trauma faced by Kenyan content moderators employed to review graphic violence, hate speech, and exploitation for Facebook and OpenAI. Those workers filed landmark lawsuits in Kenyan courts alleging poor working conditions, psychological trauma, and union-busting.


While the previous cases focused on harm to Kenyan workers who reviewed toxic content, this latest report highlights harm to user privacy globally. It exposes the massive human labour engine that remains hidden beneath the glossy marketing of automated AI.

The report also follows the recent outrage surrounding a Russian content creator accused of secretly filming women in Kenya and Ghana using smart glasses. The Swedish investigation proves that even when users are wearing the glasses, Meta’s hidden workforce might be watching what they never intended to show.

The Swedish newspapers reached out to Meta repeatedly over two months for an interview. The company responded only with a vague email from a spokesperson in London, referring to its terms of service and declining to answer specific questions.

One Kenyan annotator perhaps summed it up best: "You would think that if they knew about the extent of the data collection, no one would dare to use the glasses."


Techish Kenya would like to give full credit to Svenska Dagbladet and Göteborgs-Posten for their extensive investigative reporting on this matter.


Hillary Keverenge

Making tech news helpful, and sometimes a little heated. Got any tips or suggestions? Send them to hillary@tech-ish.com.
