
Meta hit with a class action lawsuit over smart glasses' privacy claims

Key Takeaways

  1. Meta faces a class-action lawsuit for allegedly misleading users on smart glasses' privacy.
  2. The suit claims human contractors review sensitive user footage for AI training.
  3. Plaintiffs in California and New Jersey allege they wouldn't have purchased the glasses had they known.
  4. The UK's Information Commissioner's Office (ICO) has also raised transparency concerns.

Meta is facing a new class-action lawsuit alleging false advertising regarding the privacy features of its Ray-Ban Meta smart glasses. The lawsuit claims the company misled users by failing to disclose that human contractors review sensitive footage, including intimate moments, captured by the devices to train Meta's AI models. This legal challenge underscores growing concerns about data handling in always-on wearable tech.

The Lawsuit: "Surveillance Conduit" Allegations

A class-action lawsuit, filed Wednesday in federal court in San Francisco, accuses Meta of "affirmatively false advertising" concerning the privacy protections of its AI-powered smart glasses. The complaint follows reports that subcontractors in Kenya were tasked with reviewing footage captured by users' glasses, reportedly including highly personal material such as bathroom visits, sexual encounters, and other private details. These workers, according to a Swedish newspaper report, were part of a data labeling operation designed to help train Meta's artificial intelligence models.

The lawsuit, brought by Clarkson Law Firm, names two individuals from California and New Jersey who purchased the smart glasses. They assert they relied on Meta's marketing claims about privacy and would not have bought the devices had they known about the involvement of human contractors in reviewing footage. The plaintiffs are seeking monetary damages and injunctive relief, aiming to compel Meta to change its practices and disclosures.

What This Means For You

For Developers Building AI Hardware

Scrutinize your data pipeline architecture, especially for edge devices with constant recording capabilities. Transparency isn't just good PR; it's a legal imperative. Clearly defining what data is collected, how it's processed, and who accesses it can prevent future lawsuits, particularly when multimodal AI features implicitly share user environments.

For Founders in Wearable Tech

Prioritize robust, explicit user consent for any form of human review or third-party data access. The fact that California and New Jersey residents are plaintiffs suggests consumers are increasingly aware of privacy implications, pushing companies to go beyond generic privacy policy language.

For Consumers of Smart Devices

Recognize that "improving user experience" or "training AI" often involves human oversight of your data. Understand the explicit and implicit data sharing mechanisms of any device that captures your surroundings, especially if it connects to cloud AI services.

For Investors in AI Companies

Evaluate companies' legal and regulatory compliance, particularly their data governance policies. Ongoing lawsuits like this one, and the separate Android tracking complaint from June 2025, highlight potential long-term liabilities and reputational risks associated with privacy missteps.

Research Sources

techcrunch.com
bbc.com

FAQ

What is the class-action lawsuit against Meta about?

Meta is facing a class-action lawsuit alleging it misled users about the privacy of Ray-Ban Meta smart glasses. The lawsuit claims Meta failed to disclose that human contractors review footage captured by the glasses, including sensitive and private moments, to train Meta's AI models. The plaintiffs claim they would not have purchased the glasses had they known about this practice.

What footage are human contractors reportedly reviewing?

Human contractors are reportedly reviewing a range of footage captured by Ray-Ban Meta smart glasses, including highly personal material such as bathroom visits, sexual encounters, and other private details. This review is part of a data labeling operation that helps train Meta's artificial intelligence models.

What does Meta say about data from its smart glasses?

Meta confirms that data from its smart glasses can be shared with human contractors to improve the user experience. The company states that while media captured by users generally stays on the device unless explicitly shared, content shared with Meta AI is sometimes reviewed. Meta claims it takes steps to filter this data to protect people's privacy and prevent identifying information from being reviewed.

Why do the multimodal AI features raise privacy concerns?

The 'multimodal' features, which allow the AI to interpret a user's surroundings, inherently share captures with Meta. Images of surroundings processed for features like Live AI can be used for training purposes, even if the user doesn't explicitly save those images. This means footage not saved to the device's camera roll can still be sent to Meta.
