Is Otter AI Safe: What You Need to Know

Otter AI faces a class-action lawsuit over privacy violations and secret recordings. Is it safe to use? Discover user complaints, data risks, and alternatives.
John Jeong
Aug 18, 2025

A federal class-action lawsuit filed in August 2025 has thrust popular AI transcription service Otter AI into the spotlight, with allegations that the company "deceptively and surreptitiously" records private conversations without proper consent. 

The lawsuit, filed in the U.S. District Court for the Northern District of California, reflects a growing pattern of privacy concerns surrounding the Mountain View-based company, which has processed more than 1 billion meetings since 2016 for its 25 million users.

The case raises fundamental questions about consent, privacy, and data use in the age of AI-powered workplace tools—questions that extend far beyond a single company to the entire automated transcription industry.


What Are the Key Allegations in the Otter AI Lawsuit?


As noted in coverage from NPR, the lawsuit centers on plaintiff Justin Brewer of San Jacinto, California, who alleges his privacy was "severely invaded" upon realizing Otter was secretly recording a confidential conversation. 

The complaint alleges that Otter Notetaker, which provides real-time transcription of Zoom, Google Meet, and Microsoft Teams meetings, "by default does not ask meeting attendees for permission to record and fails to alert participants that recordings are shared with Otter to improve its artificial intelligence systems."

Perhaps most concerning is how the system can automatically join meetings. As detailed in the lawsuit: "If the meeting host is an Otter accountholder who has integrated their relevant Google Meet, Zoom, or Microsoft Teams accounts with Otter, an Otter Notetaker may join the meeting without obtaining the affirmative consent from any meeting participant, including the host." 

The legal filing claims this practice violates both state and federal privacy and wiretap laws, with lawyers arguing that Otter uses these recordings to "derive financial gain" through AI model training. 

The lawsuit also challenges Otter's "de-identification" process, arguing that "Otter's deidentification process does not remove confidential information or guarantee speaker anonymity" and that the company "provides no public explanation of its 'de-identifying' process".

Beyond the courtroom, real users have been sharing their own experiences with Otter's privacy issues. 

These incidents provide concrete examples of the problems outlined in the lawsuit and show how they affect individuals and organizations in practice.

1. The VC Meeting Data Leak

Alex Bilzerian described a particularly damaging incident in a viral tweet: after a meeting with venture capital investors, Otter automatically emailed him the transcript, which included the investors' candid private discussion that continued after he had left the call.

Lior Yaari, CEO and Co-Founder at Grip Security, shared this incident on LinkedIn, adding that a similar incident happened at his own company when one of their sales managers signed up for Otter.

2. Corporate Security Breaches

Attorney Anessa Allen Santos warned of organizational security risks in a LinkedIn PSA: 

Otter AI security breach warning

3. Otter Spam Warning

A Reddit user in r/projectmanagement posted "Do not join Otter Ai unless you want your whole company spammed," explaining how Otter automatically places links in meetings to share notes with others, causing confusion for those who don't know what it is. 

Otter AI spam policy

4. Interview Incident

One Reddit user shared how they lost a job opportunity: "I am so upset - yesterday I was in an interview, and one of the interviewers asked me if it was being recorded (apparently, Otter AI had joined without my permission or knowledge). To my horror, I found out only after, that I believe it was indeed recorded- and sent out to me and all the interviewers! Otter AI is seriously intrusive and I have no idea who now has access to a private meeting that I ensured all was NOT being recorded."

Otter AI security incident

5. Gartner Community Response

IT professionals on Gartner's peer community have also raised concerns. 

Otter AI review

A VP Information Security Officer commented: "We block otter ai.. Way too much data for a company to have without a contract, security review and relationship." 

A Director of Engineering added: "I think most companies policies are to not have these capabilities enabled until the understand the data risk posed, infrastructure, used for training or not, etc."

What Does Otter's Privacy Policy Explain About These Privacy Issues?

A careful examination of Otter.ai's official privacy policy and privacy & security pages reveals concerning practices that many users may not fully understand. 

While the company emphasizes security and privacy protections, the detailed policies show extensive data collection, usage, and sharing that goes far beyond simple transcription.

1. Data Collection and AI Training

Otter's privacy policy openly admits to using customer data for AI training purposes. According to their official policy: "We train our proprietary artificial intelligence technology on de-identified audio recordings. We also train our technology on transcriptions to provide more accurate services, which may contain Personal Information."

Otter AI data collection and AI training policy

The policy also reveals that Otter conducts manual reviews of recordings when users provide consent: "We obtain explicit permission (e.g. when you rate the transcript quality and check the box to give Otter ai and its third-party service provider(s) permission to access the conversation for training and product improvement purposes) for manual review of specific audio recordings to further refine our model training data."

While their privacy & security FAQ claims that "audio recordings and transcripts are not manually reviewed by a human" as part of the automatic training process, the privacy policy clearly states that manual review occurs when users provide explicit permission, creating potential confusion about when human access to recordings actually happens.

2. Extensive Third-Party Data Sharing

Perhaps most concerning is the breadth of third-party data sharing outlined in Otter's privacy policy. The company shares personal information with multiple categories of external parties, including:

  • "Cloud service providers who we rely on for compute and data storage, including Amazon Web Services, based in the United States"

  • "Data labeling service providers who provide annotation services and use the data we share to create training and evaluation data for Otter's product features"

  • "Artificial intelligence service providers that provide backend support for certain Otter product features"

  • "Platform support providers who help us manage and monitor the Services, including Amplitude, which is based in the U.S. and provides user event data for our Services"

Otter AI data sharing policy

3. The De-identification Problem

While Otter claims to use "a proprietary method to de-identify user data before training our models so that an individual user cannot be identified," the company provides no public explanation of this de-identification process. 

The privacy policy also acknowledges that transcriptions used for training "may contain Personal Information," raising questions about the effectiveness of their anonymization efforts.
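Otter has not published how its de-identification works, so the sketch below is purely illustrative (the names, patterns, and transcript are hypothetical). It shows why simple redaction of direct identifiers is not enough: even after names, emails, and phone numbers are masked, quasi-identifiers (a unique job title plus a location) can still reveal who was speaking, and the confidential substance of the conversation remains intact.

```python
import re

# Toy redaction rules -- NOT Otter's actual method, which is unpublished.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "NAME":  re.compile(r"\bAlice Chen\b"),  # hypothetical speaker name
}

def naive_deidentify(text: str) -> str:
    """Mask direct identifiers by replacing each match with its label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

transcript = (
    "Alice Chen: reach me at alice@example.com or 555-123-4567. "
    "As the only female VP of sales in our Austin office, "
    "I think we should drop the Acme account."
)

redacted = naive_deidentify(transcript)
# Direct identifiers are masked, but "the only female VP of sales in our
# Austin office" still pinpoints the speaker, and the confidential decision
# about the Acme account is untouched.
```

This is exactly the gap the lawsuit highlights: removing names from a transcript does not guarantee anonymity, and it does nothing to protect the confidential content of the conversation itself.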

4. Automatic Data Collection

The privacy policy reveals extensive automatic data collection beyond just recordings and transcripts, including:

  • "Usage Information: When you use the Services, you generate information pertaining to your use, including timestamps, such as access, record, share, edit and delete events, app use information, screenshots/screen captures taken during the meeting, interactions with our team, and transaction records"

  • "Device Information: We assign a unique user identifier ('UUID') to each mobile device that accesses the Services"

  • "Location Information: When you use the Services, we receive your approximate location information"

Otter AI data collection policy

How Safe is Otter AI: Final Verdict

The privacy concerns surrounding Otter.ai represent broader issues that affect anyone using AI-powered workplace tools, extending far beyond simple transcription mishaps.

  • Regulatory Compliance Risks: Organizations in heavily regulated industries face potential violations when sensitive conversations are automatically recorded and processed by third-party services without proper consent mechanisms or data controls.

  • Enterprise Security Vulnerabilities: Individual employee adoption of cloud-based AI tools can create organization-wide data exposure, especially when these tools share information with multiple external processors and lack enterprise-grade security controls.

  • Professional Liability: Automatic recording features can damage business relationships, compromise confidential discussions, and create unexpected legal exposure when private conversations are captured without all participants' knowledge.

  • The Broader Consent Challenge: The fundamental issue extends beyond any single company—as AI tools become more automated and "helpful," the gap between user expectations and actual data practices widens, creating risks that users may not realize they're accepting.

Is There a Safer Otter AI Alternative?

The mounting privacy concerns around Otter AI highlight the need for transcription solutions that prioritize user privacy and data control. 

For organizations and individuals seeking the benefits of AI-powered transcription without compromising privacy, local-first alternatives offer a compelling solution.

This is where Hyprnote stands out as the best Otter AI alternative.

Hyprnote is the only truly local AI notetaker: unlike cloud-based services that upload and analyze your conversations on remote servers, Hyprnote ensures that not a single byte of your meeting data ever leaves your device.

This local-first approach addresses the core issues raised in the Otter AI lawsuit and user complaints:

  • Complete data sovereignty: You maintain full control over when and what gets recorded, with no risk of unauthorized access

  • Zero third-party exposure: Your conversations never leave your device, eliminating data breaches and unauthorized sharing

  • No AI training on your data: Your private conversations aren't used to improve external AI models

  • Simplified compliance: Local processing shrinks the compliance surface area by reducing dependency on cloud vendors, making audits easier, and giving enterprises complete control over data handling

  • Complete transparency: As an open-source solution, you can inspect the code, customize functionality, and verify exactly how your conversations are processed

For enterprises navigating complex compliance frameworks like HIPAA, SOC 2, or GDPR, local processing dramatically reduces risk by keeping sensitive data on-device—not in transit, not in the cloud, and not exposed to third-party vendors. 

This approach makes it easier to fulfill data subject rights, maintain audit trails, and enforce access controls without relying on external processors.

Ready to take control of your meeting privacy? Download Hyprnote for macOS today and experience truly private, local-first AI transcription that puts you back in control of your conversations.
