Random Llama



Tags: ai-tools, open-source, privacy

AI's Wild Week: Code Leaks, Privacy Lawsuits, and Free TTS

Robert Hattala · April 1, 2026

Anthropic Leaked Claude Code's Entire Source. Again.

A debug file got bundled into a routine npm update of Claude Code on Monday. That file pointed to a zip on Anthropic's cloud storage containing the full source: nearly 2,000 files, 500,000 lines of TypeScript. Within hours, the repo had 41,500+ forks on GitHub.

This is the second time in just over a year that Anthropic has accidentally shipped Claude Code's internals. The leaked code revealed dozens of unreleased feature flags, including a persistent background assistant, remote control from a phone or browser, and session review tools. Anthropic says no customer data was exposed.
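The failure mode here, a stray debug file riding along in a published package, is the kind of thing a CI check can catch before anything ships. A minimal sketch of a pre-publish tarball scan (the file patterns are illustrative assumptions, not Anthropic's actual tooling):

```python
import fnmatch
import tarfile

# Patterns that usually should not appear in a published package tarball.
# Illustrative list -- adjust for your own project.
SUSPICIOUS = ["*.env", "*debug*", "*.pem", "*.key", "*secrets*"]

def find_suspicious_files(tarball_path: str) -> list[str]:
    """Return member names in the tarball matching any suspicious pattern."""
    flagged = []
    with tarfile.open(tarball_path, "r:*") as tar:
        for member in tar.getnames():
            if any(fnmatch.fnmatch(member.lower(), pat) for pat in SUSPICIOUS):
                flagged.append(member)
    return flagged
```

Run it against the output of `npm pack` and fail the publish step if the list comes back non-empty.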

Here's what gets me. The features in those flags sound genuinely useful. A background assistant that keeps working after you close the terminal? Remote control from your phone? That's the kind of stuff small dev shops would pay for tomorrow. The irony is that the leak probably generated more excitement about Claude Code than any launch event could have.

Perplexity AI Sued Over Sharing User Chats With Meta and Google

A class-action complaint filed Tuesday in San Francisco federal court alleges that Perplexity loads third-party trackers the moment you log in. Those trackers allegedly give Meta and Google full access to your conversations with Perplexity's search engine, even in Incognito mode.

Perplexity denies it. A spokesperson says the company doesn't share user data with either firm and hadn't even been served with the lawsuit yet. But this comes on top of Amazon winning a court order blocking Perplexity's Comet shopping agent over unauthorized access.

If you're building products on top of AI search APIs, this matters. The trust layer between users and AI tools is still thin. One credible privacy scandal and your users start looking for alternatives. For small businesses using Perplexity in their workflow, it's worth watching how this plays out before going deeper.
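If you want to see for yourself which third-party endpoints a web tool calls, one low-tech approach is to export a HAR file from your browser's dev tools and scan it. A minimal sketch (the tracker domain list is an illustrative assumption):

```python
import json

# Domains to flag -- illustrative list, extend for your own audit.
TRACKER_DOMAINS = {
    "facebook.com",
    "facebook.net",
    "doubleclick.net",
    "google-analytics.com",
}

def third_party_calls(har_path: str) -> list[str]:
    """Return request URLs in a HAR export that hit known tracker domains."""
    with open(har_path) as f:
        har = json.load(f)
    return [
        entry["request"]["url"]
        for entry in har["log"]["entries"]
        if any(domain in entry["request"]["url"] for domain in TRACKER_DOMAINS)
    ]
```

It won't tell you what data rides along in those requests, but a non-empty result is a reason to dig deeper.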

Mistral's Open-Source TTS Model Beats ElevenLabs

Mistral released Voxtral TTS last week: a 4-billion-parameter text-to-speech model that supports nine languages. The weights are free on Hugging Face, and human evaluations show it matches or beats ElevenLabs on naturalness, with a time-to-first-audio of 90 milliseconds.

The real kicker: it can clone a voice from less than five seconds of audio. Accents, inflections, speech quirks. All captured. The API runs $0.016 per 1,000 characters if you don't want to self-host.

This is a big deal for anyone building voice features. ElevenLabs charges significantly more, and now there's an open-weight alternative that holds up in quality. If you're a solo dev or small team adding voice to your app, Voxtral just dropped your costs to nearly zero. Self-host it and your only expense is compute.
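To put the pricing in perspective, here's a quick back-of-envelope calculation. Only the $0.016 per 1,000 characters rate comes from the announcement; the monthly character volume is a made-up workload:

```python
# Voxtral API rate from the announcement: $0.016 per 1,000 characters.
VOXTRAL_RATE_PER_1K = 0.016

def monthly_tts_cost(chars_per_month: int, rate_per_1k: float) -> float:
    """Dollar cost of synthesizing a given number of characters per month."""
    return chars_per_month / 1000 * rate_per_1k

# Hypothetical workload: 2 million characters a month,
# roughly a few hundred short narrated articles.
cost = monthly_tts_cost(2_000_000, VOXTRAL_RATE_PER_1K)
print(f"${cost:.2f}/month")  # $32.00/month
```

At that scale you're paying tens of dollars a month, not hundreds, before even considering self-hosting.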

The Thread Connecting All Three

Every story this week points the same direction. The tools are getting cheaper and more accessible. The code behind the biggest AI products keeps finding its way into the open. And the companies building these tools still haven't figured out the trust basics around privacy and security.

For builders, that gap is the opportunity. Ship things people trust, using tools that cost less every month. That's the game right now.
