
ai-tools · privacy · open-source

Perplexity Sued, Anthropic Leaked, and NVIDIA Shipped Open AI Models

Robert Hattala · April 2, 2026

Perplexity Was Sharing Your Chats With Meta and Google

A class-action lawsuit filed yesterday says Perplexity AI installed tracking scripts that sent your conversations to Meta and Google. The complaint claims these trackers activate at login, even in Incognito mode, and transmit chat data in real time for ad targeting.

The case (Doe v. Perplexity AI, N.D. Cal.) alleges violations of California privacy law. Perplexity says it hasn't been served yet and can't verify the claims. Meta says that sending it sensitive data violates its own advertiser rules.

This matters if you or your clients use Perplexity for anything sensitive. Research queries, business strategy, competitive analysis. All of it potentially piped to the two biggest ad companies on earth. I'd pause any business use of Perplexity until this gets sorted out.
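If you'd rather verify this kind of thing yourself than wait for discovery, the check is simple in principle: load the app in an instrumented browser and log every third-party host it talks to. Here's a minimal sketch using Playwright (my own choice of tool, not something named in the complaint); the target URL and first-party domain are placeholders you'd swap for whatever you're auditing.

```python
# Sketch: list third-party hosts a page contacts while loading.
# Assumes `pip install playwright` and `playwright install chromium`.
from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

TARGET_URL = "https://example.com"   # placeholder: the app you're auditing
FIRST_PARTY = "example.com"          # placeholder: its own domain

third_party_hosts = set()

def log_request(request):
    host = urlparse(request.url).hostname or ""
    if FIRST_PARTY not in host:
        third_party_hosts.add(host)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.on("request", log_request)   # fires for every outgoing network request
    page.goto(TARGET_URL, wait_until="networkidle")
    browser.close()

# Hosts like graph.facebook.com or google-analytics.com showing up here
# would be the kind of tracking calls worth digging into further.
for host in sorted(third_party_hosts):
    print(host)
```

It won't tell you what's inside the payloads, but it will tell you who's being called, which is usually enough to know whether to ask harder questions.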

Anthropic Accidentally Published Claude Code's Source

Anthropic shipped the full TypeScript source for Claude Code to npm instead of just the compiled build. About 500,000 lines across 1,900 files. The leak happened because an unobfuscated source map pointed to a zip archive on their Cloudflare storage.

Anthropic called it a "release packaging issue caused by human error." No customer data was exposed. But this is their second leak in days. Last week they accidentally made 3,000 internal files public, including details about an unreleased model.

For builders, the interesting part is what the code reveals about how Claude Code's agentic harness works. Competitors now have a detailed blueprint. For the rest of us, it's a reminder that even safety-focused AI labs make basic DevOps mistakes.
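The boring way to avoid this class of mistake is to audit exactly what `npm publish` would ship before it ships. Below is a minimal pre-publish check of my own (not anything Anthropic uses) that runs `npm pack --dry-run --json` and refuses to continue if raw TypeScript sources or source maps are in the tarball; the exact JSON shape varies a bit across npm versions, so treat it as a sketch.

```python
# Sketch: fail a release if the npm tarball would include raw sources or maps.
# Run from the package root; assumes npm >= 7 for `--json` output.
import json
import subprocess
import sys

result = subprocess.run(
    ["npm", "pack", "--dry-run", "--json"],
    capture_output=True, text=True, check=True,
)
# npm prints a JSON array with one entry per package; each has a "files" list.
packed = json.loads(result.stdout)[0]
paths = [f["path"] for f in packed.get("files", [])]

# Flag anything that looks like source rather than build output.
leaks = [
    p for p in paths
    if (p.endswith(".ts") and not p.endswith(".d.ts")) or p.endswith(".map")
]

if leaks:
    print("Refusing to publish; tarball contains source artifacts:")
    for p in leaks:
        print("  ", p)
    sys.exit(1)
print(f"OK: {len(paths)} files, no raw sources or source maps.")
```

Pair that with an explicit `files` allowlist in package.json and the window for this kind of leak gets much smaller.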

NVIDIA Drops Nemotron 3 Open Models for Agents

NVIDIA released the Nemotron 3 family of open models built on a hybrid Mamba-Transformer architecture. The Nano model is available now on Hugging Face and through providers like Together AI and Fireworks. Super and Ultra models are coming later this year.

Nemotron 3 Nano delivers 4x higher throughput than Nemotron 2 Nano with a 1M-token context window. Training data, weights, and technical reports are all open. NVIDIA also dropped Nemotron 3 Omni for multimodal tasks and VoiceChat for real-time audio.

This is a big deal for anyone building multi-agent systems. High throughput plus open weights means you can run agent swarms without burning through API credits. We've been watching the Nemotron line closely at Random Llama, and the Nano model looks like a strong fit for local agent workflows.
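If you want to kick the tires, the Nano weights should load through the usual Hugging Face path. A minimal sketch follows; the repo id is a placeholder (check Hugging Face for the real name), and I'm assuming the release ships with standard transformers support, since hybrid Mamba-Transformer models sometimes need trust_remote_code.

```python
# Sketch: run a local prompt through Nemotron 3 Nano via transformers.
# The repo id below is a placeholder; check Hugging Face for the real one.
# Assumes `pip install transformers accelerate torch`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/nemotron-3-nano"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # keep memory manageable on a single GPU
    device_map="auto",
    trust_remote_code=True,       # hybrid architectures often need custom code
)

prompt = "Summarize the tradeoffs of running agent swarms on local open models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For an actual agent swarm you'd put the model behind a local serving layer (vLLM or similar) rather than call generate() directly, but this is enough to sanity-check quality and latency on your own hardware.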

What It All Means

Today's theme is trust. Perplexity broke it by allegedly selling out user privacy. Anthropic broke it by fumbling basic release engineering. NVIDIA is building it by shipping open models with full transparency on training data. The companies that earn developer trust in 2026 will be the ones that ship their work in the open.
