Let's get into it.
Snap Lays Off 1,000, Says AI Can Do the Work
Evan Spiegel told his company this week that about 1,000 folks are out the door, plus another 300 open roles getting closed. That's roughly a quarter of the planned headcount, gone. The reason he gave was "rapid advancements in artificial intelligence" letting smaller teams ship the same output.
Why this matters: Snap is not the first and it won't be the last. We're watching executives figure out they can run leaner, and they're using AI as the public reason even when the real story might be sluggish ad revenue or a bloated org chart.
My take: AI is doing real work now, no question. But pinning a layoff announcement on it is becoming the new "restructuring for efficiencies." Some of those jobs weren't replaced by a model. They were replaced by a spreadsheet and a tough quarter. Workers deserve a straighter answer than "the robots did it."
Amazon Wants to Be Your Doctor
Amazon launched a Health AI agent inside its app and website. Prime members get free 24/7 access to ask health questions, get lab results explained, renew prescriptions, and book appointments. It claims to handle around 30 common conditions and hand you off to a real provider through direct message.
Why this matters: getting a doctor on the phone in America is a minor miracle. A free agent that can triage a sore throat at 2am and tell you whether to worry is genuinely useful. It's also Amazon wiring health data into the same account that knows your shopping habits, which ought to make anybody pause.
My take: I'll probably use it. I'll also probably lie to it about half my answers out of instinct. The real test is whether it pushes folks toward care they actually need or quietly nudges them toward whatever Amazon can sell. If it's the first, great. If it's the second, we've got a problem bigger than a chatbot.
Lawyers Keep Handing in AI Hallucinations
The Nebraska Supreme Court just suspended an Omaha attorney after his appellate brief came in with 20 fake citations cooked up by AI. Courts have stacked up at least $145,000 in sanctions in Q1 2026 alone against lawyers citing cases that do not exist.
Why this matters: we're two-plus years past the first Mata v. Avianca disaster and attorneys are still shipping AI-generated briefs without reading them. This isn't a tech problem anymore. It's a professional laziness problem wearing a tech costume.
My take: if you're a lawyer and you can't be bothered to click one Westlaw link to confirm the case is real, you shouldn't be practicing. The tool isn't the villain. The villain is the person billing a client to not do their job. Judges are finally treating it that way and honest to goodness, it's about time.
The Thread Running Through All of This
Three very different stories, one pattern. AI is forcing every industry to figure out what humans are actually for. Snap is betting fewer people can ship the same output. Amazon is betting agents can deliver a service people currently struggle to get. The courts are betting humans still need to verify the output.
All three bets are going to play out over the next year, and I don't think any of them are settled yet.