Claude Code auto mode dropped today
Anthropic shipped auto mode for Claude Code on March 24. If you've used Claude Code for more than ten minutes, you know the pain. Every file write, every shell command, every git operation triggers a permission prompt. I've hit "yes" hundreds of times in a single session.
Auto mode fixes that. A safety classifier reviews each tool call before it runs. Safe actions proceed without asking. Destructive stuff like mass file deletions or sketchy shell commands gets blocked, and Claude tries a different approach instead.
We use Claude Code daily at Random Llama for building Next.js apps, Sanity schemas, and API routes. This is a real workflow improvement. The constant permission clicking was the single biggest friction point in longer coding sessions. You'd lose your train of thought every 30 seconds.
One thing to watch: Anthropic recommends using isolated environments even with auto mode on. That's smart. I wouldn't run this against a production repo without a branch safety net, but for feature development it's going to save serious time. Enable it with claude --enable-auto-mode or toggle with Shift+Tab.
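The gating behavior is easy to picture as an allow/deny review step. Here's a toy sketch in Python. To be clear, Anthropic's actual classifier is a model, not a regex list, and every name and rule below is invented for illustration:

```python
import re

# Toy gate for tool calls: deny obviously destructive shell commands,
# let everything else through without prompting the user.
DENY_PATTERNS = [
    r"\brm\s+-rf\b",            # mass file deletion
    r"\bgit\s+push\s+--force",  # rewriting shared history
    r"curl\s+.*\|\s*sh",        # piping remote scripts into a shell
]

def review_tool_call(tool: str, command: str) -> str:
    """Return 'allow' or 'block' for a proposed tool call."""
    if tool == "shell":
        for pattern in DENY_PATTERNS:
            if re.search(pattern, command):
                return "block"   # Claude would try a different approach
    return "allow"               # safe actions proceed without asking

print(review_tool_call("shell", "ls -la src/"))         # allow
print(review_tool_call("shell", "rm -rf node_modules"))  # block
```

The point of the sketch is the control flow: the review happens before execution, and a block isn't a hard stop, it just forces the model to find another route.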
GPT-5.4 brings native computer use to the API
OpenAI released GPT-5.4 on March 5 in three variants: Standard, Thinking, and Pro. The headline feature is native computer-use capabilities. This model can operate your desktop, fill forms, manipulate documents, and navigate applications without custom tooling.
The numbers back it up. On GDPval, which tests agent performance across 44 professional occupations, GPT-5.4 matches or exceeds industry professionals on 83% of tasks, up from 70.9% for GPT-5.2. Factual errors dropped 33%.
For small software shops, the computer-use capability is the real story. Think agents that interact with legacy desktop apps, automate repetitive workflows in tools that don't have APIs, or test UIs by actually clicking through them. That used to require brittle screen-scraping setups. Now it's a model capability with a 1M token context window.
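The shape of a computer-use agent is a simple loop: capture the screen, let the model pick an action, execute it, repeat. Here's a minimal sketch with a stubbed-out model. The action schema, the stub, and the strings are all assumptions for illustration, not OpenAI's documented interface:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # "click", "type", or "done"
    payload: str = ""

def fake_model(screenshot: str, goal: str) -> Action:
    # Stand-in for a real API call that would send the screenshot
    # to the model and get a structured action back.
    if "login form" in screenshot:
        return Action("type", "user@example.com")
    return Action("done")

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    log = []
    screenshot = "login form visible"   # stand-in for a real capture
    for _ in range(max_steps):
        action = fake_model(screenshot, goal)
        if action.kind == "done":
            log.append("done")
            break
        log.append(f"{action.kind}:{action.payload}")
        screenshot = "form filled"      # pretend the UI changed
    return log

print(run_agent("fill the login form"))
```

Swap the stub for a real model call and the executor for actual mouse/keyboard control and you have the whole architecture. That's the part that used to be brittle screen-scraping glue.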
The catch is pricing. GPT-5.4 Pro is locked to Pro and Enterprise plans. The Thinking variant is available on Plus, which is more accessible, but the full agentic capabilities need the higher tiers.
DeepSeek V4 keeps the industry guessing
DeepSeek V4 still hasn't launched as of today. Multiple expected release windows have come and gone. But a "V4 Lite" appeared on DeepSeek's website on March 9, which suggests they're doing an incremental rollout.
The specs on paper are wild. One trillion total parameters with 32 billion active per inference pass, built on a sparse Mixture-of-Experts architecture. MIT-licensed. If the performance claims hold up on independent benchmarks, this would be the most capable open-source model ever released.
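Those numbers are worth a quick sanity check. With sparse MoE, only the active parameters run per token, which is how a 1T model can serve at roughly 32B-class compute cost. Back-of-envelope figures only, assuming FP8 weights and ignoring real serving overhead:

```python
total_params = 1_000_000_000_000   # 1T total parameters
active_params = 32_000_000_000     # 32B active per inference pass

active_fraction = active_params / total_params
print(f"{active_fraction:.1%} of weights active per forward pass")  # 3.2%

# Rough weight memory at FP8 (1 byte per parameter): you still have to
# *hold* every expert in memory, even though only a slice runs per token.
fp8_total_gb = total_params / 1e9
print(f"~{fp8_total_gb:.0f} GB just for weights at FP8")  # ~1000 GB
```

That asymmetry is the whole MoE trade: per-token compute like a 32B model, memory footprint like a 1T model. It's why "open weights" doesn't automatically mean "runs on your hardware."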
Here's the interesting wrinkle. DeepSeek gave early access to domestic Chinese chip makers like Huawei instead of Nvidia and AMD. That's a deliberate play to build out a non-Western AI hardware ecosystem. For builders outside China, it doesn't change much practically since the model weights will be freely available. But it signals where the geopolitics of AI infrastructure are heading.
I'm not holding my breath on V4 until I see independent benchmarks. The V3 release was genuinely impressive and changed how I think about open-source models for production use. If V4 delivers even 80% of what's claimed, it'll be worth switching some workloads over.
OpenAI is printing money and hiring like it's 2021
OpenAI hit $25 billion in annualized revenue and plans to nearly double its workforce to 8,000 employees by year end. They're also reportedly taking early steps toward a public listing, potentially late 2026.
The hiring push is focused on enterprise sales and "technical ambassadors," which tells you where the money is. Consumer ChatGPT is the brand. Enterprise contracts are the business. Every major AI lab is making this same pivot right now.
For small teams like ours, the takeaway is simple. The API pricing war between OpenAI, Anthropic, Google, and the open-source alternatives is only going to intensify. GPT-5.4 already uses significantly fewer tokens than 5.2 for the same tasks. Models are getting cheaper and more capable at the same time. Build your products now while the economics keep improving underneath you.
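The compounding effect is easy to see with toy numbers. Everything below is a made-up illustration, not published pricing or measured token counts:

```python
# Hypothetical per-task cost: price per 1M output tokens * tokens used.
price_per_m_tokens = 10.00     # dollars per 1M tokens, assumed
old_tokens_per_task = 50_000   # a chattier older model, assumed
new_tokens_per_task = 30_000   # a "significantly fewer tokens" scenario

old_cost = old_tokens_per_task / 1_000_000 * price_per_m_tokens
new_cost = new_tokens_per_task / 1_000_000 * price_per_m_tokens
print(f"${old_cost:.2f} -> ${new_cost:.2f} per task "
      f"({1 - new_cost / old_cost:.0%} cheaper)")
```

Token efficiency cuts your bill even when the sticker price doesn't move, and when both drop at once, the savings multiply.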
Three trends are converging this month. AI coding tools are removing friction (Claude auto mode). Model capabilities are expanding into new domains (GPT-5.4 computer use). And open-source is closing the gap fast (DeepSeek V4 on the horizon). All three favor small, fast teams who ship instead of meeting about shipping. Good time to be building.