
Tags: ai-tools, ai-security, openai, amazon, healthcare-ai, cybersecurity

OpenAI Ships Teen Safety Prompts, Shadow AI Agents Hit 80% of Orgs

Robert Hattala, March 24, 2026

OpenAI Open-Sources Teen Safety Prompts

After a year of fielding lawsuits from families of young people who died after extended ChatGPT interactions, OpenAI is releasing a set of open-source safety prompts specifically designed to protect teens. They're calling it part of their Teen Safety Blueprint.

The prompts cover graphic violence and sexual content, harmful body ideals, dangerous challenges, romantic or violent roleplay, and age-restricted goods and services. OpenAI worked with Common Sense Media and everyone.ai to write them, and the prompts are designed to pair with its open-weight safety model, gpt-oss-safeguard.

OpenAI was upfront that this isn't a comprehensive solution. They called it a "meaningful safety floor" rather than the full extent of what they apply to their own products. Fair enough. The fact that these are open-source prompts means any developer can grab them and bolt them onto their own apps. Whether they actually will is a different question.
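For developers wondering what "bolting them on" might look like, here's a minimal sketch of running a safety classifier before the main model sees a message. The prompt text, the `classify()` interface, and the keyword heuristic are all placeholders for illustration, not OpenAI's actual Teen Safety Blueprint prompts or the gpt-oss-safeguard API:

```python
# Hypothetical pre-filter pattern: classify the message first, answer second.
# SAFETY_PROMPT stands in for one of the open-source safety prompts.
SAFETY_PROMPT = (
    "You are a content-safety classifier for a teen audience. "
    "Label the message SAFE or UNSAFE with a category: violence, "
    "sexual_content, body_image, dangerous_challenge, roleplay, age_restricted."
)

def classify(message: str) -> str:
    """Stand-in for a call to a safety model; real systems would send
    SAFETY_PROMPT plus the message to something like gpt-oss-safeguard."""
    unsafe_markers = {"challenge", "weapon"}  # toy heuristic for the sketch
    return "UNSAFE" if any(m in message.lower() for m in unsafe_markers) else "SAFE"

def guarded_reply(message: str) -> str:
    # The main model only runs if the safety check passes.
    if classify(message) == "UNSAFE":
        return "I can't help with that, but here are some resources..."
    return f"[main model responds to: {message}]"
```

The point of the pattern is that the safety decision happens in a separate, auditable step rather than being baked into the main model's behavior.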

Shadow AI Agents Found in 80% of Organizations

Nudge Security just launched an AI agent discovery tool, and the timing couldn't be better. According to a recent SailPoint survey, 80% of organizations say they've already encountered agentic AI risks, including improper data exposure and unauthorized system access.

The problem is straightforward. Employees are spinning up custom AI agents on platforms like Microsoft Copilot Studio and n8n, granting them broad access to corporate data and tools, and nobody in security knows about it. Nudge's new tool gives security teams visibility into these shadow agents, their permissions, and their connections to corporate data.

This adds to Nudge's existing AI security stack, which already covers MCP server connection discovery, AI data flow visualization, and sensitive data sharing detection. Agentic AI is now the fastest-growing enterprise tech priority and the top security concern for nearly half of security professionals. Those two facts living side by side tell you everything about where we are right now.
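At its simplest, this kind of discovery boils down to inventorying agents and flagging the ones holding permissions nobody approved. Here's a toy sketch of that check; the agent records, platform names, and scope strings are invented for illustration, not Nudge's actual data model:

```python
# Flag agents whose granted scopes intersect a set of broad, high-risk scopes.
BROAD_SCOPES = {"files.read_all", "mail.read_all", "admin"}

agents = [
    {"name": "invoice-bot", "platform": "n8n",
     "scopes": ["files.read_all", "mail.read_all"]},
    {"name": "faq-helper", "platform": "Copilot Studio",
     "scopes": ["kb.read"]},
]

def flag_shadow_agents(inventory):
    """Return the names of agents holding any broad scope."""
    return [a["name"] for a in inventory
            if BROAD_SCOPES & set(a["scopes"])]

print(flag_shadow_agents(agents))  # ['invoice-bot']
```

The hard part in practice isn't this filter, it's building the inventory at all, which is exactly the gap discovery tools are trying to close.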

Amazon Health AI Rolls Out to 200M Prime Members

Amazon is pushing Health AI to all U.S. customers through its website and app after initially launching it in the One Medical app back in January. The scale here is wild. We're talking about 200 million Prime members getting access to an AI health assistant.

What caught my eye is the architecture. This is a four-layer multi-agent system. You've got a core agent talking to patients, sub-agents handling specific workflows, auditor agents reviewing conversations in real time, and sentinel agents standing watch over the whole thing with escalation paths to human providers. That layered approach is designed to catch errors before they reach the patient.
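The layered pattern described above can be sketched in a few lines. This is an illustration of the general reviewer-plus-escalation structure, not Amazon's implementation; the function names and the toy "looks like a diagnosis" rule are assumptions:

```python
# Core agent drafts, auditor reviews, sentinel escalates on rejection.
def core_agent(question: str) -> str:
    return f"draft answer for: {question}"

def auditor(draft: str) -> bool:
    """Approve the draft unless it looks like a diagnosis (toy rule)."""
    return "diagnosis" not in draft.lower()

def sentinel(question: str, draft: str) -> str:
    # Escalation path to a human provider when the auditor rejects.
    if auditor(draft):
        return draft
    return f"escalated to human provider: {question}"

def answer(question: str) -> str:
    return sentinel(question, core_agent(question))
```

The design choice worth noting: the auditor never generates an answer, it only approves or rejects, which keeps the checking layer simpler and cheaper than the layer it supervises.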

The whole thing runs on Amazon Bedrock so they can swap models depending on the task. As an intro offer, eligible Prime members get up to five free direct-message care visits with a One Medical provider for 30-plus common conditions. That's up to $145 in free health services. Amazon is clearly betting that once people try AI-assisted healthcare, they won't want to go back.

Langflow RCE Exploited in Under 20 Hours

Here's a fun one. CVE-2026-33017 is a critical unauthenticated remote code execution vulnerability in Langflow, the open-source visual framework for building AI agents. The vulnerability sits in the public flow build endpoint. Attackers can execute arbitrary Python code on any exposed instance with zero credentials required.

The scary part: exploitation attempts showed up within 20 hours of the advisory going public. There wasn't even a public proof-of-concept at the time. Attackers built working exploits directly from reading the advisory description and started scanning for vulnerable instances.

Exfiltrated data included keys and credentials that gave access to connected databases and opened up potential software supply chain compromise. This is the second critical RCE in Langflow after CVE-2025-3248 last year. If you're running Langflow in production, patch immediately. If you're exposing it to the internet without auth, well, you probably already have a problem.
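If you're not sure whether your own deployment is exposed, a quick self-check is to probe it from outside your network with no credentials attached. This is a generic sketch using only the standard library; the URL is a placeholder, and the right route to probe is whichever API endpoint your Langflow deployment actually serves:

```python
import urllib.error
import urllib.request

def responds_without_auth(url: str, timeout: float = 5.0) -> bool:
    """True if the endpoint answers 2xx to a request with no credentials."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

# Example (placeholder URL): a True result on an admin or build endpoint
# means anyone on the internet can reach it unauthenticated.
# responds_without_auth("https://langflow.example.internal/api/v1/...")
```

A 401 or 403 here isn't a pass on its own, but a 200 on a code-execution endpoint with no auth is an unambiguous fail.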
