Cursor launched Composer 2 on March 19 and called it "state-of-the-art programming intelligence." A few days later, a developer found the model identifier kimi-k2p5-rl-0317 buried in an API response. Composer 2 turned out to be Kimi K2.5 with additional fine-tuning, and Cursor had never said so.
Cursor's Composer 2 Was Built on a Chinese Open-Source Model
Moonshot AI, backed by Alibaba, released Kimi K2.5 as an open-weight model earlier this year. Cursor used it as the base for Composer 2, added reinforcement learning on top, and shipped it without attribution.
Cursor's VP of dev education said "only ~1/4 of the compute came from the base." The co-founder called the omission a "miss." That's a polite way of saying they got caught by a developer reading API logs.
This matters beyond the drama. If a $50B company is shipping a top-tier coding assistant on a Chinese open-source foundation without disclosing it, the model provenance question just got serious for enterprise buyers.
Gemini Now Wants Your ChatGPT and Claude Memories
Google shipped an import tool on March 26 that lets you pull chat history and memories from ChatGPT or Claude straight into Gemini. Upload a ZIP or paste a summarization prompt, and Gemini absorbs your context.
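To make the ZIP path concrete, here's a minimal sketch of what an importer has to read. A ChatGPT data export is a ZIP containing a conversations.json file with one object per conversation; the field names below match that export format as of this writing and should be treated as an assumption if the schema changes.

```python
import json
import zipfile

def list_conversation_titles(export_zip: str) -> list[str]:
    """Pull conversation titles out of a ChatGPT data-export ZIP."""
    with zipfile.ZipFile(export_zip) as zf:
        # The export bundles all chats into a single JSON array.
        with zf.open("conversations.json") as f:
            conversations = json.load(f)
    return [c.get("title", "(untitled)") for c in conversations]
```

An importer like Gemini's would walk deeper into each conversation's message tree, but even titles alone show how little friction is left in moving your history.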
The biggest switching cost between AI assistants isn't features. It's losing months of accumulated personalization. Google just removed that friction with one tool.
Anthropic built something similar for Claude about three weeks earlier. The memory portability race is on. For users, that's good. For anyone betting on lock-in as a moat, it's a problem.
Gemini 3.1 Flash-Lite Is $0.25 per Million Tokens
Google also released Gemini 3.1 Flash-Lite recently, with 2.5x faster response times and a price of $0.25 per million input tokens. For apps making high volumes of AI calls, that materially changes the infrastructure math.
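The math is worth doing explicitly. A quick back-of-the-envelope, using the published $0.25-per-million input rate; the traffic numbers are illustrative, and output-token costs are ignored here.

```python
FLASH_LITE_INPUT_PER_M = 0.25  # USD per 1M input tokens (published rate)

def monthly_input_cost(requests_per_day: int, tokens_per_request: int,
                       price_per_m: float = FLASH_LITE_INPUT_PER_M) -> float:
    """Input-token spend over a 30-day month."""
    tokens = requests_per_day * tokens_per_request * 30
    return tokens / 1_000_000 * price_per_m

# 100k classification calls a day at ~500 input tokens each:
print(f"${monthly_input_cost(100_000, 500):.2f}/month")  # $375.00/month
```

At that price, a workload that would have been a real budget line a year ago rounds to noise.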
Sub-dollar-per-million-token pricing is becoming the default for commodity tasks. We're already routing lighter classification jobs through Flash-class models at Random Llama. Both the latency and the cost make production sense now.
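The routing itself is simple. A minimal sketch of the pattern we mean: short, well-bounded tasks go to the cheap tier, everything else to a heavier model. The model ids, task names, and token threshold are placeholders, not a real SDK or our production config.

```python
# Task types safe to send to a Flash-class model (placeholder set).
LIGHT_TASKS = {"classify", "extract", "tag"}

def pick_model(task: str, prompt_tokens: int) -> str:
    """Route cheap, latency-sensitive work to the light tier."""
    if task in LIGHT_TASKS and prompt_tokens < 4_000:
        return "gemini-3.1-flash-lite"  # placeholder model id
    return "frontier-model"             # placeholder for a heavier model

print(pick_model("classify", 800))   # gemini-3.1-flash-lite
print(pick_model("summarize", 800))  # frontier-model
```

The design choice is to route on task type plus prompt size rather than trying to score difficulty, since a misroute on a light task costs pennies while the rule stays auditable.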
The Cursor story and the Gemini memory importer are part of the same shift. The model layer is commoditizing fast, open-source foundations are everywhere, and the battle is moving to trust and data portability. Build on the product layer. Model dependencies are not a moat.