March 12, 2026 is marked by three major product announcements: Claude reaches a milestone by generating interactive visuals directly in conversation, OpenAI opens a programmatic Video API powered by Sora 2, and Google Maps integrates Gemini for its deepest redesign in over a decade. Meanwhile, Claude Code receives two updates (v2.1.73 and v2.1.74), Perplexity expands Computer to Pro subscribers, and ElevenLabs launches Flows and Music Finetunes in its creative platform.
Claude creates interactive visuals in conversation
March 12 — Claude can now create interactive charts, diagrams and visualizations directly in conversation, without writing code. The feature is available in beta on all plans, including the free tier.
Originating from the “Imagine with Claude” preview announced last fall, this feature changes how users interact with the assistant: visuals appear inline in responses rather than in a separate side panel. They are temporary — they evolve or disappear as the conversation continues — unlike Artifacts, which are permanent documents intended to be shared or downloaded.
Concrete usage examples: asking how compound interest works generates an interactive curve you can manipulate, and requesting the periodic table produces a clickable visualization with details for each element. You can trigger a visual with phrases like “draw this as a diagram” or “visualize how this might change over time”. Claude decides when to create a visual, or the user can explicitly request one.
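The compound-interest example maps to the textbook formula A = P(1 + r/n)^(nt). As a minimal sketch of the data points such an interactive curve would plot (the formula is standard math, not part of the announcement):

```python
# Compound interest: A = P * (1 + r/n) ** (n * t)
def compound(principal: float, rate: float, years: int, periods_per_year: int = 12) -> float:
    """Balance after `years` at annual `rate`, compounded `periods_per_year` times a year."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

# The points an interactive curve would plot: balance at the end of each year.
curve = [round(compound(1_000, 0.05, t), 2) for t in range(11)]
```

Dragging a rate slider in the visual amounts to recomputing this list with a different `rate`.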
This feature is part of a series of recent Claude response improvements: dedicated formats for recipes, visual weather, and direct integrations with Figma, Canva and Slack.
“Claude can now build interactive charts and diagrams, directly in the chat. Available today in beta on all plans, including free.” — @claudeai on X
🔗 Claude now creates interactive charts, diagrams and visualizations
Claude Code v2.1.74: context handling and cross-platform fixes
March 12 — Claude Code version 2.1.74 brings improvements to context handling and fixes a series of bugs on Windows and macOS.
New features:
| Feature | Description |
|---|---|
| `/context` improved | Actionable suggestions: identifies heavy tools in context, memory bloat, and capacity warnings, with optimization advice |
| `autoMemoryDirectory` | New setting to configure a custom directory for auto-memory storage |
| `CLAUDE_CODE_SESSIONEND_HOOKS_TIMEOUT_MS` | New setting to configure the SessionEnd hook timeout (previously fixed at 1.5 s) |
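As a sketch of how these might be set in a settings file (the file location and the exact nesting, in particular whether the timeout is an environment variable under an `env` block, are assumptions, not confirmed by the changelog):

```json
{
  "autoMemoryDirectory": "~/.claude/auto-memory",
  "env": {
    "CLAUDE_CODE_SESSIONEND_HOOKS_TIMEOUT_MS": "5000"
  }
}
```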
Notable fixes:

- A memory leak in API response buffers in streaming mode that caused unbounded RSS growth on Node.js.
- Managed `ask` policies could not be overridden by user `allow` rules.
- Full model IDs (e.g. `claude-opus-4-5`) in agents’ `model:` frontmatter were silently ignored; they are now correctly accepted.
- MCP OAuth: fixed blocking when the callback port was already in use, and missing re-authentication after refresh-token expiry for connectors like Slack.
- macOS: the native binary now includes the `audio-input` entitlement, so macOS correctly displays the microphone permission prompt in voice mode.
Claude Code v2.1.73: stability, Bedrock ARNs and OAuth SSL
March 11 — Version 2.1.73 fixes several important stability issues, including CPU hangs and deadlocks related to skills.
New features:
| Feature | Description |
|---|---|
| `modelOverrides` | New setting to map model selector entries to provider-specific model IDs (e.g. Bedrock inference profile ARNs) |
| OAuth SSL guidance | Actionable guidance when OAuth connection or connectivity checks fail due to SSL certificate errors (corporate proxies, `NODE_EXTRA_CA_CERTS`) |
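A sketch of what a `modelOverrides` mapping could look like (the key name comes from the changelog; the selector entry and the ARN are illustrative placeholders, not real values):

```json
{
  "modelOverrides": {
    "opus": "arn:aws:bedrock:us-east-1:111111111111:application-inference-profile/example"
  }
}
```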
Major fixes:

- Hangs and 100% CPU loops triggered by permission prompts on complex bash commands.
- A hang that could freeze Claude Code when many skill files changed simultaneously (e.g. a git pull in a repo with a large .claude/skills/ folder).
- Sub-agents with `model: opus` / `sonnet` / `haiku` were silently downgraded to older model versions on Bedrock, Vertex and Microsoft Foundry.
Ramp AI Index: Anthropic becomes default enterprise choice
March 11 — According to the latest Ramp AI Index report, Anthropic has become the preferred AI provider for companies at their first purchase. The chart shared by Ara Kharazian (lead economist at the Ramp Economics Lab) shows Anthropic’s share of new enterprise customers reaching ~70% in early 2026, versus ~25% for OpenAI — a notable reversal from 2025.
The data comes from more than 50,000 businesses using the Ramp platform (corporate credit card and payments), making it a reliable barometer of real enterprise AI spend. Anthropic’s growth is notably driven by Claude adoption in professional environments (API, Claude for Work, enterprise integrations).
OpenAI Video API: Sora 2 available to developers
March 12 — OpenAI launches the Video API for developers, a programmatic interface to create, extend, modify and manage videos. This capability is powered by Sora 2, OpenAI’s second-generation video generation model.
The Video API exposes two variants: sora-2, designed for speed and exploration (quick iterations, social content, prototypes), and sora-2-pro, aimed at production quality (cinematic outputs, marketing assets, resolutions up to 1920×1080). Both variants support generation durations of 16 to 20 seconds, with possible extension up to 120 seconds total.
Key features available via the `POST /videos` endpoint include: generation from a text prompt, guidance by a reference image (conditioning the first frame), reusable consistency for non-human characters across generations (`POST /v1/videos/characters`), and targeted editing via `POST /v1/videos/edits`. Processing is asynchronous, with webhook support for render-completion notifications. Batch processing via the Batch API is also available for offline render queues.
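A sketch of how a client might drive this asynchronous flow. The payload field names and the polling shape are assumptions (in production the official SDK and the webhook notification would replace the stubbed status source used here for an offline demo):

```python
import time
from typing import Callable

def build_video_request(prompt: str, model: str = "sora-2", seconds: int = 16) -> dict:
    """Payload for POST /videos as described above; field names are assumptions."""
    return {"model": model, "prompt": prompt, "seconds": seconds}

def wait_for_render(fetch_status: Callable[[], str], poll_s: float = 0.0) -> str:
    """Poll until the asynchronous render leaves the 'in_progress' state.
    With webhooks enabled you would wait for the completion callback instead."""
    while (status := fetch_status()) == "in_progress":
        time.sleep(poll_s)
    return status

# Offline demo: a stubbed status source stands in for polling the video resource.
states = iter(["in_progress", "in_progress", "completed"])
payload = build_video_request("A paper boat drifting down a rainy street", "sora-2-pro")
final = wait_for_render(lambda: next(states))
```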
Content restrictions apply: no depiction of real people, no characters protected by copyright, no adult content (this restriction may evolve later).
Google Maps: Ask Maps and Immersive Navigation
March 12 — Google Maps receives its largest navigation update in more than a decade, powered by Gemini. Two new experiences are announced simultaneously.
Ask Maps is a new conversational experience that lets you ask complex questions about real places. You can ask, for example, “My phone is dying — where can I charge it without queuing for a coffee?” or “Is there a tennis court with lighting available tonight?” The feature relies on data from over 300 million places and reviews from more than 500 million contributors. Responses are personalized based on saved or previously searched places. Ask Maps is rolling out in the United States and India on Android and iOS, with desktop coming later.
Immersive Navigation transforms driving with a 3D view that reflects surrounding buildings, bridges and terrain. Gemini analyzes Street View images and aerial photos to display critical details: lanes, crosswalks, traffic lights, stop signs. The feature also offers natural voice guidance (e.g., “Take this exit and then the next one for Illinois 43 South”), information about trade-offs between alternative routes (tolls vs traffic), and real-time disruption alerts. Immersive Navigation is rolling out today in the United States on eligible iOS and Android devices, CarPlay, Android Auto and cars with Google built-in.
🔗 Ask Maps and Immersive Navigation: New AI features in Google Maps
GitHub Copilot: GA auto model selection in JetBrains
March 12 — GitHub announced the general availability of auto model selection for GitHub Copilot across all JetBrains development environments (IntelliJ IDEA, PyCharm, WebStorm, etc.), for all Copilot subscriptions.
The “Auto” mode dynamically selects the most suitable model for the task, taking throughput limits into account. Developers retain full visibility: hovering over a response shows which model was used, and they can switch to a specific model at any time. Billing follows the model actually selected, with multipliers currently between 0x and 1x.
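To make the multiplier range concrete, a back-of-envelope sketch (the request counts and multipliers are invented for illustration; only the 0x–1x range comes from the announcement):

```python
# multiplier -> number of requests auto mode routed to a model with that multiplier
usage = {0.0: 120, 0.33: 150, 1.0: 30}  # illustrative numbers, not real data

# Billed premium requests: each request costs its model's multiplier.
billed = sum(multiplier * count for multiplier, count in usage.items())
```

Requests served by a 0x model cost nothing; the rest bill fractionally or one-for-one.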
GitHub says auto selection will gradually get smarter, choosing models based on the precise task context (code generation, review, chat, etc.). This feature was already available in preview in JetBrains and GA in VS Code.
🔗 Copilot auto model selection GA in JetBrains IDEs
GitHub Copilot CLI: session history in SQLite
March 11 — GitHub Copilot CLI now includes a local SQLite database to remember the history of your terminal sessions. Concretely, if you solved a problem a few days ago on the command line, Copilot CLI can remind you of the solution — without digging through shell history or your notes. This feature is part of phase 2 of GitHub Copilot CLI’s general availability, accessible via gh copilot.
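The idea can be illustrated with Python's built-in sqlite3 module; the schema below is a hypothetical stand-in, not Copilot CLI's actual one:

```python
import sqlite3

# Hypothetical schema -- Copilot CLI's real tables are not documented in this post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (started_at TEXT, command TEXT, resolution TEXT)")
conn.execute(
    "INSERT INTO sessions VALUES (?, ?, ?)",
    ("2026-03-09", "git rebase --onto main feature", "dropped the duplicate commit, then continued"),
)

# "How did I fix that rebase last week?" becomes a lookup instead of a memory test.
row = conn.execute(
    "SELECT resolution FROM sessions WHERE command LIKE '%rebase%'"
).fetchone()
```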
Perplexity Computer opens to Pro subscribers
March 12 — Perplexity Computer, the agent capable of executing complex multi-step workflows across the web, files and connected tools, is now available to Pro subscribers.
Previously restricted to Max subscribers and Enterprise customers, Perplexity Computer provides access to over 20 advanced models, prebuilt and customizable skills, and hundreds of connectors. Max subscribers retain advantages in monthly credits and higher spend limits.
Perplexity Computer for Enterprise: $1.6M saved in four weeks
March 12 — Perplexity published a dedicated post detailing the deployment of Perplexity Computer for Enterprise, available now to Enterprise customers.
Computer for Enterprise integrates with tools companies already use: Salesforce, Microsoft Teams, HubSpot, MySQL, GitHub, and over 400 others via connectors. It routes each task to the most appropriate model among about twenty, and lets teams define skills tailored to their internal processes.
| Team | Use case |
|---|---|
| Finance | Due diligence tracking for M&A, document analysis and risk reporting |
| Legal | Supplier agreement review, version comparison, contract redlining |
| Marketing | Campaign creation (creative, social posts, landing pages) + performance dashboard |
Perplexity shared figures from an internal study covering more than 16,000 requests: Computer saved $1.6 million in labor costs and completed the equivalent of 3.25 years of work in four weeks. The solution is SOC 2 Type II certified, with SAML SSO and isolated execution for each task.
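Taking the published figures at face value, the back-of-envelope math works out as follows:

```python
savings_usd = 1_600_000   # reported labor-cost savings
requests = 16_000         # requests covered by the internal study
work_years = 3.25         # equivalent work completed
elapsed_weeks = 4         # calendar duration of the study

avg_saving_per_request = savings_usd / requests       # average labor saved per request
compression = (work_years * 52) / elapsed_weeks       # work-weeks delivered per calendar week
```

That is $100 of labor saved per request and roughly 42 work-weeks compressed into each calendar week, by Perplexity's own accounting.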
🔗 Perplexity Computer for Enterprise
ElevenLabs Flows: a canvas to unify image, video, audio
March 11 — ElevenLabs introduced Flows, a node-based editor integrated into ElevenCreative. On a single canvas, creators can chain and combine image generation, video, Text to Speech, lip-sync, music and sound effects.
This approach is reminiscent of ComfyUI workflows, but applied to ElevenLabs’ multimodal ecosystem, with the studio’s audio and video models accessible in one place.
🔗 Introducing Flows in ElevenCreative
ElevenLabs Music Finetunes: stylistic consistency for music generation
March 12 — ElevenLabs launched Music Finetunes in ElevenCreative. This feature lets creators generate individual voices, instruments or full tracks while maintaining stylistic consistency, thanks to a fine-tuned version of ElevenLabs’ music model.
🔗 Introducing Music Finetunes in ElevenCreative
BFL FLUX.2 [klein] 9B: image editing 2× faster
March 12 — Black Forest Labs (BFL) announced a significant update to its FLUX.2 [klein] 9B model: image editing is now 2× faster, especially when multiple reference images are used.
| Detail | Value |
|---|---|
| Model | FLUX.2 [klein] 9B |
| Improvement | 2× faster editing |
| Most improved use case | Editing with multiple reference images |
| Price | Unchanged |
| Weights | Hugging Face (`black-forest-labs/FLUX.2-klein-9b-kv`) |
The upgrade is automatic and free for existing FLUX.2 [klein] 9B users via the API. Users of the [klein] 4B model can access the upgraded 9B version via a new preview endpoint.
Mistral AI Now Summit: Paris, May 28, 2026
March 12 — Mistral AI announces its first flagship event: the “AI Now Summit”, a day dedicated to enterprise AI transformation, scheduled for May 28, 2026 in Paris.
| Theme | Description |
|---|---|
| Enterprise open source | Open source as a foundation for end-to-end AI transformations |
| Production deployment | Moving from pilots to large-scale deployments |
| AI infrastructure | Building enterprise-grade infrastructure |
| Innovations 2026 | Robotics, vision-language models (VLMs), multimodal AI |
The event will bring together leaders from around the world. Registration is not yet open; a waitlist is available at ainowsummit.com.
What this means
March 12 illustrates two converging trends. On one hand, generalist AI assistants — Claude, Perplexity, Google Maps — are gaining capabilities that reduce friction between a question and an actionable answer: no need to write code to see a chart, no need to rephrase to find a restaurant. On the other hand, developers are getting new programmatic building blocks: OpenAI’s Video API opens up video generation to automated workflows, and Claude Code continues to be refined for enterprise environments (Bedrock, SSL proxies, multi-platform Windows/macOS).
The Ramp AI Index report confirms that this movement is reflected in real purchases: with ~70% market share among new companies, Anthropic is no longer just an alternative to OpenAI — it has become the default entry point. Competition is now about the quality of integrations and reliability in production, not just model power.
Sources
- Claude now creates interactive charts, diagrams and visualizations
- @claudeai on X
- Claude Code CHANGELOG
- Ramp AI Index — @arakharazian on X
- OpenAI Video API documentation
- @OpenAIDevs on X
- Ask Maps and Immersive Navigation — Google Maps Blog
- Copilot auto model selection GA in JetBrains IDEs
- @github on X — Copilot CLI SQLite
- @perplexity_ai on X — Computer for Pro
- Perplexity Computer for Enterprise
- Introducing Flows in ElevenCreative
- Introducing Music Finetunes in ElevenCreative
- @bfl_ml on X — FLUX.2 klein 9B
- AI Now Summit Mistral — @MistralAI on X
This document was translated from the French version into English using the gpt-5-mini model. For more information on the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator