Three announcements dominate the end of the month: OpenAI closes the largest private tech funding round in history at an $852 billion valuation, Qwen reaches a milestone with a native omnimodal model capable of seeing, hearing and coding simultaneously, and the head of Claude Code posts a viral thread revealing 15 little-known features of the tool. The week also saw the launch of Perplexity’s Secure Intelligence Institute, new GitHub Copilot tools, and infrastructure initiatives from Runway and NVIDIA.
OpenAI raises $122 billion
March 31 — OpenAI announced the closing of its latest financing round: $122 billion raised at a valuation of $852 billion. It is one of the largest private financing rounds in tech history.
The round is co-led by SoftBank and a16z, with strategic participation from Amazon, NVIDIA and Microsoft. For the first time, OpenAI extended participation to individual investors through banks, raising over $3 billion from individuals. ARK Invest will also include OpenAI in several of its exchange-traded funds (ETFs).
Supporting the raise, OpenAI published growth metrics:
| Indicator | Value |
|---|---|
| Weekly active ChatGPT users | 900 million |
| ChatGPT paid subscribers | 50 million |
| Monthly revenue | $2 billion |
| Tokens processed by the API (per minute) | 15 billion |
| Weekly Codex users | 2 million (+5× in 3 months) |
| Month-over-month Codex growth | +70% |
The company outlines a roadmap centered on an “AI superapp”: a unified interface combining ChatGPT, Codex, web search and AI agents. The stated goal is to exceed one billion weekly active users. Enterprises already account for 40% of revenue.
GPT-5.4, OpenAI’s latest model, is described as delivering gains in reasoning, coding and agentic workflows. OpenAI’s growth is presented as four times faster than that of Google and Meta at comparable stages.
🔗 Official OpenAI announcement
Qwen3.5-Omni: native omnimodal model
March 29 — Alibaba Qwen launched Qwen3.5-Omni, a model natively designed to process text, images, audio and video within a single unified architecture. Unlike classic multimodal approaches that bolt modalities on in layers, it handles these inputs simultaneously.
Raw capabilities are significant: up to 10 hours of audio or 400 seconds of 720p video processed natively, training on over 100 million hours of data, speech recognition in 113 languages and speech generation in 36.
Flagship feature: Audio-Visual Vibe Coding
The most directly usable feature is “Audio-Visual Vibe Coding”: the user describes their project aloud in front of a camera, and Qwen3.5-Omni-Plus generates a functional website or game. It’s an application of the vibe coding concept extended to audio and video in real time.
Comparative performance
| Category | Qwen3.5-Omni-Plus | Gemini 3.1 Pro |
|---|---|---|
| DailyOmni (audio/vision) | 84.6 | 82.7 |
| WorldScene | 62.8 | 65.5 |
| QualocommInteractive | 68.5 | 52.3 |
| OmniClear | 64.8 | 55.5 |
| IFEval (text) | 89.7 | 93.5 |
| MMLU-Redux | 94.2 | 90.0 |
Per these numbers, the model leads Gemini 3.1 Pro on most audio-visual benchmarks (DailyOmni, QualocommInteractive, OmniClear) and on MMLU-Redux, while trailing on WorldScene and on text instruction following (IFEval).
Voice capabilities
- Fine-grained voice control: adjust emotion, pace and volume in real time
- Voice Cloning from a short sample (deployment announced as coming soon)
- Semantic Interruption that understands actual intent and ignores ambient noise
- Integrated web search and complex function calls
Model family
| Variant | Positioning |
|---|---|
| Qwen3.5-Omni-Plus | SOTA performance, detailed audio-visual captioning |
| Qwen3.5-Omni-Plus-Realtime | Voice Control, WebSearch, Voice Clone, Semantic Interruption |
| Qwen3.5-Omni-Flash | Speed |
| Qwen3.5-Omni-Light | Lightweight |
Access via chat.qwen.ai (VoiceChat/VideoChat button) and the Alibaba Cloud API.
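As an illustration of the API access, here is a hedged Python sketch assuming the model is exposed through Alibaba Cloud’s OpenAI-compatible DashScope endpoint, as earlier Qwen-Omni releases were. The model ID `qwen3.5-omni-flash` is an assumption based on the article’s naming, not a confirmed identifier:

```python
# Hedged sketch: calling a Qwen omni model via DashScope's OpenAI-compatible
# endpoint. The base URL follows DashScope's documented pattern; the model ID
# is assumed from the article and should be checked against the catalog.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DASHSCOPE_API_KEY",
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

stream = client.chat.completions.create(
    model="qwen3.5-omni-flash",  # assumed ID for the speed-oriented variant
    messages=[{
        "role": "user",
        "content": [
            # Omni models accept mixed inputs; here, audio plus a text prompt.
            {"type": "input_audio",
             "input_audio": {"data": "<base64-encoded wav>", "format": "wav"}},
            {"type": "text", "text": "Transcribe this clip, then summarize it."},
        ],
    }],
    modalities=["text"],  # request text output only
    stream=True,          # omni endpoints typically require streaming output
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```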
Note: Qwen 3.6 Plus Preview is available for free on OpenRouter for a limited time — conversations are collected during this period to improve the model.
15 hidden features of Claude Code
March 30 — Boris Cherny, head of Claude Code at Anthropic, posted a thread revealing 15 little-documented features of the tool. The thread reached 3.6 million views, 2,000 reposts and 22,000 likes.
“I wanted to share a bunch of my favorite hidden and under-utilized features in Claude Code. I’ll focus on the ones I use the most. Here goes.” — @bcherny on X
Mobility and remote sessions
- The Claude app on iOS and Android includes a Code tab allowing coding from your phone
- `--teleport` (or `/teleport`) lets you switch a cloud session to a local machine; `/remote-control` lets you control a local session from any device
- Cowork Dispatch: secure remote control of the Claude Desktop App from mobile, with access to MCP (Model Context Protocol) servers, the browser, etc.
Automation
- `/loop` and `/schedule` allow launching Claude automatically at set intervals, up to a week — Cherny uses `/loop 5m /babysit` for continuous automated code review and rebase
- Hooks (`SessionStart`, `PreToolUse`, etc.) allow injecting deterministic logic into the agent cycle, for example to route permission requests to WhatsApp
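To make the hooks mechanism concrete, here is a minimal Python sketch of a `PreToolUse` hook, assuming the documented hook contract (Claude Code pipes a JSON description of the pending tool call to the hook’s stdin, and an exit code of 2 blocks it). The destructive-command policy is a hypothetical illustration, and the WhatsApp routing from the thread is only hinted at in a comment:

```python
#!/usr/bin/env python3
# Minimal sketch of a Claude Code PreToolUse hook (registered under "hooks"
# in .claude/settings.json). Claude Code pipes a JSON payload describing the
# pending tool call to stdin; exiting with code 2 blocks the call and feeds
# stderr back to the agent. The policy below is a hypothetical illustration.
import json
import sys

payload = json.load(sys.stdin)
tool = payload.get("tool_name", "")
command = payload.get("tool_input", {}).get("command", "")

if tool == "Bash" and "rm -rf" in command:
    # A real setup could instead forward the request to WhatsApp and wait
    # for a human decision, as described in the thread.
    print("Blocked potentially destructive command", file=sys.stderr)
    sys.exit(2)  # exit code 2 = deny this tool call

sys.exit(0)  # exit code 0 = let the normal permission flow proceed
```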
Parallelization
- `/batch` distributes work to dozens, hundreds or even thousands of agents in parallel — useful for large-scale code migrations
- `claude -w` starts parallel sessions in separate git worktrees
Daily productivity
- `/btw` lets you ask a quick question while an agent is working, without interrupting the current task
- `/branch` allows forking a session; or via CLI: `claude --resume <session-id> --fork-session`
- `--agent` lets you define custom agents in `.claude/agents/` with a system prompt and configurable tools
- `--add-dir` / `/add-dir` gives Claude access to multiple folders or repos simultaneously
- `--bare` speeds up SDK startup by up to 10× (avoids loading CLAUDE.md, settings and MCP servers)
- `/voice` enables voice input (spacebar in CLI, dedicated button on Desktop, iOS dictation)
- Chrome extension (beta): Claude Code + Chrome to test web apps, debug console logs and automate the browser
Claude Code: auto mode extended to Enterprise and API
March 30 — Claude Code’s auto mode, launched March 24 for Pro and Max users, is now available on the Enterprise plan and for developers accessing the API. This feature allows Claude to make its own approval decisions for actions (writing files, running bash commands) instead of prompting the user at every step.
To enable it in an Enterprise or API environment:
```bash
claude --enable-auto-mode
```
Auto mode relies on internal classifiers that assess the risk of each action before executing it, striking a balance between the fully permissive mode (`--dangerously-skip-permissions`) and manual approvals.
March 30 — Cowork Dispatch can now start coding tasks with a specific model, named directly in natural language in the instruction.
Perplexity launches the Secure Intelligence Institute
March 31 — Perplexity launched the Secure Intelligence Institute (SII), a research lab dedicated to the security, privacy and safety of advanced AI systems. The Institute is led by Dr. Ninghui Li — Samuel D. Conte Professor at Purdue University, ACM and IEEE Fellow, former chair of ACM SIGSAC — with academic partnerships including Dan Boneh’s applied cryptography group and Neil Gong’s Gong Lab.
The SII published three initial works:
| Publication | Type | Description |
|---|---|---|
| BrowseSafe | Open-source benchmark | 14,700+ real attack scenarios, 14 risk categories for AI browsing |
| Securing Agents NIST/CAISI | Policy | Response to the RFI (Request for Information) on securing autonomous agents |
| Building Security Into Comet | Architecture | Defense-in-depth for the Comet AI browser |
The SII translates its research into concrete improvements for Perplexity systems and shares its work with the AI ecosystem.
🔗 Secure Intelligence Institute
Cohere and Ensemble: LLM specialized in healthcare revenue cycle management
March 31 — Cohere and Ensemble announced the construction of the first industry-native large language model (LLM) specialized in Revenue Cycle Management (RCM) for U.S. healthcare.
Ensemble offers an end-to-end solution for hospitals and medical groups, from appointment scheduling to final billing. Unlike competitors that wrap general LLMs in specialized prompts, this model is fully customized on Cohere’s Command family.
| Domain | Capability |
|---|---|
| Financial | Denial prediction before submission, continuous billing quality control |
| Clinical | Point-of-care documentation guidance, assembly of appeal dossiers |
| Agentic | Multi-step orchestration of the revenue cycle |
The model was trained on Cohere’s pretraining data, Ensemble’s operational logs, public RCM knowledge sources and expert annotations. A domain-specific benchmark co-developed with partners will measure performance against general LLMs on real RCM tasks.
GitHub Copilot: agent-first development and Slack integration
March 31 — Tyler McGoffin, senior researcher on GitHub’s Copilot Applied Science team, shared a write-up on building an internal tool with Copilot as the primary coding agent. The tool automates analysis of agent trajectories on benchmarks like TerminalBench2 and SWEBench-Pro.
Practices described: using `/plan` before coding, creating “contract tests” that only a human can modify, detailed prompts instead of terse ones, and weekly automated maintenance via `/plan Review the code for any missing tests...`. The conclusion: the qualities that make a good engineer (planning, context, communication) are the same ones that make for effective collaboration with an AI agent.
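To illustrate the “contract test” idea, here is a minimal Python sketch — the parsing function and JSON shape are hypothetical stand-ins, not GitHub’s actual tool:

```python
# Hypothetical sketch of a "contract test": a pytest file that only humans
# edit, pinning the behavior the agent must preserve while it refactors.
import json

def parse_trajectory(raw: str) -> list:
    """Stand-in implementation; the agent may rewrite this freely."""
    return json.loads(raw)["steps"]

def test_trajectory_parsing_contract():
    # The contract: step order and fields must survive any refactor.
    raw = '{"steps": [{"action": "run", "cmd": "ls"}]}'
    assert parse_trajectory(raw) == [{"action": "run", "cmd": "ls"}]
```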
March 30 — The GitHub app for Slack now integrates Copilot to create GitHub issues directly from Slack using natural language. Just mention @GitHub in any channel and describe the work.
| Feature | Detail |
|---|---|
| Natural language creation | Description → structured issues (title, body, assignees, labels, milestones) |
| Sub-issues | Break work into parent/child issues from a single message |
| Conversation mode | Iterate on issues before creating them |
March 31 — GitHub presented the Copilot SDK enabling agentic workflows in third-party applications according to three architectural patterns.
🔗 GitHub Blog - Agent-driven development 🔗 GitHub Changelog - Create issues from Slack
Runway: investment fund and startup program
March 31 — Runway launched two simultaneous initiatives.
The Runway Fund is an investment fund for early-stage startups in AI, media and world simulation. Initial commitment: up to $500,000 per pre-seed/seed investment. Focus areas: AI research (world models and generative AI), new applications (the application layer on top of LLMs), and new media and content. Investments have already been made in Cartesia, LanceDB and Tamarind Bio.
Runway Builders is an accelerator program for startups from seed to Series C building products with generative video and real-time conversational AI. Participants receive API credits, the highest rate limits and access to a private community.
🔗 Runway Fund 🔗 Runway Builders
NVIDIA and Emerald AI: flexible AI factories on the power grid
March 31 — NVIDIA and Emerald AI presented at CERAWeek a new approach for AI factories: treating them as flexible assets on the power grid rather than static loads. The architecture is built on NVIDIA Vera Rubin DSX and Emerald AI’s Conductor platform.
Announced energy partners: AES, Constellation, Invenergy, NextEra Energy, Nscale Energy and Vistra. Related announcements:
- Maximo: 100 MW robotic solar AI installation operational at Bellefield with NVIDIA Isaac Sim
- TerraPower + SoftServe: NVIDIA Omniverse digital twin to reduce Natrium nuclear plant design lead times
- Adaptive Construction Solutions: national training program for AI factory construction
- GE Vernova, Schneider Electric, Vertiv: validated reference designs for Vera Rubin
Jensen Huang described energy as the foundational layer of a “five-layer AI cake.”
In brief
Gemini Live on Gemini 3.1 Flash Live — March 30 — Google confirmed the deployment of the Gemini 3.1 Flash Live model in the Gemini Live app, available to all users. This transition (announced March 26) brings more natural audio conversations and improved accuracy in noisy environments. 🔗 Tweet @GeminiApp
Manus: phone control for Desktop — March 30 — Manus adds the ability to control the Desktop application from your smartphone: start tasks, access files, and launch workflows without touching the computer. 🔗 Tweet @ManusAI
Midjourney V8 teaser — March 29 — David Holz (founder of Midjourney) teased V8 as “radically different” and “arriving very soon”. No date announced. 🔗 Tweet @DavidSHolz
Claude Code v2.1.87 — Fixed a bug in Cowork Dispatch where messages were not being delivered. 🔗 CHANGELOG GitHub
What this means
OpenAI’s fundraising at an $852 billion valuation marks an inflection point: at these numbers, the gap between leading players and the rest of the industry widens structurally. With 900 million weekly users and a target of one billion, ChatGPT is establishing itself as mass infrastructure, not just a technology product.
The launch of Qwen3.5-Omni illustrates the growing competition around omnimodal models. Audio-Visual Vibe Coding represents a concrete evolution of intention-based coding (vibe coding) — moving from text to voice and video as the primary interface to generative AI.
On the developer tools side, Boris Cherny’s thread reveals that Claude Code has accumulated advanced features (massive parallelization with /batch, automation via hooks, distributed sessions) that remained little known due to lack of visible documentation. The extension of auto mode to Enterprise plans follows a classic trajectory: validation in preview, then gradual rollout.
Finally, Perplexity’s creation of the Secure Intelligence Institute and Cohere’s initiatives in healthcare signal a trend: second-tier players are looking to differentiate themselves in specialized verticals (AI security, regulated healthcare) rather than compete head-on on general-purpose models.
Sources
- OpenAI - Accelerating the Next Phase of AI
- Tweet OpenAI - Funding round
- Tweet Alibaba Qwen - Qwen3.5-Omni
- Thread Boris Cherny - 15 Claude Code features
- Tweet @claudeai - Auto mode Enterprise
- Tweet @noahzweben - Dispatch model
- CHANGELOG Claude Code GitHub
- Perplexity Secure Intelligence Institute
- Tweet Perplexity - SII
- Cohere blog - LLM RCM for healthcare
- GitHub Blog - Agent-driven development in Copilot applied science
- GitHub Changelog - Create issues from Slack with Copilot
- Runway Fund
- Runway Builders
- NVIDIA Blog - AI Factories
- Tweet @GeminiApp - Gemini Live 3.1 Flash
- Tweet @ManusAI - Phone control
- Tweet @DavidSHolz - Midjourney V8
- Tweet @OpenRouter - Qwen 3.6 Plus Preview
This document was translated from the French version into English using the gpt-5-mini model. For more information on the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator