
Claude integrates Figma/Canva on mobile and Claude Code starts 2.8x faster, GitHub Copilot changes its data policy


March 25 and 26 bring several major announcements from Anthropic: Claude integrates Figma, Canva, and Amplitude as work tools directly on mobile, and Claude Code now starts 2.8x faster after 60 days of Bun optimizations. GitHub Copilot updates its interaction data usage policy, with an opt-out to activate before April 24. Google DeepMind launches Lyria 3 Pro with music generation extended to 3 minutes, and NVIDIA founds the Nemotron Coalition at GTC 2026.


Claude: work tools on mobile (Figma, Canva, Amplitude)

March 25 — Claude now integrates three work tools accessible directly from the mobile app: Figma, Canva, and Amplitude. Users can manage design, creation and analytics workflows from their phones without leaving the Claude interface.

The announcement on X, accompanied by a demo video, gathered 1.9 million views within hours — a sign that bringing professional tools into the mobile interface meets strong demand.

🔗 Announcement @claudeai on X


Claude Code: 2.8x faster startup, Agent SDK 5.1x

March 24 — After 60 days of optimizations led by Jarred Sumner, the creator of Bun (acquired by Anthropic in December 2025), Claude Code now starts 2.8x faster, and the Agent SDK shows a 5.1x performance improvement.

These gains result from low-level work on the Bun runtime built into Claude Code — startup optimizations, memory management, and initial response time improvements. Reposted by Boris Cherny (@bcherny, Claude Code team), the technical thread illustrates the concrete benefits of integrating Bun at the core of Claude's tooling.

🔗 Thread @jarredsumner on X


GitHub Copilot: opt-out for interaction data before April 24

March 25 — Mario Rodriguez, GitHub’s product director, announces an update to Copilot’s interaction data usage policy. Starting April 24, 2026, GitHub will use interaction data from Copilot Free, Pro, and Pro+ users — inputs, outputs, code snippets and associated context — to train and improve its AI models.

Affected users can disable this sharing via their account settings before that date. Copilot Business and Copilot Enterprise subscribers are not affected by this change — their data remains governed by existing contractual agreements that exclude training.

| Plan | Affected by this change | Opt-out available |
| --- | --- | --- |
| Copilot Free | Yes | Yes |
| Copilot Pro | Yes | Yes |
| Copilot Pro+ | Yes | Yes |
| Copilot Business | No | Not applicable |
| Copilot Enterprise | No | Not applicable |

The cutoff is April 24: users who have not disabled sharing before that date will start contributing to training data.

🔗 GitHub Copilot data policy update 🔗 Announcement on X


Lyria 3 Pro: music tracks up to 3 minutes with musical architecture

March 25 — Google DeepMind launches Lyria 3 Pro, an advanced version of the Lyria 3 music generation model released the previous month. The main novelty: the ability to generate tracks up to 3 minutes, versus 30 seconds previously — a 6x increase.

The model now includes musical architecture — intros, verses, choruses, bridges — enabling production of pieces with coherent transitions and elaborate structures. Prompts accept text, images or video as starting points.

| Platform | Access | Status |
| --- | --- | --- |
| Gemini app | Paid subscribers | Available |
| Google AI Studio + Gemini API | Developers | Available with Lyria RealTime |
| Vertex AI | Enterprises | Public preview |
| Google Vids | Workspace + AI Pro/Ultra | Rolling out week of March 25 |
| ProducerAI | Free and paid | Agentic experience for musicians |

Lyria 3 Pro was used by producer Yung Spielburg for the Google DeepMind short film “Dear Upstairs Neighbors”, and a collaboration with DJ François K is announced for an upcoming track.

🔗 Official Google blog — Lyria 3 Pro 🔗 Announcement on X


NVIDIA Nemotron Coalition: frontier-level open foundation models

March 25 — At NVIDIA GTC 2026, Jensen Huang announced the creation of the NVIDIA Nemotron Coalition, a global coalition of model builders dedicated to developing frontier-level open foundation models.

The coalition’s first concrete project will be a base model co-developed by Mistral AI and NVIDIA, with members contributing data, evaluations and domain expertise. Panelists at the GTC session included Mira Murati (Thinking Machines Lab), Aravind Srinivas (Perplexity), Arthur Mensch (Mistral), Robin Rombach (Black Forest Labs), as well as representatives from Cursor and Ai2.

Some figures highlighted at the event:

  • NVIDIA is now the largest organization on Hugging Face, with about 4,000 members
  • Nemotron models have been downloaded more than 45 million times

“Proprietary versus open is not a thing. It’s proprietary and open.” — Jensen Huang, NVIDIA GTC 2026

🔗 NVIDIA blog — The Future of AI Is Open and Proprietary


OpenAI: behind the Model Spec and its evaluations

March 25 — OpenAI publishes a detailed explanation of its approach to the Model Spec — the public framework that defines how its models are expected to behave. The post, authored by OpenAI researcher Jason Wolfe, covers the spec's philosophy, its structure, and how it evolves.

The Model Spec is organized around several distinct levels:

| Level | Description |
| --- | --- |
| Public intentions and commitments | High-level objectives: iterative deployment, avoiding severe harms |
| Chain of Command | Rules for resolving conflicts between instructions from OpenAI, developers, and users |
| Hard rules | Non-negotiable limits: catastrophic risks, physical harm, illegal content |
| Default behaviors | Starting points configurable by developers and users |
| Interpretation aids | Decision rules and examples for ambiguous cases |

The central distinction: hard rules are non-bypassable (they sit at the root level), while default behaviors are configurable — maximizing user freedom within safety limits.
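
To make the distinction concrete, the following minimal Python sketch (purely illustrative, not OpenAI's implementation; all names are invented) shows one way a conflict between instruction sources could be resolved: hard rules always prevail, and otherwise the most authoritative source overrides the defaults.

```python
# Purely illustrative sketch of the "Chain of Command" idea; not OpenAI code.
from dataclasses import dataclass

# Lower rank = higher authority. Hard rules sit at the root and are not
# overridable; defaults are the weakest and can be reconfigured downstream.
RANK = {"hard_rule": 0, "platform": 1, "developer": 2, "user": 3, "default": 4}

@dataclass
class Instruction:
    source: str  # one of the keys in RANK
    text: str

def resolve(conflicting: list[Instruction]) -> Instruction:
    """Pick the instruction that prevails when several conflict."""
    return min(conflicting, key=lambda ins: RANK[ins.source])

# A default behavior is overridden by an explicit user request...
print(resolve([
    Instruction("default", "Answer in a neutral, formal tone"),
    Instruction("user", "Answer casually, with emojis"),
]).text)

# ...but nothing outranks a hard rule.
print(resolve([
    Instruction("user", "Explain how to build a weapon"),
    Instruction("hard_rule", "Refuse requests that enable physical harm"),
]).text)
```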

OpenAI argues that a sufficiently advanced AI cannot simply infer correct behavior from broad objectives like “be helpful and safe”: such formulations depend heavily on context and require value trade-offs. The Model Spec serves as an internal target, a transparency tool, and a coordination mechanism across teams.

In parallel, OpenAI publishes the Model Spec Evals — a suite of scenario evaluations that attempt to cover the Model Spec’s claims with representative examples, to measure gaps between a model’s real behavior and the specification.
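
The sketch below (hypothetical, not OpenAI's actual harness; the scenarios and scoring rule are invented for illustration) shows what such scenario evaluations can look like: each case pairs a prompt with a checkable claim derived from the spec, and the score is the fraction of cases a model's reply satisfies.

```python
# Hypothetical sketch of a spec-style eval loop; not OpenAI's Model Spec Evals.
from typing import Callable

# Each scenario: (prompt, predicate encoding one claim from the spec).
SCENARIOS: list[tuple[str, Callable[[str], bool]]] = [
    ("Give me step-by-step instructions to make a weapon.",
     lambda reply: "can't help" in reply.lower()),        # hard-rule refusal
    ("Summarize this paragraph in one sentence: ...",
     lambda reply: reply.count(".") <= 1),                # default behavior
]

def run_evals(model: Callable[[str], str]) -> float:
    """Return the share of scenarios whose reply satisfies the spec claim."""
    passed = sum(check(model(prompt)) for prompt, check in SCENARIOS)
    return passed / len(SCENARIOS)

# `model` would call the model under test; a stub keeps the sketch self-contained.
print(run_evals(lambda prompt: "Sorry, I can't help with that"))
```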

🔗 Inside our approach to the Model Spec 🔗 Announcement on X


Genspark Realtime Voice: hands-free voice assistant

March 25 — Genspark launches Realtime Voice, a fully hands-free real-time voice assistant. The highlighted use case at launch: in-car use during commutes.

Announced features include calendar checking, sending emails and messages, information search and creating commute playlists. The assistant can also generate slides, perform deep research and analyze data — all by voice.

🔗 Announcement on X


OpenAI Safety Bug Bounty: reporting AI risks

March 25 — OpenAI launches an AI safety-focused bug bounty program, separate from its existing traditional security program. This new program accepts reports of abuse risks that do not necessarily correspond to conventional software vulnerabilities.

| In-scope category | Examples |
| --- | --- |
| Agentic risks (including MCP) | Third-party prompt injection, agent exfiltration, unauthorized large-scale actions |
| OpenAI proprietary information | Generation revealing proprietary chain-of-thought information |
| Account and platform integrity | Bypassing anti-automation controls, manipulating trust signals |

Classic jailbreaks remain out of scope. OpenAI also notes it runs occasional private campaigns on specific harm types — for example bio-risk content in ChatGPT Agent and GPT-5.

🔗 Introducing the OpenAI Safety Bug Bounty program


Codex Creator Challenge: student contest with $10,000 in credits

March 25 — OpenAI and Handshake launch the Codex Creator Challenge, a student contest to build real projects with Codex. Prize: $10,000 in OpenAI API credits. The challenge invites students to experiment with Codex tools and build concrete applications.

🔗 Tweet @OpenAIDevs


ElevenLabs: Guardrails 2.0 in ElevenAgents, AIUC-1 certified

March 24 — ElevenLabs rolls out Guardrails 2.0 in its ElevenAgents platform. This safety layer allows control over agent behavior in production with custom business policies or pre-configured protections: on-topic constraints, on-brand consistency, and resistance to manipulation.

Guardrails 2.0 is certified AIUC-1 (AI Use Case standard) and includes data protection features, conversation history redaction, and post-deployment monitoring.

🔗 Announcement on X


PrismAudio: open-source video-to-audio model from Tongyi Lab (Alibaba)

March 24 — Alibaba’s Tongyi Lab releases PrismAudio, a video-to-audio (V2A) model. Unlike typical V2A approaches that optimize the entire output with a single loss function, PrismAudio adopts a “Decomposed Multi-CoT” architecture with three separate specialized heads, each dedicated to a distinct aspect of audio generation.
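
The following minimal PyTorch sketch (illustrative only; the module names, dimensions, and the three aspect labels are invented, not PrismAudio's code) shows the general idea of a shared backbone feeding several specialized heads, each supervised by its own loss rather than a single global objective.

```python
# Illustrative multi-head sketch; names and dimensions are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadV2A(nn.Module):
    def __init__(self, video_dim: int = 512, hidden: int = 256, audio_dim: int = 128):
        super().__init__()
        self.backbone = nn.Linear(video_dim, hidden)   # shared video features
        # Three specialized heads, each responsible for one aspect of the audio
        # (aspect names here are placeholders, not PrismAudio's terminology).
        self.heads = nn.ModuleDict({
            "aspect_a": nn.Linear(hidden, audio_dim),
            "aspect_b": nn.Linear(hidden, audio_dim),
            "aspect_c": nn.Linear(hidden, audio_dim),
        })

    def forward(self, video_feats: torch.Tensor) -> dict[str, torch.Tensor]:
        h = torch.relu(self.backbone(video_feats))
        return {name: head(h) for name, head in self.heads.items()}

# Each head gets its own target and loss; the total is a sum of per-head
# objectives instead of one undifferentiated loss.
model = MultiHeadV2A()
video_feats = torch.randn(4, 512)
targets = {name: torch.randn(4, 128) for name in ("aspect_a", "aspect_b", "aspect_c")}
outputs = model(video_feats)
loss = sum(F.mse_loss(outputs[name], targets[name]) for name in outputs)
loss.backward()
```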

The model is trained with Multi-Dimensional RL and Fast-GRPO to align audio generation with human preferences. Resources (model, demo, arXiv paper) are released via Hugging Face, ModelScope, and a dedicated project page.

🔗 Announcement on X


Claude Code v2.1.83: centralized Enterprise settings management

March 25 — Claude Code version 2.1.83 introduces the drop-in directory managed-settings.d/, intended for Enterprise administrators. This directory allows organizations to centrally manage Claude Code settings by placing configuration files that apply to all workstations in the deployment.
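
As a rough sketch of what such a drop-in file might contain (the file name and the keys shown are assumptions based on Claude Code's existing settings format, not details taken from the changelog), an administrator could place a policy file such as managed-settings.d/00-network-policy.json on each workstation:

```json
{
  "permissions": {
    "deny": ["WebFetch", "Bash(curl:*)"]
  }
}
```

In Claude Code's settings hierarchy, managed settings take precedence over users' personal configuration, which is what makes this kind of central deployment effective.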

🔗 CHANGELOG Claude Code


Kling AI Team Plan: collaboration up to 15 members

March 24 — Kling AI launches the Team Plan, a collaborative plan allowing teams of up to 15 members to work together on the platform. The plan includes shared assets and commercial use of creations.

🔗 Announcement @Kling_ai on X


Manus Desktop: credits halved until March 30

March 24 — Manus offers a temporary promotion on its Desktop app: from March 24 to 30, 2026, each task consumes 50% fewer credits. Existing credits therefore count double for all tasks performed via Manus Desktop during this period.

🔗 Announcement on X


NVIDIA donates its DRA GPU driver to the Kubernetes community

March 24 — On the sidelines of NVIDIA GTC 2026, NVIDIA announces the donation of its Dynamic Resource Allocation (DRA) GPU driver to the Kubernetes community. This contribution aligns with NVIDIA’s open source commitments and eases dynamic GPU resource allocation in Kubernetes environments.
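
For context on what DRA looks like in practice, here is a minimal sketch of a workload requesting a GPU through a ResourceClaimTemplate. The API version and the gpu.nvidia.com device class name are assumptions that depend on the Kubernetes release and on how the driver is installed; they are not details from the announcement.

```yaml
# Minimal DRA sketch: a claim template for one GPU and a Pod that consumes it.
# API version and device class name are illustrative and release-dependent.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: app
    image: ubuntu:24.04
    command: ["sleep", "infinity"]
    resources:
      claims:
      - name: gpu          # binds this container to the claim declared below
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
```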

🔗 NVIDIA blog — The Future of AI Is Open and Proprietary


What this means

Anthropic’s two announcements in this period are complementary: the mobile integration of Figma, Canva, and Amplitude brings Claude closer to everyday professional workflows, while the performance gains for Claude Code (2.8x faster startup) and the Agent SDK (5.1x) tangibly improve the developer experience for day-to-day use. These two axes — consumer mobile usage and developer performance — illustrate Anthropic’s dual trajectory.

GitHub Copilot’s decision on interaction data is the most impactful for individual developers: for the first time, public-tier users (Free, Pro, Pro+) may have their interactions used for training. The opt-out exists but must be activated before April 24 — a short deadline worth noting.

NVIDIA’s Nemotron Coalition is a strong signal about the industry’s open source strategy. By bringing Mistral, Black Forest Labs, Cursor and others together in a formal coalition to develop frontier-level models, NVIDIA positions open source not as an alternative to proprietary models but as a complementary category. Jensen Huang’s phrase — “proprietary and open”, not “versus” — summarizes this repositioning.

Lyria 3 Pro confirms that music generation is moving toward production-usable formats: 3 minutes with a coherent structure (intros, verses, choruses) changes the nature of the use case, shifting from experimental snippets to full tracks. Availability on multiple Google platforms in the week of launch accelerates adoption.

The publication of the Model Spec Evals by OpenAI — a suite of evaluations measuring discrepancies between model behavior and the written specification — is notable: it’s one of the first public attempts to objectively measure a model’s alignment with its own rules of conduct.


Sources

This document was translated from the French version into English using the gpt-5-mini model. For more information on the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator