
RSP v3.0 at Anthropic, GPT-5.3-Codex available to all, Meta signs 6 GW of GPUs with AMD


Anthropic undertakes a deep revision of its safety policy with RSP v3.0, which introduces a public Frontier Safety Roadmap and quarterly Risk Reports with external review. OpenAI ends the limited-access period for GPT-5.3-Codex, which is now available to all developers via the Responses API. Meta signs a multi-year agreement with AMD for roughly 6 GW of GPU capacity dedicated to its AI models. Qwen launches four MoE models including a Qwen3.5-35B-A3B that outperforms its own 235B model. Claude Code introduces Remote Control to continue a session from your phone.


Anthropic: Responsible Scaling Policy v3.0

February 24 — Anthropic publishes the third version of its Responsible Scaling Policy (RSP), the voluntary framework governing catastrophic risks related to its models.

The original RSP dates from September 2023. In two and a half years, models gained new capabilities — web browsing, code execution, use of computers, multi-step autonomous actions — and each new capability brought new risks to address.

What worked

The RSP pushed Anthropic to develop stronger safeguards, such as classifiers to block content related to biological weapons required by ASL-3. The ASL-3 standard was activated in May 2025 and is operational. OpenAI and Google DeepMind adopted similar frameworks in the months following the initial announcement. The RSPs also helped inform legislation (SB 53 in California, EU AI Act).

What didn’t work

Capability thresholds proved more ambiguous than expected — deciding whether a model has “definitively” crossed a threshold remains difficult in practice. Governments did not move as quickly as hoped, in a political context unfavorable to regulation. Some high ASL requirements (ASL-4, ASL-5) may be impossible to satisfy unilaterally.

Three key changes in RSP v3.0

v3.0 now distinguishes between what Anthropic commits to doing independently of other actors and a capabilities-to-mitigations mapping that the entire industry should adopt.

A second document published in parallel — the Frontier Safety Roadmap — sets concrete public goals: launch “moonshot” R&D projects to secure model weights, develop automated red-teaming that outperforms hundreds of human participants, and establish centralized registries of all critical AI development activities.

Finally, Anthropic commits to publishing Risk Reports every 3 to 6 months: security profiles of models, articulation of capabilities/threats/mitigations, and, for the most advanced models, an independent expert review with unredacted access to the report.

“We’re updating our Responsible Scaling Policy to its third version. Since it came into effect in 2023, we’ve learned a lot about the RSP’s benefits and its shortcomings. This update improves the policy, reinforcing what worked and committing us to even greater transparency.” — @AnthropicAI on X

🔗 RSP v3.0 (Anthropic)


OpenAI: GPT-5.3-Codex general availability

February 24 — GPT-5.3-Codex is now accessible to all developers via OpenAI’s Responses API. The model was launched in early February with limited access; it now moves to general availability.

GPT-5.3-Codex combines frontier coding performance and professional-knowledge capabilities in a single model. According to early integrator feedback, it is “significantly more powerful and 3 to 4 times more token-efficient than GPT-5.2”. The platform Lovable was among the first adopters for its most complex use cases.

The model is also available via OpenRouter for developers who want to integrate it into their workflows without going through the OpenAI API directly.

| Item | Detail |
|---|---|
| Availability | Responses API (general access) |
| Efficiency | 3–4× more token-efficient vs GPT-5.2 |
| Alternative access | OpenRouter |
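As a hedged illustration of the general-availability access path, here is a minimal Python sketch assuming the official `openai` SDK and the model identifier `gpt-5.3-codex` (the id is inferred from the model name in the announcement and should be checked against the live model list). The network call itself is left in comments so the snippet runs without an API key:

```python
# Sketch: calling GPT-5.3-Codex through the Responses API.
# Assumption: the model id "gpt-5.3-codex" matches the announced name;
# verify it against the provider's model list before relying on it.

def build_request(prompt: str, model: str = "gpt-5.3-codex") -> dict:
    """Assemble the keyword arguments for client.responses.create()."""
    return {
        "model": model,
        "input": prompt,
    }

params = build_request("Write a Python function that reverses a string.")
print(params["model"])  # → gpt-5.3-codex

# With an API key configured, the actual call would be:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.responses.create(**params)
#   print(response.output_text)
```

Developers routing through OpenRouter would use the same request shape against OpenRouter's OpenAI-compatible endpoint, changing only the base URL and credentials.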

🔗 Tweet @OpenAIDevs


Meta + AMD: multi-year deal for ~6 GW of GPUs

February 24 — Meta announces a multi-year partnership with AMD to integrate the latest AMD Instinct GPUs into its global infrastructure.

The deployment covers approximately 6 GW of data center capacity dedicated to developing cutting-edge AI models and delivering “superintelligent personal AI” to billions of users worldwide.

“Today we’re announcing a multi-year agreement with @AMD to integrate their latest Instinct GPUs into our global infrastructure. With approximately 6GW of planned data center capacity dedicated to this deployment, we’re scaling our compute capacity to accelerate the development of cutting-edge AI models and deliver personal superintelligence to billions around the world.” — @AIatMeta on X

This deal marks a significant diversification of GPU suppliers for Meta, which had relied primarily on NVIDIA until now. A contract of this scale — 6 GW, a substantial infrastructure commitment for a single partnership — is a strong signal of Meta’s compute ambitions for next-generation models.

🔗 Tweet @AIatMeta


Qwen 3.5 Medium Series: 4 MoE models, “More intelligence, less compute”

February 24 — Alibaba Qwen launches the Qwen 3.5 Medium series, comprising four Mixture-of-Experts (MoE) models.

The most striking result is Qwen3.5-35B-A3B: with only 3B active parameters (out of 35B total), it outperforms Qwen3-235B-A22B — the previous flagship of the family. The MoE architecture and large-scale RL training make this efficiency possible.

| Model | Active parameters | Note |
|---|---|---|
| Qwen3.5-Flash | — | 1M-token context, integrated tools, hosted |
| Qwen3.5-35B-A3B | 3B active / 35B total | Outperforms Qwen3-235B-A22B |
| Qwen3.5-122B-A10B | 10B active / 122B total | — |
| Qwen3.5-27B | 27B | — |

Qwen3.5-Flash is the hosted version of the series, with a one-million-token context by default and integrated tools. The models are available on HuggingFace, ModelScope and Qwen Chat.
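The active-parameter figures above translate directly into per-token compute. A small sketch, using only the numbers reported for the two models and the standard first-order assumption that per-token compute scales with the active parameter count:

```python
# Sketch: what "3B active out of 35B total" means for per-token compute
# in a Mixture-of-Experts model. Parameter counts are taken from the
# announcement; the cost model (compute ~ active parameters) is a
# standard first-order approximation, not a measured benchmark.

def active_fraction(active_b: float, total_b: float) -> float:
    """Fraction of the model's weights engaged for each token."""
    return active_b / total_b

# Qwen3.5-35B-A3B: 3B active / 35B total
frac_small = active_fraction(3, 35)

# Qwen3-235B-A22B, the larger model it reportedly outperforms
frac_large = active_fraction(22, 235)

# First-order per-token compute ratio between the two models
ratio = 22 / 3
print(f"35B-A3B activates {frac_small:.1%} of its weights per token")
print(f"approx. per-token compute advantage vs 235B-A22B: {ratio:.1f}x")
```

Under this approximation, the smaller model does roughly a seventh of the per-token work of the 235B-A22B it is compared against, which is what makes the benchmark claim notable.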

🔗 Tweet @Alibaba_Qwen


Claude Code v2.1.51: Remote Control from mobile

February 24 — Claude Code v2.1.51 introduces Remote Control, the release’s most anticipated feature: the ability to continue a local session from your phone.

A Claude Code session started from the terminal can be resumed on the Claude Code mobile app via /remote-control. The feature is available to Max users in research preview.

Beyond Remote Control, this release brings several technical improvements:

| Change | Detail |
|---|---|
| Plugin marketplace timeout | Git: 30s → 120s, configurable via `CLAUDE_CODE_PLUGIN_GIT_TIMEOUT_MS` |
| npm registries | Support for custom registries and version pinning for plugins |
| BashTool | Skips login shell by default when a snapshot is available |
| Security hooks | Fix: `statusLine` and `fileSuggestion` hooks without workspace trust acceptance |
| Context reduction | Tool results > 50K chars persisted to disk (was 100K) |

🔗 Claude Code CHANGELOG


Claude Cowork: private plugin marketplace and cross-app orchestration

February 24 — Anthropic ships a major update to Cowork with private plugin marketplaces for enterprises, new connectors and cross-application orchestration.

Admins can now create private plugin marketplaces for their organization: creation from templates or from scratch, with Claude guiding the configuration. A new unified “Customize” menu centralizes plugins, skills and connectors (MCP). Admins also have per-user provisioning, auto-install, and plugin sources from private GitHub repos (in private beta).

On the connectors side, the list expands with Google Workspace (Calendar, Drive, Gmail), Docusign, Apollo, Clay, Outreach, Similarweb, MSCI, LegalZoom, FactSet, WordPress and Harvey. Companies such as Slack, LSEG, S&P Global and Tribe AI have also published plugins.

New departmental plugin templates are available:

| Department | Example workflows |
|---|---|
| HR | Offer letters, onboarding, performance reviews |
| Design | UX critiques, accessibility audits, user research plans |
| Engineering | Standups, incident response, deploy checklists, postmortems |
| Operations | Process documentation, vendor assessment |
| Finance | Market analysis, financial modeling, PowerPoint slides |
| Investment Banking | Transaction documents, comparable analyses |
| Private Equity | Due diligence, scoring by investment criteria |

Claude can now also orchestrate tasks between Excel and PowerPoint — analyze data in Excel then generate a presentation in PowerPoint, passing context from one add-in to the other. This feature is available in research preview for all paid plans on Mac and Windows.

Finally, OpenTelemetry support enables admins to monitor usage, costs and tool activity by team.

🔗 Cowork plugins enterprise blog 🔗 Tweet @claudeai


OpenAI: Codex CLI v0.99.0

February 24 — Codex CLI reaches version 0.99.0 with several new features.

The /statusline command now allows customizing the metadata displayed in the TUI footer. GIF and WebP images are accepted as attachments. Executing direct shell commands no longer blocks the current turn — they can run concurrently. Snapshotting of the shell environment and rc configuration files is now enabled.

| Feature | Detail |
|---|---|
| `/statusline` | Configure the TUI footer interactively |
| GIF/WebP images | New attachment formats supported |
| Shell snapshot | Snapshot of shell environment and rc files |
| App-server APIs | Turn/steer, feature discovery, `resume_agent` |
| Web search control | Restricted modes via `requirements.toml` (Enterprise) |

A security fix (RUSTSEC-2026-0009) is also included.

```shell
npm install -g @openai/codex@0.99.0
```

🔗 Codex Changelog v0.99.0


DeepSeek-V3.2: official release

February 24 — DeepSeek announces the official release of DeepSeek-V3.2, succeeding the experimental V3.2-Exp from November 2025.

According to the banner on deepseek.com, V3.2 strengthens agent capabilities and adds reflective reasoning (a thinking mode). The release is available on web, mobile app and API. Full technical details are published on WeChat (DeepSeek’s main announcement channel in Chinese).

🔗 deepseek.com


Perplexity and Comet: voice mode for everyone

February 24 — Perplexity rolls out a major update to its voice mode on Perplexity and its Comet browser, available to all users — not just subscribers.

Comet’s new voice mode lets users query the AI about what’s on the screen, navigate between websites by voice, and maintain a coherent conversation across multiple tabs without losing context. This multi-tab context persistence is an advance over classic voice assistants.

| Aspect | Detail |
|---|---|
| Availability | All users (not only subscribers) |
| Platforms | Android, Mac, Windows |
| Key feature | Multi-tab voice navigation with persistent context |

🔗 Tweet @perplexity_ai 🔗 Tweet @comet


Google DeepMind: Music AI Sandbox × Wyclef Jean

February 24 — Google DeepMind and YouTube unveil a collaboration with producer and artist Wyclef Jean around Music AI Sandbox.

Music AI Sandbox — powered by Lyria 3, the music generation model announced on February 18 — enables professional musicians to experiment with AI as a creative partner. Wyclef Jean used these tools to develop his song “Back from Abu Dhabi”. The creation process is documented in a video available on YouTube.

This partnership is part of a series of artist collaborations initiated by Google DeepMind to explore creative uses of music AI in real studio conditions.

🔗 Tweet @GoogleAI 🔗 Tweet @GoogleDeepMind


Google DeepMind: Robotics Accelerator in Europe

February 24 — Google DeepMind announces the launch of its Robotics Accelerator in Europe, a program dedicated to robotics startups.

The stated objective is to bridge the gap between technology and commercial opportunities, and to accelerate the next generation of physical agents. The program is presented as tailored for startups, with access to Google DeepMind’s resources and expertise.

🔗 Tweet @GoogleDeepMind


NVIDIA + Red Hat: AI Factory for enterprise

February 24 — Red Hat and NVIDIA jointly announce the Red Hat AI Factory with NVIDIA, a combined solution to accelerate AI adoption in the enterprise.

The platform brings together Red Hat AI Enterprise (orchestration and model deployment) with NVIDIA AI Enterprise (GPU-optimized software stack). The goal is to reduce operational complexity and total cost of ownership for organizations deploying AI applications in production.

🔗 Tweet @NVIDIAAI


Black Forest Labs: Safety Evaluation — 10x fewer vulnerabilities

February 24 — Black Forest Labs publishes the results of an independent third-party evaluation of emerging risks for its FLUX models.

The results show more than 10x fewer vulnerabilities than other popular open-weight image models. BFL says that high performance, open innovation and safeguards can go hand in hand — a rare show of transparency among open-weight image model developers.

🔗 Tweet @bfl_ml


In brief

Claude Code v2.1.52 — Targeted patch released shortly after v2.1.51: fix for a crash in the VS Code extension on Windows (command 'claude-vscode.editor.openLast' not found). No new features.

GitHub Copilot SDK — PowerPoint agent — GitHub shares a demo (Feb 23) showing how to build an agent with the Copilot SDK capable of searching the latest docs, analyzing existing slides to reproduce their style, and generating new slides directly in PowerPoint. 🔗 Tweet @github

Runway — Interior Designer — Runway presents a creative use case: transforming a photo of a room into a customized interior design via a combination of Nano Banana Pro, Kling 3.0 and Gen-4.5. A marketing demo illustrating the multi-model platform launched on Feb 20. 🔗 Tweet @runwayml


What this means

Anthropic’s RSP v3.0 marks a turning point in the approach to AI safety: by making its objectives public via the Frontier Safety Roadmap and committing to Risk Reports with external review, Anthropic moves from an internal policy to a mechanism of public accountability. It puts soft pressure on the whole industry — and offers an implicit response to criticism about the opacity of deployment decisions.

The Meta+AMD 6 GW deal is a wake-up call for NVIDIA: until now Meta’s sole GPU supplier, it is starting to face direct competition. For AMD, it is large-scale validation of Instinct GPUs that have struggled to gain ground against the H100/H200 in training workloads.

Qwen 3.5 Medium confirms that the MoE architecture is no longer reserved for very large models: a 35B-A3B that outperforms a 235B is a remarkable compression of capability, accessible to teams without the infrastructure to run the giants.


Sources

This document was translated from the fr version into the en language using the gpt-5-mini model. For more information on the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator