
Claude Code v2.1.63, OpenAI signs with the Pentagon, Anthropic stands firm against the DoW


February 2026 closes with tension between the major AI labs and the U.S. Department of War: OpenAI signs a classified agreement with the Pentagon built around three red lines, while Anthropic refuses the same concessions and threatens legal action. On the technical front, Claude Code v2.1.63 is the most substantial update in weeks, and Claude Memory finally opens up to free users.


Claude Code v2.1.63: /simplify, HTTP hooks and a wave of memory fixes

February 28, 2026 — Version 2.1.63 of Claude Code is the most complete release in weeks. It combines new developer features with a major wave of memory leak fixes — eight in total.

New features

| Feature | Description |
| --- | --- |
| `/simplify` and `/batch` | Two new slash commands bundled in the default installation |
| HTTP hooks | Hooks can now POST JSON to an external URL and receive JSON in return, instead of only executing a shell command |
| Project configs & auto memory | Shared across all git worktrees of the same repository |
| `ENABLE_CLAUDEAI_MCP_SERVERS=false` | New environment variable to disable claude.ai MCP servers |
| `/model` improved | Now shows the active model in the slash commands menu |
| `/copy` “Always copy full response” | Option to copy the full response directly without going through the picker |
| VSCode session management | Rename and remove actions available in the sessions list |
| MCP OAuth URL-paste fallback | If the localhost redirect fails, the callback URL can be pasted manually |

The addition of HTTP hooks is particularly notable: it enables integrating Claude Code into external automation workflows (CI, webhooks, notification systems) without going through intermediary shell scripts.
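As an illustration of what the receiving side of such a hook could look like, here is a minimal endpoint sketch. The payload fields (`tool_input`, `command`) and the response shape (`decision`/`reason`) are assumptions for the example, not Claude Code's documented hook schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def decide(event: dict) -> dict:
    """Illustrative policy: block shell commands that touch .env files.
    The field names used here are assumptions, not a documented schema."""
    cmd = event.get("tool_input", {}).get("command", "")
    if ".env" in cmd:
        return {"decision": "block", "reason": "refusing to touch .env files"}
    return {"decision": "approve"}

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON event posted by the hook, answer with a JSON decision.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(decide(event)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 8085), HookHandler).serve_forever()
```

The same endpoint could just as easily forward the event to a CI system or a notification channel instead of returning a policy decision.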

Memory leak fixes

A wave of 8 fixes targets memory accumulation in long-running sessions:

  • Bridge polling loop, MCP OAuth flow, hooks configuration menu, bash command prefix cache
  • MCP tool/resource cache, IDE host IP detection, WebSocket listener, git root detection
  • Teammate messages released after conversation compaction
  • MCP server fetch caches cleared on disconnect
  • Subagents: reduced progress payloads during compaction

For teams using Claude Code as a multi-agent orchestrator or in long sessions, these fixes should noticeably reduce long-term memory usage.
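The failure mode behind most of these fixes is generic: a per-session cache or listener registry that grows without bound over a long session. A sketch of the usual remedy, bounding the cache and clearing it on disconnect (illustrative only, not Claude Code's actual implementation):

```python
from collections import OrderedDict

class BoundedCache:
    """LRU-evicting cache: long-running sessions cannot grow it without bound."""

    def __init__(self, max_entries: int = 256):
        self.max_entries = max_entries
        self._data: OrderedDict = OrderedDict()

    def put(self, key, value) -> None:
        self._data[key] = value
        self._data.move_to_end(key)  # mark as most recently used
        while len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least recently used

    def get(self, key, default=None):
        if key in self._data:
            self._data.move_to_end(key)
            return self._data[key]
        return default

    def clear(self) -> None:
        """Call on disconnect, mirroring 'fetch caches cleared on disconnect'."""
        self._data.clear()
```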

Other fixes

  • VSCode: remote sessions missing from conversation history — fixed
  • Race condition in the REPL bridge (incoming messages interleaved with history)
  • /clear: expired hidden skills that persisted in new conversations

🔗 Claude Code CHANGELOG


OpenAI × Anthropic vs the Department of War — Two approaches, same refusal

OpenAI signs a classified agreement with the Department of War

February 28, 2026 — OpenAI reached an agreement with the Department of War to deploy AI systems in classified environments. The company says this agreement imposes more safeguards than any previous classified AI deployment.

OpenAI defines three non-negotiable red lines:

| Red line | Description |
| --- | --- |
| No mass surveillance | Prohibits large-scale surveillance of U.S. citizens |
| No autonomous weapons | AI systems cannot direct weapons without human oversight |
| No high-stakes automated decisions | Human oversight required for all critical decisions |

The deployment architecture is cloud-only (no edge or on-premise deployment), with OpenAI’s security stack intact. OpenAI engineers holding security clearances will be present on site. No “guardrail-free” model is provided.

The contract explicitly states that the AI system cannot direct autonomous weapons where law, regulation, or Department policy requires human control — in line with DoD Directive 3000.09 (January 25, 2023).

Position on Anthropic: OpenAI explicitly states that Anthropic should not be designated as a “supply chain risk” and has communicated this position to the government. The company requested that the same contract terms be offered to all AI labs.

🔗 Our agreement with the Department of War

February 27, 2026 — Secretary of War Pete Hegseth announced on X his intention to designate Anthropic as a supply chain risk. Anthropic responded the same day with an official statement, distinct from the one Dario Amodei had published the day before.

The statement explains the reasons for the impasse: Anthropic had asked for two restrictions on the Department of War’s use of Claude, excluding domestic mass surveillance of Americans and fully autonomous weapons. The Department did not accept these restrictions.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.” — @AnthropicAI on X

Anthropic clarifies the real impact for its customers:

| Segment | Impact |
| --- | --- |
| Individual users and API | Access to Claude entirely unchanged |
| DoW contractors | Only usage within DoW contracts would be affected, not other uses |

🔗 Anthropic statement (anthropic.com)


Claude Memory free + import from other AI providers

March 2, 2026 — Claude announces that the Memory feature is now available to all users, including on the free plan.

“Memory is now available on the free plan. We’ve also made it easier to import saved memories into Claude. You can export them whenever you want.” — @claudeai on X

The new import feature allows transferring remembered context from other AI assistants into Claude — ChatGPT, Gemini or others. The interface offers a “Start import” button with the note “Bring relevant context and data from another AI provider to Claude.”

To get started: Settings → Memory.

🔗 Tweet @claudeai


Qwen 3.5 Small Series — four compact open-weight models

March 2, 2026 — Alibaba/Qwen completes its Qwen3.5 lineup with four compact open-weight models: Qwen3.5-0.8B, Qwen3.5-2B, Qwen3.5-4B and Qwen3.5-9B.

These models share the same architecture as the larger Qwen3.5 models — native multimodal, reinforcement learning at scale — in formats adapted to different contexts:

| Model | Target use |
| --- | --- |
| Qwen3.5-0.8B | Edge devices, ultra-low latency |
| Qwen3.5-2B | Embedded deployments, lightweight and fast |
| Qwen3.5-4B | Multimodal base for lightweight agents |
| Qwen3.5-9B | Compact, but competes with much larger models |

Base models are also released to allow fine-tuning by the research community. The announcement amassed 1.6 million views on X on the first day.
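To make the positioning of the four sizes concrete, here is a toy selection helper mapping deployment constraints to checkpoints. The VRAM thresholds and the heuristic itself are my own illustration, not Qwen guidance:

```python
def pick_qwen35_small(vram_gb: float, needs_low_latency: bool = False) -> str:
    """Toy heuristic for choosing a Qwen3.5 Small checkpoint.
    Thresholds roughly assume ~2 bytes/parameter (bf16) plus runtime
    overhead, and are illustrative only."""
    if needs_low_latency or vram_gb < 6:
        return "Qwen3.5-0.8B"   # edge devices, ultra-low latency
    if vram_gb < 12:
        return "Qwen3.5-2B"     # embedded, lightweight and fast
    if vram_gb < 24:
        return "Qwen3.5-4B"     # multimodal base for lightweight agents
    return "Qwen3.5-9B"         # compact flagship of the series
```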

🔗 Qwen 3.5 Small Series on Hugging Face
🔗 Tweet @Alibaba_Qwen


Gemini CLI v0.31.0: agent browser, Gemini 3.1 Pro, Policy Engine

February 27, 2026 — Gemini CLI moves to version 0.31.0 with four major additions.

| Feature | Description |
| --- | --- |
| Gemini 3.1 Pro Preview | Access to Google’s latest model directly from the terminal |
| Experimental agent browser | Interacts with web pages without extra configuration |
| Policy Engine (project level) | Project-level policies, MCP wildcards, filtering by tool annotations |
| Direct web fetch | Experimental mode with built-in rate limiting |

The agent browser paves the way for web automation workflows from the CLI — scraping, testing, interacting with non-API interfaces. It’s experimental, so use with caution in production environments.
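The “built-in rate limiting” mentioned for direct web fetch is commonly implemented as a token bucket, which allows short bursts while capping the sustained request rate. A generic sketch of the pattern (not Gemini CLI’s actual code):

```python
import time

class TokenBucket:
    """Allows bursts up to `capacity` requests, refilling at `rate` per second."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock          # injectable clock makes the bucket testable
        self.last = clock()

    def allow(self) -> bool:
        """Return True if a request may proceed now, consuming one token."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A fetcher would call `allow()` before each request and back off (or queue) when it returns False.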

🔗 Gemini CLI Changelog


GitHub Copilot: deprecation of Gemini 3 Pro and GPT-5.1 models

March 2, 2026 — GitHub announced the upcoming deprecation of several models across all Copilot experiences (Chat, inline edits, ask/agent modes, code completions).

| Model | Deprecation date | Suggested alternative |
| --- | --- | --- |
| Gemini 3 Pro | March 26, 2026 | Gemini 3.1 Pro |
| GPT-5.1 | April 1, 2026 | GPT-5.3-Codex |
| GPT-5.1-Codex | April 1, 2026 | GPT-5.3-Codex |
| GPT-5.1-Codex-Mini | April 1, 2026 | GPT-5.3-Codex |
| GPT-5.1-Codex-Max | April 1, 2026 | GPT-5.3-Codex |

Gemini 3 Pro’s earlier cutoff on March 26 reflects a corresponding deprecation on Google’s side. Copilot Enterprise admins should review their model policies in Copilot settings and enable the suggested alternatives before these dates.

Copilot metrics also now include telemetry for Plan mode — available in JetBrains, Eclipse, Xcode and VS Code Insiders, with a general VS Code release planned soon.

🔗 GitHub Changelog — model deprecations
🔗 Copilot metrics — Plan mode


NVIDIA Nemotron LTM 30B and telco Blueprints for MWC Barcelona

February 28, 2026 — Ahead of the Mobile World Congress Barcelona (March 2–5, 2026), NVIDIA announced two AI-related telecom initiatives.

Nemotron LTM 30B is a 30-billion-parameter open-source model developed with AdaptKey AI, based on NVIDIA Nemotron 3 and fine-tuned on open telecom data (industry standards and synthetic logs). It’s the first time NVIDIA has published an open Large Telco Model dedicated to the sector.

Two new NVIDIA Blueprints accompany the launch:

| Blueprint | Partner | Use |
| --- | --- | --- |
| Energy-saving RAN | VIAVI TeraVM | Energy optimization for radio networks |
| Multi-agent network configuration | Tech Mahindra | Agents that reason like NOC engineers |

Early adopters include Cassava Technologies (Africa), NTT DATA (Japan), Telenor Maritime. The models are published via the GSMA Open Telco AI initiative.

The report State of AI in Telecom 2026, published February 27, concludes that network automation is the #1 AI use case for ROI in telecoms.

🔗 NVIDIA Nemotron LTM blog


Runway: new co-CEO and four C-suite appointments

February 26, 2026 — Runway announced leadership changes. Anastasis Germanidis, co-founder of the company, becomes co-CEO alongside Cristóbal Valenzuela.

Four new C-suite appointments:

| Role | Person |
| --- | --- |
| CTO | Kamil Sindi |
| COO | Michelle Kwon |
| CPO | Anna Chalon |
| CCO | Jamie Umpherson |

Runway describes these appointments as formalizing an existing structure, in line with its “world simulation” vision.

🔗 Runway announcement


Briefs

Midjourney: Moodboards and Personalization on Niji V7

February 26 — Midjourney added Moodboards and Personalization to the Niji V7 model (anime/illustration specialization). Web rooms are being phased out in favor of upcoming collaboration tools.

🔗 Tweet @midjourney

Genspark: Nano Banana 2 free + Workspace 3.0 coming

February 27 / March 2 — Nano Banana 2 (an AI image generator) is now available in Genspark’s AI Image Agent, free for all users (unlimited for Plus and Pro subscribers). Additionally, CEO Eric Jing announced Genspark Workspace 3.0 for the week of March 9 — the third major version after Workspace 1.0 (Nov. 2025) and 2.0 (Jan. 2026).

🔗 Tweet Genspark Workspace 3.0

Perplexity hosts “Ask”, its first developer conference

February 27 — Perplexity announced “Ask”, its first developer conference, scheduled for March 11, 2026 in San Francisco. Registration at events.perplexity.ai/ask2026. Context: Perplexity says its APIs reach hundreds of millions of Samsung devices and are used by 6 of the 7 “Magnificent Seven”.

🔗 Tweet @perplexity_ai

Stargate Texas: first steel beams

February 27 — Greg Brockman (@gdb) shared a visual update from the Stargate construction site in Milam County, Texas: the first steel beams have been placed on site, in partnership with SoftBank and SBEnergy.

🔗 Tweet @gdb

ChatGPT: trusted contact for distress situations

February 27 — OpenAI announced a “Trusted Contact” feature for ChatGPT: adult users can designate a contact who will receive notifications if the user appears to need support. Developed with the Council on Well-Being and AI and the Global Physicians Network. A California court has also consolidated several mental-health-related lawsuits involving ChatGPT into a single proceeding.

🔗 OpenAI mental health update

Google AI: Flow redesign, Opal, producer.ai, Gemini K-12

February 27 — Google listed several items in its weekly @GoogleAI roundup: Flow by Google (a video creation tool) received a major redesign aimed at becoming a full AI creative studio. Opal (Google Labs) introduces an “agent step” to turn static workflows into interactive experiences. producer.ai has officially joined Google Labs. Gemini training is now available to the 6 million K-12 and higher-education teachers in the United States.


🔗 Roundup @GoogleAI

@OpenAIDevs teaser “Soon.”

March 2 — @OpenAIDevs posted a teaser with a Windows XP image and the single word “Soon.” — 179,000 views. No further details available.

🔗 Teaser @OpenAIDevs


What this means

The OpenAI/Anthropic confrontation with the Department of War reveals a divergence of strategy more than a divergence of values: both labs draw similar red lines (mass surveillance, autonomous weapons), but OpenAI negotiated an agreement with contractual safeguards, whereas Anthropic refused to sign without accepted guarantees. OpenAI’s public stance in support of Anthropic complicates the picture — both labs appear coordinated on principles, even if their commercial trajectories diverge.

Claude Code v2.1.63 confirms Anthropic’s rapid pace on CLI tools: HTTP hooks concretely broaden possible integrations, and the wave of memory fixes signals a serious stabilization effort for long-running workflows. Free memory, meanwhile, positions Claude in direct competition with ChatGPT for the general public.

On the models side, Qwen continues its range-coverage strategy: after the large MoE models in mid-February, the Small 0.8B–9B series targets the edge and lightweight agents — a segment still sparsely covered by Western labs in open-weight.


Sources

This document was translated from the French version into English using the gpt-5-mini model. For more information about the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator