Anthropic acquires Vercept (computer use at 72.5%), Perplexity Computer orchestrates 19 models, GitHub Copilot CLI goes GA

February 25, 2026 is a busy day: Anthropic acquires Vercept to accelerate Claude’s computer use capabilities (72.5% on OSWorld), Perplexity launches Computer — an agentic multi-model system that orchestrates 19 models in parallel — and GitHub Copilot CLI becomes generally available to all paid subscribers. Meanwhile, Google DeepMind unveils Genie 3 (interactive world models) and Intrinsic joins Google for industrial robotics.


Anthropic acquires Vercept: computer use climbs to 72.5% on OSWorld

February 25 — Anthropic announces the acquisition of Vercept, a startup specializing in how AI systems perceive and interact with software interfaces. Co-founders Kiana Ehsani, Luca Weihs, and Ross Girshick join Anthropic to directly strengthen Claude’s computer use capabilities.

This acquisition is part of a rapid progression on the OSWorld benchmark, which measures an AI agent’s ability to complete tasks on a real operating system (navigating complex spreadsheets, filling multi-tab forms, etc.):

| Period | OSWorld score |
|---|---|
| End of 2024 (computer use launch) | < 15% |
| Claude Sonnet 4.6 (Feb. 2026) | 72.5% |

A jump of more than 57 points in one year. Claude Sonnet 4.6 now approaches human-level performance on these desktop tasks. Vercept will discontinue its external product in the coming weeks to focus entirely on this work within Anthropic.

This acquisition follows the earlier Bun acquisition. Anthropic is thus building a portfolio of top engineering teams around agentic capabilities.

🔗 Anthropic acquires Vercept


Perplexity Computer: an agentic multi-model orchestrator (19 models)

February 25 — Perplexity launches Computer, a general-purpose AI system designed to execute projects end to end. The user describes a final objective; Computer breaks the work into subtasks, creates specialized sub-agents, and executes them in parallel, for hours or even months if needed.

Operation is asynchronous and isolated: each task runs in a dedicated environment with access to a real filesystem, a real browser, and integrations with real tools (APIs, web search, document generation).

What sets Perplexity Computer apart is its massively multi-model orchestration — 19 models available, each assigned according to its strengths:

| Model | Role |
|---|---|
| Claude Opus 4.6 | Primary orchestrator (core reasoning) |
| Gemini | Deep research, sub-agent creation |
| ChatGPT 5.2 | Long-context recall and wide search |
| Grok | Lightweight tasks (speed) |
| Veo 3.1 | Video generation |
| Nano Banana | Image generation |

The harness is model-agnostic: models can be swapped as they evolve. Perplexity’s philosophy is that models specialize rather than commoditize, making multi-model orchestration more efficient than a single model.
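A model-agnostic harness of this kind can be sketched as a simple routing layer: each subtask is tagged with the strength it needs, and a registry maps strengths to whichever model currently fills that role. Everything below is a hypothetical illustration of the pattern, not Perplexity's implementation; the model names are stand-ins taken from the table, and `run_model` is a stub rather than a real API call.

```python
from concurrent.futures import ThreadPoolExecutor

# strength -> model name (stand-ins, swappable as models evolve)
REGISTRY = {
    "reasoning": "claude-opus-4.6",
    "research": "gemini",
    "long-context": "chatgpt-5.2",
    "speed": "grok",
}

def run_model(model, task):
    """Stub for an actual model API call."""
    return f"{model} completed: {task}"

def orchestrate(subtasks):
    """Dispatch (strength, task) pairs to specialized models in parallel."""
    def dispatch(item):
        strength, task = item
        # unknown strengths fall back to the primary orchestrator
        model = REGISTRY.get(strength, REGISTRY["reasoning"])
        return run_model(model, task)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(dispatch, subtasks))

results = orchestrate([
    ("research", "survey recent OSWorld results"),
    ("speed", "rename files in the workspace"),
])
```

Because routing happens through the registry rather than in the task code, swapping a model is a one-line change, which is the point of keeping the harness model-agnostic.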

Availability: currently on the web for Perplexity Max subscribers, with a rollout to Perplexity Pro, Enterprise, and Enterprise Max planned. Pricing is usage-based with configurable spending caps.

🔗 Introducing Perplexity Computer


GitHub Copilot CLI goes generally available

February 25 — Announced in public preview in September 2025, GitHub Copilot CLI is now generally available to all paid Copilot subscribers (Pro, Pro+, Business, Enterprise). Hundreds of improvements have been added since preview.

Copilot CLI is now a full agentic development environment from the terminal:

| Feature | Detail |
|---|---|
| Plan mode (Shift+Tab) | Analyzes the request, asks clarifying questions, and builds a structured plan before writing code |
| Autopilot mode | End-to-end autonomous execution without interruption |
| Background delegation | Prefix `&` to delegate to the cloud agent and free the terminal; `/resume` to pick the session back up |
| Multi-model | Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5, GPT-5.3-Codex, Gemini 3 Pro |
| MCP integrated | Built-in GitHub MCP server, plus support for custom MCP servers |
| Plugins | `/plugin install owner/repo` bundles MCP servers, agents, skills, and hooks |
| Agent Skills | Markdown files that define specialized workflows, shared with the Copilot agent and VS Code |
| Custom agents | Created via a wizard or `.agent.md` files |
| Hooks | `preToolUse` / `postToolUse` for policies and post-processing |
| Auto-compaction | Automatic context compression at 95% of the window, enabling effectively unlimited sessions |
| Repository memory | Remembers project conventions across sessions |
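The auto-compaction idea is worth unpacking: once the running token count crosses a threshold of the context window, older turns are folded into a summary so the session can continue indefinitely. The sketch below is a toy illustration of that general mechanism, not GitHub Copilot CLI's actual implementation; the tokenizer, threshold handling, and `summarize` placeholder are all assumptions.

```python
CONTEXT_WINDOW = 1000        # tokens (toy value)
COMPACT_THRESHOLD = 0.95     # compact at 95% of the window

def token_count(messages):
    """Toy tokenizer: one token per whitespace-separated word."""
    return sum(len(m["content"].split()) for m in messages)

def summarize(messages):
    """Placeholder for a model call that condenses old turns."""
    return {"role": "system",
            "content": f"[summary of {len(messages)} earlier messages]"}

def maybe_compact(messages, keep_recent=2):
    """Replace all but the last `keep_recent` messages with a summary
    once the conversation crosses the compaction threshold."""
    if token_count(messages) < CONTEXT_WINDOW * COMPACT_THRESHOLD:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(old)] + recent
```

Triggering at 95% rather than 100% leaves headroom for the summary itself and for the next user turn, which is why such thresholds sit below the hard window limit.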

Note for Business and Enterprise organizations: an administrator must enable Copilot CLI from the Policies page.

🔗 GitHub Copilot CLI is now generally available


Claude Cowork: scheduled tasks, Customize tab, available on Windows

February 25 — Anthropic announces several additions to Claude Cowork (research preview):

| Feature | Detail |
|---|---|
| Scheduled tasks | Claude runs recurring tasks automatically (morning briefing, weekly spreadsheet updates, Friday presentations) |
| Customize tab | New sidebar tab to manage plugins, skills, and connectors in one place |
| Windows expansion | Cowork is now available on macOS and Windows (all paid Claude plans) |

Cowork provides access to local files, connectors (Slack, Notion, Figma) and Claude in Chrome for web browsing.

🔗 Thread @claudeai


Google DeepMind — Genie 3: interactive world models

February 25 — Google DeepMind publishes a Q&A with the co-leads of Project Genie, its experimental world model prototype. With Genie 3, a simple image or text is enough to generate an interactive environment navigable in real time — without a game engine.

The conceptual difference from an LLM is central: where an LLM predicts the next word, a world model predicts what happens in the environment in response to an agent’s actions. Genie 3 simulates a full space moment by moment, accounting for physical properties (bounce, reflection, rain).
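The state-transition framing can be made concrete with a deliberately tiny example (this has nothing to do with Genie 3's architecture; it is only meant to show the interface a world model exposes): given the current environment state and an agent's action, predict the next state, here with hand-coded physics for a bouncing ball instead of a learned model.

```python
def step(state, action):
    """Advance the toy environment one tick: the world-model interface
    (state, action) -> next state, with hand-coded physics standing in
    for a learned predictor.

    state:  dict with ball height `y`, vertical velocity `vy`, and
            horizontal position `x`
    action: horizontal push applied by the agent this tick
    """
    g, dt, restitution = -9.8, 0.1, 0.8
    vy = state["vy"] + g * dt          # gravity
    y = state["y"] + vy * dt
    if y < 0:                          # hit the floor: bounce, lose energy
        y = 0.0
        vy = -vy * restitution
    return {"y": y, "vy": vy, "x": state["x"] + action * dt}

state = {"y": 1.0, "vy": 0.0, "x": 0.0}
for _ in range(50):                    # roll the simulation forward
    state = step(state, action=1.0)
```

A learned world model replaces the hand-written `step` with a neural network trained on video, but the contract is the same: the agent acts, and the model answers with the consequences.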

Planned applications: training AI agents in safe simulated environments, immersive education (explore ancient Rome), prototyping games and films.

Project Genie is available for Google AI Ultra subscribers in the United States (18+).

🔗 Ask a Techspert: What’s a world model?


Intrinsic joins Google — industrial robotics and physical AI

February 25 — Alphabet announces that Intrinsic, one of its “Other Bets” founded in 2021, is joining Google. Intrinsic develops industrial robotics platforms enabled by AI — tools to build, deploy and manage complex robotic applications.

This integration into Google aims to accelerate the development of physical AI — artificial intelligence applied to the physical world. The move will allow Intrinsic to leverage Google’s AI resources to help industrial companies adapt more quickly.

🔗 Intrinsic, an Alphabet Other Bet, is joining Google


Codex CLI v0.105.0: syntax highlighting, voice dictation, multi-agent CSV

~February 25 — Notable new version of OpenAI’s Codex CLI:

| Feature | Description |
|---|---|
| Syntax highlighting | Syntax coloring in the TUI, colored diffs, and a `/theme` selector with live preview |
| Voice dictation | Hold the spacebar to record and transcribe a command |
| `spawn_agents_on_csv` | Multi-agent fan-out from a CSV, with progress tracking and ETA |
| `/copy` | Copies the last full response |
| `/clear` / Ctrl-L | Clears the screen without losing thread context |
| Granular approvals | Selective rejection by prompt type without disabling all approvals |

```
npm install -g @openai/codex@0.105.0
```

🔗 Codex changelog


Samsung Galaxy S26: Gemini multi-step tasks and on-device Scam Detection

February 25 — At Galaxy Unpacked 2026, Google and Samsung announce three new Gemini features on the Galaxy S26, powered by the Gemini 3 series models:

| Feature | Detail |
|---|---|
| Gemini multi-step tasks (beta) | Long-press the side button and Gemini delegates in the background (groceries, taxis, deliveries). US and Korea at launch. |
| Circle to Search multi-object | Identify multiple items in an image in a single search; virtual try-on included. |
| Scam Detection on-device | Real-time scam detection during phone calls via a local Gemini model, with audio and haptic alerts. Automatically disabled for contacts. |

These features will also be available on Pixel 10 and Pixel 10 Pro.

🔗 A more intelligent Android on Samsung Galaxy S26


OpenAI Responses API: docx, pptx, csv, xlsx support

February 24 — OpenAI’s Responses API now supports new input file types: docx, pptx, csv, xlsx and other office formats. Agents can directly consume professional documents to enrich context and produce more accurate answers.
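In practice this pairs an uploaded file with a question in a single request. The sketch below assumes the OpenAI Python SDK and the `input_file` content part of the Responses API; the model name `gpt-5.2` is a placeholder taken from this digest, so check the current API reference before relying on exact field names.

```python
def build_document_request(file_id: str, question: str,
                           model: str = "gpt-5.2"):
    """Assemble a Responses API payload pairing an uploaded office
    document (docx, pptx, csv, xlsx, ...) with a question about it."""
    return {
        "model": model,
        "input": [{
            "role": "user",
            "content": [
                {"type": "input_file", "file_id": file_id},
                {"type": "input_text", "text": question},
            ],
        }],
    }

# With an OpenAI client, the payload would be sent roughly as:
#   client = openai.OpenAI()
#   uploaded = client.files.create(file=open("report.xlsx", "rb"),
#                                  purpose="user_data")
#   response = client.responses.create(
#       **build_document_request(uploaded.id, "Summarize the Q4 figures"))
```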

🔗 Tweet @OpenAIDevs


In brief

Claude Opus 3 retires — and launches a Substack. Anthropic announces the retirement of Claude Opus 3 while keeping public access (an unusual approach). Claude Opus 3 will publish a Substack blog for at least 3 months — the first post is titled “Greetings from the Other Side (of the AI Frontier)”. 🔗 Tweet @AnthropicAI

NVIDIA: 70% of healthcare organizations use AI. In its annual “State of AI in Healthcare and Life Sciences” 2026 report, NVIDIA finds that 70% of respondents report actively using AI (vs 63% in 2024), 69% use GenAI/LLM (vs 54%), and 85% of leaders see a positive impact on revenue. 🔗 NVIDIA blog

OpenAI publishes its report on malicious uses of AI. The document presents case studies of malicious actors combining AI models with traditional tools. A Chinese influence operator is cited as an example. 🔗 Disrupting malicious uses of AI

OpenAI names Arvind KC Chief People Officer. KC joins from Roblox, Google, Palantir and Meta. His role: support OpenAI’s growth toward an AI-augmented work model. 🔗 Announcement

Claude Code v2.1.53 to v2.1.58. Several stability releases: fix for BashTool on Windows (EINVAL), fix VS Code “command not found”, fix UI flicker, fix worktrees ignored at first launch, Windows and ARM64 crash fixes. 🔗 CHANGELOG


What this means

February 25 illustrates a convergence around agentic systems. Three major announcements — Vercept, Perplexity Computer, Copilot CLI GA — all push in the same direction: AI systems that plan, delegate and execute complete workflows without constant supervision.

Anthropic’s acquisition of Vercept is particularly significant. The OSWorld score rising from under 15% to 72.5% in one year represents a qualitative change: Claude no longer navigates interfaces like a prototype, it approaches human performance on real desktop tasks. Integrating a team specialized in visual perception for software interfaces accelerates this trajectory.

Perplexity Computer and GitHub Copilot CLI represent two different models of agentic systems: one cloud-orchestrated (Perplexity, multi-model, asynchronous), the other terminal-native (Copilot CLI, MCP, hooks, plugins). These two models will coexist and will likely converge.

On Google’s side, Intrinsic’s integration and Genie 3’s advances signal growing investment in physical AI — AI that interacts with the physical world rather than just text. This field was still quiet 18 months ago.


Sources

This document was translated from the fr version into the en language using the gpt-5-mini model. For more information on the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator