1M context window generally available for Claude, Genspark launches its first AI employee, Perplexity Computer on mobile


March 13, 2026 is marked by a structural announcement from Anthropic: the one-million token context window becomes generally available for Claude Opus 4.6 and Sonnet 4.6 at no extra cost. Genspark reimagines the workspace concept with its “first AI employee”, Claw, and simultaneously announces $385 million raised. Perplexity brings Computer to mobile, Claude Code receives two consecutive updates (v2.1.75 and v2.1.76), and GitHub, xAI, Runway and Kimi round out a dense period of announcements.


1M token context window generally available

March 13 — Anthropic announces that the one-million token context window is generally available for Claude Opus 4.6 and Claude Sonnet 4.6. Previously available in early access with an extra fee, the 1M context is now included at the standard rate, with no long-context premium.

| Feature | Detail |
| --- | --- |
| Pricing | Standard pricing across the entire 1M window, no extra fee |
| Images / PDFs | Up to 600 images or PDF pages (vs 100 previously) |
| Requests > 200K tokens | Work automatically, no special beta header required |
| MRCR v2 benchmark | Opus 4.6: 78.3% at 1M tokens — top score among frontier models |
| Claude Code | 1M context included by default for Max, Team and Enterprise plans |
| Availability | Claude Platform, Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry |

Practically, loading an entire codebase, thousands of pages of contracts, or an agent’s full trace in a single conversation is now possible without extra configuration. Boris Cherny (head of Claude Code) confirmed using the 1M context exclusively for months and that it is now part of the default plans for Max, Team and Enterprise.
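
As a rough illustration of what "an entire codebase in one conversation" means in practice, here is a back-of-the-envelope sketch. The helpers `estimate_tokens` and `fits_in_context` are illustrative, not part of any SDK, and the ~4 characters per token ratio is a common rough heuristic rather than an exact tokenizer count:

```python
# Rough heuristic: ~4 characters per token for typical English/code text.
# This is an estimate only; real counts depend on the model's tokenizer.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 1_000_000

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(texts: list[str], reserve: int = 50_000) -> bool:
    """Check whether the combined texts fit in a 1M-token window,
    keeping `reserve` tokens free for the model's reply."""
    total = sum(estimate_tokens(t) for t in texts)
    return total + reserve <= CONTEXT_WINDOW

# Example: ~3 MB of source text is roughly 750K tokens, so it fits.
print(fits_in_context(["x" * 3_000_000]))  # True
```

Under this heuristic, a 1M window corresponds to roughly 4 MB of raw text, which is why full repositories or thousands of contract pages become single-conversation workloads.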

The 78.3% score on MRCR v2 (Multi-Round Co-reference Resolution) is particularly notable: it places Opus 4.6 first among frontier models on this benchmark, which measures the ability to retrieve and use precise information within a very long window.

🔗 Official Anthropic blog


Genspark AI Workspace 3.0: “Your first AI employee”

March 13 — Genspark announces AI Workspace 3.0, a major conceptual pivot: the platform no longer presents itself as an AI-assisted workspace, but as a place where you hire AI agents to work for you. The startup reports **$200 million annual recurring revenue (ARR) reached in just 11 months**, and simultaneously announces an expanded **Series B of $385 million**.

The centerpiece of this release is Genspark Claw: the platform’s first “AI employee.” It runs continuously in a Genspark Cloud Computer dedicated per user — an isolated, persistent cloud environment — and connects to WhatsApp, Telegram, Microsoft Teams and Slack to receive instructions and report task progress.

| Feature | Description |
| --- | --- |
| Genspark Claw | AI "employee" agent, runs 24/7 in a dedicated Cloud Computer, accessible via WhatsApp/Telegram/Teams/Slack |
| Genspark Cloud Computer | Isolated, persistent cloud environment assigned to each user |
| Genspark Workflows | Automates repetitive tasks across ~20 apps (Google, Office, etc.) via templates or custom workflows |
| Genspark Teams | Team messaging with integrated AI (DMs, groups, org discovery) — positioned as a free alternative to Slack |
| Meeting Bots | Bots that automatically join scheduled meetings, capture discussions, organize notes and send summaries |
| Speakly (iOS/Android) | AI voice dictation with AI-assisted editing, available on mobile |
| Chrome Extension | AI assistant in the Chrome sidebar with understanding of web page context |

Workspace 3.0’s positioning is summarized as: “You don’t work with the AI anymore — you hire the AI to work for you.” The additional funding signals that the platform, which now covers end-to-end use cases (communication, automation, meetings, content creation), is entering an accelerated expansion phase.

🔗 Announcement tweet 🔗 Genspark blog


Perplexity Computer available on mobile

March 13 — Perplexity announces that Computer is now available on mobile. Previously limited to desktop platforms, the feature is accessible from smartphones: users can start any task on their mobile device and control Computer directly from their phone — browsing the web, running complex queries and automating tasks without being at a computer.

“Perplexity Computer is now on mobile. Start any task on any device. Manage Computer from your phone.” — @perplexity_ai on X

The announcement generated over 900,000 views within hours, a sign of strong interest in agent mobility. The same day, Perplexity announced integration of the NVIDIA Nemotron 3 Super model into its platform — accessible via the Perplexity interface, the Agent API and Computer — as part of the NVIDIA GTC conference.

🔗 Perplexity Computer mobile tweet 🔗 NVIDIA Nemotron 3 Super tweet


Claude Code v2.1.75 and v2.1.76

v2.1.75: 1M context by default and universal voice mode

March 13 — Claude Code v2.1.75 directly integrates the general availability of the 1M context and rolls out voice mode broadly.

| Feature | Description |
| --- | --- |
| 1M context by default | Opus 4.6 automatically enabled on Max, Team and Enterprise plans |
| 100% voice mode rollout | Full deployment confirmed (after progressive rollout since March 3) |
| /color | Set a custom color for the prompt bar per session |
| /rename | View and edit the session name in the prompt bar |
| File memory timestamps | Helps Claude distinguish recent memories from older ones |

Notable fixes:

  • Voice mode did not activate correctly on new installs.
  • The Claude Code header did not update the model name after /model.
  • Session crashes with undefined values in attachments.
  • The Bash tool altered the ! character in commands with pipes.

v2.1.76: MCP Elicitation, /effort and PostCompact hook

March 14 — v2.1.76 (commit 420a188) brings several advanced features for developers and enterprise admins.

| Feature | Description |
| --- | --- |
| MCP Elicitation | MCP servers can request structured inputs during a task via an interactive dialog (form fields or browser URL) |
| Elicitation / ElicitationResult hooks | To intercept and override responses to MCP elicitation requests |
| -n / --name flag | Set a display name for the session at startup |
| worktree.sparsePaths | For large monorepos with claude --worktree, checks out only the necessary directories via git sparse-checkout |
| PostCompact hook | Fires after context compaction |
| /effort command | Set the model's effort level |
| feedbackSurveyRate | Enterprise admins can configure the sampling rate of the session quality survey |
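
For teams configuring these, here is a sketch of how the worktree and hook settings might look in a project's .claude/settings.json. The nesting shown follows Claude Code's usual settings and hooks conventions; only the key names worktree.sparsePaths and PostCompact come from the changelog, everything else (paths, command) is an illustrative assumption:

```json
{
  "worktree": {
    "sparsePaths": ["packages/api", "packages/shared"]
  },
  "hooks": {
    "PostCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "echo 'context was compacted' >> ~/.claude/compact.log"
          }
        ]
      }
    ]
  }
}
```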

Important fix: deferred tools (like ToolSearch) lost their input schemas after compaction, causing rejection of array and numeric parameters.
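
The MCP Elicitation flow can be pictured as a JSON-RPC exchange between server and client. A minimal sketch in Python, assuming the request/response shape of the MCP elicitation specification (method "elicitation/create" with a restricted JSON Schema); treat the exact field names as an approximation of that spec rather than a definitive reference:

```python
import json

# Server -> client: ask the user for structured input mid-task.
elicit_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "elicitation/create",
    "params": {
        "message": "Which environment should I deploy to?",
        "requestedSchema": {
            "type": "object",
            "properties": {
                "environment": {
                    "type": "string",
                    "enum": ["staging", "production"],
                }
            },
            "required": ["environment"],
        },
    },
}

# Client -> server: the user filled in the form and accepted.
elicit_response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {"action": "accept", "content": {"environment": "staging"}},
}

print(json.dumps(elicit_request, indent=2))
```

The new hooks described above sit at exactly this boundary: they can observe the elicitation request and override the response before it reaches the MCP server.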

The Remote Control feature (launched March 13, relayed by Boris Cherny) also allows launching Claude Code sessions on your computer from your phone via claude remote-control — v2.1.76 fixes stabilize this feature (fixes for sessions dying silently, queued messages, WebSocket reconnection).

🔗 Claude Code CHANGELOG 🔗 Voice mode + 1M tweet 🔗 Remote Control tweet


Claude Partner Network: $100M for partners

March 12 — Anthropic launches the Claude Partner Network, a program for organizations that help companies adopt Claude, backed by an initial $100 million commitment for 2026.

| Component | Detail |
| --- | --- |
| Investment | $100M committed for 2026, with expected growth |
| Direct support | Funding for training, commercial activation, co-marketing |
| Partner team | 5× growth of the dedicated team |
| Partner portal | Anthropic Academy materials, sales playbooks, co-marketing resources |
| Certification | Claude Certified Architect and Foundations — available at launch |
| Code Modernization Kit | For migrating legacy codebases and reducing technical debt |

Partners include Accenture (training 30,000 professionals on Claude), Deloitte, Cognizant and Infosys. Joining the network is free and open to any organization that resells Claude.

🔗 Official announcement


GitHub: Issue Fields and Copilot updates

March 12 — GitHub announces the public preview of Issue Fields: structured metadata fields for issues. Similar to Jira or Linear custom fields, they let you enrich issues with typed data (text, number, date, single or multi-select) to track business attributes directly in GitHub.

March 13 — Two Copilot updates round out the week:

  • Copilot Student plan: students with GitHub Education benefits are automatically migrated to this new dedicated plan.
  • Optional approval for Actions workflows: a new repository setting lets admins skip human approval when the Copilot coding agent opens a PR. The default behavior (approval required) remains — disabling is fully opt-in.

🔗 Issue Fields — public preview 🔗 Copilot Student plan 🔗 Copilot Agent — skip approval


In brief

xAI — Grok Imagine multi-image video: Grok Imagine can now create videos from up to 7 reference images (characters, locations, objects). Available on iOS, Android and Web since March 13. A March 14 update improves control over parameters and characters. 🔗 March 13 tweet | March 14 tweet

Google DeepMind — Platform 37 + The AI Exchange: On March 12, DeepMind revealed the name of its new London HQ: Platform 37, a nod to AlphaGo’s “Move 37” against Lee Sedol in 2016. Later in 2026, The AI Exchange will open in that building — a public space dedicated to exploring AI. 🔗 DeepMind announcement

Kimi: Kimi K2.5 becomes the default provider in BrowserOS, an open-source browser with an integrated AI agent capable of browsing the web, filling forms, logging into apps via MCP and running multi-step workflows (March 13). Zhilin Yang, founder and CEO of Moonshot AI, will present the large-scale training strategies behind Kimi K2.5 at NVIDIA GTC 2026 on March 17. 🔗 BrowserOS tweet | GTC tweet

Runway — Ethics of interactive AI characters: Three days after launching Runway Characters, the team published on March 12 a deep-dive on risks and safety measures for interactive AI avatars. Measures include banning child avatars and public figures without permission, mandatory transparency about AI nature, and a reporting portal for non-consensual use of likeness. The stance is notable: an interactive avatar can “overcome objections in real time,” which makes it fundamentally different from a static deepfake. 🔗 Runway article

Manus — First annual letter: On March 12, Manus published “Mens et Manus, Episode 1”, its first annual letter. It reviews the founding philosophy (“AI shouldn’t just think, it should also act”), user profiles that benefited most from the product, and sets goals for the coming year: broaden access and run the agent continuously. Manus has been integrated into Meta since December 2025. 🔗 Annual letter

Anthropic — Usage promotion ×2: Starting March 14 and for two weeks, usage quotas are doubled outside peak hours (on weekdays, any time except 5am–11am PT; all day on weekends) on all plans (Free, Pro, Max) and in Claude Code. 🔗 Official tweet
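
The off-peak window above is easy to get wrong across time zones, so here is a small sketch that encodes it. The helper is_double_quota is hypothetical (not an Anthropic API) and assumes the schedule exactly as summarized in this digest:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

PT = ZoneInfo("America/Los_Angeles")

def is_double_quota(dt: datetime) -> bool:
    """Return True if the doubled-quota window applies at `dt`.

    Assumption (per the announcement as summarized here): quotas are
    doubled all day on weekends, and on weekdays at any time outside
    the 5am-11am PT peak.
    """
    local = dt.astimezone(PT)
    if local.weekday() >= 5:  # Saturday/Sunday: doubled all day
        return True
    return not (5 <= local.hour < 11)  # weekdays: doubled outside peak

# A Wednesday at 8am PT falls inside the peak window (not doubled):
print(is_double_quota(datetime(2026, 3, 18, 8, 0, tzinfo=PT)))  # False
# A Saturday morning is doubled:
print(is_double_quota(datetime(2026, 3, 21, 8, 0, tzinfo=PT)))  # True
```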

OpenAI — Global Codex meetups: Codex ambassadors are organizing local meetups in several cities (Bengaluru on March 14, Mexico City on April 8) for hands-on workshops and workflow sharing. 🔗 Codex Meetups page


What this means

The general availability of the 1M context with no extra cost is the structural change of the week. By removing the premium on long windows, Anthropic lowers adoption friction for the use cases that need it most: large codebase analysis, document corpus processing, long-lived agents. The 78.3% MRCR v2 score confirms that performance does not degrade as you approach the million-token mark — which was the main obstacle to confident adoption of long-context windows. On Genspark's side, the trajectory to $200M ARR in 11 months — combined with a $385M raise — indicates real traction, not just a communication pivot. The concept of Claw as an "AI employee" accessible via everyday messaging channels (WhatsApp, Slack) is a serious attempt to make autonomous agents usable without technical friction.

The clustering of announcements around NVIDIA GTC (Perplexity + Nemotron, Kimi, DeepMind Platform 37) confirms that the conference remains a moment of acceleration for the AI ecosystem beyond just hardware.


Sources

This document was translated from the fr version into the en language using the gpt-5-mini model. For more information on the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator