Anthropic + Amazon 5 GW $5B, GitHub Copilot restructures its plans, Kimi K2.6 SOTA open-source, Qwen3.6-Max-Preview, Codex Chronicle

April 20, 2026 brings several major announcements together: Anthropic and Amazon expand their partnership around 5 GW of compute capacity and a $5B investment, GitHub suspends new signups for individual Copilot plans while tightening usage limits, and two new models push coding benchmarks higher: Kimi K2.6 (open-source) and Qwen3.6-Max-Preview (proprietary). OpenAI also launches Codex Chronicle, a contextual-memory feature based on screenshots.


Anthropic and Amazon: 5 GW of compute, $5B investment

April 20: Anthropic and Amazon have signed a new infrastructure agreement that secures up to 5 gigawatts (GW) of compute capacity for training and deploying Claude. Significant Trainium2 capacity comes online as early as Q2 2026, with nearly 1 GW of combined Trainium2 and Trainium3 capacity expected by the end of 2026.

The agreement is structured around three pillars:

| Pillar | Detail |
| --- | --- |
| Infrastructure | Commitment of more than USD 100 billion over 10 years to AWS, covering Graviton and Trainium2 through Trainium4 |
| Claude Platform on AWS | The full Claude Platform integrated directly into AWS: same account, same billing, no separate contracts |
| Investment | Amazon is investing $5 billion in Anthropic today, with up to $20 billion additional in the future |

Anthropic reveals that its annualized revenue now exceeds $30 billion, up from around $9 billion at the end of 2025. This rapid growth has put pressure on existing infrastructure, affecting reliability for free, Pro, Max, and Team users during peak hours.

More than 100,000 customers already run Claude on Amazon Bedrock, and Anthropic uses more than one million Trainium2 chips. The full Claude Platform is now directly accessible from an AWS account without separate contracts, a concrete change for developers and companies already on AWS.

Claude also remains the only frontier AI model available on the three major global cloud platforms: AWS (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry).
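For teams already on AWS, calling Claude through Bedrock means building an Anthropic Messages-format request body. The sketch below shows that body, assuming the standard format Bedrock documents for Anthropic models; the model ID in the commented-out call is a placeholder, not a real identifier.

```python
import json

def claude_bedrock_body(prompt: str, max_tokens: int = 512) -> dict:
    # Anthropic Messages-format body, as used by Bedrock's invoke_model.
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = claude_bedrock_body("Summarize this partnership in one sentence.")
print(json.dumps(body, indent=2))

# With AWS credentials configured, the call itself would look like this
# (the model ID below is a placeholder, not a real Bedrock identifier):
#
#   import boto3
#   runtime = boto3.client("bedrock-runtime")
#   resp = runtime.invoke_model(modelId="anthropic.claude-...",
#                               body=json.dumps(body))
#   print(json.loads(resp["body"].read())["content"][0]["text"])
```

Because billing now flows through the AWS account, no separate Anthropic contract or API key is needed for this path.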

“Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand.” (Dario Amodei, via @AnthropicAI on X)

🔗 Official Anthropic announcement


GitHub Copilot: suspension of Pro/Pro+ signups and tighter limits

April 20: GitHub announces significant changes to individual Copilot plans. New signups for the Pro, Pro+, and Student plans are suspended immediately; Copilot Free remains open.

The changes in detail:

| Change | Detail |
| --- | --- |
| Signup pause | Pro, Pro+, Student: closed to new users. Copilot Free remains open |
| Usage limits | Pro+ offers 5× the limits of Pro. Warnings appear in VS Code and Copilot CLI as the limit approaches |
| Opus models | Opus removed from Copilot Pro. Opus 4.7 remains available on Pro+ only; Opus 4.5 and 4.6 will also be removed from Pro+ later |
| Refund | Option to cancel and be refunded for April by contacting support between April 20 and May 20, 2026 |

For users already subscribed, existing plans remain active. The practical impact: Pro users who had access to Opus models lose that access, while Pro+ effectively becomes the middle tier with 5× more quota. The refund window until May 20 leaves one month to decide.

🔗 GitHub Changelog (individual plan changes)


Kimi K2.6: open-source SOTA in coding and agents

April 20: Moonshot AI launches Kimi K2.6, a new open-source model that sets several state-of-the-art records on coding and agent benchmarks. The announcement generated more than 1.5 million views in a few hours.

Open-source benchmarks:

| Benchmark | K2.6 score |
| --- | --- |
| SWE-Bench Pro | 58.6 |
| SWE-bench Multilingual | 76.7 |
| HLE with tools | 54.0 |
| BrowseComp | 83.2 |
| Toolathlon | 50.0 |
| Charxiv w/ python | 86.7 |
| Math Vision w/ python | 93.2 |

The improvements over K2.5 are substantial. For long-horizon coding, K2.6 chains up to 4,000 tool calls in a single session and can run continuously for more than 12 hours, with multi-language generalization (Rust, Go, Python) and multi-task generalization (frontend, DevOps, performance optimization).

The multi-agent architecture has also improved: 300 parallel sub-agents × 4,000 steps per run, compared with 100 sub-agents × 1,500 steps for K2.5 (a total step budget of 1,200,000 versus 150,000, an 8× increase). K2.6 natively supports advanced frontend interfaces: videos in hero sections, WebGL shaders, GSAP + Framer Motion animations, Three.js 3D renders.

The open-source weights are available on HuggingFace. The API is accessible on platform.moonshot.ai, and the model runs in chat mode and agent mode on kimi.com.
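Given K2.6's emphasis on agentic tool calling, an API request would typically declare tools the model can invoke. The payload below is a sketch in the OpenAI-style chat-completions shape Moonshot has used for earlier Kimi releases; the model name, tool, and endpoint are illustrative assumptions, not official values.

```python
import json

# OpenAI-style chat payload with one declared function tool, mirroring
# K2.6's agentic tool-calling. Model name and tool are illustrative.
payload = {
    "model": "kimi-k2.6",  # hypothetical identifier; see platform.moonshot.ai
    "messages": [
        {"role": "user", "content": "Profile this repo and suggest optimizations."},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "run_shell",  # invented tool for this sketch
            "description": "Run a shell command in the project sandbox.",
            "parameters": {
                "type": "object",
                "properties": {"cmd": {"type": "string"}},
                "required": ["cmd"],
            },
        },
    }],
}
print(json.dumps(payload, indent=2))

# An agent loop would POST this payload, execute any tool calls the model
# returns, append the results as "tool" messages, and repeat; K2.6 reportedly
# sustains thousands of such iterations in one session.
```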

🔗 Kimi K2.6 announcement on X | 🔗 Kimi K2.6 blog | 🔗 Weights on HuggingFace


Qwen3.6-Max-Preview: preview of the next flagship model

April 20: Alibaba Qwen launches Qwen3.6-Max-Preview, a preview of its next flagship proprietary model and successor to Qwen3.6-Plus.

Benchmark gains versus Qwen3.6-Plus:

| Benchmark | Gain |
| --- | --- |
| SkillsBench | +9.9 |
| SciCode | +6.3 |
| NL2Repo | +5.0 |
| Terminal-Bench 2.0 | +3.8 |
| QwenChineseBench | +5.3 |
| SuperGPQA | +2.3 |
| ToolcallFormatIFBench | +2.8 |

The model ranks first on 6 major coding benchmarks: SWE-bench Pro, Terminal-Bench 2.0, SkillsBench, QwenClawBench, QwenWebBench, and SciCode. It is available today in early access via chat.qwen.ai and via the Alibaba Cloud Model Studio API under the identifier qwen3.6-max-preview. It is compatible with the OpenAI and Anthropic API specifications and supports a preserve_thinking mode for agentic tasks.
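Since the announcement says the API is OpenAI-compatible, existing client code should only need a different base URL and model identifier. A minimal sketch: the model identifier comes from the announcement, while the endpoint and key handling are assumptions to check against the Model Studio docs.

```python
import os

# Request parameters for an OpenAI-compatible endpoint. The model identifier
# is from the announcement; everything else here is illustrative.
request = {
    "model": "qwen3.6-max-preview",
    "messages": [{"role": "user", "content": "Port this Bash script to Python."}],
}
print(request["model"])

# With the openai SDK installed (base URL is an assumption; check the
# Alibaba Cloud Model Studio docs for the real endpoint):
#
#   from openai import OpenAI
#   client = OpenAI(base_url="https://.../compatible-mode/v1",
#                   api_key=os.environ["MODEL_STUDIO_API_KEY"])
#   completion = client.chat.completions.create(**request)
#   print(completion.choices[0].message.content)
```

OpenAI-spec compatibility means drop-in use with tooling that already targets that interface; Anthropic-spec compatibility does the same for Claude-oriented clients.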

This is still a version under active development; additional iterations are announced before the final release.

🔗 Qwen announcement on X | 🔗 Qwen3.6-Max-Preview blog


Codex Chronicle (Research Preview): contextual memory via screenshots

April 20: OpenAI launches Codex Chronicle in Research Preview, a new contextual-memory feature for Codex. Chronicle runs agents in the background to build memories from recent screenshots, allowing Codex to resume a work session without the user having to manually refresh the context.

How it works: agents periodically capture the screen, extract work context, and store memories locally on the device. The user can inspect and edit these memories. OpenAI warns that other applications may potentially access the screenshot files.
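The capture-extract-store-inspect flow described above can be illustrated with a toy sketch. This is emphatically not OpenAI's implementation; every name and the storage format are invented, but it shows why locally stored, user-editable memories also imply the file-access caveat.

```python
import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path

# Toy sketch of the described flow (capture -> extract -> store locally ->
# let the user inspect and edit). NOT OpenAI's implementation; names and
# storage format are invented for illustration.

@dataclass
class Memory:
    timestamp: float
    source: str   # e.g. path of the screenshot the memory was derived from
    summary: str  # extracted work context

class MemoryStore:
    def __init__(self, path: Path):
        self.path = path
        self.memories: list[Memory] = []

    def add(self, source: str, summary: str) -> None:
        self.memories.append(Memory(time.time(), source, summary))

    def edit(self, index: int, new_summary: str) -> None:
        # User-editable memories, as the announcement describes.
        self.memories[index].summary = new_summary

    def save(self) -> None:
        # Plain file on the local device: this is the kind of storage that
        # other applications could potentially read, hence OpenAI's warning.
        self.path.write_text(json.dumps([asdict(m) for m in self.memories]))

store = MemoryStore(Path("memories.json"))
store.add("shot_001.png", "User was editing api/server.py, fixing a 500 error.")
store.edit(0, "User was debugging a 500 error in api/server.py.")
store.save()
print(store.memories[0].summary)
```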

Availability at launch is limited: reserved for Pro users on macOS, and excluded from the EU, UK, and Switzerland during the learning phase.

🔗 Codex Chronicle announcement on X


Grok: smarter video extensions

April 20: Grok Imagine announces improved video extensions (Smarter Video Extensions). Grok now sees the original prompt and the source clip to generate extensions that are coherent in both content and audio. Audio continuity is maintained throughout the extension. Available in the Grok app and on the web.

🔗 Grok announcement on X


NotebookLM: custom covers for notebooks

April 16: NotebookLM now lets you add a custom cover illustration and a description to any notebook. The feature aims to personalize the grid display before sharing a notebook. Recommended format: 16:9 image.

🔗 NotebookLM announcement on X


In brief

TeenAegis AI Danger Index (Apr 18): In its first AI Danger Index, TeenAegis gave OpenAI the lowest risk score among the providers evaluated. The index assesses protection for young users: age-appropriate controls, supervision, reporting, and safeguards. 🔗 OpenAI Newsroom tweet

OpenAI Academy, 3M+ users (Apr 19): OpenAI's AI training platform surpasses 3 million users, with in-person events held this week from Warsaw to Abilene, Texas, including Cal State Bakersfield. 🔗 OpenAI Newsroom tweet


What this means

The Anthropic-Amazon deal is significant on several levels: it formalizes a mutual dependency (Anthropic needs the compute, Amazon needs Claude to keep Bedrock competitive), and unified AWS billing removes real friction for teams already in the Amazon ecosystem. The $30B annualized run-rate figure, more than tripled in just a few months, explains why the infrastructure is under pressure and why this agreement was urgent.

On the model side, April 20 illustrates a rapid narrowing of gaps: Kimi K2.6 in open-source reaches scores comparable to proprietary models released only a few weeks ago, and Qwen3.6-Max-Preview puts Alibaba at the top of coding benchmarks even before the final version. GitHub's Copilot plan restructuring signals tension between broad adoption (Free) and sustainable monetization (Pro+), with a user experience that fragments across tiers.


Sources

This document was translated from the fr version into the en language using the gpt-5.4-mini model. For more information about the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator