April 14, 2026 marks a busy day for AI-assisted development tools: Anthropic launches routines in Claude Code, making it possible to automate entire workflows on a schedule or via webhook, without keeping your computer on. Google DeepMind releases Gemini Robotics-ER 1.6 with new industrial perception capabilities developed with Boston Dynamics. Z.ai open-sources GLM-5.1 under the MIT license, ranked number 1 among open source models on SWE-Bench Pro. GitHub Copilot adds three useful features: three-click conflict resolution, US/EU data residency, and model selection for third-party agents.
Routines in Claude Code – research preview
April 14 – Anthropic launches routines in Claude Code in research preview. A routine is an automation configured once (a prompt, a repository, and connectors) that then runs autonomously, without the user staying connected.
Three trigger types are available:
| Type | Trigger | Example use |
|---|---|---|
| Scheduled | Cron (hourly, nightly, weekly) | Nightly triage of Linear bugs, opening a fix PR |
| API | HTTP POST call to a dedicated endpoint | Datadog alert → automatic triage + fix draft |
| Webhook | GitHub events (PR, push, …) | Automatic code review on every opened PR |
Each routine has its own endpoint and authentication token. API routines integrate into any existing pipeline (alerts, deployment hooks, internal tools). Webhook routines start a new session for each PR matching the defined filters, and feed the session with subsequent updates (comments, CI failures).
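To make the API trigger concrete, here is a minimal sketch of firing a routine from an alerting pipeline. The endpoint URL, token environment variable, and payload fields are illustrative assumptions, not documented values; the real per-routine endpoint and token come from the routine's settings.

```python
# Sketch: triggering a hypothetical API routine from an alerting pipeline.
# The endpoint URL, token variable, and payload fields are assumptions --
# the real values come from the routine's own settings page.
import json
import os
import urllib.request

def build_trigger_request(endpoint: str, token: str, payload: dict) -> urllib.request.Request:
    """Build the HTTP POST that would fire the routine."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # per-routine token (bearer scheme assumed)
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_trigger_request(
    "https://example.invalid/routines/my-routine",  # placeholder endpoint
    os.environ.get("ROUTINE_TOKEN", "dummy-token"),
    {"alert": "High error rate on checkout-service"},
)
print(req.get_method(), req.get_full_url())
# urllib.request.urlopen(req) would actually fire the routine -- omitted here.
```

The same request shape would work from a Datadog webhook, a deployment hook, or any internal tool that can issue an HTTP POST.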
“Configure a routine once (a prompt, a repo, and your connectors), and it can run on a schedule, from an API call, or in response to an event. Routines run on our web infrastructure, so you don’t have to keep your laptop open.”
Availability and limits:
| Plan | Routines/day |
|---|---|
| Pro | 5 |
| Max | 15 |
| Team / Enterprise | 25 |
Available on all paid plans (Pro, Max, Team, Enterprise) with Claude Code web enabled. Beyond the daily quota, additional usage remains possible; routines consume subscription credits just like interactive sessions.
Documented use cases:
- Backlog management: nightly triage, labeling, Slack summary
- Docs drift: weekly scan of merged PRs, detection of pages to update
- Post-deployment verification: smoke checks after each release
- SDK porting: every merged Python PR automatically triggers a port to the Go SDK
🔗 Anthropic Blog · 🔗 Announcement tweet
Claude Code v2.1.105 – PreCompact hooks, plugin monitors, /proactive
April 11 to 13 – Version 2.1.105 of Claude Code brings several notable improvements:
| Feature | Description |
|---|---|
| path parameter for EnterWorktree | Allows switching to an existing worktree of the current repository |
| PreCompact hook | Hooks can now block compaction (exit code 2 or {"decision":"block"}) |
| Background monitors for plugins | monitors key in the plugin manifest, armed automatically at session start |
| /proactive | New alias for /loop |
| Dropping blocked Streams API calls | Drop after 5 minutes without data + retry in non-streaming mode |
| Network error messages | Immediate display of a retry message instead of a silent spinner |
| Long file display | Very long single-line writes (for example minified JSON) are truncated in the interface |
| Improved /doctor | Status icons + f key to ask Claude to fix detected issues |
April 14 – Version 2.1.107 brings an interface improvement: thinking hints now appear earlier during long operations, reducing the feeling of waiting without visual feedback.
Anthropic – Vas Narasimhan joins the board of directors
April 14 – Anthropic's Long-Term Benefit Trust (LTBT) has appointed Vas Narasimhan to the board of directors. A physician-scientist and CEO of Novartis, he has overseen the development and approval of more than 35 innovative medicines in one of the most regulated sectors in the world.
With this appointment, the Trust-appointed directors now make up the majority of the board. The LTBT is an independent body whose members have no financial interest in Anthropic; its role is to maintain the balance between commercial success and the long-term public benefit mission.
Gemini Robotics-ER 1.6 – industrial perception and safety
April 14 – Google DeepMind releases Gemini Robotics-ER 1.6, an update to its embodied reasoning model for robotics. The model improves visual and spatial understanding to allow robots to plan and execute real-world tasks with greater autonomy. It outperforms Gemini Robotics-ER 1.5 and Gemini 3.0 Flash on internal robotics benchmarks.
New capabilities:
| Capability | Description |
|---|---|
| Pointing | Object detection and counting, relational logic (smaller/larger), trajectories and grasp points, complex constraints |
| Multi-view success detection | Analyzes multiple camera angles to verify that a task is truly completed |
| Instrument reading | Reads circular gauges and transparent tubes (sight glasses); developed with Boston Dynamics for industrial inspection |
| Safety (ASIMOV v2 benchmark) | Best score among all tested models on following safety instructions |
The instrument-reading capability emerged from collaboration with Boston Dynamics for the Spot robot, used in industrial facility inspections. It combines spatial reasoning and code execution to interpret pressure gauges with high precision.
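For the pointing capability, earlier ER models returned points as a JSON list of {"point": [y, x], "label": ...} objects with coordinates normalized to a 0–1000 grid. Assuming ER 1.6 keeps that convention (an assumption, not confirmed by the announcement), a minimal parser that converts the output to pixel coordinates looks like this:

```python
# Minimal parser for pointing output, assuming ER 1.6 keeps the earlier
# ER convention: a JSON list of {"point": [y, x], "label": ...} objects
# with coordinates normalized to a 0-1000 grid.
import json

def to_pixels(raw: str, width: int, height: int) -> list[tuple[str, int, int]]:
    """Convert normalized [y, x] points into (label, x_px, y_px) tuples."""
    out = []
    for item in json.loads(raw):
        y, x = item["point"]  # note: y comes first in this convention
        out.append((item["label"], round(x / 1000 * width), round(y / 1000 * height)))
    return out

sample = '[{"point": [500, 250], "label": "valve handle"}]'
print(to_pixels(sample, width=1920, height=1080))
# -> [('valve handle', 480, 540)]
```

The y-first ordering is the main pitfall: swapping the axes silently mirrors every grasp point.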
Availability: Gemini API (gemini-robotics-er-1.6-preview), Google AI Studio, and a starter Colab notebook on GitHub.
🔗 Google DeepMind Blog · 🔗 Announcement tweet
GLM-5.1 – Z.ai opens its agentic model under the MIT license
April 7 (caught-up announcement, missed during last week's scan) – Z.ai (formerly ZhipuAI) has released GLM-5.1, its new flagship model for agentic coding, available open source under the MIT license.
Code benchmark performance:
| Benchmark | GLM-5.1 | GLM-5 | Claude Opus 4.6 | GPT-5.4 | Gemini 3.1 Pro |
|---|---|---|---|---|---|
| SWE-Bench Pro | 58.4 | 55.1 | 57.3 | 57.7 | 54.2 |
| NL2Repo | 42.7 | 35.9 | 49.8 | 41.3 | 33.4 |
| Terminal-Bench 2.0 | 63.5 | 56.2 | 65.4 | – | 68.5 |
GLM-5.1 ranks number 1 among open source models and within the global top 3 on SWE-Bench Pro, Terminal-Bench, and NL2Repo.
The key difference: long horizon. Previous models, including GLM-5, quickly improve their performance at first, then plateau. GLM-5.1 is designed to remain effective on agentic tasks over much longer horizons: it can work autonomously for 8 hours, refining its strategies over thousands of tool calls.
Three scenarios illustrate this capability:
- Vector database optimization over 600 iterations: GLM-5.1 reaches 21,500 requests per second on VectorDBBench, 6 times the best result obtained in a 50-turn session.
- GPU kernel optimization over 1,000+ turns: 3.6x acceleration on KernelBench Level 3.
- Building a Linux desktop in 8 hours: from a simple natural-language prompt, GLM-5.1 produces a complete desktop environment in the browser (file manager, terminal, editor, system monitor).
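The vector-database figure can be sanity-checked from the numbers given: if 21,500 requests per second is 6 times the best 50-turn result, that baseline must have been roughly 3,583 req/s. A quick check:

```python
# Sanity check on the reported VectorDBBench numbers: 21,500 req/s is
# stated to be 6x the best 50-turn result, which pins that baseline.
long_horizon_rps = 21_500
speedup = 6
baseline_rps = long_horizon_rps / speedup
print(f"implied 50-turn baseline: {baseline_rps:,.0f} req/s")
# -> implied 50-turn baseline: 3,583 req/s
```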
Availability: open source weights on HuggingFace (zai-org/GLM-5.1), API on api.z.ai and BigModel.cn, compatible with Claude Code, Cline, Roo Code, Kilo Code, and OpenCode.
🔗 GLM-5.1 Blog · 🔗 Announcement tweet
Codex CLI v0.120.0 – real-time agent streaming
April 11 โ Version 0.120.0 of Codex CLI is released as a stable version. It brings several functional improvements:
| Feature | Detail |
|---|---|
| Realtime V2 | Streams background agent progress in real time, queues subsequent responses |
| Improved TUI hooks | Active hooks are shown separately, the history of completed hooks is simplified |
| Thread title in status | Custom TUI statuses can include the renamed thread title |
| code-mode output schema | code-mode tool declarations now include outputSchema MCP details |
| SessionStart hooks | Distinguishes sessions created by /clear from starts or resumes |
The release also includes several bug fixes: handling elevated Windows sandboxes, panics during TLS WebSocket connections, preserving the order of tool search results.
🔗 Release v0.120.0
GitHub Copilot – three new features
Model selection for third-party agents
April 14 – It is now possible to choose the model when launching a task with the Claude (Anthropic) and Codex (OpenAI) agents on github.com.
| Agent | Available models |
|---|---|
| Claude | Claude Sonnet 4.6, Claude Opus 4.6, Claude Sonnet 4.5, Claude Opus 4.5 |
| Codex | GPT-5.2-Codex, GPT-5.3-Codex, GPT-5.4 |
Included with the existing Copilot subscription (Business or Enterprise), but the administrator must enable the corresponding policies at the company or organization level.
🔗 Model selection changelog
Three-click merge conflict resolution
April 13 – A new “Fix with Copilot” button appears on pull requests with merge conflicts. In three clicks, the Copilot cloud agent resolves the conflicts, verifies that the build and tests pass, then pushes from its isolated cloud environment. The @copilot mention in PRs also makes it possible to fix failing GitHub Actions workflows or address code review comments. Available on all paid Copilot plans.
🔗 Merge conflicts changelog
US/EU data residency and FedRAMP compliance
April 13 – GitHub Copilot now supports data residency for the US and EU regions: all associated inference and data remain in the designated geographic area. US government customers also benefit from FedRAMP Moderate compliance. Data-resident requests carry a 10% surcharge on the premium request multiplier. Gemini models are not yet supported (GCP does not yet offer inference endpoints with data residency). Japan and Australia are on the roadmap for 2026.
Generative media – Runway, Luma, MiniMax, ElevenLabs
Runway Characters in video calls
April 14 – Runway rolls out an update to Characters allowing your AI avatar to join a Zoom, Google Meet, or Teams video call. The process: choose or create a Character → paste the meeting link → click “Join Meeting”. The feature, initially available as an API for developers since March 9, is now accessible to all users from the Runway app.
🔗 Runway tweet
Luma – voice dictation and logo animation
April 14 – Luma Labs launches two new features: voice dictation in its app (the user speaks, the description is converted into a generation prompt) and cinematic logo animation (upload your logo, and the agent produces an animated branding-oriented intro).
🔗 Voice dictation tweet · 🔗 Logo animation tweet
MiniMax – three open source Music Skills for agents
April 14 – MiniMax open-sources three Music Skills for agents: minimax-music-gen (generates a full track from a prompt, automatically choosing between original, instrumental, and cover), buddy-sings (the AI agent sings as a vocal companion), and playlist curation (builds playlists from the user's library). These components are intended for integration into M2.7 agents.
🔗 MiniMax tweet
ElevenLabs – $100 million in net new annual recurring revenue in Q1 2026
April 13 – CEO Mati Staniszewski announces that ElevenLabs added more than $100 million in net annual recurring revenue in Q1 2026, their best quarter to date. Growth is driven by enterprise partnerships (Klarna, Revolut, Deutsche Telekom, Toyota).
🔗 ElevenLabs CEO tweet
What this means
Routines in Claude Code represent a paradigm shift: the development tool no longer only responds to interactive requests, it can now take planned or reactive initiatives within a projectโs infrastructure. The combination of scheduled + webhook turns Claude Code into a permanent agent on a repository, with minimal setup cost.
On the open source front, GLM-5.1 confirms that Chinese agentic models have reached the level of the best proprietary models on coding benchmarks. The ability to sustain an 8-hour horizon of autonomous work, with thousands of tool calls, opens up concrete possibilities for intensive optimization tasks that traditional models cannot handle in a single session.
Gemini Robotics-ER 1.6 illustrates a different trend: general-purpose AI models adapted to the physical constraints of the real world, with software and hardware collaboration (Boston Dynamics/Spot) producing new capabilities such as reading industrial instruments.
Sources
- Anthropic Blog – Claude Code Routines
- Tweet @claudeai – Routines
- Claude Code CHANGELOG
- Anthropic – Vas Narasimhan Nomination
- Google DeepMind Blog – Gemini Robotics-ER 1.6
- Tweet @GoogleDeepMind – Gemini Robotics-ER 1.6
- Z.ai Blog – GLM-5.1
- Tweet @Zai_org – GLM-5.1
- Codex CLI v0.120.0 Release
- GitHub Changelog – Third-party agent model selection
- GitHub Changelog – Copilot merge conflicts
- GitHub Changelog – US/EU FedRAMP data residency
- Tweet @runwayml – Video call characters
- Tweet @LumaLabsAI – Voice dictation
- Tweet @LumaLabsAI – Logo animation
- Tweet @MiniMax_AI – Music Skills
- ElevenLabs CEO tweet – $100M ARR
This document was translated from the fr version into en using the gpt-5.4-mini model. For more information about the translation process, visit https://gitlab.com/jls42/ai-powered-markdown-translator