
Claude Code routines, Gemini Robotics-ER 1.6, GLM-5.1 open source

April 14, 2026 marks a busy day for AI-assisted development tools: Anthropic launches routines in Claude Code, making it possible to automate entire workflows on a schedule or via webhook, without keeping your computer on. Google DeepMind releases Gemini Robotics-ER 1.6 with new industrial perception capabilities developed with Boston Dynamics. Z.ai opens GLM-5.1 under the MIT license, ranked number 1 among open source models on SWE-Bench Pro. GitHub Copilot adds three useful features: three-click conflict resolution, US/EU data residency, and model selection for third-party agents.


Routines in Claude Code – research preview

April 14 – Anthropic launches routines in Claude Code as a research preview. A routine is an automation configured once (a prompt, a repository, and connectors) that then runs autonomously, without the user staying connected.

Three trigger types are available:

| Type | Trigger | Example use |
|---|---|---|
| Scheduled | Cron (hourly, nightly, weekly) | Nightly triage of Linear bugs, opening a fix PR |
| API | HTTP POST call to a dedicated endpoint | Datadog alert → automatic triage + fix draft |
| Webhook | GitHub events (PR, push…) | Automatic code review on every opened PR |

Each routine has its own endpoint and authentication token. API routines integrate into any existing pipeline (alerts, deployment hooks, internal tools). Webhook routines start a new session for each PR matching the defined filters, and feed the session with subsequent updates (comments, CI failures).
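In practice, an API routine is just an authenticated HTTP POST. Here is a minimal sketch of wiring one into an alerting pipeline; the endpoint URL, token, and payload shape are hypothetical (Anthropic's actual request format may differ):

```python
# Sketch: firing a Claude Code API routine from an alerting pipeline.
# The endpoint URL, token, and payload fields below are hypothetical;
# per the announcement, each routine has its own endpoint and auth token.
import json
import urllib.request

def build_routine_request(endpoint: str, token: str, alert: dict) -> urllib.request.Request:
    """Build the authenticated POST that would wake the routine with alert context."""
    body = json.dumps({"context": alert}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # routine-specific token
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_routine_request(
    "https://example.invalid/routines/rt_123",  # hypothetical endpoint
    "token-abc",
    {"service": "checkout", "error": "p99 latency spike"},
)
# urllib.request.urlopen(req)  # uncomment in a real pipeline
```

The same request could just as well be sent by a Datadog webhook relay or a post-deploy hook, which is what makes these routines pipeline-agnostic.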

"Configure a routine once (a prompt, a repo, and your connectors), and it can run on a schedule, from an API call, or in response to an event. Routines run on our web infrastructure, so you don't have to keep your laptop open." – Anthropic

Availability and limits:

| Plan | Routines/day |
|---|---|
| Pro | 5 |
| Max | 15 |
| Team / Enterprise | 25 |

Available on all paid plans (Pro, Max, Team, Enterprise) with Claude Code web enabled. Beyond the quotas, additional usage remains possible. Routines consume subscription credits like interactive sessions.

Documented use cases:

  • Backlog management: nightly triage, labeling, Slack summary
  • Documentation drift: weekly scan of merged PRs to detect pages needing updates
  • Post-deployment verification: smoke checks after each release
  • SDK porting: every merged Python PR automatically triggers a port to the Go SDK

🔗 Anthropic Blog · 🔗 Announcement tweet


Claude Code v2.1.105 – PreCompact hooks, plugin monitors, /proactive

April 11 to 13 – Version 2.1.105 of Claude Code brings several notable improvements:

| Feature | Description |
|---|---|
| path parameter for EnterWorktree | Switch to an existing worktree of the current repository |
| PreCompact hook | Hooks can now block compaction (exit code 2 or {"decision":"block"}) |
| Background monitors for plugins | monitors key in the plugin manifest, armed automatically at session start |
| /proactive | New alias for /loop |
| Dropping stalled Streams API calls | Dropped after 5 minutes without data, then retried in non-streaming mode |
| Network error messages | A retry message is shown immediately instead of a silent spinner |
| Long file display | Very long single-line writes (for example minified JSON) are truncated in the interface |
| Improved /doctor | Status icons, plus an f key to ask Claude to fix detected issues |
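The PreCompact hook lends itself to a tiny script. Below is a sketch of a hook that vetoes automatic compaction, assuming the usual Claude Code hook convention (the event arrives as JSON on stdin; a {"decision": "block"} response or exit code 2 blocks the action). The trigger field inspected here is an assumption:

```python
#!/usr/bin/env python3
# Sketch of a PreCompact hook. Per the changelog, a hook can block
# compaction by exiting with code 2 or emitting {"decision": "block"}.
# The "trigger" field checked below is an assumption about the event shape.
import json
import sys

def decide(event: dict) -> dict:
    # Hypothetical policy: refuse automatic compaction, allow manual /compact.
    if event.get("trigger") == "auto":
        return {"decision": "block", "reason": "auto-compaction disabled by policy"}
    return {}  # empty object = no objection, compaction proceeds

if __name__ == "__main__":
    # Read the event defensively so the script is a no-op without input.
    raw = "" if sys.stdin is None or sys.stdin.isatty() else sys.stdin.read()
    if raw.strip():
        print(json.dumps(decide(json.loads(raw))))
```

Registered as a PreCompact hook in the settings, this would let a team keep full context until someone compacts deliberately.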

April 14 – Version 2.1.107 brings an interface improvement: thinking hints now appear earlier during long operations, reducing the feeling of waiting without visual feedback.

🔗 Claude Code Changelog


Anthropic – Vas Narasimhan joins the board of directors

April 14 – Anthropic's Long-Term Benefit Trust (LTBT) has appointed Vas Narasimhan to the board of directors. A physician-scientist and CEO of Novartis, he has overseen the development and approval of more than 35 innovative medicines in one of the most regulated sectors in the world.

With this appointment, the Trust-appointed directors now make up the majority of the board. The LTBT is an independent body whose members have no financial interest in Anthropic; its role is to maintain the balance between commercial success and the long-term public benefit mission.

🔗 Anthropic announcement


Gemini Robotics-ER 1.6 – industrial perception and safety

April 14 – Google DeepMind releases Gemini Robotics-ER 1.6, an update to its embodied reasoning model for robotics. The model improves visual and spatial understanding so robots can plan and execute real-world tasks with greater autonomy. It outperforms Gemini Robotics-ER 1.5 and Gemini 3.0 Flash on internal robotics benchmarks.

New capabilities:

| Capability | Description |
|---|---|
| Pointing | Object detection and counting, relational logic (smaller/larger), trajectories and grasp points, complex constraints |
| Multi-view success detection | Analyzes multiple camera angles to verify that a task is truly completed |
| Instrument reading | Reads circular gauges and transparent tubes (sight glasses); developed with Boston Dynamics for industrial inspection |
| Safety (ASIMOV v2 benchmark) | Best score among all tested models on following safety instructions |

The instrument-reading capability emerged from collaboration with Boston Dynamics for the Spot robot, used in industrial facility inspections. It combines spatial reasoning and code execution to interpret pressure gauges with high precision.

Availability: Gemini API (gemini-robotics-er-1.6-preview), Google AI Studio, and a starter notebook (Colab) on GitHub.
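Pointing results have to be mapped back onto the camera frame before a robot can act on them. Below is a sketch of that conversion, assuming the output schema documented for ER 1.5 (a JSON list of {"point": [y, x], "label": ...} entries with coordinates normalized to 0–1000) still holds for 1.6:

```python
# Sketch: converting pointing output to pixel coordinates.
# The schema assumed here ([{"point": [y, x], "label": ...}], coordinates
# normalized to 0-1000, y first) follows the ER 1.5 documentation and is
# assumed unchanged for 1.6. The sample JSON is illustrative, not real output.
import json

def points_to_pixels(raw_json: str, width: int, height: int) -> list[dict]:
    """Map normalized [y, x] points onto an image of the given pixel size."""
    out = []
    for item in json.loads(raw_json):
        y, x = item["point"]  # note: y comes first in the model output
        out.append({
            "label": item["label"],
            "x": round(x / 1000 * width),
            "y": round(y / 1000 * height),
        })
    return out

sample = '[{"point": [500, 250], "label": "pressure gauge"}]'  # illustrative
pixels = points_to_pixels(sample, width=1920, height=1080)
```

The y-first ordering is an easy source of off-by-axis bugs when feeding these points to a grasp planner, hence the explicit unpacking.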

🔗 Google DeepMind Blog · 🔗 Announcement tweet


GLM-5.1 – Z.ai opens its agentic model under the MIT license

April 7 (catch-up: missed during last week's scan) – Z.ai (formerly ZhipuAI) has released GLM-5.1, its new flagship model for agentic coding, available open source under the MIT license.

Code benchmark performance:

| Benchmark | GLM-5.1 | GLM-5 | Claude Opus 4.6 | GPT-5.4 | Gemini 3.1 Pro |
|---|---|---|---|---|---|
| SWE-Bench Pro | 58.4 | 55.1 | 57.3 | 57.7 | 54.2 |
| NL2Repo | 42.7 | 35.9 | 49.8 | 41.3 | 33.4 |
| Terminal-Bench 2.0 | 63.5 | 56.2 | 65.4 | n/a | 68.5 |

GLM-5.1 ranks number 1 in open source and third worldwide on SWE-Bench Pro, Terminal-Bench, and NL2Repo.

The key differentiator is the long horizon. Previous models, including GLM-5, improve quickly at first and then plateau. GLM-5.1 is designed to stay effective on agentic tasks over much longer horizons: it can work autonomously for 8 hours, refining its strategies across thousands of tool calls.

Three scenarios illustrate this capability:

  • Vector database optimization over 600 iterations: GLM-5.1 reaches 21,500 requests per second on VectorDBBench, 6 times the best result obtained in a 50-turn session.
  • GPU kernel optimization over 1,000+ turns: 3.6x acceleration on KernelBench Level 3.
  • Building a Linux desktop in 8 hours: from a simple natural-language prompt, GLM-5.1 produces a complete desktop environment in the browser (file manager, terminal, editor, system monitor).
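The pattern behind these long runs is plain iterate-measure-refine: keep the best configuration found so far and keep probing around it, turn after turn. A toy sketch of that loop (the objective, step rule, and turn budget are illustrative, not Z.ai's actual method):

```python
# Toy sketch of the iterate-measure-refine loop behind long-horizon runs:
# each turn proposes a variation (one "tool call"), measures it, and keeps
# it only if it beats the best score so far. Purely illustrative.
import random

def optimize(objective, start: float, turns: int, seed: int = 0) -> float:
    """Hill-climb on objective for a fixed budget of turns."""
    rng = random.Random(seed)  # deterministic for reproducibility
    best, best_score = start, objective(start)
    for _ in range(turns):
        candidate = best + rng.uniform(-0.1, 0.1)  # small local variation
        score = objective(candidate)
        if score > best_score:  # keep improvements only
            best, best_score = candidate, score
    return best

# Toy objective peaking at x = 2.0; a larger turn budget gets closer to it.
peak = optimize(lambda x: -(x - 2.0) ** 2, start=0.0, turns=600)
```

The point of the sketch is the budget, not the algorithm: the 600-iteration vector-database run above is this shape of loop, where a 50-turn session simply runs out of attempts before the refinements compound.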

Availability: open source weights on HuggingFace (zai-org/GLM-5.1), API on api.z.ai and BigModel.cn, compatible with Claude Code, Cline, Roo Code, Kilo Code, and OpenCode.

🔗 GLM-5.1 Blog · 🔗 Announcement tweet


Codex CLI v0.120.0 – real-time agent streaming

April 11 – Version 0.120.0 of Codex CLI is released as a stable version. It brings several functional improvements:

| Feature | Detail |
|---|---|
| Realtime V2 | Streams background agent progress in real time and queues subsequent responses |
| Improved TUI hooks | Active hooks are shown separately; the history of completed hooks is simplified |
| Thread title in status | Custom TUI statuses can include the renamed thread title |
| code-mode output schema | code-mode tool declarations now include outputSchema MCP details |
| SessionStart hooks | Distinguishes sessions created by /clear from starts or resumes |

The release also fixes several bugs: handling of elevated Windows sandboxes, panics during TLS WebSocket connections, and the ordering of tool search results.

🔗 Release v0.120.0


GitHub Copilot – three new features

Model selection for third-party agents

April 14 – It is now possible to choose the model when launching a task with the Claude (Anthropic) and Codex (OpenAI) agents on github.com.

| Agent | Available models |
|---|---|
| Claude | Claude Sonnet 4.6, Claude Opus 4.6, Claude Sonnet 4.5, Claude Opus 4.5 |
| Codex | GPT-5.2-Codex, GPT-5.3-Codex, GPT-5.4 |

Included with the existing Copilot subscription (Business or Enterprise), but the administrator must enable the corresponding policies at the company or organization level.

🔗 Model selection changelog

Three-click merge conflict resolution

April 13 – A new "Fix with Copilot" button appears on pull requests with merge conflicts. In three clicks, the Copilot cloud agent resolves the conflicts, verifies that the build and tests pass, then pushes from its isolated cloud environment. The @copilot mention in PRs also makes it possible to fix failing GitHub Actions workflows or address code review comments. Available on all paid Copilot plans.

🔗 Merge conflicts changelog

US/EU data residency and FedRAMP compliance

April 13 – GitHub Copilot now supports data residency for the US and EU regions: all associated inference and data remain in the designated geographic area. US government customers also benefit from FedRAMP Moderate compliance. Data-resident requests carry a 10% premium-request surcharge. Gemini models are not yet supported (GCP does not yet offer inference endpoints with data residency). Japan and Australia are on the roadmap for 2026.

🔗 Data residency changelog


Generative media – Runway, Luma, MiniMax, ElevenLabs

Runway Characters in video calls

April 14 – Runway rolls out an update to Characters allowing your AI avatar to join a Zoom, Google Meet, or Teams video call. The process: choose or create a Character → paste the meeting link → click "Join Meeting". The feature, initially available as an API for developers since March 9, is now accessible to all users from the Runway app.

🔗 Runway tweet

Luma – voice dictation and logo animation

April 14 – Luma Labs launches two new features: voice dictation in its app (the user speaks, and the description is converted into a generation prompt) and cinematic logo animation (upload your logo, and the agent produces an animated branding-oriented intro).

🔗 Voice dictation tweet · 🔗 Logo animation tweet

MiniMax – three open source Music Skills for agents

April 14 – MiniMax opens three Music Skills for agents as open source: minimax-music-gen (generates a full track from a prompt, choosing automatically between original, instrumental, and cover), buddy-sings (the AI agent sings as a vocal companion), and playlist curation (builds playlists from the user's library). These components are intended for integration into M2.7 agents.

🔗 MiniMax tweet

ElevenLabs – $100 million in net annual recurring revenue added in Q1 2026

April 13 – CEO Mati Staniszewski announces that ElevenLabs added more than $100 million in net annual recurring revenue in Q1 2026, their best quarter to date. Growth was driven by enterprise partnerships (Klarna, Revolut, Deutsche Telekom, Toyota).

🔗 ElevenLabs CEO tweet


What this means

Routines in Claude Code represent a paradigm shift: the development tool no longer only responds to interactive requests, it can now take planned or reactive initiatives within a projectโ€™s infrastructure. The combination of scheduled + webhook turns Claude Code into a permanent agent on a repository, with minimal setup cost.

On the open source front, GLM-5.1 confirms that Chinese agentic models have reached the level of the best proprietary models on coding benchmarks. The ability to sustain an 8-hour horizon of autonomous work โ€” with thousands of tool calls โ€” opens up concrete possibilities for intensive optimization tasks that traditional models cannot handle in a single session.

Gemini Robotics-ER 1.6 illustrates a different trend: general-purpose AI models adapted to the physical constraints of the real world, with software and hardware collaboration (Boston Dynamics/Spot) producing new capabilities such as reading industrial instruments.


Sources

This document was translated from the fr version into en using the gpt-5.4-mini model. For more information about the translation process, visit https://gitlab.com/jls42/ai-powered-markdown-translator