On April 13, 2026, GitHub launches the remote control feature for Copilot CLI sessions, making it possible to drive a terminal session from the web or a phone via a simple QR code. MiniMax releases M2.7, an agent model available on ModelScope with a cloud ecosystem operational from day one. Alibaba makes the Qwen3.5-Omni API available to developers worldwide, and Google DeepMind announces that Gemini 3.1 Flash Live (Thinking) now holds the top spot in the τ-Voice ranking for voice agents.
GitHub Copilot CLI – Remote control from web and mobile
April 13 – GitHub launches `copilot --remote` in public preview: an active Copilot CLI session can now be monitored and controlled from GitHub.com or from the GitHub Mobile app, with no direct access to the machine.
The process is simple: when a remote session starts, the CLI displays a link and a QR code. Opening that link in a browser or on a phone brings up the interface for the current session. Synchronization is bidirectional: actions performed on the web or mobile are reflected in the terminal, and vice versa.
| Feature | Detail |
|---|---|
| Start | `copilot --remote` or `/remote` in an existing session |
| Access | Link + QR code displayed by the CLI |
| Apps | GitHub.com + GitHub Mobile (iOS TestFlight, Android Google Play beta) |
| Sync | Real-time bidirectional |
| Privacy | Private session, visible only to the user who started it |
| Session keep-alive | `/keep-alive` command to prevent sleep during long tasks |
All the usual CLI features remain available remotely: in-session steering, plan review and editing, mode switching (plan / interactive / autopilot), permission approval or denial, and answers to `ask_user` questions.
Note for enterprises: Copilot Business or Enterprise users need an administrator to enable remote control and CLI policies before use.
🔗 GitHub Changelog announcement
MiniMax M2.7 – Open-source agent model with day-0 cloud ecosystem
April 12 – MiniMax releases M2.7, an agent-architecture LLM available on ModelScope, with a vLLM integration operational from day one.
The published benchmark scores place M2.7 on par with the best coding models available:
| Benchmark | M2.7 Score |
|---|---|
| SWE-Pro | 56.22% (tied with GPT-5.3-Codex) |
| Terminal Bench 2 | 57.0% |
The model is designed for multi-agent orchestration (Agent Teams), advanced coding, and command-line task automation. It is immediately accessible via Together AI (serverless and dedicated) and Fireworks AI.
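Day-0 vLLM support means the weights can be served locally as soon as they are downloaded. Below is a minimal sketch, assuming a hypothetical ModelScope ID of `MiniMax/M2.7` and a GPU with enough memory for the weights; check the ModelScope page for the exact identifier and hardware requirements.

```python
import os

# Tell vLLM to pull weights from ModelScope instead of the Hugging Face Hub.
# This must be set before vllm is imported.
os.environ["VLLM_USE_MODELSCOPE"] = "True"

from vllm import LLM, SamplingParams

# "MiniMax/M2.7" is a hypothetical ID used for illustration only.
llm = LLM(model="MiniMax/M2.7", trust_remote_code=True)
params = SamplingParams(temperature=0.7, max_tokens=256)

prompt = "Write a one-line shell command that counts TODO comments in a Git repo."
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```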
Note: MiniMax clarified after publication that M2.7 is not strictly open-source in the licensing sense – the model was republished with modified terms of use.
🔗 Open-source announcement on ModelScope 🔗 Day-0 vLLM support 🔗 Together AI availability
Qwen3.5-Omni API – International availability
April 13 – Tongyi Lab (Alibaba) announces the international availability of the Qwen3.5-Omni API via Alibaba Cloud Model Studio. The `qwen3.5-omni-plus` model is immediately accessible with an API key.
Presented in a research paper on March 29, 2026, Qwen3.5-Omni is a native omnimodal model: it processes text, images, audio, and video in a single inference pass, without a multi-step pipeline. It offers two operating modes, Thinker (reasoning) and Talker (voice conversation), through a hybrid architecture.
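Model Studio exposes an OpenAI-compatible endpoint, so a first call can be made with the standard `openai` client. A minimal sketch follows; the `video_url` content part and the exact request shape for this model are assumptions to verify against the Model Studio documentation.

```python
import os

from openai import OpenAI

# Alibaba Cloud Model Studio's international OpenAI-compatible gateway.
client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen3.5-omni-plus",
    messages=[{
        "role": "user",
        "content": [
            # Hypothetical video input; confirm the supported part types
            # in the Model Studio docs for this model.
            {"type": "video_url", "video_url": {"url": "https://example.com/clip.mp4"}},
            {"type": "text", "text": "Summarize what happens in this clip."},
        ],
    }],
)
print(response.choices[0].message.content)
```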
“Now our Qwen3.5-Omni API is officially live, and it’s ready to transform how you process video content.” – @Ali_TongyiLab on X
🔗 Announcement thread 🔗 Alibaba Cloud Model Studio
Gemini 3.1 Flash Live (Thinking) – No. 1 in the τ-Voice ranking
April 13 – Tulsee Doshi (Product Manager, Google DeepMind) announces that Gemini 3.1 Flash Live with Thinking mode enabled has taken first place in Sierra Platform’s τ-Voice Leaderboard.
This ranking measures model performance for building real-time voice agents: speech understanding, multi-turn reasoning, and action execution in production-like scenarios. Gemini 3.1 Flash Live had been launched on March 26, 2026; this result validates its capabilities for developers building voice applications.
The model is available through the Gemini Live API in Google AI Studio.
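For developers who want to try it, a session can be opened through the Live API with the `google-genai` Python SDK. A minimal sketch, assuming the article’s model name maps to an identifier like `gemini-3.1-flash-live` (hypothetical; check AI Studio for the exact ID):

```python
import asyncio

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# Hypothetical model ID based on the announcement; verify in Google AI Studio.
MODEL = "gemini-3.1-flash-live"

async def main() -> None:
    # Use ["AUDIO"] instead to get spoken replies in a voice-agent setting.
    config = {"response_modalities": ["TEXT"]}
    async with client.aio.live.connect(model=MODEL, config=config) as session:
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Book a table for two at 7pm."}]}
        )
        # Stream the model's reply as it arrives.
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```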
🔗 Announcement on X 🔗 τ-Voice Leaderboard
TurboTax and Aiwyn Tax connectors for Claude
April 12 – Henry Shi (Anthropic) announces two new connectors for Claude: TurboTax and Aiwyn Tax (formerly Column Tax), launched just days before the April 15 U.S. tax filing deadline.
Once connected, Claude can estimate a refund or amount due, explain tax forms, and guide the user through the filing process. These connectors are aimed at U.S. users with a Claude subscription.
🔗 Henry Shi announcement on X
What it means
GitHub’s `copilot --remote` feature is the most consequential of the day: it opens up a new usage pattern for long-running CLI tasks – start a session from a workstation, then monitor or drive it from any device. It is a direct answer to autonomous agent use cases that run for hours.
On the model side, MiniMax M2.7 and Qwen3.5-Omni illustrate two different dynamics: M2.7 targets developers deploying coding agents (with day-one vLLM integration, unlike most models, which arrive late to that ecosystem); Qwen3.5-Omni focuses on native multimodality, with video as the central selling point.
Gemini 3.1 Flash Live’s result on the τ-Voice Leaderboard confirms that Google is investing seriously in the production voice-agent segment – a market that is still taking shape but growing.
Sources
- GitHub Changelog – Copilot CLI remote control (April 13)
- MiniMax M2.7 – open-source release on ModelScope
- MiniMax M2.7 – vLLM support
- MiniMax M2.7 – Together AI
- MiniMax M2.7 – Fireworks AI
- Qwen3.5-Omni API available – main tweet
- Qwen3.5-Omni – Alibaba Cloud Model Studio
- Gemini 3.1 Flash Live Thinking – τ-Voice #1
- τ-Voice Leaderboard – Sierra Platform
- TurboTax/Aiwyn Tax connectors for Claude
This document was translated from the fr version into the en language using the gpt-5.4-mini model. For more information about the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator