Copilot CLI Remote Control, MiniMax M2.7, Qwen3.5-Omni API


On April 13, 2026, GitHub launches the remote control feature for Copilot CLI sessions, making it possible to drive a terminal from the web or a phone via a simple QR code. MiniMax releases M2.7, an agent model available on ModelScope with a cloud ecosystem operational from day one. Alibaba makes the Qwen3.5-Omni API available to developers worldwide, and Google DeepMind announces that Gemini 3.1 Flash Live (Thinking) now holds the top spot in the τ-Voice ranking for voice agents.


GitHub Copilot CLI: Remote control from web and mobile

April 13: GitHub launches `copilot --remote` in public preview. An active Copilot CLI session can now be monitored and controlled from GitHub.com or from the GitHub Mobile app, with no direct access to the machine.

The process is simple: when a remote session starts, the CLI displays a link and a QR code. By opening that link from a browser or a phone, the user accesses the interface for the current session. Synchronization is bidirectional: actions performed on the web or mobile are reflected in the terminal, and vice versa.

| Feature | Detail |
| --- | --- |
| Start | `copilot --remote` or `/remote` in an existing session |
| Access | Link + QR code displayed by the CLI |
| Apps | GitHub.com + GitHub Mobile (iOS TestFlight, Android Google Play beta) |
| Sync | Real-time, bidirectional |
| Privacy | Private session, visible only to the user who started it |
| Session keep-alive | `/keep-alive` command to prevent sleep during long tasks |

All the usual CLI features remain available remotely: in-session steering, plan review and editing, mode switching (plan / interactive / autopilot), approving or denying permissions, and answering `ask_user` questions.

Note for enterprises: Copilot Business or Enterprise users need an administrator to enable remote control and CLI policies before use.

🔗 GitHub Changelog announcement


MiniMax M2.7: Openly released agent model with day-0 cloud ecosystem

April 12: MiniMax releases M2.7, an agent-architecture LLM available on ModelScope, with a vLLM integration operational from day one.
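
Day-0 vLLM support means the published weights can be queried locally as soon as they are downloaded. Below is a minimal offline-inference sketch under that assumption; the model id MiniMaxAI/MiniMax-M2.7 is a hypothetical placeholder for whatever identifier MiniMax publishes on ModelScope.

```python
# Minimal offline-inference sketch with vLLM (day-0 support).
# Assumption: "MiniMaxAI/MiniMax-M2.7" is a hypothetical placeholder id;
# substitute the identifier actually published on ModelScope.
from vllm import LLM, SamplingParams

llm = LLM(model="MiniMaxAI/MiniMax-M2.7", trust_remote_code=True)
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(
    ["Write a shell one-liner that lists the five largest files in a git repo."],
    params,
)
print(outputs[0].outputs[0].text)
```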

Published performance places M2.7 on par with the best coding models available:

| Benchmark | M2.7 Score |
| --- | --- |
| SWE-Pro | 56.22% (tied with GPT-5.3-Codex) |
| Terminal Bench 2 | 57.0% |

The model is designed for multi-agent orchestration (Agent Teams), advanced coding, and command-line task automation. It is immediately accessible via Together AI (serverless and dedicated) and Fireworks AI.
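
For hosted access, Together AI exposes an OpenAI-compatible endpoint, so the model can be called without running any infrastructure. A minimal sketch under that assumption; the model id below is again a placeholder to check against the Together catalog.

```python
# Hypothetical sketch: calling M2.7 through Together AI's
# OpenAI-compatible endpoint. The model id is a placeholder; check the
# Together model catalog for the published name.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.7",
    messages=[{"role": "user", "content": "Plan the steps to add CI caching to a monorepo."}],
)
print(response.choices[0].message.content)
```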

Note: MiniMax clarified after publication that M2.7 is not strictly open-source in the licensing sense: the model was republished with modified terms of use.

🔗 Open-source announcement on ModelScope 🔗 Day-0 vLLM support 🔗 Together AI availability


Qwen3.5-Omni API: International availability

April 13: Tongyi Lab (Alibaba) announces the international availability of the Qwen3.5-Omni API via Alibaba Cloud Model Studio. The qwen3.5-omni-plus model is immediately accessible with an API key.

Presented in a research paper on March 29, 2026, Qwen3.5-Omni is a native omnimodal model: it processes text, images, audio, and video in a single inference, without a multi-step pipeline. Its hybrid architecture provides two operating modes: Thinker (reasoning) and Talker (voice conversation).
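
Model Studio exposes an OpenAI-compatible endpoint, so a first omnimodal call can stay close to standard tooling. A minimal sketch, assuming the international compatible-mode base URL below applies to your account (verify it in the Model Studio documentation); the model name qwen3.5-omni-plus comes from the announcement.

```python
# Hypothetical sketch: one omnimodal request (image + text) to
# qwen3.5-omni-plus via Alibaba Cloud Model Studio's OpenAI-compatible
# endpoint. The base_url is an assumption; verify it for your region.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
    api_key=os.environ["DASHSCOPE_API_KEY"],
)

response = client.chat.completions.create(
    model="qwen3.5-omni-plus",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/frame.jpg"}},
                {"type": "text",
                 "text": "Describe what is happening in this video frame."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```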

“Now our Qwen3.5-Omni API is officially live, and it’s ready to transform how you process video content.” (@Ali_TongyiLab on X)

🔗 Announcement thread 🔗 Alibaba Cloud Model Studio


Gemini 3.1 Flash Live (Thinking): No. 1 in the τ-Voice ranking

April 13: Tulsee Doshi (Product Manager, Google DeepMind) announces that Gemini 3.1 Flash Live with Thinking mode enabled has taken first place in Sierra Platform’s τ-Voice Leaderboard.

This ranking measures model performance for building real-time voice agents: speech understanding, multi-turn reasoning, and action execution in production-like scenarios. Gemini 3.1 Flash Live had been launched on March 26, 2026; this result validates its capabilities for developers building voice applications.

The model is available through the Gemini Live API in Google AI Studio.
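
The Live API is session-based: the client opens a streaming session, sends content in, and consumes streamed responses. A minimal text-only sketch with the google-genai Python SDK; the model id gemini-3.1-flash-live is a hypothetical placeholder (check AI Studio for the published name), and a real voice agent would request AUDIO rather than TEXT responses.

```python
# Minimal Gemini Live API sketch using the google-genai SDK.
# Assumption: "gemini-3.1-flash-live" is a hypothetical model id; a voice
# agent would use AUDIO response modalities instead of TEXT.
import asyncio

from google import genai

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

async def main():
    config = {"response_modalities": ["TEXT"]}
    async with client.aio.live.connect(
        model="gemini-3.1-flash-live", config=config
    ) as session:
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Book a table for two at 7pm."}]},
            turn_complete=True,
        )
        # Stream the model's reply as it arrives.
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```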

🔗 Announcement on X 🔗 τ-Voice Leaderboard


TurboTax and Aiwyn Tax connectors for Claude

April 12: Henry Shi (Anthropic) announces two new connectors for Claude: TurboTax and Aiwyn Tax (formerly Column Tax), launched just days before the April 15 U.S. tax filing deadline.

Once connected, Claude can estimate a refund or amount due, explain tax forms, and guide the user through the filing process. These connectors are aimed at U.S. users with a Claude subscription.

🔗 Henry Shi announcement on X


What it means

GitHub’s `copilot --remote` feature is the most consequential of the day: it opens up a new usage pattern for long-running CLI tasks (start a session from a workstation, then monitor or drive it from any device). It is a direct answer to autonomous agent use cases that run for hours.

On the model side, MiniMax M2.7 and Qwen3.5-Omni illustrate two different dynamics: M2.7 targets developers deploying coding agents (with day-one vLLM integration, unlike most models, which arrive late to that ecosystem); Qwen3.5-Omni focuses on native multimodality, with video as its central selling point.

Gemini 3.1 Flash Live’s result on the τ-Voice Leaderboard confirms that Google is investing seriously in the production voice-agent segment, a market that is still immature but growing.



This document was translated from the French version into English using the gpt-5.4-mini model. For more information about the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator