Claude Code NO_FLICKER and v2.1.89, Copilot /fleet, Perplexity Computer in Slack

April 1, 2026 is unusually packed with solid announcements: Claude Code v2.1.89 brings a new no-flicker rendering engine and Computer Use in the terminal, Copilot CLI launches /fleet for parallel agent orchestration, and Perplexity Computer lands in Slack claiming $776 million of delegated work. As a bonus, two well-executed April Fools’ jokes deserve their own section.


Claude Code v2.1.89: NO_FLICKER and new permissions

April 1, 2026 — Version 2.1.89 of Claude Code introduces the NO_FLICKER mode, an experimental rendering engine for the terminal that fixes a long-standing issue reported by the community.

NO_FLICKER mode

Enabled via the environment variable CLAUDE_CODE_NO_FLICKER=1, this renderer virtualizes the entire viewport and hooks keyboard and mouse events directly at the application level. Result: no more flicker or uncontrolled scrolling, and support for mouse events in the terminal. The majority of Anthropic’s internal users prefer this mode over the classic renderer. It remains experimental, with some technical trade-offs related to terminal architecture.

To enable it:

CLAUDE_CODE_NO_FLICKER=1 claude
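If the experimental renderer works well for you, the flag can be persisted like any other environment variable. A generic shell sketch (exporting a variable is standard shell behavior, not a Claude Code-specific mechanism; the launch line is commented out since it requires the CLI):

```shell
# Export the flag once per shell session; child processes inherit it.
export CLAUDE_CODE_NO_FLICKER=1
echo "NO_FLICKER enabled: $CLAUDE_CODE_NO_FLICKER"
# claude   # uncomment to launch Claude Code with the new renderer
```

Adding the export line to your shell profile makes the setting permanent.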

New hooks and permissions capabilities

Version 2.1.89 also brings several enhancements for automated workflows:

| Feature | Description |
| --- | --- |
| Permission "defer" (PreToolUse hooks) | Headless sessions can pause on a tool call and resume with -p --resume |
| PermissionDenied hook | Triggers after a classifier refusal in auto mode; {retry: true} retries the action |
| MCP_CONNECTION_NONBLOCKING=true | In -p mode, skips the MCP connection wait (bounded to 5 seconds) |
| Named sub-agents | Visible in @ mention autocomplete |
| Rejected commands (auto mode) | Show a notification, appear in the /permissions → Recent tab, and can be re-run with r |
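Putting the defer/resume pieces together, a headless flow might look like the following. This is a command sketch only: it assumes the claude CLI is installed and a PreToolUse hook configured to return "defer", so the actual invocations are left as comments.

```shell
# Deferred headless session (illustrative sketch, commands commented out):
# claude -p "run the migration script"   # pauses when the hook defers a tool call
# claude -p --resume                     # resumes the paused session later
# MCP connections can be made non-blocking for -p runs:
# MCP_CONNECTION_NONBLOCKING=true claude -p "quick question"
echo "deferred-session flow: see commented commands above"
```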

Fixes include: resolving symlinks in permission rules, handling CRLF files on Windows, fixing the StructuredOutput schema cache, and eliminating a memory leak in long sessions.

Ship ship ship

"We ship, we ship, we ship" — @bcherny on X

🔗 Announcement @bcherny 🔗 Fullscreen rendering documentation


Computer Use in Claude Code CLI

March 30, 2026 — Claude Code now integrates Computer Use directly into the terminal. Claude can open applications, navigate the graphical interface, and test what it just built without leaving the CLI.

The feature is available as a research preview on Pro and Max plans (macOS only). It is activated with the command /mcp in a Claude Code session. It supports anything that can be opened on a Mac: compiled SwiftUI apps, local Electron builds, browsers, and more.

The goal is to close the loop between code generation and visual validation — Claude writes, then verifies the result itself in the interface.

The announcement tweet has surpassed 15.4 million views.

🔗 Announcement @claudeai


Claude Code — GitHub connection via /web-setup

March 31, 2026 — A new command /web-setup lets you link your local GitHub account to Claude Code web (claude.ai/code) without manual configuration. The command, run in a local claude session, uses existing GitHub credentials to automatically set up the connection.

🔗 claude.ai/code


GitHub Copilot CLI: /fleet for parallel agents

April 1, 2026 — GitHub enhances Copilot CLI with the /fleet command, which lets you orchestrate multiple AI agents simultaneously instead of processing tasks sequentially.

An orchestrator decomposes the goal into work items with their dependencies, then dispatches independent tasks to multiple sub-agents running in parallel. Each sub-agent has its own context but shares the filesystem. The orchestrator monitors progress, dispatches subsequent waves, and synthesizes the final results.

Use cases:

  • Refactoring multiple files at once
  • Generating documentation across multiple components
  • Implementing a feature covering API, UI, and tests in a single pass
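The orchestration model described above (independent work items dispatched in parallel waves, dependent ones afterwards) maps onto a familiar shell pattern. This is a conceptual sketch of wave-based dispatch, not the actual /fleet implementation:

```shell
# Each "agent" is simulated by a function; real sub-agents would be processes.
work() { echo "agent $1: done"; }

# Wave 1: three independent work items run concurrently as background jobs.
work refactor-api & work refactor-ui & work refactor-tests &
wait  # the orchestrator blocks until every wave-1 agent finishes

# Wave 2: a task that depended on wave 1 can now run.
work integration-pass
```

The key property is the synchronization point between waves: nothing in wave 2 starts until all of wave 1 has completed.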

🔗 GitHub Blog: Run multiple agents at once with /fleet


GitHub Copilot cloud agent: research, plan and code

April 1, 2026 — The Copilot cloud agent (formerly “Copilot coding agent”) expands far beyond simple pull request generation. Three new modes are available:

| Mode | Description |
| --- | --- |
| PR flow control | The agent generates code on a branch without automatically creating a PR; the developer reviews diffs before deciding |
| Implementation plans | Adding "Ask for a plan" to the prompt makes the agent detail its approach before writing any code |
| Deep research | The agent investigates an entire repository to answer questions grounded in the project's context |

These features are available via the repository’s Agents tab and Copilot Chat, on all paid Copilot plans. Business and Enterprise accounts require admin activation.

🔗 GitHub Changelog: Research, plan and code with Copilot cloud agent


Perplexity Computer in Slack

April 1, 2026 — Perplexity publicly ships Computer in Slack after several weeks of internal-only use. Computer is an AI orchestrator that manages a team of agents to execute complex tasks directly in Slack channels and DMs.

Numbers claimed

In four weeks of internal use across 300 employees, Perplexity claims $1.6 million of work completed. Since opening to Max subscribers, McKinsey, Harvard, MIT, BCG and Nielsen benchmarks put the cumulative total at **$776 million of equivalent work** performed for Enterprise, Pro and Max subscribers.

How it works

Mention @Computer in a channel or send a direct message. The agent can perform deep web research, analyze data, generate content, and access more than 400 connectors (GoHighLevel, Snowflake, and others). It also orchestrates Claude Code and GPT Codex for coding tasks.

A new MCP connector gives Computer extended access to Slack context: workspace search, thread reading, sending messages. Tasks started in Slack can continue on the Perplexity web platform without losing context.

Availability: Max, Pro and Enterprise subscribers via the Slack Marketplace.

🔗 Perplexity Blog: Computer in Slack


Codex CLI 0.118.0

March 31, 2026 — Version 0.118.0 of Codex CLI introduces four new features and several fixes.

| New | Description |
| --- | --- |
| Windows network sandbox | Applies OS-level egress rules without environment variables |
| Device code flow (app-server) | Reliable ChatGPT sign-in when a browser redirect is not available |
| codex exec + stdin | Accepts stdin input and a command-line prompt simultaneously |
| Dynamic bearer tokens | Custom model providers can refresh short-lived tokens without static credentials |
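The stdin-plus-argument combination is the standard Unix pattern of taking a prompt as an argument while reading a payload from the pipe. A generic sketch, simulated with a shell function since the codex CLI is not assumed to be installed here (the codex invocation in the trailing comment is illustrative):

```shell
# Generic shape of "prompt argument + stdin payload" handling.
run_prompt() {
  echo "prompt: $1"
  echo "input: $(cat)"   # reads whatever was piped in
}
printf 'line a\nline b\n' | run_prompt "summarize this"

# With codex itself, the equivalent call might look like:
# git diff | codex exec "review this diff"
```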

Fixes include: protecting local .codex files at creation, more reliable Linux sandbox launch, restoration of multiple TUI workflows in app-server mode (/copy, /resume, /agent), more robust MCP startup, and fixing a Windows permission issue when applying patches.

npm install -g @openai/codex@0.118.0

🔗 Codex CLI Changelog


Z.ai: GLM-5V-Turbo, vision and code

April 1, 2026 — Z.ai (the lab behind the GLM models) announces GLM-5V-Turbo, a Vision Coding Model that natively understands multimodal inputs to generate and edit code.

Unlike classic code models, GLM-5V-Turbo directly processes images, videos, design mockups and documents to produce corresponding code. Z.ai claims top results on multimodal coding benchmarks, tool use, and GUI agents.

The model is explicitly designed to integrate into agentic workflows with Claude Code and OpenClaw, rather than as a standalone model. It is available immediately via chat.z.ai and the documented API. An early access “Coding Plan” program is open via a form.

🔗 Announcement @Zai_org


GrandCode/Qwen: three consecutive wins on Codeforces

April 1, 2026 — The GrandCode team announces that its agentic AI system, powered by Qwen, placed first in the last three live Codeforces competitions (Rounds 1087, 1088 and 1089), outperforming all human participants, including top world competitors.

Codeforces is one of the reference platforms for competitive programming. Winning three consecutive rounds against the world elite is a first for an AI system. Alibaba Qwen calls the event a “watershed moment for coding intelligence.”

"Huge congratulations to the @GrandCode team — powered by Qwen — for winning 3 consecutive Codeforces rounds (Rounds 1087, 1088 and 1089), surpassing all human participants." — @Alibaba_Qwen on X

🔗 Announcement @Alibaba_Qwen


Runway Builders and Characters API

March 31, 2026 — Runway launches Runway Builders, a program for Seed to Series C startups building products based on generative video and real-time conversational AI.

The launch includes Runway Characters, a real-time video agent API powered by GWM-1 (Runway’s General World Model). Participating startups receive up to 500,000 API credits, priority access to higher rate limits, a private Slack community and direct support.

The founding cohort includes Cartesia, MSCHF, Oasys Health, Spara, Subject and Supersonik, active in customer support, synthetic media and interactive AI. The Characters API is also available in the iOS app since March 31.

🔗 Runway Blog: Introducing Runway Builders 🔗 Announcement @runwayml


Anthropic × Australia: MOU and AUD 3 million in API credits

March 31, 2026 — Anthropic signed a Memorandum of Understanding with the Australian government. CEO Dario Amodei traveled to Canberra to formalize the agreement with Prime Minister Anthony Albanese.

The MOU establishes collaboration with the Australian AI Safety Institute: sharing data on model capabilities, joint safety evaluations, and access to the Anthropic Economic Index data to track AI adoption in the economy.

Anthropic is investing AUD 3 million in API credits for four research institutions:

| Institution | Area |
| --- | --- |
| Australian National University (ANU) | Genetic sequencing for rare diseases |
| Murdoch Children's Research Institute | Pediatric medicine and stem cells |
| Garvan Institute of Medical Research | Clinical genomics, rare disease diagnostics |
| Curtin Institute for Data Science | Multidisciplinary research (health, law, engineering) |

A deep tech startup program offers up to $50,000 in API credits (drug discovery, materials science, climate modeling). The announcement also foreshadows the opening of a Sydney office, Anthropic’s fourth office in the Asia-Pacific region.

🔗 Official Anthropic announcement


In brief

GitHub Mobile — Redesigned Copilot tab (April 1) — The GitHub mobile app (iOS and Android) gets a redesigned Copilot tab: direct access to sessions and history, native session logs viewable without the web browser, filterable sessions by state, and full controls (create PRs, review results, stop an active session).

🔗 GitHub Mobile Changelog

GitHub Mobile — Assign an agent from issues (April 1) — An “Assign an Agent” option appears in the issues menu on iOS and Android, with the ability to add custom instructions and select a different repository for delegation.

🔗 GitHub Mobile Changelog

Deprecation of Claude Sonnet 4 in Copilot (March 31) — GitHub Copilot announces the deprecation of Claude Sonnet 4 on May 1, 2026, replaced by Claude Sonnet 4.6 across all Copilot experiences (chat, inline edits, ask/agent modes, code completions). Enterprise admins must update their model policies before that date.

🔗 GitHub Copilot Changelog

GPT-5.4 mini in Copilot Student (April 1) — GPT-5.4 mini is now available via Auto model selection in Copilot Chat for Copilot Student users, on VS Code, Visual Studio, JetBrains, Xcode and Eclipse.

🔗 GitHub Copilot Changelog

NotebookLM × Royal Society: Benjamin Franklin notebook (March 31) — NotebookLM releases a featured notebook in partnership with Google Arts & Culture and the Royal Society, devoted to Benjamin Franklin. It lets you explore his work and his personal and professional relationships, offering a new perspective on the polymath.

🔗 @NotebookLM on X

Google AI roundup — March 2026 (April 1) — Google publishes its monthly roundup which highlights, among announcements not yet covered here: Ask Maps (Q&A mode for Google Maps), vibe coding in Google AI Studio with the Antigravity agent, and the extension of Personal Intelligence to AI Mode, Chrome and the Gemini app.

🔗 Google AI Blog — March 2026


April Fools’

Claude Code /buddy — the official easter egg

Claude Code version 2.1.89 ships an official easter egg: the command /buddy hatches a small creature that settles in the terminal and watches you code. The CHANGELOG is explicit: “is here for April 1st”. It’s not a joke about the joke — the feature actually works in the released version. An easter egg shipped to production on April 1, with a note in the official changelog. Hats off.

Suno Keyboard v1 — the fictional musical keyboard

Suno posted a tweet on April 1 announcing “Suno Keyboard v1”, presented as a “new feature” accompanied by a demo video. With 15,600 views and mostly sarcastic replies, the product is clearly fictional — a physical musical keyboard for an AI music-generation platform. Amusing, but not very subtle.

🔗 Tweet @suno


What this means

April 1, 2026 feels less like an April Fools’ day and more like an intense delivery day. On the developer tooling side, three signals converge: Claude Code closes the loop between generation and visual validation with Computer Use in the terminal, Copilot CLI introduces parallel orchestration of agents with /fleet, and Perplexity Computer shows that an agent orchestrator can integrate directly into existing team communication tools (Slack). These three products answer the same question: how to reduce the time between the developer’s intention and the verified result.

GrandCode/Qwen’s victory across three consecutive Codeforces rounds is a concrete indicator that agentic systems are reaching a level of performance in competitive programming that even the best humans can no longer consistently match. This is no longer a lab benchmark.

On Anthropic’s side, the Australian MOU confirms the strategy of building institutional relationships with governments, alongside commercial deployments.


Sources

This document was translated from the fr version into the en language using the gpt-5-mini model. For more information on the translation process, consult https://gitlab.com/jls42/ai-powered-markdown-translator