
Claude Code /loop schedules your tasks, Copilot VS Code v1.110 deploys agent plugins, Pika launches AI Selves


The AI news for this period is marked by three main themes. On the developer tools side, Claude Code v2.1.71 introduces the /loop command to schedule recurring tasks for up to 3 days, accompanied by more than 40 fixes. GitHub Copilot VS Code v1.110 rolls out a major update with agent plugins, browser agent tools and shared memory. In generative media, Pika launches AI Selves (video digital twins) and Luma unveils its multimodal creative agents powered by Unified Intelligence.


Claude Code v2.1.71 — The /loop command and 40+ fixes

March 7 — Boris Cherny, head of Claude Code, announces the release of the /loop command, a major feature in Claude Code v2.1.71. /loop is a built-in skill that lets you schedule recurring tasks for up to 3 days. The command accepts an interval (for example 5m) and a prompt describing the task to execute, relying on the cron scheduling tools added in this release.

Concrete usage examples:

| Command | Action |
| --- | --- |
| `/loop babysit all my PRs` | Monitor PRs, fix broken builds and reply to comments via a worktree agent |
| `/loop every morning use the Slack MCP` | Generate a morning summary of Slack messages |
| `/loop 5m check the deploy` | Check deployment status every 5 minutes |
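Conceptually, `/loop` pairs an interval with a prompt and re-runs the task until a maximum runtime elapses. The sketch below illustrates that scheduling pattern only; all names (`LoopTask`, `parse_interval`, `run_loop`) are illustrative, not Claude Code's implementation.

```python
import time
from dataclasses import dataclass

@dataclass
class LoopTask:
    interval_s: float    # e.g. 300 for "5m"
    prompt: str          # task description executed on each tick
    max_runtime_s: float # /loop caps schedules at 3 days

def parse_interval(spec: str) -> float:
    """Parse '30s', '5m', '2h' or '1d' into seconds (illustrative helper)."""
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    return float(spec[:-1]) * units[spec[-1]]

def run_loop(task, execute, now=time.monotonic, sleep=time.sleep):
    """Re-run execute(prompt) every interval until max runtime elapses."""
    start = now()
    while now() - start < task.max_runtime_s:
        execute(task.prompt)
        sleep(task.interval_s)

# Equivalent of "/loop 5m check the deploy", capped at 3 days
task = LoopTask(parse_interval("5m"), "check the deploy", 3 * 86400)
```

Injecting `now` and `sleep` keeps the loop testable without waiting in real time.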

The announcement generated massive engagement: 1.8 million views, 12,000 likes and 8,000 bookmarks in less than 24 hours.

In parallel, the release includes more than 40 bug fixes:

| Category | Fixes |
| --- | --- |
| Stability | stdin freezing in long sessions; 5–8 second startup freeze related to voice mode (CoreAudio); UI freeze at startup with claude.ai connectors (OAuth refresh) |
| Bugs fixed | forked conversations (`/fork`) sharing the same plan file; Chrome extension detection stuck on “not installed”; clipboard corrupting non-ASCII text (CJK, emoji) on Windows/WSL |
| Performance | 74% fewer prompt-input re-renders; 426 KB less memory at startup; deferred loading of the native image processor |
| Improvements | bridge reconnection after sleep (seconds instead of minutes); spark icon in the VS Code activity bar; full markdown view for plans; native MCP server management dialog in VS Code |

Released today: /loop. /loop is a powerful new way to schedule recurring tasks, for up to 3 days at a time. — @bcherny on X

🔗 Announcement /loop 🔗 Claude Code Changelog v2.1.71


Local scheduled tasks in Claude Code Desktop

March 6 — Thariq (Anthropic team) announces the launch of local scheduled tasks in Claude Code Desktop. This feature lets you create schedules for tasks directly from the desktop application’s graphical interface — the visual counterpart to the /loop command available in the CLI.

The tweet, reposted by Boris Cherny, received 3.5 million views and 13,000 likes. Both features (Desktop and CLI) aim for the same goal — automating recurring tasks — but offer two complementary interfaces: a graphical one for Desktop, and a command-line one for /loop.

🔗 Announcement local scheduled tasks for Desktop


GitHub Copilot VS Code v1.110 — Agent plugins, browser tools and shared memory

March 6 — GitHub publishes the February release of Copilot for VS Code (v1.110), a massive update that transforms Copilot into an extensible agent platform. The new features fall into three agent-focused areas, plus general VS Code productivity improvements.

Programming agents

| Feature | Description |
| --- | --- |
| Hooks | Run code at key events in the agent lifecycle (policies, auto-lint, command blocking) |
| Conversation fork | Branch from a checkpoint to explore an alternative |
| Auto-approve | Toggle `/autoApprove` or `/yolo` with terminal sandboxing |
| Queue | Send messages while the agent is working |
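The hooks idea above is a general lifecycle-event pattern: callbacks registered on named events that can observe or veto an action. A minimal sketch, with names that are illustrative and not Copilot's actual hook API:

```python
from collections import defaultdict

class AgentHooks:
    """Minimal event-hook registry: callbacks run at named lifecycle
    events and may veto the action by returning False."""

    def __init__(self):
        self._hooks = defaultdict(list)

    def on(self, event, fn):
        """Register fn to run when `event` fires."""
        self._hooks[event].append(fn)

    def fire(self, event, payload):
        """Run all hooks for `event`; any hook returning False blocks
        the action (e.g. command blocking before terminal execution)."""
        return all(fn(payload) is not False for fn in self._hooks[event])

hooks = AgentHooks()
# Hypothetical policy: block destructive shell commands
hooks.on("before_command", lambda cmd: not cmd.startswith("rm -rf"))
```

The same registry could host auto-lint hooks on a `"file_saved"` event or policy checks before tool calls.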

Extending agents

| Feature | Description |
| --- | --- |
| Agent plugins (experimental) | Pre-packaged bundles of skills, tools, hooks and MCP servers |
| Browser agent tools (experimental) | Browse, click, take screenshots, verify changes |
| Built-in Copilot CLI | Diff tabs, trusted folder sync, snippet sending |
| Creating customizations | `/create-*` for prompts, skills, agents, hooks |

Context for agents

| Feature | Description |
| --- | --- |
| Shared memory | Between Copilot coding agent, Copilot CLI and code review |
| Persistent plans | Survive compaction and runs |
| Explore sub-agent | Parallelized codebase search by lightweight models |
| Context compaction | Automatic, plus manual `/compact` |
| Handling large outputs | Write to disk instead of the context |
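“Write to disk instead of the context” is a general pattern for keeping a context window small: short outputs stay inline, long ones are spilled to a file and replaced by a path plus a preview. A hedged sketch of the idea (the threshold and function names are assumptions, not Copilot's mechanism):

```python
import os
import tempfile

MAX_INLINE = 4000  # illustrative context budget, in characters

def attach_output(text: str) -> str:
    """Return output inline if it fits the budget; otherwise write it
    to a temp file and return a short reference with a preview."""
    if len(text) <= MAX_INLINE:
        return text
    fd, path = tempfile.mkstemp(suffix=".log")
    with os.fdopen(fd, "w") as f:
        f.write(text)
    return f"[output: {len(text)} chars written to {path}; preview: {text[:200]}]"
```

The agent can later re-read the file on demand, trading one tool call for thousands of tokens of context.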

VS Code productivity

The Kitty graphics protocol arrives for high-fidelity images in the terminal. The model picker is redesigned with search and sections, and AI coauthor attribution is added for commits.

This release marks a significant shift: Copilot no longer just completes code, it becomes an extensible agent platform via plugins, with web navigation capabilities and shared memory between CLI, IDE and code review.

🔗 Copilot VS Code v1.110 (February release)


Figma MCP Server — Generating design layers from VS Code

March 6 — GitHub Copilot users can now connect the Figma MCP server to establish a two-way bridge between design and code. The workflow is complete: pull design context into code, generate code from a design, send the rendered UI back to Figma as editable frames, then pull updates back in to iterate.

UI capture to Figma requires the remote MCP server. The feature is available on all Figma plans, in VS Code for now, with Copilot CLI support coming soon.
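In VS Code, a remote MCP server is typically declared in a workspace `.vscode/mcp.json` file. A minimal sketch of what a Figma entry could look like; the server name and URL here are assumptions, so check Figma's documentation for the exact endpoint:

```json
{
  "servers": {
    "figma": {
      "type": "http",
      "url": "https://mcp.figma.com/mcp"
    }
  }
}
```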

🔗 Figma MCP Server in VS Code


Pika AI Selves — Video digital twins with persistent memory

February 20 — Pika Labs launches AI Selves, a new product category: video digital twins with persistent memory. The user uploads reference material, and Pika creates a persistent AI video avatar that moves, speaks and reacts like the user.

AI Selves integrate with Pika’s existing text-to-video and motion control pipeline. The avatar can appear in cinematic scenes, product demos, explainer videos and social content. A distinguishing point: AI Selves have persistent memory and adapt to the user’s personality and communication style.

The launch campaign was notable for its offbeat approach: a retro-futuristic infomercial accompanied by a “Twitter storm” where Pika employees’ AI Selves tweeted autonomously.

🔗 Pika AI Selves


Luma Agents — Unified intelligence for multimodal creation

March 5 — Luma Labs repositions itself with the launch of Luma Agents, multimodal creative agents capable of working on text, image, video and audio. The architecture is based on Unified Intelligence and its model Uni-1, which tightly couples reasoning and rendering rather than separating thinking from creation.

The agents coordinate multiple specialized AI models:

| Model | Domain |
| --- | --- |
| Ray 3.14 (Luma) | Video |
| Veo 3 (Google) | Video |
| Nano Banana Pro | Image |
| Seedream (ByteDance) | Image |
| ElevenLabs | Voice |

The initial brief context is preserved until final delivery, which differentiates Luma’s approach from the usual fragmented workflows. Early enterprise partners are Publicis Groupe and Serviceplan Group. Access is available via API, with a phased rollout.

This launch marks a major repositioning for Luma: from “Dream Machine” (video generation) to a platform of integrated creative agents.

🔗 Luma Agents — TechCrunch 🔗 Luma Agents announcement


Stability AI x WPP — Strategic partnership

March 5 — Stability AI announces a strategic partnership and investment from WPP, the global communications group. The stated goal is to inaugurate a new era of innovation at the intersection of creativity and technology, with a focus on the media and entertainment sector.

🔗 Stability AI News


What this means

The period from March 5 to 8 highlights two defining trends.

Automation goes beyond single runs. Claude Code /loop and Desktop scheduled tasks introduce recurring tasks into AI developer tools — monitor PRs for 3 days, generate daily summaries, check deployments every 5 minutes. At the same time, Copilot VS Code v1.110 crosses a threshold with agent plugins, browser agent tools and shared memory between CLI, IDE and code review. Code assistants no longer just answer a request: they can now schedule, iterate and remember context across sessions.

Generative media moves to agents and avatars. Pika AI Selves introduces video digital twins with persistent memory, while Luma shifts from video generation to multimodal creative agents with Uni-1. The common thread is persistence of identity and context. These tools no longer generate isolated clips — they maintain coherence across creations. WPP’s investment in Stability AI confirms the advertising industry’s interest in these technologies.


Sources

This document was translated from the fr version into the en language using the gpt-5-mini model. For more information on the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator