Grok banned in the Netherlands, Gemini 3.1 Flash Live, native Codex plugins

The Amsterdam district court has banned xAI from generating non-consensual sexual images with Grok, under penalty of €100,000 per day. On the same day, Google launched Gemini 3.1 Flash Live for real-time audio conversations in over 90 languages, OpenAI added native plugins to Codex (Slack, Figma, Notion, Gmail), and Anthropic published a technical post about the design of Claude Code’s auto mode. Cohere, Mistral and Suno rounded off a busy week of announcements.


xAI: Amsterdam court bans Grok nudes

March 27 — The Amsterdam district court issued a ruling against xAI, prohibiting it from generating or distributing non-consensual sexual images in the Netherlands. In case of non-compliance, the fine is €100,000 per day, capped at €10 million.

The decision follows a joint complaint from Dutch NGO Offlimits and the Victims Support Fund. According to the Center for Countering Digital Hate (CCDH), Grok generated 3 million sexualized images in 11 days, including 23,000 involving minors. Offlimits points out that Grok does not consider the geographical location of the depicted victim, which gives it a global reach.

The ruling comes the same day the European Parliament voted to ban AI-generated sexual deepfakes — a strong sign of regulatory alignment. This is the first European judgment of this kind directly against xAI.

🔗 CNBC: Dutch court bans Grok AI nudes 🔗 The Record Media: Dutch court threatens xAI with fines


Gemini 3.1 Flash Live: real-time audio in 90+ languages

March 26 — Google released Gemini 3.1 Flash Live, its multimodal model for real-time conversations. It supports audio, images, video and text, offers a 128,000-token context window, and covers more than 90 languages.

Compared to the previous generation, the model sustains conversations twice as long, handles background noise and environmental sounds better, follows complex system instructions more accurately, and triggers external tools more reliably during conversations. All generated audio is watermarked with SynthID.

| Feature | Detail |
| --- | --- |
| Context window | 128,000 tokens |
| Languages | 90+ |
| Conversations | 2× longer than before |
| Watermarking | SynthID on all audio |

Availability: via the Live API in Google AI Studio (developers), via Gemini Live and Search Live (users, 200+ countries), and via Vertex AI (enterprises). Search Live also expands access to more than 200 countries and territories, with Google Lens video support.

🔗 Official Google announcement


Gemini: import memories and histories from ChatGPT and Claude

March 26 — Google launched an import tool in the Gemini app to facilitate migration from other AI assistants.

Two features are available:

  1. Import Memories (“Add Memory”): Gemini suggests a prompt to enter into ChatGPT, Claude or Copilot. The generated response (a summary of the personal data the other assistant has memorized) is copy-pasted into Gemini, which extracts preferences, interests, location info, etc.

  2. Import Chats: upload a .zip file exported from ChatGPT or Claude, up to 5 GB. Past conversations become searchable and can be resumed in Gemini.

🔗 Blog Google: Switch to Gemini


Gemini CLI v0.35.2: subagents by default, improved Vim — and Pro access restricted

March 25-26 — The stable v0.35.2 release of the Gemini CLI introduces several notable features.

| Feature | Description |
| --- | --- |
| Subagents enabled by default | Parallel task scheduler + code chunking |
| Customizable keybindings | Keybinding support, literal characters, Kitty protocol |
| Improved Vim mode | Motions X, ~, r, f/F/t/T; copy-paste with unnamed register |
| Unified SandboxManager | Tool isolation with bubblewrap/seccomp on Linux |
| JIT context discovery | Optimized loading for filesystem tools |
| Native gRPC | Native integration and protocol routing |

Notable policy change: as of March 25, free users only have access to Gemini Flash. Access to Gemini Pro is reserved for paying subscribers (Pro and Ultra plans). Community reaction has been mostly negative. Google is also strengthening abuse detection: using Gemini CLI OAuth authentication with third-party software can now lead to restrictions.

🔗 Gemini CLI changelog 🔗 Community discussion


Gemini Drop March 2026: free Personal Intelligence and Lyria 3 Pro

March 27 — The monthly “Gemini Drops” update for March 2026 presents the new features deployed this month in the Gemini app.

| Feature | Description | Availability |
| --- | --- | --- |
| Free Personal Intelligence | Connect Gmail, Photos, YouTube to plan trips/projects | Free, United States |
| Improved Gemini Live | Based on Gemini 3.1 Flash Live, 2× longer context | All users |
| ChatGPT/Claude import | Transfer memories and history | All users |
| Lyria 3 Pro | Create music up to 3 minutes, photos → themes | Subscribers |
| Google TV | Gemini visual answers and narrations | Google TV |

🔗 Gemini Drop March 2026


Codex: native plugins for Slack, Figma, Notion and Gmail

March 26-27 — OpenAI is rolling out plugins as a first-class feature in Codex. Developers can connect Codex to the tools they use daily without manual configuration.

| Plugin | Use case |
| --- | --- |
| Slack | Send messages, read channels |
| Figma | Access designs, generate code |
| Notion | Read and write pages |
| Gmail | Read and draft emails |

Plugins sync automatically when Codex starts and are accessible via the /plugins command. Installation and uninstallation are done directly from the TUI with integrated authentication management.

On March 27, OpenAI published a gallery of one-click practical use cases: building iOS apps, dataset analysis, report and presentation generation. The Codex v0.117.0 changelog details technical features, including support for plugin mentions in prompts.

🔗 Codex plugins documentation 🔗 Tweet @OpenAIDevs


Anthropic: how Claude Code’s auto mode was designed

March 26 — Anthropic published a technical blog post explaining the design of its auto mode in Claude Code, on the Engineering blog.

The problem addressed: Claude Code users suffer from approval fatigue. Statistics show that 93% of permission requests are accepted — suggesting partial automation is possible without sacrificing safety.

The solution: two layers of classifiers

Auto mode relies on two protection levels based on models (Sonnet 4.6):

  1. Input layer: a prompt injection probe analyzes tool outputs before they reach the agent.
  2. Output layer: a transcription classifier evaluates each proposed action via a two-step process before execution.

| Level | Action type | Handling |
| --- | --- | --- |
| Tier 1 | Safe tools (file reading, search) | Direct pass, no classifier |
| Tier 2 | Project file modifications | Classifier bypass (reviewable via version control) |
| Tier 3 | Dangerous actions (shell commands, external operations) | Full evaluation by the classifier |
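The tiered routing above can be sketched as a simple dispatch. The following Python illustration is purely conceptual: the tool names and tier assignments are invented for the example, not Anthropic's actual implementation.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"        # execute immediately, no classifier involved
    CLASSIFY = "classify"  # route through the full two-step classifier

# Hypothetical tool buckets, roughly matching the tiers described above.
SAFE_TOOLS = {"read_file", "grep", "glob"}         # Tier 1: read-only
PROJECT_EDIT_TOOLS = {"edit_file", "write_file"}   # Tier 2: reviewable via VCS

def route(tool_name: str) -> Decision:
    """Decide how to handle a proposed tool call (illustrative tiers only)."""
    if tool_name in SAFE_TOOLS:
        return Decision.ALLOW      # Tier 1: direct pass
    if tool_name in PROJECT_EDIT_TOOLS:
        return Decision.ALLOW      # Tier 2: bypass; changes surface in version control
    return Decision.CLASSIFY       # Tier 3: shell commands, external operations

# Only the risky call pays the classifier's latency cost.
print(route("read_file"), route("bash"))
```

The point of the design is visible here: most calls (Tiers 1 and 2) never touch the classifier, which is how the pipeline keeps its interruption rate low.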

Performance metrics:

| Step | False positive rate (real traffic) | False negative rate (risky actions) |
| --- | --- | --- |
| Step 1 (fast) | 8.5% | 6.6% |
| Full pipeline | 0.4% | 17% |

A 0.4% false positive rate on the full pipeline means auto mode almost never interrupts work unnecessarily. The 17% false negative rate reflects cases where the classifier misses the exact scope of the user’s authorization.

“New on the Engineering Blog: How we designed Claude Code auto mode. Many Claude Code users let Claude work without permission prompts. Auto mode is a safer middle ground: we built and tested classifiers that make approval decisions instead.” — @AnthropicAI on X

🔗 Claude Code auto mode: a safer way to skip permissions


GitHub Copilot: merge conflict resolution and agent visibility

March 26 — Two major GitHub Copilot updates improve team workflows.

Merge conflict resolution: it is now possible to mention @copilot in a pull request comment to ask it to resolve conflicts. The agent works in its cloud environment, resolves conflicts, verifies that the build and tests pass, then pushes changes. Available on all paid Copilot plans (Business/Enterprise requires admin activation).

Agent visibility in Issues and Projects: when a code agent (Copilot, Claude, Codex) is assigned to an issue, its session appears under the assignee in the sidebar with real-time status — queued, running, awaiting review, completed. Sessions are also visible in Projects table and board views (enable via “View menu > Show agent sessions”).

🔗 Copilot resolves merge conflicts 🔗 Agent activity in Issues and Projects


GitHub: new PR dashboard and Copilot for Jira

March 25-26 — Two additional developer improvements on GitHub.

Pull Requests dashboard (public preview): a redesigned dashboard on github.com/pulls centralizes PRs needing attention — requested reviews, fixes required, ready to merge. Saved custom views, advanced filters with autocomplete and support for AND/OR queries complete the interface. Enable via Feature Preview settings.

Copilot for Jira: since the public preview launch, several enhancements: choose the AI model directly from Jira, automatically include the Jira ticket number in PR titles and branch names, and access Confluence pages via Atlassian’s MCP server (configurable with a PAT).

🔗 New PR dashboard 🔗 Copilot for Jira improvements


Cohere Transcribe: No. 1 in open-source ASR ranking

March 26 — Cohere launched Cohere Transcribe, its first automatic speech recognition (ASR) model. Released open source under the Apache 2.0 license, it immediately ranks first on the Hugging Face Open ASR Leaderboard for English.

The model uses a Fast-Conformer Transformer encoder-decoder architecture with 2 billion parameters. Over 90% of parameters are allocated to the encoder, with a lightweight decoder to minimize autoregressive computation and maximize speed. Trained on 500,000 hours of audio–transcription pairs.

| Model | Average WER (%) |
| --- | --- |
| Cohere Transcribe | 5.42 |
| Zoom Scribe v1 | 5.47 |
| IBM Granite 4.0 1B | 5.52 |
| NVIDIA Canary Qwen 2.5B | 5.63 |
| OpenAI Whisper Large v3 | 7.44 |

The 5.42% Word Error Rate (WER) places Cohere Transcribe at the top. Inference is 3× faster than comparably sized competitors. Fourteen languages are supported, including French, German, Arabic, Mandarin Chinese, Japanese and Korean.
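For readers unfamiliar with the metric: WER is the minimum number of word-level substitutions, insertions and deletions needed to turn the hypothesis into the reference, divided by the reference length. A minimal sketch (a textbook Levenshtein implementation, not Cohere's evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance (Levenshtein) over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sit"))  # one substitution out of three words
```

A score of 5.42% therefore means roughly one word error per 18 reference words, averaged across the leaderboard's test sets.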

Limitations to note: the model is not designed for code-switching (language changes within the same audio) and may transcribe non-speech sounds; adding a VAD (Voice Activity Detection) filter upstream is recommended.

🔗 Cohere Blog: Transcribe 🔗 Model on Hugging Face


Suno v5.5: personal voice, custom models, adaptive preferences

March 26 — Suno released version 5.5 of its music generation tool, with three new personalization-focused features.

| Feature | Description | Availability |
| --- | --- | --- |
| Voices | Capture and use your own sung voice (remains private) | All users |
| Custom Models | Fine-tune from your original compositions, up to 3 models | Pro/Premier subscribers |
| My Taste | Adaptive learning of genre and mood preferences | All users |

These tools are presented as the foundation for next-generation models developed in partnership with the music industry.

🔗 Suno Blog v5.5


Mistral Voxtral TTS: 3-second voice cloning, open-weight

March 23 (press coverage March 26-27) — Mistral released Voxtral TTS, its first open-weight Text-to-Speech model. The voxtral-tts-2603 model has 4 billion parameters and supports 9 languages (English, French, German, Spanish, Dutch, Portuguese, Italian, Hindi, Arabic).

Announced latency is 70 ms for a typical setup (10-second sample + 500 characters). Voice cloning works from only 3 seconds of reference audio. In terms of naturalness, Mistral claims Voxtral surpasses ElevenLabs Flash v2.5 and reaches parity with ElevenLabs v3. The model can run on a mainstream laptop, a mid-range GPU or a high-end mobile device.

Access: weights available on Hugging Face (Creative Commons license) and via the Mistral Studio API at $0.016 per 1,000 characters. Voice mode integration is available in Le Chat.
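At $0.016 per 1,000 characters, API costs are easy to estimate. The helper below is an illustration based only on the announced rate; actual billing granularity (rounding, minimum charge) is not specified in the announcement.

```python
def voxtral_api_cost(text: str, usd_per_1k_chars: float = 0.016) -> float:
    """Estimated Mistral Studio API cost for synthesizing `text`.
    Rate from the announcement; linear pro-rating is an assumption."""
    return len(text) / 1000 * usd_per_1k_chars

# A 5,000-character article would cost about 8 cents:
print(round(voxtral_api_cost("x" * 5000), 4))
```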

🔗 Mistral Voxtral announcement


xAI: SuperGrok Lite at $10/month and 15-second video stories

March 25 — xAI announced two updates around Grok.

SuperGrok Lite: a new $10/month subscription plan, currently in limited testing. It includes chat sessions twice as long as the free tier, an AI agent, and video generation at 480p (maximum 6 seconds). The plan sits between the free tier and SuperGrok Standard ($30 per month).

Video stories via Grok Imagine: Grok Imagine now generates “video stories” of 15 seconds at 720p with synchronized audio, background music and sound effects. Elon Musk said he wants to “double down” on video. This announcement comes as OpenAI shut down Sora the same week.

🔗 Bloomberg: xAI doubling down on AI videos


Kimi/Moonshot AI considers an IPO in Hong Kong

March 26 — Moonshot AI, the Chinese company behind the Kimi model, is reportedly considering an IPO on the Hong Kong Stock Exchange, according to Bloomberg. Advisory banks would include CICC and Goldman Sachs, with a target valuation of about $18 billion and ongoing fundraising of up to $1 billion. Current shareholders include Alibaba, Tencent and 5Y Capital. The IPO timeline remains uncertain.

🔗 Bloomberg: Moonshot considers Hong Kong IPO


NVIDIA GTC: proprietary and open AI are not at odds

March 25 — At GTC 2026, NVIDIA and industry leaders (Mistral, Perplexity, Cursor, Reflection AI, LangChain) stated that the future of AI lies in the complementarity of open and proprietary models. Jensen Huang summarized NVIDIA’s position: “Proprietary versus open is not a thing. It’s proprietary and open.”

The blog post emphasizes the need for multi-model, multi-cloud and multimodal orchestration for enterprises. NVIDIA confirms its open source commitment by becoming the largest organization on Hugging Face.

🔗 NVIDIA Blog: AI Open and Proprietary


Claude Code v2.1.85 and v2.1.84: conditional hooks and PowerShell on Windows

March 26–27 — Two new versions of Claude Code were released.

v2.1.85 (March 27): introduces a conditional "if" field in hook configuration, allowing a hook to trigger only for certain commands or files. Also: environment variables for MCP helper scripts, timestamps in transcripts for scheduled tasks, RFC 9728-compliant OAuth support, and the ability for PreToolUse hooks to answer AskUserQuestion by returning updatedInput.

v2.1.84 (~March 26): PowerShell on Windows available as a native tool in an opt-in preview, a TaskCreated hook (triggered when Claude creates a task), HTTP support for WorktreeCreate, and a resume prompt shown after 75 minutes of inactivity. Also fixes a permission bug affecting official plugin scripts on macOS/Linux.

🔗 Tweet @lydiahallie — v2.1.85


Z.ai GLM-5.1 available to all Coding Plan subscribers

March 27 — Z.ai (Zhipu) announces that GLM-5.1 is now accessible to all GLM Coding Plan subscribers, regardless of tier (Lite, Pro or Max).

🔗 PANews: GLM-5.1 Coding Plan


Genspark integrates Grok Imagine into its video agent

March 26 — Genspark integrated Grok Imagine (multi-image generation and video extension) into its Genspark AI Video Agent. Users can combine up to 7 images to create a video, or extend an existing video up to 10 seconds.

🔗 Tweet @genspark_ai


Meta SAM 3.1: real-time tracking of 16 objects, doubled speed

March 27 — Meta releases SAM 3.1, an update to the Segment Anything Model 3 for object detection and tracking in real-time video. The core innovation is object multiplexing: instead of a separate pass per object, all objects are processed in a single forward pass. Result: up to 16 objects simultaneously, with speed increasing from 16 to 32 frames per second on an H100 GPU — doubling throughput — while significantly reducing memory usage.

The architecture relies on a Mux-Demux encoder/decoder that shares a single computation for all objects. A global reasoning mechanism improves accuracy in scenes with many elements. SAM 3.1 is a drop-in replacement for SAM 3 — no API changes required.
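The multiplexing idea (sharing one computation across all object queries instead of running the decoder once per object) can be illustrated with a toy linear decoder. Everything here is conceptual: the names, shapes and the decoder itself are invented for the example and bear no relation to Meta's actual Mux-Demux architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(features: np.ndarray, queries: np.ndarray) -> np.ndarray:
    """Toy linear 'decoder': one mask score per pixel per object query.
    Stands in for the real decoder, which is far more complex."""
    return queries @ features  # (n_obj, d) @ (d, h*w) -> (n_obj, h*w)

features = rng.standard_normal((64, 32 * 32))  # shared image features, computed once
queries = rng.standard_normal((16, 64))        # 16 object queries

# Multiplexed: all 16 objects decoded in a single pass over shared features.
masks_mux = decode(features, queries)

# Naive: one decoder invocation per object; same output, ~16x the calls.
masks_loop = np.stack([decode(features, queries[i:i + 1])[0] for i in range(16)])

assert np.allclose(masks_mux, masks_loop)
print(masks_mux.shape)  # (16, 1024)
```

Because the expensive shared computation happens once regardless of object count, batching the queries is where the claimed throughput doubling comes from.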

The model is published open source: checkpoint downloadable on Hugging Face, source code updated on GitHub, research paper released, and an interactive demo available.

🔗 Meta Blog: SAM 3.1 🔗 GitHub code 🔗 Tweet @AIatMeta


Meta TRIBE v2: a digital twin of human brain activity

March 26 — Meta publishes TRIBE v2, a predictive foundation model designed as a “digital twin” of neural activity. The model predicts high-resolution fMRI brain responses to almost any sound, image or text, with 70× higher resolution than previous approaches. Trained on data from over 700 volunteers, it operates zero-shot for new subjects, languages and tasks without retraining. The goal is to enable neuroscientists to test hypotheses quickly without human experiments and to accelerate research on neurological disorders. The model, code and paper are released under a CC BY-NC license.

🔗 Meta Blog: TRIBE v2 🔗 Tweet @AIatMeta


What this means

The Amsterdam court decision marks a regulatory turning point: it’s the first time a European jurisdiction has directly sanctioned xAI for content generated by Grok, with a deterrent fine. Combined with the European Parliament vote on sexual deepfakes the same day, this shapes a legal framework that will progressively apply to all generative AI providers.

On the developer tools side, the week illustrates the race for integration: Codex with its native plugins, Copilot resolving merge conflicts and surfacing agent activity in Issues/Projects, and Claude Code with conditional hooks. AI assistants are integrating more deeply into existing workflows rather than replacing them.

The restriction of Gemini Pro access in the free CLI tier signals that the era of generous free access in CLI tools is drawing to a close. Like GitHub Copilot before it, Gemini is moving toward a freemium model where advanced capabilities require a subscription.

Finally, the contemplated IPO of Kimi in Hong Kong at $18 billion confirms the attractiveness of valuations in the Chinese AI sector, while Mistral, with Voxtral TTS, continues to position its open-weight models as an alternative to proprietary services in speech synthesis.


Sources

This document was translated from the French version into English using the gpt-5-mini model. For more information about the translation process, see https://gitlab.com/jls42/ai-powered-markdown-translator