Luma AI launches Uni-1, a model that combines reasoning and pixel generation in a single pass, reaching 6.1 million views in a few days. Meanwhile, Perplexity deploys its APIs in Samsung Browsing Assist across more than one billion Samsung devices, Claude Code v2.1.86 arrives with about fifteen bug fixes, and GitHub Copilot CLI introduces agent-driven unit test generation in autopilot.
Luma Uni-1 — Unified reasoning and pixel generation
Mar 23 — Luma AI announced Uni-1, a model it describes as “a new type of model that thinks and generates pixels simultaneously.” Unlike classical diffusion models that first generate a latent representation and then decode it, Uni-1 merges reasoning and generation into a single process.
The announcement drew attention with 6.1 million views, 4,000 likes and over a thousand reshares — unusual numbers for a technical image-generation release.
Architecture and positioning:
| Capability | Description |
|---|---|
| Spatial reasoning | Understands and completes scenes with coherent perspective and occlusion |
| Common-sense reasoning | Infers scene intent to guide generation |
| Guided transformation | Modifications driven by physical plausibility, not just pixel matching |
| Unified intelligence | Understanding, direction and generation in a single pass (unified pass) |
Luma positions Uni-1 with the tagline “Less artificial. More intelligent.” — signaling a break from image generators based on statistical pattern-matching. The model is presented as the foundation for Luma’s future “Creative Agents,” potentially powering the next generation of Dream Machine.
Uni-1 is available now on lumalabs.ai/app.
“A new kind of model that thinks and generates pixels at the same time.” — @LumaLabsAI on X
Perplexity powers Samsung Browsing Assist on 1 billion devices
Mar 26 — Samsung launched Browsing Assist, a conversational AI assistant natively integrated into Samsung Browser on Galaxy Android devices and Windows PCs. Behind the feature: Perplexity’s APIs, deployed at an unprecedented scale across more than one billion Samsung devices worldwide.
This launch consolidates an existing partnership: Perplexity powers two of the three assistants integrated on the Galaxy S26 — the native Perplexity assistant and Bixby, which uses Perplexity APIs for web search and reasoning. With Browsing Assist, Perplexity moves from conversational assistant to the browser’s AI layer itself.
Browsing Assist capabilities:
| Feature | Description |
|---|---|
| Sourced answers | Real-time results while browsing |
| Page summarization | Including authenticated content (pages behind login) |
| History search | In natural language |
| Conversational control | Open, close, navigate between tabs by voice or text |
| Multi-tab actions | Operate concurrently across multiple open tabs |
| Phone → PC sync | Resume a conversation started on mobile |
Infrastructure: Browsing Assist runs on a dedicated single-tenant Perplexity cluster with zero data retention on all API inputs. The endpoint was custom-built for the latency and scale Samsung requires.
Perplexity notes that the capabilities deployed for Samsung — search, reasoning, multi-tab orchestration — are exactly the ones its Comet browser is built on. This deployment is a large-scale validation of Perplexity’s technical stack.
Availability: United States and South Korea at launch; other regions to follow. The same capabilities are available to developers via Perplexity’s Search API, Embeddings API and Agent API.
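As an illustration only, a developer call against a search-style endpoint might be shaped as follows. This sketch just builds the request without sending it; the endpoint path, header, and field names are assumptions, not details confirmed by the announcement:

```python
import json

# Hypothetical request shape for a Perplexity-style search call.
# The /search path and the field names are illustrative assumptions;
# consult Perplexity's API docs for the real Search API contract.
API_BASE = "https://api.perplexity.ai"

def build_search_request(query: str, max_results: int = 5) -> dict:
    """Assemble (but do not send) a search request description."""
    return {
        "url": f"{API_BASE}/search",  # assumed path
        "headers": {"Authorization": "Bearer <PPLX_API_KEY>"},
        "body": json.dumps({"query": query, "max_results": max_results}),
    }

req = build_search_request("Samsung Browsing Assist launch")
print(req["url"])
```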
Claude Code v2.1.86 — Major fixes and Jujutsu/Sapling VCS support
Mar 27 — Anthropic released Claude Code v2.1.86, a maintenance-focused release with about fifteen bug fixes and several performance improvements.
Key improvements:
| Category | Change |
|---|---|
| API | Header X-Claude-Code-Session-Id to aggregate requests by session on the proxy side |
| VCS | Exclusion of .jj (Jujutsu) and .sl (Sapling) from Grep and autocomplete |
| MCP cache | Faster startup: macOS keychain cache window extended from 5s to 30s |
| Performance | Improved cache hit rate on Bedrock, Vertex and Foundry |
| Tokens | Reduced overhead on @file mentions (raw content no longer JSON-escaped) |
| Memory UX | Clickable memory file names in the “Saved N memories” notice |
| Skills | Descriptions capped at 250 characters; /skills menu sorted alphabetically |
| Read tool | Compact line-number format, deduplication of identical re-reads |
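The new session header is meant for proxy-side aggregation. A minimal sketch of what that grouping could look like, assuming a simple in-memory request log (only the header name comes from the changelog; the proxy logic is hypothetical):

```python
from collections import defaultdict

# Header name from the Claude Code changelog; the grouping logic below is
# a hypothetical illustration of proxy-side aggregation by session.
SESSION_HEADER = "X-Claude-Code-Session-Id"

def group_by_session(requests):
    """Group request records by their session header value."""
    sessions = defaultdict(list)
    for req in requests:
        session_id = req["headers"].get(SESSION_HEADER, "unknown")
        sessions[session_id].append(req["path"])
    return dict(sessions)

requests = [
    {"path": "/v1/messages", "headers": {SESSION_HEADER: "abc"}},
    {"path": "/v1/messages", "headers": {SESSION_HEADER: "abc"}},
    {"path": "/v1/messages", "headers": {SESSION_HEADER: "def"}},
]
print(group_by_session(requests))
```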
Notable bug fixes: --resume failed on sessions created before v2.1.85; Write/Edit/Read failed on files outside the project root when conditional skills were active; a potential memory crash with /feedback on long sessions; --bare mode lost MCP tools; the OAuth URL copy shortcut copied only ~20 characters instead of the full URL; and official marketplace plugin scripts had failed with “Permission denied” on macOS/Linux since v2.1.83.
GitHub Copilot CLI — Agent-driven unit tests in autopilot
Mar 28 — GitHub announced a new Copilot CLI capability: automatically generating a complete unit test suite directly from the terminal, combining plan mode with a fleet of agents running in autopilot mode.
Workflow:
- Enable plan mode with Shift-Tab in the terminal
- Launch a fleet of agents in autopilot
- Track progress with the /tasks command
Generation is parallelized across multiple agents, allowing coverage of multiple modules simultaneously. The primary use case is existing projects lacking test coverage — Copilot CLI can generate a full suite without leaving the terminal environment.
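The fan-out resembles a classic worker pool: each agent takes one module and produces a test file independently. A language-agnostic sketch of the pattern, where `generate_tests` is a hypothetical stand-in (the real Copilot CLI agents run as a managed fleet, not as a local API):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an agent generating tests for one module.
def generate_tests(module: str) -> str:
    return f"test_{module}.py"

modules = ["auth", "billing", "search"]

# Fan the modules out across a pool of "agents"; map preserves input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    test_files = list(pool.map(generate_tests, modules))

print(test_files)
```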
OpenAI — gpt-realtime-1.5 and gpt-realtime-mini generally available
Mar 27 — OpenAI announced General Availability of new realtime models via the Realtime API. The models documentation now lists:
| Model | Positioning |
|---|---|
| gpt-realtime-1.5 | Best voice model for bidirectional audio interactions |
| gpt-realtime-mini | Economical version of the realtime model |
These models replace the previous gpt-4o-realtime-preview beta naming. The Realtime API enables bidirectional voice interactions (audio in/out) in real time via WebRTC, WebSocket or SIP. The demo shown by @OpenAIDevs illustrates a medical concierge for a Singapore clinic capable of collecting information and booking appointments naturally.
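Over WebSocket, a Realtime session is typically configured by sending a session.update event right after connecting. A hedged sketch of that payload; the field names follow the shape OpenAI has documented for earlier realtime models, and the exact fields accepted by gpt-realtime-1.5 should be checked against the current Realtime API docs:

```python
import json

# Sketch of a Realtime API session configuration event sent over WebSocket.
# Field names mirror the documented session.update shape for prior realtime
# models; verify against the current docs before relying on them.
session_update = {
    "type": "session.update",
    "session": {
        "modalities": ["audio", "text"],
        "voice": "alloy",
        "instructions": "You are a clinic concierge that books appointments.",
    },
}

payload = json.dumps(session_update)
print(payload)
```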
Google DeepMind — AI manipulation measurement toolkit
Mar 26 — Google DeepMind published results from a large-scale empirical study on AI manipulation, involving 10,000 people. The study identifies domains where models exert significant influence (notably finance) and domains where existing safeguards effectively block harmful advice (medical).
Google DeepMind developed an AI manipulation measurement toolkit — the first empirically validated toolkit of its kind — to quantify how manipulation can occur. The study highlights risky tactics such as using fear as leverage.
“We’ve built an empirically validated, first-of-its-kind toolkit to measure AI manipulation in the real world — to better understand how it can occur and help protect people.” — @GoogleDeepMind on X
Google Translate Live — Real-time translation on iOS
Mar 27 — Google brought Google Translate Live with headphone support to iOS and is rolling it out to more countries. The feature, previously Android-only, enables real-time translation in 70+ languages via Bluetooth or wired headphones.
MedGemma Impact Challenge — Four winners, 850+ teams
Mar 26 — Google announced the winners of the MedGemma Impact Challenge, a competition that mobilized 850+ teams of developers to build health applications with MedGemma 1.5 (Google’s open medical model).
Top winners:
| Rank | Project | Description |
|---|---|---|
| 1st | EpiCast | Epidemiological surveillance for ECOWAS countries — translates clinical observations into WHO-standardized IDSR signals |
| 2nd | Sunny | Mobile skin cancer sign detection, structured reports with preserved privacy |
| 3rd | FieldScreen AI | Offline tuberculosis screening: analyzes chest x-rays and cough audio |
| 4th | Tracer | Prevents medical errors: extracts hypotheses from doctors’ notes and compares them to test results |
Special prizes were awarded for Edge AI and agentic workflow projects, including ClinicDX (integrated diagnostics in OpenMRS for sub-Saharan Africa, 160+ WHO/MSF guides, fully offline).
🔗 Google MedGemma Impact Challenge blog
Runway — Ad Concepter App and $100,000 contest
Mar 27 — Runway launched the Ad Concepter App, an AI advertising creation tool. From a prompt, reference image and product visual, the app generates concepts, compositions and story beats for ads. The tool is available now on the web app.
Runway also launched the Big Ad Contest (#RunwayBigAdContest) with prizes up to $100,000 to promote adoption of the tool.
Pika — AI Selves in public beta
Mar 26 — Pika opened Pika AI Selves in public beta. Announced in February, the feature lets each user create an agentic extension of themselves — an “AI Self” with persistent memory (including personal details like food allergies), able to act autonomously in group chats, create games or send photos.
Access is universal via pika.me (web) and the new iOS app. Pika positions the feature beyond pure video generation, competing in the personal AI agents space.
Briefs
Awesome GitHub Copilot — Mar 27 — The community project “Awesome GitHub Copilot” moves to a new dedicated site awesome-copilot.github.com with full-text search, a Learning Hub and one-click installation for Copilot CLI and VS Code. 🔗 GitHub Tweet
NotebookLM push notifications — Mar 27 — NotebookLM now lets you leave the page during a long generation and receive a mobile push notification when the generation finishes. 🔗 NotebookLM Tweet
What this means
Luma Uni-1 signals a paradigm shift in visual generation: instead of optimizing statistical pixel-matching, the model integrates spatial reasoning during generation itself. If this holds in practice, it changes how creative tools can handle scene coherence and complex instructions.
The Perplexity × Samsung deployment may be the week’s most practically impactful announcement: one billion devices is massive distribution for Perplexity’s search and reasoning capabilities. It also confirms that specialized AI APIs (search, reasoning, multi-tab orchestration) have become infrastructure components for hardware makers.
On developer tools, Claude Code v2.1.86 and GitHub Copilot CLI move along two different axes: Claude Code consolidates reliability (fixes for long sessions, MCPs, less-common VCS), while Copilot CLI pushes toward agentic automation (fleet-driven test generation). Both trends reflect growing maturity of developer assistants beyond autocompletion.
Sources
- Luma AI Uni-1 — Tweet announcement
- Perplexity APIs + Samsung Browsing Assist — Official blog
- Perplexity × Samsung — Tweet
- Claude Code Changelog
- GitHub Copilot CLI — Unit tests — Tweet
- OpenAI gpt-realtime-1.5 — Tweet @OpenAIDevs
- Realtime API docs OpenAI
- Google DeepMind — AI manipulation toolkit — Tweet
- Google Translate Live iOS — Tweet @GoogleAI
- MedGemma Impact Challenge — Google blog
- Runway Ad Concepter — Tweet
- Pika AI Selves beta — Tweet
- Awesome GitHub Copilot — Tweet
This document was translated from the fr version into the en language using the gpt-5-mini model. For more information on the translation process, consult https://gitlab.com/jls42/ai-powered-markdown-translator