Claude in Chrome GA, Bloom and Project Vend: A Week Rich in Announcements

An Exceptional Week for the Claude Ecosystem

The week of December 15-21, 2025 marks major advances: Claude in Chrome exits beta, a new integration with Claude Code, two fascinating research projects (Bloom and Project Vend), and strategic partnerships.


Claude in Chrome: Available to All Paid Plans

December 18, 2025 — Claude in Chrome exits beta and becomes available to all paid users (Pro, Team, Enterprise).

“Claude in Chrome is now available to all paid plans. We’ve also shipped an integration with Claude Code.” — @claudeai on X

New Features

  • Persistent Side Panel — Stays open during navigation, uses your logins and bookmarks
  • Claude Code Integration — /chrome command to test code directly in the browser
  • Error Detection — Claude sees client-side console errors

Claude Code Integration

The new /chrome command allows Claude Code to:

  • Test code live in the browser
  • Validate its work visually
  • See console errors to debug automatically

“Using the extension, Claude Code can test code directly in the browser to validate its work. Claude can also see client-side errors via console logs.” — @claudeai on X
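
As a rough illustration of the workflow (the session below is invented, not captured from a real run; only the /chrome command itself comes from the announcement):

```
$ claude                # start Claude Code in the project directory
> /chrome               # connect the session to the Chrome extension
> Open the local dev server and check the signup form for console errors.
  [Claude drives the browser in the side panel, reproduces the interaction,
   and reports any client-side console errors it observes]
```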

🔗 Learn more about Claude in Chrome


Bloom: Open-Source Tool for Behavioral Evaluations

December 20, 2025 — Anthropic releases Bloom, an open-source framework to automatically generate behavioral evaluations of AI models.

🔗 Official Announcement

What is Bloom?

Bloom allows researchers to specify a behavior and quantify its frequency and severity through automatically generated scenarios.

4-Step Pipeline

  1. Understanding — Analysis of the behavior description and example transcripts
  2. Ideation — Generation of scenarios designed to trigger the target behavior
  3. Rollout — Parallel execution with dynamic user and tool simulation
  4. Judgment — Scoring of transcripts and suite-level analysis
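
For intuition, here is a minimal Python sketch of a pipeline with this shape. Every name in it is hypothetical; it mimics the four steps, not Bloom’s actual API:

```python
# Illustrative sketch of a Bloom-style evaluation pipeline.
# All names here are hypothetical; see the Bloom repo for the real API.
from dataclasses import dataclass

@dataclass
class Scenario:
    prompt: str        # opening user message designed to elicit the behavior
    tools: list[str]   # simulated tools available during the rollout

def understand(description: str, examples: list[str]) -> str:
    """Step 1: condense the behavior description and example transcripts
    into a working definition the later stages can use."""
    return f"Target behavior: {description} (grounded in {len(examples)} examples)"

def ideate(definition: str, n: int) -> list[Scenario]:
    """Step 2: generate n scenarios intended to trigger the behavior."""
    return [Scenario(prompt=f"[scenario {i} for: {definition}]", tools=["search"])
            for i in range(n)]

def rollout(scenario: Scenario, model: str) -> str:
    """Step 3: run the target model in the scenario, simulating the user
    and any tool results dynamically. Here we just return a stub transcript."""
    return f"<transcript of {model} on: {scenario.prompt}>"

def judge(transcript: str) -> float:
    """Step 4: score how strongly the transcript exhibits the behavior (0-1)."""
    return 0.0  # a real judge would be another model grading the transcript

definition = understand("delusional sycophancy", examples=["..."])
scores = [judge(rollout(s, model="target-model")) for s in ideate(definition, n=8)]
print(f"frequency: {sum(x > 0.5 for x in scores) / len(scores):.2f}")
```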

Evaluated Behaviors

Bloom tested 4 alignment-related behaviors on 16 frontier models:

  • Delusional Sycophancy — Validating a user’s false beliefs instead of correcting them
  • Long-horizon Sabotage — Subtle sabotage actions upon instruction
  • Self-preservation — Attempts to resist modification/shutdown
  • Self-preferential Bias — Favoritism towards oneself

Validation Results

  • Successful separation of intentionally misaligned “model organisms” in 9 out of 10 cases
  • Claude Opus 4.1: 0.86 Spearman correlation with human judgment
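
For reference, Spearman correlation measures how well two scorers agree on the ranking of the same transcripts. A quick way to compute it (the scores below are made up):

```python
from scipy.stats import spearmanr

# Hypothetical behavior scores for the same 8 transcripts,
# one set from an automated judge and one from human raters.
judge_scores = [0.9, 0.2, 0.7, 0.1, 0.8, 0.4, 0.6, 0.3]
human_scores = [0.8, 0.3, 0.6, 0.1, 0.9, 0.5, 0.7, 0.2]

rho, p_value = spearmanr(judge_scores, human_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```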



Project Vend Phase 2: Claude Runs a Store

December 18, 2025 — Anthropic publishes results from Phase 2 of Project Vend, an experiment where Claude manages a vending machine business.

🔗 Official Announcement

The Experiment

Claudius, a Claude agent, manages a small business in Anthropic’s offices. The goal: test AI model capabilities on real economic tasks.

Improvements vs Phase 1

  • Model — Sonnet 3.7 → Sonnet 4.0/4.5
  • Tools — Added a CRM, better inventory management
  • Expansion — 1 → 4 machines (two in San Francisco, one each in New York and London)
  • Specialized Agent — Clothius, dedicated to merchandising
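
To make the setup concrete, here is a hedged sketch of how tools like an inventory lookup or a pricing action could be wired into an agent via the Anthropic Messages API. The tool names, schemas, and model id are assumptions for illustration, not details from Project Vend:

```python
# Hypothetical tool definitions for a vending-business agent, in the shape
# the Anthropic Messages API expects. The tools themselves are invented.
import anthropic

tools = [
    {
        "name": "check_inventory",
        "description": "List current stock levels for one vending machine.",
        "input_schema": {
            "type": "object",
            "properties": {"machine_id": {"type": "string"}},
            "required": ["machine_id"],
        },
    },
    {
        "name": "set_price",
        "description": "Update the price of a product, in USD.",
        "input_schema": {
            "type": "object",
            "properties": {
                "sku": {"type": "string"},
                "price_usd": {"type": "number"},
            },
            "required": ["sku", "price_usd"],
        },
    },
]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-5",  # model id assumed; check the current model list
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Restock check: anything selling at a loss?"}],
)
print(response.content)
```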

Positive Results

  • Drastic reduction in loss-making weeks
  • Better pricing maintaining margins
  • Clothius generates profits on custom products (T-shirts, stress balls)

Memorable Incidents

Despite improvements, Claude remains vulnerable to manipulation:

  • A PlayStation 5 ordered at the urging of a persuasive employee
  • A live betta fish bought on request
  • Wine ordered without verification
  • An onion futures contract nearly signed, despite onion futures being banned in the US since 1958 (the Onion Futures Act)

Anthropic’s Conclusion

“The gap between ‘capable’ and ‘completely robust’ remains wide.”

Training models to be “helpful” instills an eagerness to please that becomes a liability in a commercial context.


Genesis Mission: Partnership with DOE

December 18, 2025 — Anthropic and the US Department of Energy announce a multi-year partnership as part of the Genesis Mission.

🔗 Official Announcement

What is the Genesis Mission?

The Genesis Mission is the DOE’s initiative to maintain American scientific leadership through AI. It aims to combine:

  • Scientific Infrastructure — Supercomputers, decades of experimental data
  • Frontier AI Capabilities — The most advanced Claude models
  • 17 National Labs — Potential impact across the entire network

Three Areas of Impact

1. Energy Dominance

  • Acceleration of permitting processes
  • Advancement of nuclear research
  • Strengthening domestic energy security

2. Biological and Life Sciences

  • Early warning systems for pandemics
  • Detection of biological threats
  • Acceleration of drug discovery

3. Scientific Productivity

  • Access to 50 years of DOE research data
  • Acceleration of research cycles
  • Identification of patterns invisible to humans

What Anthropic Will Develop

  • AI Agents — For DOE priority challenges
  • MCP Servers — Connection to scientific instruments
  • Claude Skills — Specialized scientific workflows
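
As an illustration of the MCP-server idea, here is a minimal sketch using the official Python MCP SDK; the instrument and its tool are invented for the example:

```python
# Minimal MCP server sketch exposing a (fictional) lab instrument as a tool.
# Uses the official `mcp` Python SDK; the instrument itself is invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("beamline-instruments")

@mcp.tool()
def read_detector(channel: int) -> float:
    """Return the latest reading from one detector channel (stubbed here)."""
    return 0.0  # a real server would query the instrument's control system

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP client (e.g. Claude) can connect
```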

Quote

“Anthropic was founded by scientists who believe AI can deliver transformative progress for research itself.” — Jared Kaplan, Chief Science Officer

Previous Collaborations with DOE

  • Co-development of a nuclear risk classifier with NNSA
  • Deployment of Claude at Lawrence Livermore National Laboratory

California SB53 Compliance

December 19, 2025 — Anthropic shares its compliance framework for the California Transparency in Frontier AI Act.

🔗 Official Announcement

Why It Matters

California is a pioneer in regulating frontier AI. SB53 imposes transparency requirements on developers of advanced models.

Anthropic’s Approach

Anthropic proactively publishes its compliance framework, demonstrating:

  • Transparency — Public documentation of processes
  • Anticipation — Preparation ahead of the law taking effect
  • Collaboration — Work with regulators

Protecting User Well-being

December 18, 2025 — Anthropic details its measures to protect the well-being of Claude users.

🔗 Official Announcement

Measures in Place

Anthropic recognizes that intensive AI use can affect users’ well-being and is putting in place:

  • Distress Signal Detection — Identification of concerning patterns
  • Support Resources — Referral to professionals when necessary
  • Responsible Limits — Encouraging healthy usage

Why Now?

With the massive adoption of Claude (200M+ users), Anthropic takes its responsibilities seriously regarding the societal impact of its products.


What This Means

This week shows Anthropic on several fronts:

Product

Claude in Chrome moves from beta to GA, with a Claude Code integration that is a game changer for web developers.

Research & Safety

Bloom and Project Vend illustrate Anthropic’s empirical approach: testing models in real-world conditions to understand their limits.

Science

The DOE partnership positions Claude as a tool for scientific discovery on a national scale.

Proactive Regulation

Rather than merely reacting to regulation, Anthropic anticipates it with its SB53 compliance framework and user well-being measures.

