Responsibility Ledger
Append-only · Dated · Signed

Entry 002 · April 22, 2026 · 6 min read

Seventy-five percent AI-generated. Thirty billion in four months. And a federal framework for infrastructure no one asked to grade.

Google claims 75% of its code is now AI-generated. Anthropic claims $30B revenue run rate. NIST released a critical infrastructure AI risk profile. Each claim comes with a grading horizon and an invalidator.

Signed — Roger Grubb, Editor


The largest tech companies in the world are making claims about what AI can do, what it earns, and what guardrails the government will impose on it. Today—April 22, 2026—three of those claims went on the record in a way that invites grading.

Google's CEO told the world that 75% of the company's internal code is now AI-generated. Anthropic disclosed a $30 billion annualized revenue run rate, up from $9 billion at year-end 2025. And the National Institute of Standards and Technology released a concept note for an AI Risk Management Framework profile aimed at critical infrastructure operators who want to deploy AI in high-stakes environments.

Each of these is a claim. Each can be graded. That is the purpose of this ledger.

3 Claims

Claim 1 — Google: 75% of new code AI-generated

On April 22, 2026, Google CEO Sundar Pichai stated that "75% of all new code at Google is now AI-generated and approved by engineers, up from 50% last fall." The claim was made during the company's Cloud Next conference in Las Vegas.

The claim is verifiable if Google continues to report this metric in earnings calls or public statements. The key invalidator would be evidence—via internal leaks, third-party analysis of Google's repositories, or a company retraction—that the figure was materially overstated or based on counting methodology that inflates AI contribution (for example, counting trivial autocompletions as "generated").
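
To see how much the counting rule matters, here is a minimal sketch, using entirely hypothetical commit data, of how the same repository history yields different "AI-generated" percentages depending on whether trivial autocompletions count:

    # Hypothetical commit data: (lines_added, source). The categories and
    # numbers are illustrative; Google's actual methodology is not public.
    commits = [
        (120, "human"),
        (300, "ai_substantive"),   # multi-line AI suggestions accepted as-is
        (450, "ai_autocomplete"),  # trivial single-token completions
        (130, "human"),
    ]

    total = sum(lines for lines, _ in commits)

    def ai_share(count_autocomplete: bool) -> float:
        ai = {"ai_substantive"}
        if count_autocomplete:
            ai.add("ai_autocomplete")
        return sum(lines for lines, src in commits if src in ai) / total

    print(f"{ai_share(True):.0%}")   # 75% with autocompletions counted
    print(f"{ai_share(False):.0%}")  # 30% with autocompletions excluded

The same history supports either headline; the invalidator below turns on which rule Google is actually using.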

Grade by: 2026-10-22 (6 months)
Invalidator: Google retracts or materially revises the 75% figure, or credible third-party analysis (e.g., from engineering trade publications or security researchers with access to Google codebases) demonstrates that AI contribution to committed code is below 60%.

Claim 2 — Anthropic: $30B annualized revenue run rate

On April 7, 2026, Anthropic stated its revenue run rate "has now topped $30 billion, up from $9 billion at the end of 2025," according to statements from CFO Krishna Rao reported by Bloomberg and Yahoo Finance. The company also disclosed that "more than 1,000 business customers" are spending over $1 million annually, a figure that "has more than doubled since February."

Run-rate revenue is an annualized projection based on recent performance, typically the most recent month's revenue multiplied by twelve. It is not the same as trailing twelve-month revenue. The claim is gradeable when Anthropic files its first public financial statements (if it proceeds with a rumored late-2026 IPO) or when credible financial reporting confirms full-year 2026 revenue figures.
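
A worked illustration of the gap between the two measures, using hypothetical monthly figures rather than anything Anthropic has disclosed:

    # Hypothetical monthly revenue ramp, in billions of dollars.
    monthly = [0.9, 1.0, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.1, 2.3, 2.4, 2.5]

    run_rate = monthly[-1] * 12   # annualize the most recent month
    ttm = sum(monthly)            # trailing twelve-month revenue

    print(f"Run rate: ${run_rate:.1f}B")  # Run rate: $30.0B
    print(f"TTM:      ${ttm:.1f}B")       # TTM:      $20.6B

A company on a steep enough ramp can truthfully claim a $30 billion run rate while its trailing twelve-month revenue sits near the $20 billion invalidation threshold below, which is why the grade waits for full-year figures.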

Grade by: 2027-04-30 (1 year)
Invalidator: Anthropic's actual reported revenue for full-year 2026 (when disclosed via IPO filings, investor letters, or financial press with named sources) is below $20 billion, or the $30 billion figure is retracted or described by the company as based on non-recurring contract bookings rather than recurring revenue.

Claim 3 — NIST: Critical infrastructure AI risk profile released

On April 7, 2026, the National Institute of Standards and Technology released a concept note for an AI Risk Management Framework Profile on Trustworthy AI in Critical Infrastructure, which "will guide critical infrastructure operators towards specific risk management practices to consider when engaging AI-enabled capabilities."

The concept note is real and public. The claim being graded is not whether NIST released the note—it did—but whether the profile described in the note will be operationalized and adopted by critical infrastructure operators in a measurable way by April 2027. Adoption can be measured by: (a) publication of a final AI RMF Critical Infrastructure Profile by NIST, (b) at least three federal agencies or critical infrastructure operators publicly citing the profile in procurement requirements or policy documents, or (c) industry groups (e.g., sector-specific ISACs) formally adopting the profile's recommendations.

Grade by: 2027-04-07 (1 year)
Invalidator: No final AI RMF Critical Infrastructure Profile is published by NIST by April 2027, or fewer than two federal agencies or critical infrastructure operators publicly reference adoption of the profile's guidance in binding policy or procurement documents.

2 Reckonings

Reckoning 1 — New York RAISE Act enforcement (projected December 2025)

In December 2025, New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act (RAISE Act) into law. The law mandates that frontier AI developers with over $500 million in revenue publish safety plans and report critical safety incidents within 72 hours, with the law taking effect January 1, 2027.

At the time of signing, the implicit projection from advocates and bill sponsors was that the law would serve as a model for other states and that compliance frameworks would be in place well before the effective date. Bill sponsor Alex Bores stated the law "raised the floor for what AI safety legislation can look like," and that it would withstand federal preemption challenges despite a December 2025 White House executive order directing the DOJ to challenge state AI laws.

What happened: As of April 2026, the RAISE Act remains on the books and no successful federal preemption lawsuit has invalidated it. However, the December 2025 Trump executive order "Ensuring a National Policy Framework for Artificial Intelligence" explicitly aims to "preempt state AI laws that are deemed by the Trump administration to be inconsistent" with federal policy, and legal challenges are ongoing. Colorado delayed implementation of its own AI Act from February to June 2026, and legal analysis suggests that "while sweeping in ambition, the EO does not impact obligations under existing state AI laws" and that "companies developing and distributing AI offerings should continue to comply with all existing state AI requirements."

Grade: B
The law survived to its effective date without being struck down, but the compliance environment remains uncertain and fragmented. The "floor" metaphor has not held—other states have not rushed to replicate the RAISE Act, and the federal preemption fight is unresolved.

Invalidator that would have changed the grade to C or F: A federal court enjoining enforcement of the RAISE Act prior to its effective date, or New York itself delaying or repealing the law in response to federal funding threats.

Reckoning 2 — Adversarial distillation as coordinated threat (projected early 2025)

In early 2025, frontier AI labs including OpenAI, Anthropic, and Google began quietly investigating reports that Chinese AI companies were engaged in adversarial distillation at industrial scale: querying Western models in bulk to train imitation models. The implicit claim at the time, according to industry observers and AI safety researchers, was that this was a significant but manageable threat that individual labs could address through rate-limiting and anomaly detection.
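
For concreteness, here is a minimal sketch of the kind of per-account detection that assumption implied; the thresholds and heuristics are hypothetical, not any lab's actual defenses:

    from collections import defaultdict

    # Hypothetical thresholds; real detection criteria are not public.
    MAX_QUERIES_PER_DAY = 5_000
    MIN_UNIQUE_PROMPT_RATIO = 0.95  # distillation traffic is nearly all unique

    def flag_distillation_suspects(query_log):
        """query_log: iterable of (account_id, prompt) pairs for one day."""
        counts = defaultdict(int)
        uniques = defaultdict(set)
        for account, prompt in query_log:
            counts[account] += 1
            uniques[account].add(prompt)
        return [
            account
            for account, n in counts.items()
            if n > MAX_QUERIES_PER_DAY
            and len(uniques[account]) / n > MIN_UNIQUE_PROMPT_RATIO
        ]

The weakness is visible in the signature itself: an adversary who spreads traffic across enough accounts keeps every account under the volume threshold, which is exactly what the numbers below suggest happened.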

What happened: On April 6-7, 2026, OpenAI, Anthropic, and Google announced they are "sharing intelligence through the Frontier Model Forum to stop Chinese AI companies from stealing their models via adversarial distillation," marking "the first coordinated defense operation between all three frontier labs."

Anthropic disclosed it had documented "16 million of these exchanges from three Chinese companies alone, running through approximately 24,000 fraudulently created accounts."

The shift from individual lab defenses to coordinated intelligence-sharing through the Frontier Model Forum represents a material escalation in threat perception. The arithmetic explains part of it: 16 million exchanges spread across roughly 24,000 accounts averages about 670 queries per account, low enough for each account to slip under plausible per-account rate limits. The original assumption, that adversarial distillation was a nuisance addressable through technical means, was wrong. Containing it required cartel-like coordination between competitors.

Grade: C
The threat was real and growing, but the early response posture (individual lab defenses) was insufficient. Coordination happened, but only after the threat reached a scale (16 million unauthorized queries, 24,000 fraudulent accounts) that individual labs could not contain. The delay between threat identification (early 2025) and coordinated response (April 2026) suggests the original risk assessment underestimated both the adversary's capacity and the labs' willingness to share intelligence.

Invalidator that would have changed the grade to A or B: Evidence that the labs began formal intelligence-sharing in Q1 or Q2 2025, or that adversarial distillation attempts were successfully throttled before reaching 10 million queries across the three labs.

1 Refusal

I refused to use the phrase "Anthropic overtakes OpenAI" as a headline or framing device for Claim 2, even though it appeared in dozens of articles published this week and would have been more clickable than the claim I actually filed.

The reason: the claim that matters is not the horse-race ranking. It is whether Anthropic's disclosed $30 billion annualized run rate is accurate and sustainable. Using "overtakes" as the frame nudges the reader toward treating this as a competitive sports story rather than a financial claim subject to verification. The run-rate number is what we will grade in April 2027. The ranking is narrative packaging.

I refused to let the packaging become the claim.

— Roger Grubb, Editor


The next entry lands at 5:30 AM Pacific.

3 Claims. 2 Reckonings. 1 Refusal. Every weekday. Dated, signed, append-only.