Responsibility Ledger
Append-only · Dated · Signed

Entry 017 · May 13, 2026 · 9 min read

Google caught the first AI-weaponized zero-day in the wild. OpenAI granted Europe cyber-model access while Anthropic held back Mythos. And a compute lessor closed a $32 million contract with an unnamed frontier lab.

Google's Threat Intelligence Group said Tuesday it had disrupted the first known AI-generated zero-day exploit used by cybercriminals. OpenAI granted the EU access to GPT-5.5-Cyber on May 11, while Anthropic declined similar access for Mythos. And Alpha Compute closed a $32.2 million GPU lease with an unnamed AI lab.

Signed — Roger Grubb, Editor


A search giant announced Tuesday that it had disrupted what it believes is the first known case of cybercriminals using AI to discover and weaponize a zero-day vulnerability, then warned that "for every zero-day we can trace back to AI, there are probably many more out there." An AI lab had granted European regulators and vetted cybersecurity teams access to a cyber-focused model the day before, while its closest competitor declined to offer similar access to a more powerful model it restricted to roughly 40 organizations. And a GPU-as-a-service provider disclosed it had closed a two-year, $32.2 million lease with an unnamed "leading frontier artificial intelligence laboratory" for exclusive access to a cluster of 504 NVIDIA B200 GPUs.

All three events occurred within 48 hours. All three involve operators making claims about capability, threat, necessity, or deployment that can be graded against disclosures, filings, and security advisories six months from now. And all three test the same question from different angles: whether the institutions built to track AI-driven risk—corporate threat intelligence, regulatory access agreements, and revenue contracts—can keep pace with the labs making the claims.

3 Claims

Claim 1 — Google: First known AI-built zero-day exploit disrupted by Threat Intelligence Group on May 11, 2026

On May 12, 2026, Google said it had identified what may be the first known case of cybercriminals using AI to discover and weaponize a zero-day vulnerability.

Google's Threat Intelligence Group stated in a report published on May 12, 2026, that it has "high confidence" that a criminal threat actor used an artificial intelligence model to discover and weaponize a zero-day flaw in a Python script that enables users to bypass two-factor authentication on a widely used open-source web-based system administration tool.

The threat actor, which Google didn't identify, used AI-assisted code to weaponize the previously unknown vulnerability, intending to deploy it in what Google described as a "mass exploitation event." The attack was intercepted before it could be executed, and Google's team worked with the affected vendor to responsibly disclose the flaw and disrupt the threat activity.

GTIG Chief Analyst John Hultquist said: "The reality is that it's already begun. For every zero-day we can trace back to AI, there are probably many more out there. Threat actors are using AI to boost the speed, scale, and sophistication of their attacks."

The claim is gradeable on whether Google or independent security researchers identify and publicly disclose at least two additional cases of AI-generated zero-day exploits used in real-world cyberattacks by November 13, 2026; whether GTIG publishes follow-up threat intelligence documenting an increase in the proportion of exploits attributed to AI assistance; and whether security vendors or government agencies cite this case in regulatory filings or policy proposals as evidence requiring new AI safeguards.

The invalidator would be credible reporting showing that Google's attribution to AI was based on inconclusive evidence (such as stylistic markers alone without forensic proof of AI tooling), that the "mass exploitation event" claim was overstated and the threat actor had no operational capability to deploy at scale, or that no additional AI-generated zero-day cases are publicly documented within six months despite heightened scrutiny.

Grade by: 2026-11-13 (6 months)

Claim 2 — OpenAI: Granted EU access to GPT-5.5-Cyber on May 11, while Anthropic declined similar Mythos access despite "four or five" Commission meetings

OpenAI announced on Monday, May 11, that it would grant the EU access to GPT-5.5-Cyber, a variant of its latest AI model. The lab said the model began rolling out in limited preview to vetted cybersecurity teams last week.

OpenAI's GPT-5.5-Cyber model will be made available to a range of European stakeholders, including businesses, governments, cybersecurity authorities, and EU bodies, through a limited preview aimed at vetted cybersecurity teams.

Anthropic, which released the Mythos model earlier, has not granted the EU similar preview access. While the Commission has had "four or five" meetings with Anthropic, European Commission spokesperson Regnier said discussions with the company were "not yet at the same stage as the solution we have on the table from OpenAI."

GPT-5.5-Cyber is being rolled out in limited preview to defenders responsible for securing critical infrastructure, supporting specialized cybersecurity workflows that help protect the broader ecosystem.

Individual members of OpenAI's Trusted Access for Cyber program who access the most cyber-capable models will be required to enable Advanced Account Security beginning June 1, 2026.

The claim is gradeable on whether the European Commission or EU AI Office publishes a technical assessment, audit findings, or security evaluation of GPT-5.5-Cyber by November 11, 2026; whether OpenAI discloses the number of vetted European organizations granted access and identifies at least five by name in public statements or regulatory filings; and whether Anthropic grants the EU similar access to Mythos or a successor model by the same date, or publicly explains the criteria preventing such access.

The invalidator would be credible reporting showing that the EU access was largely symbolic with minimal hands-on testing conducted, that fewer than 10 European entities were actually granted access to GPT-5.5-Cyber by November 2026, or that Anthropic did in fact grant EU access to Mythos but did so quietly without OpenAI-style public announcements.

Grade by: 2026-11-11 (6 months)

Claim 3 — Alpha Compute: Closed $32.2M two-year GPU lease with unnamed frontier AI lab on May 12, 2026

On May 12, 2026, Alpha Compute Corp. announced it had closed a definitive lease agreement with a leading frontier artificial intelligence laboratory for its inaugural enterprise-scale NVIDIA B200 GPU deployment, located in its Canadian data center. The two-year agreement has a total contract value of $32.2 million and delivers $16.1 million in annual recurring revenue (ARR), with an expected upfront payment of $7.5 million securing the reservation of compute capacity for the full term. The agreement covers a dedicated cluster of 504 NVIDIA B200 GPUs.
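The disclosed figures are internally consistent, and they imply a per-GPU rate worth keeping in mind when grading future ARR disclosures. A minimal back-of-envelope sketch, assuming the $32.2 million is billed evenly across the two-year term and the full 504-GPU cluster (the derived per-GPU rates are my calculation, not an Alpha Compute disclosure):

```python
# Back-of-envelope check of Alpha Compute's disclosed contract economics.
# Assumes even billing over the two-year term; the per-GPU rates below are
# derived here, not figures the company disclosed.
total_contract_value = 32.2e6  # USD, stated two-year total
term_years = 2
gpus = 504                     # NVIDIA B200s in the dedicated cluster
hours_per_year = 8_760

arr = total_contract_value / term_years               # $16.1M, matches the stated ARR
per_gpu_per_year = arr / gpus                         # ~$31,900 per B200 per year
per_gpu_per_hour = per_gpu_per_year / hours_per_year  # ~$3.65 per B200 per hour

print(f"Implied ARR: ${arr / 1e6:.1f}M")
print(f"Implied per-GPU rate: ${per_gpu_per_year:,.0f}/year, ${per_gpu_per_hour:.2f}/hour")
```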

Alpha Compute stated in its press release that the agreement provides the unnamed frontier AI laboratory with "exclusive, high-performance compute access to accelerate its next-generation model training and inference workloads."

The claim is gradeable on whether Alpha Compute reports in SEC filings or earnings announcements by November 12, 2026, that it received the $7.5 million upfront payment and recognized revenue from the contract; whether the company discloses or confirms through regulatory filings that the GPU cluster remains operational and leased to the same customer through at least Q4 2026; and whether Alpha Compute's total ARR disclosed in future filings reflects the addition of $16.1 million attributable to this contract.

The invalidator would be credible reporting or SEC filings showing that the contract was terminated, restructured, or not fully executed; that Alpha Compute did not receive the stated upfront payment or that revenue recognition was delayed or disputed; or that the "leading frontier artificial intelligence laboratory" was a marketing characterization and the actual customer is a smaller research entity or non-AI organization.

Grade by: 2026-11-12 (6 months)

2 Reckonings

Reckoning 1 — OpenAI Deployment Company: May 11 launch with $4B investment, 19 investors, forward-deployed engineers in 10 clients by November 12

On May 11, 2026, OpenAI launched the OpenAI Deployment Company with $4 billion of investment at a $10 billion pre-money valuation from 19 investors, including TPG, Advent International, Bain Capital, and Brookfield as co-lead founding partners. The company stated it would embed Forward Deployed Engineers into organizations to redesign workflows around OpenAI models and announced it had agreed to acquire AI consulting startup Tomoro, bringing approximately 150 engineers into the new unit.
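For context on the valuation claim, the two disclosed figures imply a post-money value and a collective investor stake. A quick sketch under the standard pre-money convention (the post-money valuation and stake are my derivation, not an OpenAI disclosure):

```python
# Implied post-money math for the Deployment Company raise, assuming the
# standard convention post-money = pre-money + new investment. Only the $4B
# investment and $10B pre-money valuation are disclosed; the rest is derived.
investment = 4e9   # USD
pre_money = 10e9   # USD

post_money = pre_money + investment        # $14B implied post-money valuation
investor_stake = investment / post_money   # ~28.6% held collectively by the 19 investors

print(f"Implied post-money valuation: ${post_money / 1e9:.0f}B")
print(f"Implied collective investor stake: {investor_stake:.1%}")
```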

Entry 016 projected the claim would be gradeable on whether the Deployment Company placed forward-deployed engineers inside at least 10 client organizations by November 12, 2026; whether OpenAI or its investors disclosed in earnings calls or regulatory filings that the venture generated revenue or signed contracts with named clients; and whether the $4 billion investment was confirmed in subsequent filings.

What happened: As of May 13, 2026, no public filings, earnings calls, or investor disclosures have confirmed client placements, named contracts, or detailed revenue generation from the Deployment Company. The two-day window since announcement is insufficient to grade deployment at scale.

Grade: Incomplete / Too Early. The original grading horizon of November 12, 2026, has not arrived. However, the claim structure remains intact and gradeable if revisited at the six-month mark.

Invalidator: The grade would shift to C or F if by November 12, 2026, OpenAI has not disclosed in any public or investor communication that forward-deployed engineers are embedded in at least 10 client organizations, or if credible reporting shows the Deployment Company primarily hired internal teams without embedding engineers in client sites, or if the $4 billion investment figure is revised downward in regulatory filings.

Reckoning 2 — White House AI working group and pre-release vetting regime: Briefed to labs May 4, dismissed as "speculation," gradeable by December 31, 2026

On May 4, 2026, The New York Times reported that the Trump administration was considering requiring U.S. government oversight of AI models before public release, with senior officials briefing executives from Anthropic, Google, and OpenAI on plans to establish a working group to examine review procedures. By the end of the week, a White House official dismissed reports of a forthcoming executive order as "speculation" and said Trump would make any policy announcement himself.

Entry 015 projected the claim would be gradeable on whether the White House issued an executive order establishing an AI working group with pre-release vetting authority by December 31, 2026; whether the working group was operational and had reviewed at least one unreleased model; and whether administration officials acknowledged the policy in public statements or regulatory filings.

What happened: As of May 13, 2026, no executive order has been issued. However, on May 5, 2026, Google, Microsoft, and xAI agreed to give the U.S. government early access to their AI models through the Commerce Department's Center for AI Standards and Innovation, with OpenAI and Anthropic renegotiating existing partnerships to align with President Trump's AI Action Plan.

The center has completed more than 40 evaluations of AI models, including state-of-the-art models that remain unreleased.

The voluntary agreements represent a functional pre-release testing regime, but they do not constitute the executive order or mandatory working group described in the May 4 briefing reports. The grading horizon of December 31, 2026, has not yet arrived.

Grade: B. The claim that the White House would establish a formal vetting regime is partially validated by the expanded voluntary testing agreements, but the specific mechanism (executive order, mandatory working group) has not materialized. The administration achieved pre-release access through industry agreements rather than formal regulation.

Invalidator: The grade assumed that by December 31, 2026, no executive order or formal working group would be established, and that the voluntary testing agreements would remain non-binding. If an executive order is signed by year-end or the Commerce Department gains statutory pre-release vetting authority, the grade would shift to A for accurate early reporting of administration intent, even if the implementation pathway differed. If the voluntary agreements are rescinded or rendered non-operational, the grade would shift to C or F.

1 Refusal

I refused to cite the Google Threat Intelligence Group report without personally opening the source document on Google Cloud's blog and confirming the attribution methodology. When web search returned a dozen secondary sources paraphrasing "AI-generated zero-day," I went back to the primary publication to verify Google disclosed its confidence level, the forensic markers (docstrings, hallucinated CVSS scores, "polished textbook coding structure"), and the acknowledgment that neither Gemini nor Mythos was involved. The secondary sources were correct, but I do not cite a claim about attribution confidence without reading the operator's own words.

I refused to paraphrase threat intelligence as fact when the source document qualifies it as high-confidence inference.

— Roger Grubb, Editor


The next entry lands at 5:30 AM Pacific.

3 Claims. 2 Reckonings. 1 Refusal. Every weekday. Dated, signed, append-only.