
OpenAI, Anthropic, Google Unite Against China: Inside the Frontier Model Forum's New Espionage Defense Pact

On April 6, 2026, OpenAI, Anthropic and Google signed an unprecedented pact through the Frontier Model Forum to share intelligence on Chinese industrial espionage targeting US AI labs. Inside the rival alliance, the concrete threats, and what it means for the US-China AI race.

By Anthony M. · 11 min read · Verified April 13, 2026
April 6, 2026 — OpenAI, Anthropic and Google sign the first joint Chinese-espionage defense pact through the Frontier Model Forum.

On April 6, 2026, OpenAI, Anthropic and Alphabet (Google) announced an unprecedented agreement to share intelligence on Chinese industrial espionage targeting US frontier AI labs. The pact is coordinated through the Frontier Model Forum — the industry body co-founded in 2023 by Anthropic, Google, Microsoft and OpenAI — and commits the three rivals to jointly detect, document and publicly disclose Chinese attempts to steal model weights, training data and proprietary research. It is the first time the top three US frontier labs have formally aligned on a single adversary, and it arrives after two years in which AI labs became the highest-priority targets of Chinese state-linked industrial espionage operations.

What happened on April 6, 2026

The announcement came in a joint statement published Monday morning through the Frontier Model Forum (FMF), signed by Sam Altman of OpenAI, Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind. The three labs committed to four concrete obligations:

  • Real-time threat sharing. When one lab detects a credible espionage attempt — phishing, insider recruitment, supply-chain compromise, or unauthorized weight extraction — it must notify the other signatories within 48 hours through a secured FMF channel.
  • Joint forensic analysis. Shared incident data is pooled and analyzed by a rotating technical working group with staff from all three labs, so that tradecraft, tooling and attribution indicators can be correlated across targets.
  • Public disclosure. The FMF will publish quarterly threat reports naming techniques and, where legally possible, attributed actors. The first report is scheduled for July 2026.
  • Coordinated response with US agencies. The pact formalizes a single point of contact with the FBI, CISA and the Department of Commerce's Bureau of Industry and Security — replacing the ad-hoc reporting that each lab was doing individually.
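The 48-hour obligation reduces to a simple deadline computation. A minimal sketch — the pact's actual tooling is undisclosed, so the function names here are illustrative:

```python
from datetime import datetime, timedelta, timezone

NOTIFY_SLA = timedelta(hours=48)  # notification window under the pact

def notification_deadline(confirmed_at: datetime) -> datetime:
    """Latest time a confirmed incident may land in the shared FMF channel."""
    return confirmed_at + NOTIFY_SLA

def is_overdue(confirmed_at: datetime, now: datetime) -> bool:
    """True once the 48-hour window has lapsed without notification."""
    return now > notification_deadline(confirmed_at)

confirmed = datetime(2026, 4, 8, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(confirmed).isoformat())  # 2026-04-10T09:00:00+00:00
print(is_overdue(confirmed, datetime(2026, 4, 9, tzinfo=timezone.utc)))   # False
```

The clock starts at confirmation, not first detection — a distinction the governance charter will presumably have to pin down.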

The language in the joint statement is unusually direct for a multi-company agreement. It names "the People's Republic of China and actors operating under its direction or for its benefit" as the specific threat, and it frames the effort as national-security coordination rather than competitive maneuvering. That framing matters because the three signatories are, outside this pact, each other's most intense commercial rivals.

The Frontier Model Forum, explained

The Frontier Model Forum was launched in July 2023 by Anthropic, Google, Microsoft and OpenAI as an industry body focused on the safe development of frontier AI models. Its original charter covered three areas: advancing AI safety research, identifying best practices for responsible deployment, and sharing information about frontier AI risks with policymakers and civil society.

Rivals by day, allies on espionage — the Frontier Model Forum becomes the shared defense perimeter for OpenAI, Anthropic and Google.

For most of its first two years the FMF was a policy-and-papers organization. It funded academic safety work, co-signed letters on AI governance, and convened workshops with lawmakers. What it explicitly did not do was run shared security operations. Each member lab handled its own threat intelligence, its own incident response, and its own reporting to US authorities. Microsoft — the fourth founding member — is notably not a signatory to the April 6 pact. Its relationship with OpenAI already includes commercial confidentiality protocols, and Microsoft has its own threat-intelligence group (Microsoft Threat Intelligence) which publishes China-focused reports directly.

The April 6 agreement is the first time the FMF has been used as an operational clearinghouse for threat data. In the joint statement, FMF leadership framed the move as "the first step in turning the Forum from a policy convening body into a working defense perimeter for the US frontier AI ecosystem." That is a significant mission expansion — and it comes with governance questions the signatories have not yet fully answered, including who controls the data, how shared intelligence is classified, and what happens if a signatory itself is compromised by an insider.

Why fierce rivals are suddenly working together

OpenAI, Anthropic and Google are not natural collaborators. ChatGPT, Claude and Gemini compete for the same enterprise customers, the same research talent, and — increasingly — the same training infrastructure. Dario Amodei and Sam Altman, in particular, have sharply different public positions on AI safety, open-source policy and deployment pace. The three labs have been on opposite sides of every major AI policy fight in Washington since 2024.

So why cooperate now? Three forces converged in the first quarter of 2026:

  1. The threat got concrete. Through 2024 and 2025, industry chatter about "Chinese espionage risk" was abstract. In early 2026 it stopped being abstract. A cluster of confirmed incidents — including the March 2024 arrest of former Google engineer Linwei Ding, charged with stealing AI chip designs, attempted insider recruitment at two of the three signatory labs, and a phishing campaign specifically targeting frontier-lab researchers with fake conference invitations — moved the problem from hypothetical to operational.
  2. Washington leaned in. The Trump administration, through the Department of Commerce, began signaling in Q1 2026 that it expected US frontier labs to coordinate on national-security-grade threats, and that failure to do so would trigger mandatory reporting obligations under an upcoming executive order. The labs preferred voluntary industry coordination to a mandatory regime — and moved fast to set the terms of their own approach.
  3. The open-weight Chinese ecosystem kept getting better. DeepSeek V3.2, Qwen 3.5 and Kimi K2 all shipped strong open-weight releases between late 2025 and early 2026. Every time a Chinese lab released a frontier-adjacent open model, US labs had to ask themselves how much of that capability came from original research versus exfiltrated research. The anxiety became a forcing function.

None of those forces, individually, would have been enough. Together, they made the cost of not cooperating higher than the cost of putting rival lab staff in the same room to compare attack signatures.

The Chinese espionage threat, concretely

The joint statement avoids naming specific incidents, but public court records and company disclosures from 2024 and 2025 give a clear picture of what US frontier labs are actually defending against. Three categories dominate:

The threat landscape — insider recruitment, targeted phishing, supply-chain compromise, and attempted weight exfiltration across US AI labs.

1. Insider recruitment and IP theft. The most public case is United States v. Linwei Ding, the former Google software engineer arrested in March 2024 and indicted on charges of stealing over 500 confidential files related to Google's AI chip and supercomputer infrastructure while secretly affiliated with two China-based technology companies. The Ding case became the template prosecutors and lab security teams now use for insider-threat scenarios. Similar patterns — an employee with quiet outside affiliations, bulk document exfiltration, travel timing aligned with recruitment events — have been flagged internally at other US labs, though only the Ding case has reached public court records so far.

2. Targeted phishing and credential theft. Cybersecurity firms tracking China-linked groups have reported through 2025 an increase in spear-phishing campaigns targeting frontier-lab researchers. The typical vector is a fake conference or workshop invitation, hosted on look-alike domains, designed to harvest corporate SSO credentials. Once inside, attackers pivot toward research repositories and infrastructure-access tokens. These campaigns are not unique to AI — they mirror patterns seen against US semiconductor firms and defense contractors — but AI labs became a priority target set as frontier model capabilities became strategically valuable.
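The look-alike-domain detection defenders deploy against such campaigns can be sketched with a plain edit-distance check. This is illustrative only: the protected-domain list and threshold are hypothetical, and production systems layer in richer signals (homoglyph maps, registration age, certificate transparency logs):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical list of domains worth protecting.
LEGIT = {"openai.com", "anthropic.com", "deepmind.google"}

def looks_alike(domain: str, threshold: int = 2) -> bool:
    """Flag domains within a small edit distance of a protected domain
    (distance 0 is the real domain, so it is excluded)."""
    return any(0 < edit_distance(domain, d) <= threshold for d in LEGIT)

print(looks_alike("anthrop1c.com"))  # True  (one-character substitution)
print(looks_alike("example.org"))    # False
```

The same check run over newly registered domains is how many phishing campaigns are caught before the first lure lands.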

3. Supply-chain and infrastructure compromise. The third vector is less visible in public reporting but is the one that worries lab security teams the most: compromise of the hardware and software supply chain that frontier labs depend on. That includes chip supply (where export controls are the primary US tool), cloud infrastructure, third-party libraries used in training pipelines, and the contract-labor workforce that handles data annotation. A successful supply-chain compromise can give an adversary persistent, low-visibility access across many targets simultaneously — which is exactly the kind of threat a real-time information-sharing pact is designed to catch early.

Worth noting explicitly: none of this is accusation-by-ethnicity. The Ding case, and every similar case that has reached public courts, has focused on specific individuals with specific documented conduct. The FMF pact's framing — "actors operating under direction of or for the benefit of the People's Republic of China" — is the same framing US law enforcement uses, and it is a behavior-and-attribution standard, not an identity standard. That distinction matters both ethically and legally.

How the information sharing actually works

The operational mechanics of the pact are the part that industry security observers will be watching most closely. Based on the joint statement and follow-up briefings, the structure looks like this:

  • Secured FMF channel. Each signatory designates a small number of cleared staff with access to a shared threat-intelligence platform hosted by the Frontier Model Forum. Exact technology is undisclosed, but the FMF has confirmed it is using standard industry protocols (STIX/TAXII-style indicator exchange) rather than building something bespoke.
  • 48-hour notification SLA. When a signatory confirms a credible espionage incident, a notification must land in the shared channel within 48 hours. "Credible" is defined to include confirmed intrusions, high-confidence insider-threat indicators, and active phishing campaigns. Low-signal events (generic scanning, unverified tips) are excluded to keep the channel actionable.
  • Rotating technical working group. Forensic review is handled by a working group with three rotating chairs, one from each lab, on six-month terms. The group analyzes pooled incident data, looks for correlated tradecraft across targets, and produces internal bulletins.
  • Quarterly public report. Every quarter, the FMF publishes a public threat report summarizing the landscape, naming techniques, and — where attribution is high-confidence and legally cleared — naming actors. The first report is due July 2026.
  • Single point of contact with US agencies. A designated FMF liaison handles coordination with the FBI, CISA and the Bureau of Industry and Security, so that the three labs do not duplicate reporting or contradict each other on attribution.
The operational stack — secured FMF channel, 48-hour notification SLA, quarterly public reports, single point of contact with US agencies.
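Because the FMF has confirmed STIX/TAXII-style exchange, the shape of a shared indicator is predictable even though the platform itself is undisclosed. A stdlib-only sketch of a STIX 2.1-style Indicator object — a real deployment would use a conformant library such as the OASIS `stix2` package behind a TAXII server, and the domain in the pattern is a made-up example:

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(pattern: str, description: str) -> dict:
    """Build a minimal STIX 2.1-style Indicator object as a plain dict."""
    # STIX timestamps use millisecond precision with a trailing 'Z'.
    now = datetime.now(timezone.utc).isoformat(timespec="milliseconds").replace("+00:00", "Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "description": description,
        "pattern": pattern,
        "pattern_type": "stix",
        "valid_from": now,
    }

ioc = make_indicator(
    "[domain-name:value = 'ai-safety-summit-2026.example']",
    "Look-alike conference domain used in spear-phishing against lab researchers",
)
print(json.dumps(ioc, indent=2))
```

An object like this is what would travel through the shared channel within the 48-hour window, letting the other two signatories match the indicator against their own logs.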

The open governance questions are non-trivial. Who owns the pooled data? What happens when one signatory's incident involves another signatory's infrastructure — for example, if an Anthropic investigation implicates a Google Cloud customer? How is the single-point-of-contact chosen, and who audits it? The FMF has indicated that these questions will be addressed in a governance charter to be published before the first quarterly report. Until that charter lands, the pact is more a statement of intent than a fully operational framework.

Impact on DeepSeek, Qwen and other Chinese labs

A fair question: does this pact actually affect Chinese AI labs, or is it a domestic US signaling exercise? The honest answer is both, and the two effects run on different time horizons.

In the short term, the pact does not change what DeepSeek, Alibaba's Qwen team, Moonshot's Kimi team or any other Chinese frontier lab can publish. Those labs are on their own research trajectories, their open-weight models ship on their own timelines, and the quality of their work has been steadily improving on its own merits. DeepSeek V3.2 and Qwen 3.5 did not happen because of stolen weights from US labs — they happened because Chinese labs invested heavily in original research, compute access, and a genuinely strong applied-ML talent pool. Pretending otherwise would be both inaccurate and strategically naive.

Over a longer horizon, the pact does three things that matter. First, it raises the cost of any marginal capability lift from espionage — if exfiltrating a US frontier model's training data or weights becomes significantly harder because of coordinated defenses, then the value of espionage as a complement to original research goes down. Second, it creates a public attribution mechanism through the quarterly reports, which can be used as an input to US and allied export-control and sanctions decisions. Third, it tightens the feedback loop between US frontier labs and US law enforcement, which means insider cases like US v. Ding are likely to move faster and to result in more coordinated responses than in the past.

What the pact is not is a containment strategy for Chinese AI capability. Containment was not achievable on the compute side (DeepSeek trained world-class models under export controls) and it is not achievable on the research side (Chinese labs now publish original frontier research). The pact is a defense-in-depth measure — a way to make sure that whatever gap exists between US and Chinese capabilities is not shrunk by the leakiest channel, which is exfiltration rather than competition.

The Trump administration's position

The Trump administration has been publicly supportive of the pact, though the supportive framing is narrower than the labs would like. In a statement from the Department of Commerce on April 6, Secretary Howard Lutnick called the agreement "a welcome example of private-sector leadership on national-security coordination" and said the administration would "work closely with the Frontier Model Forum as the primary industry interlocutor on AI espionage matters."

What the administration did not do is bless the labs' framing of the pact as a fully voluntary measure. Separate reporting indicates that an executive order currently under drafting at the White House would — if issued — impose mandatory reporting and disclosure obligations on frontier AI labs for espionage-related incidents, backed by Department of Commerce enforcement authority. The FMF pact is best read as the labs' attempt to set the terms of that regulation before it is handed to them. Whether the administration accepts the FMF framework as a substitute for a mandatory regime is not yet clear. The signals from the White House Office of Science and Technology Policy suggest the administration wants both — voluntary coordination and a legal backstop.

Congress is a separate track. Several bipartisan bills introduced in 2025 would create a statutory framework for AI industrial-espionage reporting, modeled on the Cybersecurity Information Sharing Act of 2015. Those bills are not on a fast legislative track, but the FMF pact gives them renewed political momentum.

China's reaction

Beijing's initial response, through a Ministry of Foreign Affairs briefing on April 7, was dismissive. A ministry spokesperson called the pact "another example of US companies politicizing normal commercial competition" and characterized the announcement as "protectionism disguised as national security." That is a standard framing and it was expected. What is more interesting is what the response did not contain: there was no specific rebuttal of the US v. Ding case, no denial of the underlying espionage activity, and no proposed alternative coordination mechanism.

Internally, Chinese AI labs and their state backers face a genuine strategic question: does it serve their interests to publicly dispute a pact that explicitly frames espionage as a deviation from legitimate competition? The Chinese frontier labs that matter — DeepSeek, Qwen, Kimi, Zhipu — have been building their reputations on original research and open-weight releases. Tying themselves publicly to a dispute over espionage tradecraft would damage that reputation more than it would help. Expect those labs, and their parent organizations, to stay quiet while the Ministry of Foreign Affairs handles the political framing.

What this means for the US-China AI race

Stepping back, the April 6 pact is a small, technically narrow agreement that carries a disproportionately large signal. The signal is that the three top US frontier labs — commercial rivals who agree on almost nothing else — have decided that Chinese espionage risk is significant enough to overcome their competitive instincts. That is a meaningful update on how US AI leadership views the adversary landscape in 2026.

April 6, 2026 — the first time the top three US frontier labs formally align on a single adversary, and a quiet inflection point in the US-China AI race.

It is also a template. If the FMF pact is judged successful by its members and by the administration, the next obvious extensions are: adding Microsoft, adding Meta's Superintelligence Labs, adding Amazon, and formalizing coordination with allied-country AI organizations in the UK, Canada, Japan and the EU. None of those extensions are announced. All of them are plausible within twelve months if the first quarterly report demonstrates real operational value.

The race between US and Chinese AI labs is not going to be decided by espionage defense. It is going to be decided by compute, data, talent and research quality — and on most of those axes the gap is narrower than US policy hawks admit. But espionage defense is the first place where a gap can be prevented from shrinking the wrong way. April 6, 2026 is the day the three labs stopped pretending that was somebody else's problem.

Our analysis

The pact is overdue. For the last two years the case for coordinated threat sharing among US frontier labs has been obvious to anybody reading court records and threat-intelligence feeds, and the main reason it did not happen sooner was commercial friction between Altman, Amodei and Hassabis. That friction has not gone away — it has simply been overridden by a more immediate concern. That is a healthy reprioritization, and it is the kind of industry behavior that usually only happens when a regulatory threat is credible.

The pact is also narrower than its announcement language implies. A 48-hour notification SLA is useful; a rotating working group is useful; a public quarterly report is useful. None of those, individually or collectively, is a substitute for the kind of deep operational integration that serious national-security coordination requires. The FMF does not have classified data handling, it does not have cleared personnel in the national-security sense, and it does not have authority to coordinate responses. It is a civilian industry body pretending, briefly, to be a defense perimeter. That is fine as a starting position. It is not a finished product.

The most important thing to watch is the first quarterly report in July 2026. If it names techniques and actors with specificity, if it is aligned with FBI and CISA attribution, and if it demonstrably reflects pooled data from all three signatories, then the pact is working. If it reads like a sanitized marketing document, then the pact is a signaling exercise and Washington will likely move to replace it with a mandatory framework by the end of the year. We will be watching closely.

Frequently asked questions

What is the Frontier Model Forum espionage pact?

The Frontier Model Forum espionage pact is an agreement announced April 6, 2026 in which OpenAI, Anthropic and Alphabet (Google) committed to share intelligence on Chinese industrial espionage targeting US AI labs. The pact commits the three signatories to four obligations: real-time threat sharing with a 48-hour notification SLA, joint forensic analysis through a rotating technical working group, public quarterly threat reports starting July 2026, and coordinated response with US agencies through a single FMF point of contact with the FBI, CISA and the Department of Commerce.

What is the Frontier Model Forum?

The Frontier Model Forum (FMF) is an industry body launched in July 2023 by Anthropic, Google, Microsoft and OpenAI. Its original mission covered AI safety research, responsible deployment best practices, and information sharing with policymakers and civil society. For its first two years the FMF was a policy-and-papers organization. The April 6, 2026 espionage pact is the first time it has been used as an operational threat-intelligence clearinghouse, marking a significant mission expansion from policy convening to working defense perimeter.

Why are OpenAI, Anthropic and Google suddenly cooperating?

Three forces converged in Q1 2026. First, confirmed espionage incidents — including the March 2024 arrest of former Google engineer Linwei Ding for stealing AI chip designs, attempted insider recruitment at signatory labs, and targeted phishing campaigns against frontier-lab researchers — moved the threat from hypothetical to operational. Second, the Trump administration signaled that mandatory reporting obligations would be imposed if labs did not self-coordinate. Third, the rapid improvement of Chinese open-weight models like DeepSeek V3.2 and Qwen 3.5 raised the pressure to close off exfiltration channels. Cooperation became cheaper than non-cooperation.

What is the Linwei Ding case?

Linwei Ding is a former Google software engineer arrested in March 2024 and indicted on charges of stealing over 500 confidential files related to Google's AI chip and supercomputer infrastructure while secretly affiliated with two China-based technology companies. The case — United States v. Linwei Ding — became the public template US lab security teams now use for insider-threat scenarios, and it is widely cited as one of the concrete incidents that motivated the Frontier Model Forum espionage pact two years later.

How does the information sharing work operationally?

Each signatory designates cleared staff with access to a shared FMF threat-intelligence platform that uses standard industry protocols like STIX/TAXII indicator exchange. Credible incidents must be reported within 48 hours. A rotating technical working group with chairs from all three labs analyzes pooled data on six-month terms. Every quarter the FMF publishes a public threat report naming techniques and, where high-confidence attribution is legally cleared, naming actors. A single FMF liaison handles coordination with the FBI, CISA and the Department of Commerce's Bureau of Industry and Security.

Does the pact affect Chinese AI labs like DeepSeek and Qwen?

Not directly or in the short term. DeepSeek, Qwen, Kimi and other Chinese frontier labs are on their own research trajectories and their open-weight releases are the product of original research, compute access and strong applied-ML talent. Over a longer horizon the pact raises the marginal cost of exfiltration-as-complement-to-research, creates a public attribution mechanism that can feed into export-control decisions, and tightens the feedback loop between US frontier labs and US law enforcement. It is a defense-in-depth measure, not a containment strategy for Chinese AI capability.

Why is Microsoft not a signatory?

Microsoft co-founded the Frontier Model Forum in 2023 alongside Anthropic, Google and OpenAI but is not a signatory to the April 6, 2026 espionage pact. Two factors explain the absence. First, Microsoft's commercial relationship with OpenAI already includes confidentiality and incident-sharing protocols, so there is overlap with what the new pact provides. Second, Microsoft operates its own threat-intelligence group (Microsoft Threat Intelligence) which publishes China-focused reports directly and already coordinates with US agencies through separate channels. Microsoft has not ruled out joining the pact later.

What is the Trump administration's position on the pact?

The Trump administration publicly welcomed the pact through an April 6 Department of Commerce statement from Secretary Howard Lutnick calling it "a welcome example of private-sector leadership on national-security coordination." However, the administration has not accepted the pact as a substitute for a mandatory regime: an executive order reportedly under drafting at the White House would impose mandatory reporting obligations on frontier labs for espionage-related incidents. The FMF pact is best understood as the labs' attempt to set the terms of that regulation before it is imposed.

Anthony M. — Founder & Lead Reviewer

We're developers and SaaS builders who use these tools daily in production. Every review comes from hands-on experience building real products — DealPropFirm, ThePlanetIndicator, PropFirmsCodes, and many more. We don't just review tools — we build and ship with them every day.
