China is chief among a group of nations seeking to steal US-made AI technology, and is doing so at “industrial-scale,” the White House alleged Thursday.
“The United States government has information indicating that foreign entities, principally based in China, are engaged in deliberate, industrial-scale campaigns to distill frontier AI systems,” Director of the White House Office of Science and Technology Policy Michael Kratsios said in a memo circulated to the heads of federal agencies.
“Leveraging tens of thousands of proxy accounts to evade detection and using jailbreaking techniques to expose proprietary information, these coordinated campaigns systematically extract capabilities from American AI models, exploiting American expertise and innovation,” he added.
Kratsios said models derived in that manner lack the full functionality of the originals, but nonetheless allow “foreign actors to release products that appear to perform comparably on select benchmarks at a fraction of the cost.” They also allow safeguards and other rules to be stripped from the models, he said.
In response, the presidential advisor said, the Trump administration will share with American businesses the tactics it has detected being used to carry out industrial espionage, as well as the entities involved, and will coordinate with companies to counter such efforts.
It will also “explore a range of measures to hold foreign actors accountable for industrial-scale distillation campaigns.”
Anthropic, the company behind AI model Claude, previously announced a new multi-company initiative to combat AI theft. Project Glasswing brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks to help secure AI technologies.
“As part of Project Glasswing, the launch partners listed above will use Mythos Preview as part of their defensive security work; Anthropic will share what we learn so the whole industry can benefit,” it said.
Anthropic was referring to an AI model that it has yet to broadly release, but which has reportedly outperformed all but the most skilled human coders at detecting software vulnerabilities.