When Google Burned a U.S.-Allied Counterterrorism Operation
poppopret.org

Yesterday, responding to a Google profile of DRAGONBRIDGE, a Chinese state-affiliated disinformation campaign, I wrote that I hoped Google would do the same if it had instead found a U.S.-allied effort. I had forgotten that Google already did exactly that, in a far more complicated circumstance.

Michael Coppola:

In January 2021, Google’s Project Zero published a series of blog posts coined the In the Wild Series. Written in conjunction with Threat Analysis Group (TAG), this report detailed a set of zero-day vulnerabilities being actively exploited in the wild by a government actor.

[…]

What the Google teams omitted was that they had in fact exposed a nine-month-long counterterrorism operation being conducted by a U.S.-allied Western government, and through their actions, Project Zero and TAG had unilaterally destroyed the capabilities and shut down the operation.

This is not the only example Coppola cites; his post contains many.

When an exploit chain is discovered, the technical question is easy: Google did the right thing by finding and exposing these vulnerabilities, no matter how they were being used. But doing so is politically and ethically fraught when those vulnerabilities are being used by state actors.

Patrick Howell O’Neill, reporting for MIT Technology Review in March 2021:

It’s true that Project Zero does not formally attribute hacking to specific groups. But the Threat Analysis Group, which also worked on the project, does perform attribution. Google omitted many more details than just the name of the government behind the hacks, and through that information, the teams knew internally who the hacker and targets were. It is not clear whether Google gave advance notice to government officials that they would be publicizing and shutting down the method of attack.

As far as I know, neither the U.S. ally nor the specific targets were ever revealed. Google's revelation could have had catastrophic consequences, as Coppola speculates. But it is also true that withholding known exploits from software vendors can have severe outcomes, as we learned with WannaCry. The risk of exposing the use of vulnerabilities varies case by case; the risk of not reporting them is fixed and known: sooner or later, they will be found by, or leaked to, people who should never have access to them.