Tech · 6 min read

Google Shrugs Off Staff Revolt and Declares Itself ‘Proud’ to Build AI for Trump’s Pentagon

Google brushes aside a 1,000-strong staff letter and says it’s ‘proud’ to build classified AI for Trump’s Pentagon. Here’s what the deal really involves.

Google has a new favourite word, and it is not ‘don’t’. The company that once promised not to be evil has waved away an open letter from hundreds of its own staff and announced it is ‘proud’ to be doing classified AI work for Donald Trump’s Pentagon. So much for reading the room.

According to a report by The Independent, Google leadership has dismissed concerns raised by employees about the company’s deepening relationship with the United States Department of Defense. The deal hands the Pentagon access to Google’s AI models for classified military work, and the brass on the top floor could not be more chuffed about it.

What the deal actually is

Google is one of eight tech firms cleared to plug their AI into the Pentagon’s most secretive networks. The full list, for the avoidance of doubt, is Google, Microsoft, Amazon Web Services, OpenAI, Nvidia, Oracle, SpaceX and Reflection. Anthropic, the safety-focused outfit behind Claude, was pointedly left out after being slapped with a ‘supply chain risk’ label, a designation it is now contesting in court.

The contract itself reportedly allows the Pentagon to use Google AI for ‘any lawful purpose’, with carve-outs that supposedly rule out autonomous weapons and mass surveillance. Cynics will note that ‘any lawful purpose’ is doing rather a lot of heavy lifting in that sentence.

The staff letter that landed with a thud

The internal pushback was not exactly small. An open letter to chief executive Sundar Pichai initially gathered more than 580 signatures, including over 20 directors and vice presidents, plus senior researchers at Google DeepMind. That number reportedly grew to somewhere between 950 and 1,000 as word spread.

The signatories warned about the ‘unethical and dangerous uses’ of military AI and urged leadership to refuse classified defence contracts. One DeepMind researcher, Andreas Kirsch, took to X to say he was ‘speechless’ and called the deal ‘shameful’.

Google’s response, in essence, was a polite shrug.

This is not Google’s first staff rebellion

If any of this feels familiar, that is because it is. Back in 2018, roughly 4,000 employees signed a petition against Project Maven, the Pentagon programme that used Google’s AI to analyse drone footage. At least a dozen staff quit. Google blinked, declined to renew the contract, and quietly let it expire in March 2019.

Then came a soft-pedalled set of AI principles promising the company would not build weapons or tools for surveillance that violated international norms. It was a moment of corporate conscience, or at least a convincing impression of one.

That moment did not survive a second Trump administration. Shortly after the 2024 election result, Google revised those AI principles and quietly removed the explicit ban on weapons work. The fig leaf, it turns out, was machine-washable.

Why the U-turn now?

The honest answer is money and momentum. Google has already deployed its Gemini AI to roughly three million Pentagon personnel and holds a chunk of the $9 billion Joint Warfighting Cloud Capability contract. Walking away from classified work would mean leaving an enormous bag of money on the table while Microsoft, Amazon and OpenAI happily picked it up.

Pentagon AI chief Cameron Stanley told CNBC that depending on a single AI model is ‘never a good thing’, dressing up the multi-vendor approach as a strategic hedge. Translation: spread the contracts, spread the dependency, and keep every Big Tech chief executive on a friendly first-name basis.

Selective ethics, or just selective PR?

Here is where it gets interesting. According to Tom’s Hardware, Google has simultaneously walked away from a separate $100 million drone swarm programme. So the company is willing to draw a line, just not at classified AI for the Pentagon.

That suggests one of two things. Either Google has a precise, considered ethical framework that it has chosen not to share with the public, or it is making it up as it goes along based on which contracts attract the worst headlines. Take a guess which feels more plausible.

Why this matters to readers in the UK

If you live in Britain, the temptation is to file this under ‘American problem’ and move on. Resist it. The AI models powering classified U.S. military systems are the same general-purpose models being woven into your inbox, your phone, your workplace and increasingly your child’s homework.

The boundary between consumer AI and military AI is a policy choice, not a technical one. When that boundary moves, it moves for everyone. UK defence procurement also leans heavily on the same American suppliers, which means decisions taken in Mountain View have a way of trickling into Whitehall.

There is also the precedent question. Once a company has crossed a line and discovered the share price did not collapse, the line tends to stay crossed. Today’s ‘any lawful purpose’ becomes tomorrow’s baseline.

Anthropic and the awkward outsider

Sitting outside the tent is Anthropic, which reportedly refused the Pentagon’s contract terms and was promptly designated a supply chain risk. There are unverified claims that the U.S. military has nonetheless used Anthropic’s Claude in connection with the Iran conflict, though that has not been independently corroborated and should be treated with caution.

Trump himself has publicly hinted that Anthropic is ‘shaping up’, which is the political equivalent of tapping a watch. Whether the company holds firm or quietly negotiates its way back in is the subplot worth watching.

What to take from all this

Google’s message to its workforce is plain enough. The era when a well-organised internal letter could change the company’s direction is over. Project Maven was 2018. The new playbook is to thank staff for their feedback, update the AI principles page, and sign the contract anyway.

For users, the practical takeaway is to stop assuming that the Big Tech firm whose products you use every day shares your values, or even has stable ones. Their values are whatever the current political climate and quarterly earnings call require them to be.

For the staff who signed the letter, the harder question is what to do next. History suggests resignations, leaks and public pressure are the only tools that have ever moved Google. A strongly worded memo, sadly, is not on the list.

Written by

Daniel Benson

Writer, editor, and the entire staff of SignalDaily. Spent years in tech before deciding the news needed fewer press releases and more straight talk. Covers AI, technology, sport and world events — always with context, sometimes with sarcasm. No ads, no paywalls, no patience for clickbait. Based in the UK.