
Anthropic’s Ban by the US Govt. and the Pentagon - The Real Picture: Who Is Trying to Hide What?

Updated: 09 Mar 2026



Is it already too late, with one company standing guard as the mask of AI slips and the peril spirals out of hand with a “License to Kill”, as a few predicted - and once it is done, it cannot be undone? The US Congress is too slow to wake up to the emergency as AI becomes more powerful than the people who created it.

AI Decoded: With power concentrated in a few hands, humanity's fate hangs in the balance as accountability is thrown out the door.

On Saturday afternoon, 28th Feb 2026, at 1:47 PM, the world woke up to a post published from the official handle of Donald J. Trump on his platform Truth Social. The post stated that the United States of America will not work with a radical and woke company that puts American lives at risk. It was directed at Anthropic after the fallout from the ongoing $200 million deal with the United States government, which allows its departments to use AI for military purposes. The post was published roughly one hour before the Pentagon’s deadline for Anthropic to lift, with immediate effect, all restrictions on the military’s use of its AI. As per the reports, the US government demanded unrestricted access to Anthropic’s Claude AI for military warfare operations, including mass domestic surveillance of citizens and fully autonomous weapons - systems that would allegedly use Claude to create lethal weapons capable of operating without human intervention.

According to the post, all federal departments in the USA are ordered to cease using any AI infrastructure or systems from Anthropic in their military operations. However, just hours after the announcement, Anthropic’s AI was used in an airstrike on Iran for intelligence and target identification. The same technology had previously been employed in the high-profile capture of Nicolas Maduro in Venezuela on 14th Feb 2026, highlighting how deeply modern LLMs have been integrated into real-world military actions, even amid political disputes over their use.

It was interesting, however, that CENTCOM, the Department of Defence command headquartered in Florida and responsible for security and military issues in the Middle East, Central Asia, and parts of South Asia, had ‘no comment’ on whether Anthropic is serving the ongoing Iranian war while this article was being drafted. But if reports from around the world and a Wall Street Journal article published on 28th Feb, titled ‘U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban’, by Marcus, Amrith, and Shelby, are to be believed, Anthropic’s technology has indeed been integrated by US command.

It is surprising that, although the president and the head of defence announced Anthropic’s immediate withdrawal from these services and threatened punitive actions such as ‘supply chain blacklisting’, usually reserved for enemy countries, they still relied on Claude for operations in Iran. This highlights how deeply AI systems are embedded in modern warfare: as per the report, a minimum of six months would be needed to withdraw. The war assessment after five days reveals that everything from pinpointing high-value targets and running real-time battlefield simulations to the strikes on the Iranian leader and 40+ high-profile members was the work of Claude, although it should be noted that the CIA had been tracking their movements for some time. Nonetheless, it clearly demonstrates Claude’s calibre and effectiveness.

The US Secretary of Defence, Pete Hegseth, was quoted in an official memo on military dominance as saying the military “must also utilise models free from usage policy constraints that limit lawful military applications”. He added that they are done running science exhibitions while adversaries run an arms race. Hegseth has also labelled Anthropic a “supply chain risk”, placing a nationwide federal ban on the company, which means no government agency or government contractor can use Anthropic technology, including its flagship LLM Claude, in their operations.

However, the crackdown also highlights a concerning trend: while Anthropic has refused to give the American government unrestricted access to its LLMs for military warfare and for surveillance of American citizens, other leading tech giants, such as OpenAI, xAI’s Grok, and Google, have openly agreed to support the Department of Defence with unrestricted access to their LLMs.

OpenAI, the maker of ChatGPT, announced a major Pentagon deal hours after Anthropic’s ban, even while praising Anthropic’s safety measures, positioning itself as the immediate beneficiary of the U.S. government’s pivot away from its rival and former partner. Google, too, has long-standing agreements with the Department of Defence, worth $200 million, for advanced AI deployments. Both companies have agreed to provide unrestricted access to their AI models for military purposes, prioritising “any sort of lawful use” over the strict ethical use that Dario Amodei, CEO of Anthropic, tried but failed to secure.

The most concerning part of the entire deal is how casually it has been discussed on open platforms, in media outlets, in news reports, and by the Department of Defence. This moment demands serious consideration, which is nowhere to be found in the agendas promoted by big tech firms or the American Department of Defence. While more than 200 employees of Google and OpenAI have openly criticised the move to support warfare tech through an open letter, both companies continue to do so. Today’s LLMs remain at an early and immature stage of development. The International AI Safety Report of 2026 notes that while the capabilities of AI models have improved over the past five years in mathematics, coding, and basic autonomous operations, their performance is still jagged and the models are repeatedly found hallucinating, which means they cannot operate on their own. They lack the consistent reliability needed for high-stakes decisions, such as warfare.

Anthropic CEO Dario Amodei has repeatedly warned that frontier AI systems are simply not reliable enough to power a fully autonomous operation, let alone an autonomous weapon, because they cannot yet match the judgment of a trained military officer. He believes no AI to date has shown the capability, or is prepared, to handle such requirements. He further states that only 1% of such experiments have shown any success, and even those through unlawful means. He therefore wanted the government to reconsider, as there is no coming back once it is done, and to focus instead on the 99% of successful approaches that follow societal principles. There is also no chain of command for robots; it is simply a risk to troops and civilians alike.

Using LLMs to create autonomous weapons grants AI models a “license to kill”, which is a non-reversible step. Once any country or company removes human oversight and grants unrestricted access to AI, there is no going back. The consequences will be lethal, moral accountability will erode, and warfare will be changed forever.

While the current American drama unfolded in public, countries such as China are already far down this road, often out of the global spotlight. Beijing has aggressively integrated AI into its military operations, demonstrating everything from missile-armed robot dogs and animal-inspired drone swarms to autonomous underwater vehicles and satellites. Such systems already proceed with far fewer ethical constraints. We are at a tipping point, where one wrong move can endanger the entire human race. In the end, humanity must remain - that is the only hope. As major tech companies’ decisions determine the fate of humanity, they must make them with greater responsibility. Rather than chasing profitability and ego clashes, they must weigh the far graver implications of their decisions. Leadership in this era demands not just technical expertise, but the preservation of human judgment and moral decision-making against error-prone AI autonomy.

As we conclude this article, there has been a reported 250% drop in daily ChatGPT users and over 50% growth for Anthropic, including us - the team of Academic Mantra Services - which signed a deal with Claude for its ongoing projects on 4th March 2026. Although the deal came in the aftermath of the Trump-Anthropic termination saga, it definitely deserves applause for building a solid, scientific, and functional model for businesses. We can assert that Claude is currently the best available, even in terms of safety, as highlighted in our earlier article titled ‘                            .’ It takes great courage to stand up as Dario did, and the US Congress and the world must learn from this and intervene sooner rather than later. Sometimes, one person’s action is all it takes. If not everyone, a few might muster the courage to throw an extra bucket of water on a house on fire - but to do so for humanity, even while one’s own house is burning, is what we need our future to embody.

As we ponder a thought for all of us:

1. Where are we headed as humanity when autonomous machines are being trained to destroy humanity, and when the heads of state and countries who are supposed to be its custodians are its destroyers?

2. Was this another of the many marketing gimmicks we have come to expect from the US President, Mr Donald Trump, and his administration - marginalising Anthropic and opening the door for the near-bankrupt OpenAI to somehow survive?

AI as war-mediator.

AI becomes more powerful than its creators - and we are the ones who created it.


Drafted By:

Creative Team of Academic Mantra Services

Disclaimer: All content and intellectual property remain the exclusive property of Academic Mantra Services.
