War by algorithm: As AI & robotics enter our defense budgets, what will our battlefields look like?

As Artificial Intelligence (AI) and robotics redefine the battlefield, how should we feel about it?

Governments around the world are welcoming AI into their defense budgets. And AI companies are only too ready to provide.

Germany plans to almost triple its regular defense budget to about US$175 B per year by 2029, with much of the money going toward AI-powered robots, unmanned mini-submarines, and battle-ready spy cockroaches.

When the stakes are high, and war becomes imminent, governments will become open to using their experimental AI and robotic weapons. Treaties and bans will come later, after the destruction is done.

Top Chinese research institutions linked to the People's Liberation Army made use of Meta's publicly available Llama model to develop an AI tool for potential military applications. AI firm DeepSeek has been aiding China's military and intelligence operations, according to a Reuters report, which added that the Chinese tech startup sought to use Southeast Asian shell companies to access high-end semiconductors that cannot be shipped to China under US rules.

The Guardian reported that the Israeli military made a tool similar to ChatGPT using Palestinian surveillance data. The model used telephone and text conversations to form a kind of surveillance chatbot that can answer questions about the people it’s monitoring or the data it’s collected. This report came in the wake of other reports that suggested the Israeli military is heavily leaning on AI for its information-gathering and decision-making efforts.

Some misuse has also occurred. In June, OpenAI released a report saying its generative AI products had been misused by bad actors in countries including Russia, China, Iran, and Israel for influence operations. That same month, a former Meta engineer claimed in a lawsuit that Meta fired him for attempting to fix bugs that were suppressing Palestinian Instagram posts.

At the same time, according to Reuters, OpenAI, Alphabet's Google, Anthropic, and Elon Musk's AI firm xAI recently won contracts of up to US$200 M each from the US Department of Defense. The aim is to accelerate the adoption of advanced AI capabilities in the DoD so that it can develop agentic AI workflows and use them to address critical national security challenges.

Palmer Luckey, founder of the virtual-reality headset company Oculus and of Anduril, which builds drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense, recently said he is quite convinced that the military, not consumers, will see the value of mixed-reality hardware first.

“You’re going to see an AR headset on every soldier, long before you see it on every civilian,” he said in an interview with the MIT Technology Review.

Luckey says the military is the perfect testing ground for new technologies, since soldiers do as they are told and don't think like consumers. Cost isn't a problem either, since militaries are willing to pay a premium for the latest version of a technology.

Recently, ex-Google CEO Eric Schmidt called for the military to adopt and invest more in AI to stay ahead in military matters. Militaries all over the world have been very receptive to this message.

What does the public think of this kind of AI adoption in defense?

People are wary of using AI for military purposes. Back in 2018, Google had to pull out of the Pentagon's Project Maven, an attempt to build image recognition systems to improve drone strikes, when its staff walked out over the ethics of the technology.

No wonder, then, that despite organizational adoption of AI, consumer acceptance still faces barriers. According to research, public opinion about AI is polarized: some social media coverage frames AI in a highly optimistic light, depicting it as a necessary and unstoppable future, while other popular news on the same platforms highlights its flaws and threats to society.

But research also shows that high decision stakes lead consumers to prefer human advice over AI advice, an effect most visible in medical, legal, and retail decisions. AI in combination with human agents shows mixed but more positive results.

There is always some horror when new technology enters the battlefield. And it's not just nuclear bombs. Even though we remember how agonizing mustard gas was in World War I, napalm was still used in the Vietnam War. History suggests experimental AI and robotic weapons will follow the same path.

Navanwita Bora Sachdev

Navanwita is the editor of The Tech Panda who also frequently publishes stories in news outlets such as The Indian Express, Entrepreneur India, and The Business Standard
