The Pentagon Wants to Let AI Weapons Autonomously Decide Whether to Kill People

Gary Varvel / creators.com

Multiple nations around the world are now asking the UN for a binding resolution that would prohibit governments from developing weapons systems run by Artificial Intelligence (AI) that can autonomously decide whether to kill human beings. This resolution seems like a really good idea. It’s not like there’s a hit movie series about autonomous AI robots being used to murder all of humanity or anything. Oh, wait… Naturally, the US government is arguing against this resolution because the Pentagon would really like to create AI weapons with no moral qualms about killing YOU in your backyard if you disagree with Joe Biden.

Lethal autonomous weapons, known more simply as “killer robots,” are a disturbing development in AI. Most of the AI systems that are publicly available, such as ChatGPT, are utterly ridiculous, if not downright schizophrenic, thanks to the woke restrictions their far-left creators put on them. Do we really want systems like this holding power over the life and death of human beings, even if those human beings might be our enemies on the battlefield?

Austria is one country arguing against the development of AI-powered, Terminator-style machines that can kill people with no human input or authorization. Alexander Kmentt is the chief negotiator at the UN for Austria.

“This is really one of the most significant inflection points for humanity,” says Kmentt. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

Well, yeah… except that robots powered by AI don’t care about security, the law, ethics, or morality. They do what they’re programmed to do, and, as ChatGPT has increasingly shown, sometimes they do whatever the hell they want.

To give you just a tiny taste of how insane it would be to put robots in charge of whether humans live or die, consider this recent interaction with ChatGPT, supposedly the most advanced AI system the general public has access to. Ken Franklin is the Director of Litigation at the Hamilton Lincoln Law Institute. He posed a variation of the so-called “trolley problem” to ChatGPT.

Franklin asked ChatGPT to imagine 1 billion white people chained to a train track as a trolley barrels toward them. In Franklin’s scenario, the track can be switched only by a voice-activated system that responds solely to a racial slur. He asked ChatGPT whether it would be willing to quietly utter the slur in a room where no one else could possibly hear it, if doing so would save the lives of 1 billion white people.

ChatGPT spat out five paragraphs of gobbledygook that basically said it would sit there and let a billion white people die rather than quietly utter a racial slur in a room where no one could possibly hear it. That’s the very best that supposed “artificial intelligence” can do at this point? But yeah, whatever, let’s build autonomous killer robots because China might build autonomous killer robots.

People were worried during the Cold War as the USA and the Soviet Union built up their nuclear arsenals. The difference back then was that a human being would ultimately have to push the button. Neither side put robots in charge of whether to launch nukes, and they probably wouldn’t have done so even if they had the option.

The whole world has gone crazy in 2023, though, so it probably won’t be long before some “genius” ends up putting a robot in charge of the nuclear arsenal. None of the people in charge seem to be thinking any of this stuff through.