
Are you ready for weapons that call their own shots?

Had the child come into the sights of the kind of autonomous robot or drone now under development, rather than those of trained snipers, it might not have made the distinction between target and child, and killed her, according to Paul Scharre, who was leading the Rangers that day.

Scharre, author of “Army of None: Autonomous Weapons and the Future of War,” recounted this episode in a speech this year at Stanford’s Center for International Security and Cooperation, laying out the stakes as the artificial intelligence revolution spreads further onto the battlefield.

“How would you design a robot to know the difference between what is legal and what is right?” he asked. “And how would you even begin to write down those rules ahead of time? What if you didn’t have a human there, to interpret these, to bring that whole set of human values to those decisions?”


For now, these are hypothetical questions. Two senior Pentagon officials, who spoke to The Times on background because much of their work on artificial intelligence is classified, say the United States is “not even close” to fielding a completely autonomous weapon.

But three years ago, Azerbaijani forces used what appeared to be an Israeli-made kamikaze drone called a Harop to blow up a bus carrying Armenian soldiers. The drone can automatically fly to a site, find a target, dive down and detonate, according to the manufacturer. For now, it is designed to have human controllers who can stop it.

Not long after that, in California, the Pentagon’s Strategic Capabilities Office tested 103 unarmed Perdix drones that, on their own, were able to swarm around a target. “They are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,” the office’s director at the time, William Roper, said in a Defense Department statement.

The Estonian-made Tracked Hybrid Modular Infantry System looks like a mini-tank with guns, a camera and the ability to track down and fire on a target. It is human-controlled, but its maker says it is “autonomous and self-driving with growing AI assistance.”

“It is well within the means of most capable militaries to build at least relatively crude and simple autonomous weapons today,” Scharre said in an email exchange after his speech.


As the ability of systems to act autonomously increases, those who study the dangers of such weapons, including the U.N. Group of Governmental Experts, fear that military planners may be tempted to eliminate human controls altogether. A treaty has been proposed to prohibit these self-directed lethal weapons, but it has drawn only limited support.

The proposed ban competes with growing acceptance of the technology: at least 30 countries operate automated air and missile defense systems that can identify approaching threats and attack them on their own, unless a human supervisor stops the response.

The Times of Israel has reported that an Israeli armed robotic vehicle called the Guardium has been used on the Gaza border. The U.S. Navy has tested and retired an aircraft that could autonomously take off and land on an aircraft carrier and refuel in midair.

Britain, France, Russia, China and Israel are also said to be developing experimental autonomous stealth combat drones to operate in an enemy’s heavily defended airspace.

The speed with which the technology is advancing raises fears of an autonomous weapons arms race with China and Russia, making it more urgent that nations work together to establish controls so humans never completely surrender life-and-death choices in combat to machines.


The senior Pentagon officials who spoke to The Times say critics are unduly alarmed and insist the military will act responsibly.

“Free-will robots is not where any of us are thinking about this right now,” one official told me.

What they are exploring is using artificial intelligence to enable weapons to attack more quickly and accurately, provide more information about chaotic battlefields and give early warning of attacks. Rather than increasing the risk of civilian casualties, such advances could reduce those deaths by overcoming human error, officials say.

The United States, for instance, is exploring using swarms of autonomous small boats to repulse threats to larger Navy ships. Yet Pentagon officials say U.S. commanders would never accept fully autonomous systems, because it would mean surrendering the intelligence and experience of highly trained officers to machines.

While artificial intelligence has proved a powerful tool in numerous fields, it has serious vulnerabilities, like computer hacks, data breaches and the possibility that humans could lose control of the algorithm.


Only 28 countries have supported the call for a treaty banning such weapons, which the Campaign to Stop Killer Robots, an international coalition of more than 100 nongovernmental organizations, began working for in 2012. In March, U.N. Secretary-General António Guterres said that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.” He called on a U.N. group to develop means, by treaty, political pressure or strict guidelines, to make that happen.

That work is futile unless the United States and other major powers lead the way.

This article originally appeared in The New York Times.
