Robots on the Battlefield

There are many indications that battlefield robots will soon be a reality.

Robots on the battlefield can be lethal and, unlike soldiers, repairable. Their use is being debated at the UN, particularly since some will soon be given decision-making capabilities. Currently, deploying a robot in the field still relies on a soldier with a remote control, and even then only at ranges of up to about 800m (2,600ft).

But what happens when we eventually take humans out of the loop? Many critics worry that marrying advances in robotics, weaponry and the miniaturisation of microelectronics with developments in neural network-based artificial intelligence (AI) could lead to a Terminator-style scenario.

 

Robots like the MAARS will be increasingly common on future battlefields, one way in which sci-fi films like The Terminator appear to have predicted the future.
The Modular Advanced Armed Robotic System (MAARS®) from QinetiQ North America is one such robot: an unmanned ground vehicle (UGV) that is powerful, modular and combat-ready. MAARS is designed expressly for reconnaissance, surveillance, and target acquisition (RSTA) missions to increase the security of personnel manning forward locations. It can be positioned in remote areas where personnel are currently unable to monitor their security, and can also carry either a direct- or indirect-fire weapon system.

This miniature tank is only one metre long. It is one of a host of unmanned air, sea and land vehicles being used by militaries to aid soldiers in reconnaissance or to go into heavily booby-trapped areas where it might be too risky to send in troops.

Agile and combat-ready, MAARS is a technological breakthrough. It enables the remote emplacement of RSTA sensors in critical locations to provide early warning, while allowing an immediate response if required. Remotely controlled by an operator equipped with a lightweight, wearable control unit, MAARS features multiple onboard day and night cameras, motion detectors, an acoustic microphone, a hostile-fire detection system, and a speaker system with a siren to provide situational awareness and raise the alarm.

 

‘Fire and forget’
Already, a few autonomous weapons are able to decide whether or not to attack a target. One such application is Israel’s Harpy drone, which its manufacturer, IAI, calls a “fire and forget” autonomous weapon – or a loitering munition. Once launched from a vehicle, the Harpy missile loiters over an area until it finds a suitable target, say an enemy radar. Crucially, once a radar is detected it is the drone itself that decides whether or not to attack. Harpy is only launched if its operators suspect there are enemy radars to be attacked, but automation like this is likely to become more common.

 

The problem is that an enemy tank looks a lot like a friendly tank. Civilians and enemy soldiers also look pretty similar. Where do we draw the line?
The real obstacle to the more widespread use of what some call “killer robots” is getting them to tell friend from foe. “Militaries are not going to want to deploy something on the battlefield that might accidentally go against their own forces,” agrees General Larry James, the US Air Force’s deputy chief of staff for intelligence. “We are years and years away, if not decades, from having confidence in an artificial intelligence system that can do discrimination and make those decisions.”

That said, more than 90 countries now operate such systems, and the industry is estimated to be worth $98bn (£58bn) between 2014 and 2023. The USA remains the prime market, but many countries are building up their indigenous unmanned-systems capabilities.

 

Unmanned air vehicle spending, 2014-23

 

It is the United States that dominates the market for Unmanned Air Vehicles (UAVs) – over the next decade it is forecast to spend more than three times as much on UAVs as China.
UAVs also account for the bulk of spending on unmanned systems as a whole – budgets for land- or sea-based systems are small by comparison. Despite the difficulties in developing this technology, the US defence department’s own 25-year plan, published last year, says that “unmanned systems continue to hold much promise for the war-fighting tasks ahead”. It concludes that, once technical challenges are overcome, “advances in capability can be readily achieved… well beyond what is achievable today”.


US drones have now dropped more bombs than NATO jets did in Kosovo in 1999 – pictured, a drone’s cold-eye view of a target.

One of those at the forefront of this research is Sanjiv Singh, a robotics professor at Carnegie Mellon University and chief executive of Near Earth Autonomy. Working for the US military and Darpa, the US defence research agency, his team has successfully demonstrated an unmanned autonomous helicopter. Using remote-sensing LiDAR (Light Detection and Ranging) lasers, the aircraft builds terrain maps, works out safe approaches and selects its own landing sites – all without a pilot or operator. The team is now developing autonomous helicopters for the military that could carry cargo or evacuate casualties, a huge step away from traditional drones which, Singh says, “are driven by GPS-derived data”.
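To give a flavour of what that kind of terrain analysis involves, here is a minimal sketch in Python. It is purely illustrative and not Near Earth Autonomy’s software: it assumes the LiDAR returns have already been rasterised into an elevation grid, and simply picks the flattest patch large enough to land on. The function name, window size and roughness threshold are invented for the example.

import numpy as np

def pick_landing_site(elevation, window=5, max_roughness=0.15):
    # elevation: 2D array of terrain heights in metres (e.g. a rasterised LiDAR scan)
    # window: side length, in grid cells, of the patch the helicopter needs
    # max_roughness: reject patches whose height varies by more than this (metres)
    rows, cols = elevation.shape
    best, best_score = None, float("inf")
    for r in range(rows - window + 1):
        for c in range(cols - window + 1):
            patch = elevation[r:r + window, c:c + window]
            roughness = float(patch.max() - patch.min())  # zero for perfectly flat ground
            if roughness < max_roughness and roughness < best_score:
                best, best_score = (r + window // 2, c + window // 2), roughness
    return best  # centre cell of the flattest acceptable patch, or None

# Usage: bumpy synthetic terrain with one flat plateau the selector should find.
rng = np.random.default_rng(0)
terrain = rng.random((40, 40)) * 2.0   # bumps of up to 2 m
terrain[10:20, 25:35] = 5.0            # a flat, elevated clearing
print("candidate landing cell:", pick_landing_site(terrain))

A real system would of course also weigh slope, obstacles, approach and abort paths, and sensor noise before committing to a landing site.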

“If you make a mistake in the flight plan, then they’ll drive happily into a mountain if it’s in the way,” Singh says. With all the money being put into this sector, autonomous weapons systems will eventually be developed, argues independent defence analyst Paul Beaver. The problem is that, like nuclear weapons, such technology cannot simply be uninvented, and alarmingly it is not only rogue states that may want it. “I think we’re about a decade away from organised crime having these sorts of systems and selling them to terrorist groups.”

Even though the technology for “killer robots” does not yet properly exist, campaigners say the world needs to act now. “There are so many people who see these systems as inevitable and even desirable, we have to take action now in order to stop it,” says Stephen Goose of Human Rights Watch. “There’s an urgency, even though we’re talking about future weapons.” Yet by focusing on military uses of autonomous drones, we might be missing the bigger threat that increasingly sophisticated artificial intelligence may pose. “The difficulty is that we’re developing something that can operate quite quickly and we can’t always predict the outcome,” says Sean O’Heigeartaigh of Cambridge University’s Centre for the Study of Existential Risk.

In 2010, for instance, computer trading algorithms contributed to the “flash crash” that briefly wiped $1 trillion off stock market valuations. “It illustrates it’s very hard to halt these decision-making processes of computers, because these systems operate a lot quicker than human beings can,” he says.

Paradoxically, it is the civilian and not the military use of AI that could be the most threatening, he warns. “As it’s not military there won’t be the same kinds of focus on safety and ethics,” says Dr O’Heigeartaigh. “It is possible something may get out of our control, especially if you get an algorithm that is capable of self-improvement and re-writing its own code.” In other words, maybe it is not killer robots we have to worry about, but killer computers that control them.

Although such AI developments may take many decades, the machines designed to keep us safe could ultimately become the biggest threat to mankind.