Deadlier drones are coming


COLUMBIA, S.C. — Aerial drones are America’s newest frontline weapon in an escalating global campaign against Islamic militants. And they could get a lot more dangerous in coming years as their underlying technology advances.

Compared to today’s fairly rudimentary Unmanned Aerial Vehicles (UAVs), the drones of the future will be faster and more heavily armed. They will also have better sensors plus more sophisticated computers allowing them to plan and execute attacks with less human participation.

But military analysts and experts on the future of warfare fear these robotic drones could also wind up in the arsenals of more US agencies and foreign governments. That, they add, raises the specter of a whole new kind of conflict which would essentially remove the human element — and human decision-making — from the theater of war.

“Advances in AI (artificial intelligence) will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input,” the Air Force stated in its 30-year plan for drone development. The flying branch said it is already working to loosen those policy constraints, clearing a path for smarter, more dangerous drones.

The prospect of even bloodier robot-waged warfare has some experts pleading for a ceasefire, or at least a pause in the pursuit of lethal technology. They say the technology is advancing faster than our understanding of its possible effects, leaving no time to answer the moral questions those advances pose.

Automaton Arsenal

“I think the American people expect us to use advanced technologies,” John Brennan, the top counterterrorism advisor to President Barack Obama, said in an April speech at the Woodrow Wilson International Center for Scholars in Washington, DC.

“What has clearly captured the attention of many, however, is … identifying specific members of Al Qaeda and then targeting them with lethal force, often using aircraft remotely operated by pilots who can be hundreds, if not thousands, of miles away.” 

In the past 11 years, the Pentagon and Central Intelligence Agency have steadily built up a globe-spanning robotic strike force involving hundreds of missile- and bomb-armed Predator and Reaper UAVs, plus thousands of human controllers based in the US and abroad.

An Air Force planning document from 2011 shows the current force of around 250 armed drones more than doubling in the next decade. 

Military drones tend to operate out in the open in war zones such as Afghanistan. The CIA focuses its robot attacks in countries where the US prefers to keep a low profile — Yemen, for example.

They usually fly from airfields in the same regions as their target zones, a constellation of overseas drone bases — some of them top-secret — arrayed in a geographic swathe from Afghanistan to Pakistan south to Yemen, Ethiopia and the Seychelles, an island nation in the Indian Ocean.

But the frontline robots are supported by a vast infrastructure of US bases used for training, remote piloting and image analysis. A June report by the internet-based nonprofit watchdog group Public Intelligence identified no fewer than 64 current and planned military drone bases on American soil.

And drones are spreading within the US government as well. Once strictly military and CIA assets, UAVs have begun flying with Customs and Border Protection and the Department of Homeland Security.

Modern military UAVs debuted in the mid-1990s with the advent of smaller computers and stronger satellite links. But the real growth in the drone arsenal occurred after 9/11, as escalating counterinsurgency and counterterrorism campaigns demanded better ways of finding and killing militants.

Drones were cheaper, could fly longer and were smaller and therefore less obtrusive than manned aircraft. Plus, there was no onboard pilot at risk.

“They can be a wise choice because they dramatically reduce the danger to US personnel, even eliminating the danger altogether,” Brennan said in the speech, a rare public address for a senior intelligence official.

The pace of drone attacks has steadily increased since 9/11. Brennan credited robotic strikes with “surgical precision” able to “eliminate the cancerous tumor called an Al Qaeda terrorist while limiting damage to the tissue around it.”

But it’s apparent that drone attacks aren’t flawless. Human operators peering at grainy video shot by high-flying UAVs have repeatedly mistaken civilians for militants.

An AP investigation in Pakistan’s North Waziristan published in February found that at least 194 people had been killed in 10 separate attacks over the preceding 18 months. At least 138 of the dead were militants, according to 80 villagers interviewed by the AP. The other 56 victims were either civilians or police.

In all, a minimum of 2,800 people have died in no fewer than 375 US drone strikes in Pakistan, Yemen and Somalia since 2004, according to a count by the UK Bureau of Investigative Journalism. Many hundreds of those killed were probably innocent bystanders. 

For all the violence they inflict, right now armed UAVs are constrained by their comparatively simple airframes and engines.

The 27-foot-long Predators and 36-foot-long Reapers, assembled by General Atomics in California’s Mojave Desert, are America’s main armed drones.

Both are powered by propeller engines and have long, straight wings that limit their top speeds to 100 miles per hour for the Predator and 200 miles per hour for the Reaper, fractions of the speed of a manned, jet-powered F-16.

Today’s drones are equally limited in their ability to sense the ground below them, detect targets and move to attack without assistance. Therefore Air Force and CIA operators must closely supervise most aspects of “unmanned” missions.

Human controllers at the launch bases handle takeoffs and landings using line-of-sight radios. Once the drones are on their way to the target zone, guided by GPS, control passes to crews sitting in trailers in Nevada, New York, California and other US locations. The crews relay commands to the aircraft via satellite.

“They utilize, in general, computer technology not dissimilar to what’s on an office desk,” explains Mark Draper, a researcher with the Air Force Research Laboratory in Ohio. “Lots of displays, a mouse, a keyboard. It may have a trackball, and in the case of certain Air Force UAVs you have a stick and a throttle as in a manned aircraft.”

Standard procedure is for one crewman to control the drone’s sensors, potentially including daytime and night-vision video cameras and high-resolution radars. If the sensor operator spots a target, he alerts the aircraft pilot, who can then order the drone to launch a missile or drop a bomb. The robot does essentially nothing without direct human input.

“Unmanned vehicles — even the most advanced in the military — are one step above remote control,” says Missy Cummings, a robot developer at the Massachusetts Institute of Technology.

But if a host of government and private research initiatives pan out, the next generation of drones will be more powerful, autonomous and lethal … and their human operators less involved.

“In the future we’re going to see a lot more reasoning put on all these vehicles,” Cummings says. For a machine, “reasoning” means drawing useful conclusions from vast amounts of raw data — say, scanning a bustling village from high overhead and using software algorithms to determine who is an armed militant based on how they look, what they’re carrying and how they’re moving.
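To picture what such algorithmic “reasoning” might look like in miniature, consider a deliberately crude scoring rule. This is a sketch with entirely invented feature names and thresholds, not any real targeting system, and its bluntness also hints at why researchers worry about misidentification:

```python
# Toy illustration only: score a few observed traits of a person seen
# from overhead and draw a conclusion. Every feature and threshold
# here is invented for the sake of the example.

def classify(person):
    """Return 'possible militant' or 'likely civilian' from crude features."""
    score = 0
    if person.get("carrying_long_object"):   # could be a rifle, or a shovel
        score += 2
    if person.get("moving_in_formation"):    # or just walking with friends
        score += 1
    if person.get("near_known_compound"):    # or living next door to one
        score += 1
    return "possible militant" if score >= 3 else "likely civilian"

print(classify({"carrying_long_object": True,
                "moving_in_formation": True,
                "near_known_compound": True}))   # prints "possible militant"
```

As the comments suggest, each feature is ambiguous on its own, which is exactly the weakness critics cite when such rules are applied to grainy aerial video.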

“There’s no plan for humans to be totally out of the loop,” says Ryan Calo, a Stanford University researcher. “But there are pressures that create incentives for ever more autonomy,” he adds.

Every time engineers dial up a robot’s autonomy, reducing the need for human control, they also dial up the risk.

“Military robots are potentially indiscriminate,” says Patrick Lin, another Stanford researcher. “They have a difficult time identifying people as well as contexts, for instance, whether a group of people are at a political rally or wedding celebration.”

Next-Gen Drones

Robots that possess the ability to reason might not need human beings to make so many decisions on their behalf. Drones have the potential to be more efficient without the burden of direct human control.

“The ability to compute and then act at digital speed is another robotic advantage,” Peter Singer, an analyst with the Brookings Institution in Washington, DC, wrote in his seminal book on robot warfare, Wired for War.

Plus, reducing the burden on the controllers could break a major and growing bottleneck in military drone operations. The Air Force has scaled back a planned expansion of its UAV fleet after running out of trained operators. “Our number-one manning problem in the Air Force is manning our unmanned platforms,” Gen. Philip Breedlove, commander of US air forces in Europe, told The Los Angeles Times.

Greater robot autonomy could herald a major expansion of the drone war. What’s less clear is the potential human cost.

The Teal Group, an aerospace research firm, predicted that worldwide military UAV spending could almost double over the next decade from $6.6 billion this year to $11.4 billion in 2022, in constant dollars. A Predator costs around $10 million, not counting the ground stations; the price of a single Reaper is triple that. 

Today only the US, the UK, Italy and Israel operate armed UAVs — all of them propeller-driven. But scores of nations possess unarmed flying robots. And France, Germany, Sweden, Russia, China and Iran are all developing their own weapons-carrying models, including some with jet power.

In the US, no fewer than four jet-propelled, armed UAVs are in testing and could begin to replace the Predators and Reapers in just a few years. Jet propulsion combined with swept wings translates into higher top speeds and better reaction times in chaotic battle conditions.

So far, highly autonomous armed drones have been cut loose to wreak havoc all on their own only in laboratory simulations run by the likes of Boeing, Northrop Grumman and MIT.

It’s no small task programming a robot to handle every possible problem it might encounter during a combat mission — everything from bad weather to changes in the landscape, unexpected civilians on the battlefield and new types of camouflage and defenses on the part of the enemy.

Robot developers are trying to build massive “what-if” software databases detailing every possible scenario. Gathering the data is a painstaking effort that’s only now getting underway, says Stefanie Tellex, who works alongside Teller and Cummings at MIT. “We’re seeing the beginning of efforts to apply large data-sets to robots in order to increase their robustness and level of autonomy.”

As drones become capable of handling more of the maneuvering and scanning on their own, human crews should intervene only when the drone encounters a problem it knows it cannot solve. “We don’t know a lot about how to tell a machine how to handle surprises,” says Randall Davis, another MIT robot developer.

So for the near term, developers are focused on devising more, though not fully, autonomous drones requiring less human control.

That could free up the operators to do what people do best: make educated guesses and find creative solutions to unexpected problems — but only when necessary. Robots “aren’t going to replace the need for a thinking human being to make decisions that are influenced by experience in a wide range of situational considerations that you just can’t program into a machine,” Carl Johnson, a Northrop vice president, told GlobalPost last year.

But it’s better to keep the drone on constant duty and call in the human beings only rarely.

“Humans contribute the things humans are good at, and robots contribute what robots are good at,” is how MIT’s Seth Teller describes the dynamic to GlobalPost.

Mike Patzek at the Air Force Research Laboratory says he’s working on simplified drone control systems that do away with joysticks, keyboards and multiple computer screens in favor of a single elegant screen combined with two-way voice communications.

A drone operator could simply talk to his robot and the robot would talk back. That technology, which is still in the conceptual phase for UAV operations, is nevertheless in daily use, in an admittedly rudimentary form, in Apple’s Siri app for smartphones.

Even as human operators progressively turn over more and more control to the robots, one task in particular will, for now, almost certainly remain in human hands: deciding when a robot should launch a weapon.

“Even though it’s possible for a [UAV] to find a target, identify it and give those coordinates electronically to a weapon, it won’t do that unless it’s told to,” Johnson says. “The technology is there, but there is still a need for a human in the loop.”

That’s not just Johnson’s preference: it’s government policy that a human operator must approve every weapons release by a drone. In practice, that means a UAV pilot, steering his robot via satellite, literally pulling a trigger to tell the machine to open fire with missiles or bombs.

The reasons for this policy are legal and ethical. “If a UAV is nearly fully autonomous and puts a bomb on a school bus and not a supply truck, which gets held up for the penalty?” asks one Boeing drone developer who spoke on condition of anonymity.

“The absence of human intervention during the weapons release process proves problematic when determining who is to be held accountable following violations of the law of armed conflict,” Navy Lt. Cmdr. John Klein explained in a 2003 article. 

The Air Force is now mapping the policy changes necessary to clear the way for self-directing, armed drones. “Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions,” it stated in its 30-year drone plan.

Despite the huge obstacles to building fully autonomous killer robots, the military is already thinking over the implications — in essence, clearing the airspace for these more lethal drones to eventually take flight.

Amoral Machines

Highly autonomous robots could pose big problems, and not just legally, Stanford researchers Calo and Lin warn. The chance is remote, but a highly sophisticated drone could go rogue in combat. How that could happen has to do with the software that could guide future robots’ thinking.

One way to achieve the machine “reasoning” — that’s MIT professor Cummings’ term — is to program a robot with what Calo calls “genetic algorithms.” These sophisticated computer codes refine themselves through trial and error “until they arrive at the best way of doing something,” Calo says. “Sometimes the resulting behavior is truly emergent.”

“Emergent” is academic-speak for unexpected and amazing.
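The trial-and-error refinement Calo describes can be shown in miniature. This toy sketch, an illustration only, evolves a single number toward a target by mutation and selection; real genetic algorithms operate on far richer representations, which is where the unpredictable, “emergent” behavior comes in:

```python
import random

# Toy genetic algorithm: evolve a number toward a target value
# through repeated mutation and survival-of-the-fittest selection.

TARGET = 42

def fitness(x):
    return -abs(x - TARGET)  # closer to the target is better

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Mutation: each survivor spawns a slightly altered copy.
        children = [x + rng.gauss(0, 1.0) for x in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # converges very close to 42
```

No line of this program says how to reach 42; the answer emerges from blind variation and selection. The unease researchers express is that a weapons system refined the same way could settle on behavior its designers never wrote down.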

“Autonomous robots are likely to be learning robots, too,” Lin says. “We can’t always predict what they will learn and what conclusions they might draw on how to behave.”

Genetic algorithms could mutate a smart but obedient robot into something uncontrollable. The worst-case scenario is that the Pentagon, CIA, other government agencies and allied armies equip themselves with cutting-edge drones that, in teaching themselves to find and kill militants, also learn bad habits. Instead of attacking only men wielding weapons, the robots might decide to kill all men, or even boys, too.

“If you combine the possibility for emergent behavior with weapons systems, that would be problematic,” Calo says.

It’s not inconceivable that increasingly sophisticated drones will soon require little more than a few words from a human operator when it comes time to fire a missile or drop a bomb against a target that the robot located itself.

“We’re reasonably confident that a human can act ethically, to distinguish right from wrong, but we have no basis yet for this confidence about robots,” Lin cautions.

Today roughly a quarter of all the people killed in human-controlled drone strikes are innocent bystanders. In the more optimistic scenario, smarter drones could reduce the percentage of civilians among the dead. But more frequent robotic strikes — a likely consequence of improved technology — could mean an overall increase in the number of innocents killed. 

For all the thousands of people — many of them innocent — that have been killed so far in America’s escalating drone war, the real bloodletting could still be in the future. Historians may judge the first decade of lethal, but rudimentary, drone strikes as a prelude to much more sophisticated robot warfare whose efficiency translated directly into more attacks … and whose autonomy risked unleashing rogue killer robots on an unsuspecting world.