Drone, Inc. - CONCLUSION: The Failure of Remote Control
“America does not take strikes to punish individuals; we act against terrorists who pose a continuing and imminent threat to the American people. Before any strike is taken, there must be near-certainty that no civilians will be killed or injured — the highest standard we can set.”
President Barack Obama, May 23, 2013285
“It takes a network to defeat a network.”
John Arquilla and David Ronfeldt, RAND Corporation286
Drones are an integral part of the massive new technology-driven intelligence, surveillance, and reconnaissance (ISR) system that is slowly transforming the way the U.S. goes to war. Commanders believe that this system of “network-centric warfare,” i.e., the network of sensors, aircraft, computers, and analysts, provides them with a precise way to find and eliminate alleged terrorists hiding in plain sight within the civilian population.
The methodology that the military follows is known as F3EAD: Find, Fix, Finish, Exploit, Analyze, and Disseminate.287
Find. The military tries to analyze electronic communications en masse. Its primary techniques are social network analysis and pattern-of-life analysis. The names and mobile phone serial numbers of potential targets are then added to special watch lists.
Fix. Drone sensors are programmed to log the activity of all mobile phones and radios within range and to check whether any of the devices on the watch lists have been turned on. If one is detected, the camera of the nearest drone can be automatically pointed at that area (a process sketched in code below).
Finish. Troops are sent to capture or kill the target. If a “positive ID” can be made and the risk to civilians is minimal, drones can be authorized to fire missiles when troops cannot be deployed.
The latter three steps, Exploit, Analyze, and Disseminate, involve collecting data from surveilled targets and from capture/kill operations to find leads for more targets and, finally, to create new target lists.
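To make the Fix step concrete, here is a minimal sketch, in Python, of the kind of watch-list matching and auto-cueing logic described above. Every name, identifier, and data structure in it is our own hypothetical stand-in chosen for illustration; it is not actual military software.

# Hypothetical sketch of the "Fix" step: match logged device identifiers
# against a watch list, then cue the nearest drone camera at any hit.
from dataclasses import dataclass

@dataclass
class Detection:
    device_id: str   # e.g., a mobile phone serial number (invented here)
    lat: float       # reported position of the device
    lon: float

# Watch list assembled during the "Find" step (entries are invented)
WATCH_LIST = {"35-209900-176148-1", "35-209900-882211-0"}

def scan(detections):
    """Return every logged device that appears on the watch list."""
    return [d for d in detections if d.device_id in WATCH_LIST]

def cue_camera(hit):
    """Stand-in for auto-cueing: point a camera at the hit's location."""
    print(f"CUE camera -> device {hit.device_id} at ({hit.lat}, {hit.lon})")

logged = [
    Detection("35-209900-176148-1", 32.61, 69.42),  # on the list
    Detection("49-015420-323751-8", 32.62, 69.40),  # not on the list
]
for hit in scan(logged):
    cue_camera(hit)

Note what even this toy logic takes on faith: that the serial number on the watch list still belongs to the target, and that the reported position is right. If either assumption fails, the camera is cued onto the wrong person with exactly the same confidence as a correct match.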
The first problem with this “network-centric war” is that the hardware sensors often don’t work properly, as we have shown. The image quality is not good enough to determine the gender of targets, let alone their identities. Thermal imagery sensors often miss entire people. Location data from these sensors is often missing altogether, making imagery hard to archive and search. Phone numbers for targets are not always accurate. And inherent errors in calculating the positions of the surveillance drones themselves throw off their ability to triangulate the targets below. Thus, even under the best circumstances, target geolocation data can be off by several meters.
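How far off? A deliberately simplified calculation (ours, not the military’s actual targeting geometry, which is classified and far more complex) shows how small pointing and positioning errors compound. All the numbers below are assumptions chosen for illustration.

import math

# Illustrative error propagation: a drone at altitude h looks down at
# depression angle theta; the target's horizontal offset from the point
# directly beneath the drone is h / tan(theta).
def ground_offset(altitude_m, depression_deg):
    return altitude_m / math.tan(math.radians(depression_deg))

h = 5000.0           # drone altitude in meters (assumed)
theta = 45.0         # camera depression angle in degrees (assumed)
pointing_err = 0.1   # a mere tenth of a degree of pointing error
gps_err = 3.0        # drone's own GPS error in meters (assumed)

shift = abs(ground_offset(h, theta - pointing_err) - ground_offset(h, theta))
print(f"pointing error alone moves the target ~{shift:.0f} m")
print(f"with the drone's own GPS error: ~{shift + gps_err:.0f} m total")

With these assumed numbers, a tenth of a degree of pointing error alone shifts the computed target position by roughly 17 meters, before any error in the drone’s own position is counted.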
The drone war is also heavily dependent on computer databases that contain hundreds of thousands of entries drawn from arrests, informant tips, and biometric data gathered in numerous ways. Yet such databases are routinely inaccurate: in a March 2017 report to Congress, the Government Accountability Office found that searches of the FBI’s facial recognition database returned the wrong person roughly 15 percent of the time, and that black people were subject to even higher misidentification rates.288 The same inaccuracy holds for the notorious no-fly lists, which have even included members of Congress and military veterans.289 It is hardly likely that a database of Afghan or Yemeni citizens would be more accurate.
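There is a deeper, structural reason such databases mislead: base rates. When genuine targets are rare, even a matcher with a seemingly low error rate produces mostly false alarms. The figures below are entirely hypothetical, chosen only to show the arithmetic.

# Hypothetical base-rate arithmetic (all figures invented for illustration)
population = 1_000_000      # people in a surveilled database
true_targets = 100          # actual targets among them
hit_rate = 0.90             # matcher flags 90% of real targets
false_positive_rate = 0.01  # and wrongly flags just 1% of everyone else

true_hits = true_targets * hit_rate                             # 90
false_hits = (population - true_targets) * false_positive_rate  # 9,999

share_wrong = false_hits / (true_hits + false_hits)
print(f"{false_hits:,.0f} innocents flagged vs. {true_hits:.0f} real targets")
print(f"{share_wrong:.1%} of all matches are wrong")

Under these assumptions, more than 99 percent of the people the system flags are innocent, even though the matcher catches 90 percent of real targets.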
Next, there is the problem of faulty algorithms and software. Using radar to find a ship on a wide-open ocean is easy, as is detecting an intruder at a gate. Yet to this day, determining the precise location of a mobile phone, or even spotting tanks on the ground from the air, remains challenging. Given how easily cameras can be auto-cued onto the wrong phone, it is no wonder that so many individuals have been reported “killed” multiple times, and that the bodies of children are regularly found in the debris after a drone strike.
The military often refers to the surveillance system, including drones, as the “Unblinking Eye.” But if DCGS, the heart of this system, is unavailable two-thirds of the time, and if most users don’t understand how to operate the complex, unwieldy beast, the number of cameras in the sky makes little difference.
Using mathematical models like Greedy Fragile to identify and destroy a network also won’t work. Pattern recognition from electronic data is deeply problematic, especially when applied to a foreign culture with tribal networks that stretch back centuries. It is also easy to confuse a politician with an insurgent, or a weapons smuggler with a cigarette smuggler. Even hitting the “right” target might actually make a network more dangerous if it causes an insurgency to splinter or crueler leaders to take control. Several studies conducted by Rex Rivolo of the Institute for Defense Analyses, the Pentagon’s think tank, have documented these outcomes in Afghanistan and Colombia. Rivolo quoted a U.S. soldier he met in Iraq who told him: “Once you knock them off, a day later you have a new guy who’s smarter, younger, more aggressive and is out for revenge.”290
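Rivolo’s observation can be reproduced in miniature. The short sketch below uses the open-source networkx library to remove the best-connected node from a small scale-free network, a common stand-in for social networks. The Greedy Fragile model itself is not public, so this is only a generic illustration of hub-removal targeting, not the Pentagon’s algorithm.

# Generic hub-removal illustration using networkx (not Greedy Fragile,
# which is not public). Removing the top hub rarely destroys a network.
import networkx as nx

G = nx.barabasi_albert_graph(n=50, m=2, seed=42)  # toy social network

hub, degree = max(G.degree, key=lambda nd: nd[1])
print(f"top hub: node {hub} with {degree} connections")

G.remove_node(hub)  # the "strike"

new_hub, new_degree = max(G.degree, key=lambda nd: nd[1])
print(f"network still connected: {nx.is_connected(G)}")
print(f"new hub: node {new_hub} with {new_degree} connections")

In networks of this shape, the “decapitated” graph typically remains connected and a new hub takes the old one’s place, which is precisely what the soldier quoted above described.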
Finally, the skillset of those hired to watch these so-called dark networks virtually guarantees confirmation bias: the majority (enlisted soldiers straight out of high school) have no cultural tools to assess the data they gather, while the for-profit contractors depend for their jobs and salaries on producing intelligence, even when it is less than adequate.
Can the technology be improved, and is that, in any case, really the right question? The biggest hurdle in the drone war isn’t better cameras, more aircraft, or faster data transmission; those might be solved with money and time. A much bigger problem is the lack of understanding of the assumptions behind the algorithms used to seek out targets, and the failure to honestly assess the quality of the raw data and auto-cueing systems.
Technology Cannot Replace Human Intelligence
Despite these problems, the drone/surveillance program has many supporters, who cite examples like JIEDDO, which combined drone video with mobile phone network data to track down the individuals planting bombs on Iraqi roads to target U.S. vehicles. The program worked very well, but it had a crucial component that the drone campaigns in Pakistan and Yemen lack: soldiers who could meet with tribal leaders, kick down doors, and interrogate people.
As Lt. Col. John Nagl, a retired Army battalion commander who helped write the new military counterinsurgency field manual, asked Wired magazine: “The police captain playing both sides, the sheikh skimming money from a construction project, what color are they?” referring skeptically to the color-coded targets on a computer map.291
Some battalion commanders are very much aware that no matter how much data drones deliver, the machines have their limits. “The enemy will be located not by satellites and UAVs (unmanned aerial vehicles), but by patient intelligence work, back alley payoffs, collected information from captured documents, and threats of one-way vacations to Cuba,” Maj. Gen. Robert Scales, former commandant of the U.S. Army War College, told a U.S. congressional hearing. “If I know where the enemy is, I can kill it. My problem is [that] I can’t connect with the local population.”292
Without a presence on the ground, sensor-led intelligence is dangerous. The argument that poor video can be fixed with good phone locations falls apart given that phone numbers are frequently swapped around. The bigger problem is confirmation bias, which cross-sensor cueing exacerbates, as do cursor-on-target systems. As the cliché goes, if all you have is a hammer, everything looks like a nail.
Even drone contractors admit the problem with systems like DCGS. “You cannot automate analysis; that is judgment—it would be like automating a jury,” Patrick Biltgen of BAE told Military Geospatial Technology magazine.293
As journalists, we know only too well how hunches are often wrong and need to be carefully double-checked. Then there is the problem of red herrings, laid deliberately to frame certain individuals or to inflame tensions. None of these difficulties can be overcome by peering from two miles up through a virtual soda straw.
That is not to say that drones don’t work at all. While they cannot replace on-the-ground research, drones can, first, clearly provide good overwatch for soldiers in the field. Second, they can be used to gather raw data for future analysis in remote areas to which the military has no quick or easy access.
Third, the drone program ensures that U.S. soldiers, fighting from the other side of the world, never come to physical harm. Finally, there is no doubt that a laser-guided Hellfire missile launched from a Predator or Reaper can easily target and kill a specific person or destroy a specific building, if that location is established beyond the shadow of a doubt.
Despite these limitations, when commanders observe suspected insurgents through a screen, they place far more faith in the ability of the technology inside the machine to discern truth and to make life-or-death decisions than they would in an individual human.
“They want to apply the technology without the brainpower. The difficulty is that those who put forth this argument believe that something fundamentally has changed, and you can change very quickly without thinking your way through it,” Lt. Gen. Paul Van Riper, former president of the Marine Corps University, told PBS. “Nothing has happened that’s going to change the fundamental elements of war.”294
Auditing the Drone Program
There are very few publicly available audits of the overall effectiveness of drones, given their clandestine use. One of the only detailed assessments is an eight-year-long investigation of Predator surveillance of the U.S.-Mexico border (referenced earlier) conducted by the inspector general of the Department of Homeland Security (DHS). It concluded that, given the drones’ low rate of detection of border crossers, the government would be much better off investing in alternatives such as manned aircraft and ground surveillance.
“We see no evidence that the drones contribute to a more secure border, and there is no reason to invest additional taxpayer funds at this time,” Inspector General John Roth concluded in a January 2015 press statement.295
Pressed for an explanation as to why the program had failed to meet its own goals, DHS continued to insist that the drones had been worth the money. “We are working on metrics which have never been done before,” Randolph Alles, assistant commissioner in charge of the Office of Air and Marine, told a July 2015 hearing of the Homeland Security Subcommittee on Border and Maritime Security. “How do you characterize air support? How do you characterize the effectiveness of an aircraft for surveillance? How do you put a dollar value on it?”296
Yet Homeland Security had set out crystal clear goals when starting up the program in 2004: increased apprehensions of illegal border crossers, a reduction in border surveillance costs, and improvement in the Border Patrol’s efficiency. None of these goals were met.
Could the same be true of the targeted killing program? On the face of it, one might assume that the two programs have different goals. Yet border drones, with only one simple task—identifying people crossing a clearly defined border—failed miserably. How then could the same sensors on the very same aircraft identify terrorist plots and plotters across an entire region?
A 2012 audit of Air Force surveillance programs by the House Permanent Select Committee on Intelligence complained that the military was measuring the success of its systems in much the same way as Homeland Security.297
Instead of measuring how many high-value targets are caught, the Pentagon “tends to measure outputs, e.g., how long can a platform stay on station, what is the resolution of the sensor’s imagery, or even how many requests were made for a given sensor’s data and anecdotal evidence about what is useful or not in theater,” the auditors wrote.
And then there is the problem of the size of the data haystack the drones are producing. Publicly released studies suggest that the deluge of data they collect has not proven very useful, if only because there is too much flowing back to the analysts.
“The rapid proliferation of sensors both enables and overwhelms the current ISR infrastructure,” concluded the Defense Science Board in its 2008 study. “The number of images and signal intercepts are well beyond the capacity of the existing analyst community so there are huge backlogs for translators and image interpreters and much of the collected data are never reviewed.”298
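The shape of that backlog is easy to sketch. All of the figures below are our own hypothetical assumptions, not official numbers; the arithmetic, not the inputs, is the point.

# Hypothetical throughput arithmetic for the analyst backlog
feeds = 60           # simultaneous full-motion video feeds (assumed)
hours_per_feed = 20  # hours each feed records per day (assumed)
collected = feeds * hours_per_feed   # 1,200 hours of video per day

analysts = 100       # imagery analysts on shift (assumed)
hours_each = 6       # hours of video one analyst can screen per day
reviewed = analysts * hours_each     # 600 hours per day

backlog = collected - reviewed
print(f"collected {collected} h/day, reviewed {reviewed} h/day")
print(f"never seen: {backlog} h/day ({backlog / collected:.0%})")

Under these assumptions, half the video is never watched by anyone, which is consistent with the Defense Science Board’s finding that much of the collected data goes unreviewed.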
Even when reviewed, the data can simply be wrong. Several official studies have clearly shown that drones make major mistakes because of bad data. Dr. Larry Lewis, formerly with the Center for Naval Analyses, conducted a study of civilian casualty incidents in Afghanistan in which ground troops conducted battle damage assessments following airstrikes. He identified civilians killed or wounded in 21 cases. Yet in 19 of those 21 cases, preliminary evaluations conducted via drone video cameras had identified no civilian casualties. Though his study remains classified, Lewis has spoken publicly about his findings.
“The fact that I had been looking at air operations in Afghanistan for a number of years led me to suspect that what I found was in fact the case,” Lewis told the Guardian newspaper.299
In 2015, The Intercept published details from several other studies leaked by a whistleblower. One showed that during a five-month period in eastern Afghanistan, more than nine out of every 10 people killed in U.S. drone strikes weren’t the intended targets.300
In his investigation into the Uruzgan strike that killed 23 innocent villagers, including women and children, Maj. Gen. James Poss concluded: “Technology can occasionally give you a false sense of security that you can see everything, that you can hear everything, that you know everything.”301 Vicki Divoli, former deputy legal advisor to the CIA’s Counterterrorism Center, was even more succinct: “Intelligence is not evidence.”302
But one doesn’t even have to investigate the killing program in Pakistan and Yemen to show that drones can’t accurately find people. New York Times reporter David Rohde can personally testify to this. He was kidnapped near Kabul and imprisoned for seven months in Waziristan before he escaped in June 2009.
“My family said U.S. officials had told them that they searched exhaustively for me with drones, but had been unable to locate me,” Rohde wrote in The Atlantic magazine. “When I met U.S. officials, they told me that they had not known I was being held prisoner in the house close to [a] drone strike in Makeen.”303
Warren Weinstein and Giovanni Lo Porto were not as lucky. Like Rohde, they were never located by the U.S., even after three years in captivity in the Pakistan borderlands patrolled by drones. In January 2015, the two hostages were killed when drone commanders signed off on a strike after surveillance data indicated that no civilians were present.
So why use drones? “The drone’s unique characteristic — that it is piloted from the ground not the air — cloaks it in a technology that seems to intrigue policy makers. It gives them a self-perceived license to employ the system over ambiguous or hostile territory such as Pakistan and Iran,” Winslow Wheeler of the Center for Defense Information in Washington wrote in Time magazine after reviewing studies from DOT&E. “The wide and enthusiastic popularity for … drones, in the Defense Department, the Executive branch, Congress, the mainstream media and think tanks is not rationally explained by Reaper’s poor to mediocre performance.”304
Indeed, Wheeler notes that drones are typically twice as expensive to build as piloted aircraft and four times as expensive to maintain because of the enormous number of ground personnel required to support them. (The studies also found that drones are more vulnerable to attack and more prone to crashes—topics that are beyond the scope of this report.)
There do appear to be some internal (and likely classified) measurements of the effectiveness of drone surveillance. At least four contractors—Booz Allen, IBM, Northrop Grumman and RadiantBlue—claim to have developed tools for this precise purpose, mostly to identify cost savings, but also to aid in planning.305
Phil Eichensehr of RadiantBlue told congressional staff that their BlueSim product was being used by the Joint Staff to “model technical performance of ISR platforms and sensors.” Frank Strickland of IBM claimed that its tool could be used to “evaluate the effectiveness of each platform and sensor for counterinsurgency.”
A set of leaked slides from IBM analyses, published by The Intercept, provides further proof that these tools have been used internally. One of the slides offered a tantalizing, if obscure, clue: a statistical breakdown of which tracking technologies were used to locate drone targets.306 While the numbers themselves don’t mean much, the slides’ existence suggests that the Pentagon has kept track of its use of surveillance technology and has even shared that data with outside analysts. It should also be provided to independent human rights observers.
Contractor-driven studies may not be as critical as one might hope, but there are several other government mechanisms by which the targeted killing system can be held to account. That is the subject of our final chapter.