On this week’s episode of Unauthorized Disclosure, Kevin Gosztola and I spoke with Mary Wareham of Human Rights Watch about autonomous killer robots.
(You can download the episode here or subscribe for free on iTunes here)
Wareham is the advocacy director of HRW’s arms division, where she leads the global coalition to stop killer robots. She recently co-authored the report, “Shaking the Foundations: The Human Rights Implications of Killer Robots.”
Speaking to us from a meeting on lethal autonomous weapon systems convened by the United Nations in Geneva last week, Wareham described the ethical and legal challenges posed by killer robots, which are in development now.
“Our concern is what happens when you take the human out of the loop, out of that decision-making loop, and the system decides for itself what to target and how to fire,” explained Wareham. “Plus there’s this bigger issue when the human is removed from the loop as to what is the command and control over it. Is the sergeant still responsible for the device? Is the manufacturer responsible for the device? Is there product liability there? Is it the programmer’s responsibility? And that’s not really resolved yet.”
For the discussion portion of the show, Kevin and I talked about the growing demands for transparency around lethal injections, overwhelming American support for the death penalty, a survey that found 4 in 10 people around the world believe they would be tortured by their government if detained, updates about Guantanamo force-feedings, and the mainstream media’s shameful distortion of the Palestinian Nakba.
Below is a partial transcript of our interview with Wareham.
GOSZTOLA: Can you give us an overview of the main points or highlights of this Convention on Conventional Weapons (CCW) meeting? Anything significant that you took away that you’re going to be working on?
WAREHAM: We launched a global coalition called the Campaign to Stop Killer Robots a year ago in April. Human Rights Watch is the coordinator of that, and that’s my task. And we called for nations to start discussing the challenge of autonomous weapon systems that would be able to do two things: select targets and use force without any human control.
These are weapons that do not really exist yet. We’re calling for a preemptive ban on their development, use and production. And we’re doing that because of the research that roboticists, scientists and Human Rights Watch have done looking into planning documents and trends in military robotics, which indicate that the technology is heading in that direction.
So we launched the campaign a year ago, and last November nations agreed under this particular treaty, the Convention on Conventional Weapons, to start discussing it. It was surprising to us that they would agree that quickly to begin discussing what they called an emerging challenge. And they wanted to talk about questions related to the legal, technical, ethical and operational or military concerns that have been raised about these weapons.
It’s not just Human Rights Watch and the campaign. There was an influential report by a UN Special Rapporteur just a month after we launched, in May last year. It was delivered to the Human Rights Council, and that report also expressed many concerns about what it called lethal autonomous robots. So nations decided last November to start talking about it this year, and these were the talks I’ve just been attending.
It was a four-day “meeting of experts,” as they called it, but if you walked into the room it was a huge plenary room with 87 countries represented, plus the Red Cross, the UN agencies and non-governmental organizations down the back of the room. They talked for four days. They talked through all of these different challenges and aspects related to autonomy in warfare and autonomous weapons.
They didn’t come to any conclusions because it was not that kind of a meeting. It was an informal meeting of experts, but at the end of the year they’ll have to decide whether or not to continue and whether or not to move from talking about these concerns to doing something about them. But this was the first multilateral meeting that has ever been held that we know of on this topic, so that’s what we were doing here in Geneva this week.
KHALEK: For people who are unfamiliar with autonomous killer robots, could you explain why it’s problematic to let computers make the decision to kill?
WAREHAM: There are several examples of existing robotic systems out there right now with various degrees of autonomy and lethality. We’ve called those precursors, and in the meeting this week the governments acknowledged that those precursors are there and that they indicate a trend toward ever greater autonomy in warfare.
I guess armed drones are a precursor, but the difference between an armed drone and the so-called killer robots that we were talking about is that the drones still have the human in the loop. The human operator is not necessarily in the cockpit itself, but back at the base in Nevada or wherever, usually with a whole team of people, deciding by looking through the video footage from the cameras attached to the drone. They look through that footage and they determine what to target, what’s a legitimate target to go after, and then when to use force. And somebody actually pushes the button if they are using an armed drone that way.
Our concern is what happens when you take the human out of the loop, out of that decision-making loop, and the system decides for itself what to target and how to fire.
Some of the precursors that are out there at the moment: airborne, there’s this very large unmanned combat aircraft that the United States is investing a lot of money in, the X-47B. In the UK, there’s one called the Taranis. We understand that China is also looking at this technology. Then we’ve got stationary ground devices, such as this Samsung device on the border between North and South Korea that’s got a sensor on it. And that’s what we call one of the in-between ones, a human-on-the-loop weapon, whereby it is equipped to fire, but at the moment, if it senses movement, it signals back to the base, and a human operator looks at what the device is looking at and takes the decision to fire.
So it’s all about the human control over those decisions to target and attack and to what extent we’re comfortable with going down that road.
GOSZTOLA: What can you say about the US government’s position as it was maybe shared at this meeting or perhaps anything that’s been stated previously?
WAREHAM: Human Rights Watch first issued a report on this back in November 2012, called “Losing Humanity.” In that report, we outlined our concerns, and we looked at international law and whether or not these weapons would be able to comply with it, and we found many, many questions there, which I won’t go into now. But I raise it because, just a few days after we issued that report, the US Department of Defense issued its first policy directive on autonomy in weapons, and that policy directive articulates what the US government thinks about it.
There’ve been different interpretations of it, but our read on it was that for the duration of the policy, which is the next five to ten years, the United States government is committed to ensuring there is appropriate human oversight of autonomy and autonomous weapons. There is an annex in that memo that details many technical concerns that the United States, or at least the Pentagon, is looking at with respect to autonomous weapons: hacking and spoofing and jamming and enemy counterattacks and things that go wrong. That was quite interesting. And then the United States supported these talks in Geneva last November. It sought substantive discussions. It wanted an even longer amount of time dedicated to them. And it came to the meeting and engaged with a large delegation, not just at the beginning but throughout the different sessions.
There are points where we diverge or disagree with some of their views, but on the whole it was a fairly positive contribution at the meeting. And the United States indicated that it would be willing to continue with these discussions next year.
KHALEK: What can you say about the issue of accountability? That’s an issue that has been discussed a lot: if an autonomous killer robot in some not-too-distant future scenario were to kill innocent civilians, who would be held responsible?
WAREHAM: We’ve got to look at who, right now, would be held responsible for civilian deaths in armed drone strikes by the United States. My other colleagues at Human Rights Watch work on this, and, you know, there are all sorts of issues related to that: the lack of transparency, the role of the CIA and the Department of Defense’s targeted killing policy. How do we determine what’s a target?
So, you take those issues, which we still haven’t resolved, and then you apply them to autonomous weapons technology. We don’t hold out a lot of hope for accountability at the moment with this. Plus there’s this bigger issue when the human is removed from the loop as to what is the command and control over it. Is the sergeant still responsible for the device? Is the manufacturer responsible for the device? Is there product liability there? Is it the programmer’s responsibility? And that’s not really resolved yet.
It was an item discussed at length here at the meeting in Geneva. There were different views expressed on it, and it’s one of the key concerns that we’re going to have to resolve as we go forward, if they want to go ahead with these types of weapons. If they don’t, and we get our way and secure the prohibition, then hopefully we won’t have to worry about accountability, because they won’t be used.
GOSZTOLA: Something that Human Rights Watch addresses in this recent report, “Shaking the Foundations,” is that the principle of dignity would be undermined with these weapons. Could you talk a little bit about that for our listeners?
WAREHAM: We’re very concerned that fully autonomous weapons would contravene foundational elements of human rights law. This is not just a problem of international humanitarian law.
One of the big ones is the right to life, which is a prerequisite for all other rights. We’ve detailed our concerns about the fact that machines do not have the same kind of judgment or compassion as humans do, or the capacity to identify with human beings, and that could lead to arbitrary killings of civilians in any circumstances, whether law enforcement or armed conflict. And human dignity is a key principle here.
Do we want to allow a robot or a machine to take a life on the battlefield or off the battlefield? So human dignity is a big part of that, and then there are the broader moral questions associated with the degree of comfort we have ceding control over to a robotic device.
KHALEK: How about the involvement of corporations, contractors who are actually researching and developing various autonomous robots? I don’t know if all of them are developing killer robots. But I know this technology is being developed by Boston Dynamics and an array of companies. Are they involved in these conversations as well?
WAREHAM: They were not in the room at the United Nations this week. It was governments, UN agencies, non-governmental organizations. There were no corporations there. I’ve been to previous meetings where arms manufacturers have been in the room. But not at this one—at least none that I could tell.
Although, there’ve been a lot of prep meetings in the lead-up to this. There was one at Chatham House in London that BAE Systems, the developer of the Taranis, actually helped to co-sponsor, which was quite interesting. A lot of people were there from BAE Systems, and I think they realize that they need to at least be—
And they showed a video. They gave a talk. They were trying, I think, to show that they’re willing to be more transparent on this, and so that was interesting. I guess you’ve got to go one by one with them. We’ve not talked to Samsung in Korea about this.
In the United States, there are a couple of large companies involved in it, but there’s also DARPA, the Defense Advanced Research Projects Agency at the Department of Defense, which is the kind of development wing for all new technologies. And they were not on the US delegation at this meeting, I think, but they have definitely been doing more proactive media on this, and they’re very well aware that there is a debate stirring on this and that now there is an international process underway.
The companies don’t make laws. It’s the governments who do, so that is, I guess, our primary focus in our advocacy. But we would like to see the companies, I guess, acknowledge that this debate is happening and be way more transparent about it. And then, if we do get to the point where we’re legislating new law on this internationally and domestically, in the US and elsewhere, they’ll have to abide by what they’re told to do.
We’ve been asked about Google a few times. At the end of last year, Google purchased a few of the companies: Boston Dynamics, right, and a few others, like SCHAFT in Japan, which had been participating in this robotics challenge that DARPA has been running, which is still going. It’s a challenge to construct a humanoid robot that could go onto the battlefield and remove soldiers, or that could go into a burning building, or into Fukushima. That’s the one we always hear about, the nuclear power plant in Japan: after the disaster, going in there and turning things off and extracting things from the rubble and that kind of stuff.
That’s what they’re saying they want to develop these for: humanitarian response, disaster relief, those kinds of things.
We’re against the arming of robots so that they’re fully autonomous. We’re not against robotics in general, or even autonomy, but it’s the point at which it becomes weaponized, so that it can be used on the battlefield or in law enforcement, that concerns us.