Last week, Georgia Institute of Technology's Center for Ethics and Technology held a debate between Ron Arkin and Robert Sparrow over the ethical challenges and benefits of lethal autonomous robots (LARs), or "killer robots." The debate comes amidst increasing attention to LARs, as the United Nations has recently agreed to discuss a potential ban on the weapons under the Convention on Conventional Weapons framework early next year. Moreover, the issue is receiving growing coverage in major media outlets, like the Washington Post, the New York Times, Foreign Policy and here on the Huffington Post as well. With NGOs such as Human Rights Watch and Article 36 also taking up the issue, along with academics and policy makers, much more attention may be on the horizon.
My purpose here is to press on some of the claims made by Prof. Arkin in his debate with Prof. Sparrow. Arkin's work on formulating an "ethical governor" for LARs in combat has been one of the few attempts by academics to espouse the virtues of these weapons. In Arkin's terms, the ethical governor acts as a "muzzle" on the system, whereby any potentially unethical (or, better formulated, "illegal") action would be prohibited. Prof. Arkin believes that one would program the relevant rules of engagement and laws governing conflict into the machine, and thus any prohibited action would be impossible for it to take. Sparrow, a pioneer in the debate on LARs and one of the founding members of the International Committee on Robot Arms Control (ICRAC), is vehemently skeptical about the benefits of such weapons.
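To make the governor-as-muzzle idea concrete, here is a minimal sketch of how such a veto layer might work. It is my own illustration under stated assumptions, not Arkin's actual architecture; the class, constraint names, and checks are hypothetical stand-ins for encoded rules of engagement and laws governing conflict.

```python
# Illustrative sketch only: the class, constraint names, and checks below are
# hypothetical stand-ins, not Arkin's actual ethical-governor implementation.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    target_class: str           # e.g. "armored_vehicle", "school_bus"
    target_is_combatant: bool   # does the targeting system judge this a combatant?
    inside_engagement_zone: bool

# Hard-coded constraints standing in for encoded rules of engagement and
# the laws governing armed conflict.
CONSTRAINTS = [
    ("no_noncombatant_targets", lambda a: a.target_is_combatant),
    ("stay_inside_engagement_zone", lambda a: a.inside_engagement_zone),
    ("no_protected_objects",
     lambda a: a.target_class not in {"school_bus", "hospital", "place_of_worship"}),
]

def ethical_governor(action: ProposedAction) -> bool:
    """The 'muzzle': permit a proposed action only if every constraint passes."""
    return all(check(action) for _, check in CONSTRAINTS)

# The weapon's control loop would fire only when the governor returns True.
strike = ProposedAction(target_class="school_bus",
                        target_is_combatant=True,
                        inside_engagement_zone=True)
print(ethical_governor(strike))  # False: vetoed by the protected-objects rule
```

On this picture the governor only forbids; it does not generate or choose actions, which is what the "muzzle" metaphor conveys.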
Some of the major themes in the debate over LARs revolve around the issues of responsibility, legality, and prudential considerations, such as the proliferation of the weapons in the international system. Today I will focus only on the responsibility argument, as that was a major source of tension in the debate between Arkin and Sparrow. The responsibility argument runs something like this: since a lethal autonomous weapon either locates preassigned targets or chooses targets on its own, and then fires on those targets, there is no "human in the loop" should something go wrong. Indeed, since there is no human being making the decision to fire, if a LAR kills the wrong target, then there is no one to hold responsible for that act, because a LAR is not a moral agent that can be punished or held "responsible" in any meaningful way.
The counter, made by those like Arkin in his recent debate, is that there is always a human involved somewhere down the line, and thus it is the human who "tasks" the machine who would be held responsible for its actions. Indeed, Arkin stated in his comments that human soldiers are no different in this respect, and that militaries attempt to dehumanize and train soldiers into becoming unthinking automatons anyway. Thus, the moment a commander "tasks" a human soldier or a LAR with a mission, the commander is responsible. Arkin explicitly noted that "they [LARs] are agents that make decisions that human beings have told them to make," and that ultimately, if we are looking to "enforce" ethical action in a robot, then designers, producers and militaries are merely "enforcing [the] morality made by humans."
However, such a stance is highly misleading and flies in the face of commonsense thinking (as well as legal thinking) about responsibility in the conduct of hostilities. Suppose a commander tasks Soldier B with a permissible mission, one in which Soldier B will have very little, if any, communication with the commander. If, in the course of completing that mission, Soldier B kills protected people (noncombatants, i.e., those not taking part in hostilities), we would NOT hold the commander responsible; we would hold Soldier B responsible. During the execution of his orders, Soldier B made a variety of intervening decisions about how to complete his "task." It is only in the event of patently illegal orders that we hold commanders responsible under the doctrine of command responsibility.
Arkin might respond here that his "ethical governor" would preclude actions like the targeting of protected persons. For instance, Arkin discusses a "school bus detector," whereby any object that looks to be a school bus would be off-limits as a potential target, and so the machine could not fire upon it. Problem solved, case closed. But is it?
Not by a long shot. Protected status, whether of persons or things, is not absolute. Places of worship, for instance, while normally protected, become legitimate targets if they are used for military purposes (a sniper in the bell tower, say, or munitions stored inside). Programming a machine that would never fire on school buses therefore only tells the adversary: "Hey! You should go hide in school buses!" It is the dynamic nature of war and conflict that is so hard to discern, and codifying this ambiguity is so complex that the only way to accomplish it is to create an artificially intelligent machine. Otherwise, we are building a machine that hands tactical and strategic advantage to the enemy, or, in Arkin's words, one that produces "mission erosion," which is worse than a waste of money. Creating a machine that would not become a Trojan Horse thus requires that it be artificially intelligent, able to discern that the school bus really is a school bus being used for nonmilitary purposes: a machine that, it appears, Arkin would be uncomfortable with in the field.
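The gap can be put in code terms as well. The following is a hedged sketch of my own, not anyone's fielded system: the function names, the protected-class list, and above all the used_for_military_purpose flag are assumptions made for the example.

```python
# Hypothetical sketch contrasting a static protected-class veto with the
# context-dependent test the laws of war actually apply. All names invented.

PROTECTED_CLASSES = {"school_bus", "place_of_worship", "hospital"}

def static_veto_permits(target_class: str) -> bool:
    """'School bus detector' logic: never fire on anything that looks protected."""
    return target_class not in PROTECTED_CLASSES

def contextual_test_permits(target_class: str, used_for_military_purpose: bool) -> bool:
    """Closer to the legal rule: protected status is lost once the object is
    being used for military purposes (the sniper in the bell tower, the
    munitions hidden in the bus)."""
    if target_class in PROTECTED_CLASSES and not used_for_military_purpose:
        return False
    return True

# The exploitable gap: a bus used to move munitions may lawfully be engaged,
# but the static rule still forbids it, so the adversary hides in buses.
print(static_veto_permits("school_bus"))            # False: never fire
print(contextual_test_permits("school_bus", True))  # True: protection lost
```

The sketch illustrates the problem rather than solving it: no sensor simply hands the machine that second argument, and judging whether the bus is "really" in nonmilitary use is precisely the discernment that would require genuine artificial intelligence.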
The final argument in Arkin's arsenal is that if the machine, artificially intelligent or not, performs better at upholding the laws of war than human warfighters, then so be it. More lives saved equals more lives saved, period. Yet this seems to miss a couple of key points. The first is that the data we have regarding atrocities committed by servicemen and women are data points from when things go wrong. We do not have data on when things go according to plan, for that is a "nonobservation." Think in terms of deterrence: one cannot tell when deterrence is working, only when it is not. Saying that humans perform so poorly thus tells only part of a much larger tale, and I am not certain it is one that requires robots as the solution to all of humanity's moral failings. The second is Sparrow's main point: using such machines seems profoundly disrespectful of the adversary's humanity. As Sparrow argues, using machines to kill distant others, where no human person takes even a moment to consider their demise, robs warfare of what little humanity it possesses.
Thus I hope that while we continue to think about why using robots in war is problematic, from moral, legal and prudential perspectives, we also continue to press on their touted "benefits."