Discussions of unmanned vehicles frequently refer to the “unprecedented” legal challenges posed by unmanned or autonomous systems (UAS).[1] Focusing on the most extreme and remote theoretical consequences of artificial intelligence fosters a skewed perception that all legal challenges posed by UAS are dire. That perception informs calls to ban lethal autonomous systems outright, even before robust, practical artificial intelligence applications exist.[2] Because many aspects of autonomy remain in their initial stages, crafting policy to address remote consequences does nothing to ensure the development of militarily advantageous UAS unconstrained by restrictions founded on speculation about the future state of the art.
The strongest critics of UAS object to autonomous UAS rather than automatic UAS, and a careful distinction needs to be made between automated (or semi-autonomous) and autonomous systems. Automatic weapon systems have a long history. Historic naval mines and even sophisticated modern cruise missiles exemplify the defining characteristic of automatic systems: no matter the nature or sophistication of their programming, they react only within the parameters of that programming. A truly autonomous device, in contrast, would require artificial intelligence capable of independent decision making in response to external stimuli.
Undoubtedly, the expanding use of UAS raises legal issues. But addressing those issues within existing legal systems does not require drastic change. For example, the presence of UAS in domestic airspace is forcing the Federal Aviation Administration to rewrite the federal regulations governing use of airspace and air traffic safety. Similarly, maritime UAS are required to comply with the international collision regulations (COLREGS) and the inland rules of the road. However, these regulatory concerns will be resolved through extant federal agencies, commercial entities, and the U.S. court system. Analogies can be seen in legal adaptations to aviation in the early 20th century and space flight in the late 20th century. While the domestic and international law governing those means of transportation is far from perfect, it shows that the law can adapt to novel technologies.
One could argue that even if international law accommodates the current state of the art, we should be prudent and hesitate to endorse any technology that could cause negative effects. Such logic underpins the “precautionary principle,” which holds that novel technologies must not be adopted until their effects, direct and indirect, are well known. The law of armed conflict, however, does not require adopting such a principle. Indeed, history shows that many novel military technologies were initially banned and later accepted through the interplay of strategic needs and shifting international norms. Early 20th-century movements to force submarines to comply with prize rules and other legal provisions written for surface vessels, for example, faded with the expansion of submarine fleets.[3]
Perhaps the development of truly autonomous artificial intelligence capable of “thought” will force a rethinking of international or domestic law. But until artificial intelligence functionally equivalent to human decision making is developed, we need to address increasing levels of automation in gradual increments. Highly automated technology differs from the automated technology used throughout the 20th century in degree rather than in substance. Unlike the profoundly disruptive advent of nuclear weapons, automation does not introduce new destructive forces; it extends existing trends in automating command and control. Weapons relying on sensors to discriminate between targets, for example, have been in service for some time with little controversy.[4] Unmanned vehicles have conducted complex operations, such as the X-47B’s carrier landings, that previously required direct human control. Targeting procedures are no less complex, and an unmanned vehicle can accomplish them with precision equal to or exceeding that of human operators. Unmanned technology thus does not force us to ask fundamentally new questions, but to develop better answers to existing ones.
These distinctions are of more than academic interest. Planning for the Fleet of the future requires anticipating what technologies will be needed five, ten, fifteen, or even twenty years from now. We have reached a point in technological development where we cannot reliably predict which technologies will be essential. Accordingly, we should not impose restrictions based on the current state of the art or on speculative concerns, including challenges rooted in regulations developed for 20th-century technology. We can identify core values and let them guide our use of emerging technology as it develops, without imposing legal restrictions doomed to obsolescence.
Footnotes
[1] For example, Stuart Russell, “Ethics of Artificial Intelligence,” Nature, Vol. 521 (28 May 2015), p. 415.
[2] Most notably, on July 28, 2015, 3,105 robotics researchers and 17,701 other interested parties signed an “Open Letter on Autonomous Weapons,” available at: https://futureoflife.org/open-letter-autonomous-weapons/
[3] See, for example, J. Ashley Roach, “Legal Aspects of Modern Submarine Warfare,” Max Planck Yearbook of International Law, Vol. 6 (2002), pp. 380–84.
[4] See, for example, Alan Backstrom and Ian Henderson, “New Capabilities in Warfare,” in The Applied Ethics of Emerging Military and Security Technology (Routledge, 2016).