Ethics Homework

Is “a ban on offensive autonomous weapons beyond meaningful human control” going to work?

The proposed ban on offensive autonomous weapons raises a number of valid ethical concerns about developing that particular military technology. In particular, the petition names the ease with which terrorists and dictators could acquire the technology, given its affordability and its drastic effects, as well as the public backlash that would curtail future societal benefits from AI. These potential negative effects, however, have not stopped the innovation of military technology before and continue not to, as is evident from the current progression toward developing autonomous technologies, which undermines the ban’s need for unanimous (or near-unanimous) cooperation among major military powers.

The proposal says that “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.” However, it is clear that the global AI arms race has already begun, and it would be extremely difficult to stop something that is already playing out among major powers who have already invested in the enterprise. Although fully autonomous weapons don’t currently exist, high-ranking military officials say that the use of robots in warfare will be widespread within mere years. There are already “at least 381 partly autonomous weapon and military robotics systems… deployed or are under development in 12 states, including China, France, Israel, the UK, and the US.”

As an aside: the letter proposing a ban, which was released in 2015, refers to AI technologies as becoming the “Kalashnikovs of tomorrow” -- but, in fact, Russia’s automated, neural-network-based guns are Kalashnikovs themselves, and the “tomorrow” the letter predicts is already nearing today.

Further, major military powers have already expressed explicit support for developing artificial intelligence and automation in warfare. The US and the UK have released positions stating that it is too early for a ban on lethal autonomous systems, focusing on the technologies’ potential humanitarian benefits rather than criticizing their potential consequences. Russian president Vladimir Putin also makes his stance clear, saying that “Artificial intelligence is the future, not only for Russia, but for all humankind.” He claims that Russia will share any advancements it makes -- like its nuclear technology -- but whether that would happen in times of war is another question. Given these powers’ existing investment in autonomous technologies, then, it is dubious that a ban on autonomous weapons would have any effect: the ban seems to have already lost the potential for their cooperation.