Among all the voices in the debate over what role lethal autonomous capabilities should play in military and security systems, the very people who dream up and build science-fiction realities are the clearest in articulating the risks of robots run amok, or of even more devastating human-created technological disasters. The latest open letter, from 116 senior robotics and AI leaders, cautions against the use of artificial intelligence in the defense domain, arguing that humanity is approaching a point of no return. “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” they wrote.
The problem, however, is that this is an era when civilian technology innovation outstrips what is conjured up in government labs. The global “AI” revolution is already underway, and its impact will certainly shape future conflict. Don’t expect a Terminator reboot. Pandora’s box, then, may be the last one to be opened: Facebook, Google, Baidu, Alibaba, Uber and scores of other companies have already lifted the lid on what is possible with machine-learning software and robotics, because there is generation-defining societal and economic potential on the line. So much so that the US wants, in certain cases, to block Chinese investment in related technologies. As I told RealClear Defense …
August Cole, a senior fellow at the Atlantic Council and a writer at the consulting firm Avascent, said the concerns raised by tech leaders about autonomous weapons are valid, but that a ban is unrealistic. “Given the proliferation of civilian machine learning and autonomy advances in everything from cars to finance to social media, a prohibition won’t work,” he said.
Setting limits on technology ultimately would hurt the military, which depends on commercial innovations, said Cole. “What needs to develop is an international legal, moral and ethical framework. … But given the unrelenting speed of commercial breakthroughs in AI, robotics and machine learning, this may be a taller order than asking for an outright ban on autonomous weapons.”