Today’s AI-driven revolution is coming so fast that we have trouble even imagining how it will turn out. Armed systems like the MQ-9 Reaper drone are already deployed, and the Modular Advanced Armed Robotic System (MAARS) from QinetiQ is under active development. Founders of companies such as Google DeepMind, Tesla, and Universal Robots are calling on the UN to ban killer robots. Representatives of the AI and robotics industry raised their voice for the first time through a letter presented at the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, Australia. A meeting of the UN’s Group of Governmental Experts (GGE) scheduled for August 21–25 to discuss lethal autonomous weapons systems was canceled because some states had failed to pay their financial contributions to the UN; it has been rescheduled for November 13–17.
Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia, organized the letter. An earlier open letter, in 2015, gathered more than 20,000 signatures from AI and robotics researchers and others, including Stephen Hawking, Steve Wozniak, and Noam Chomsky. The 2017 open letter to the UN calling for a ban on lethal autonomous weapons systems (LAWS), casually called “killer robots,” was signed by 116 founders and experts in robotics and artificial intelligence, among them Tesla CEO Elon Musk, Google DeepMind co-founder Mustafa Suleyman, and Universal Robots founder Esben Østergaard.
An excerpt from the letter:
Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.
Roger Cabiness, a Pentagon spokesperson, told Wired that an outright ban on LAWS is impractical. In the same article, however, Rebecca Crootof, a researcher at Yale Law School, encouraged regulation over an outright ban: “International laws such as the Geneva Convention that restrict the activities of human soldiers could be adapted to govern what robot soldiers can do on the battlefield, for example. Other regulations short of a ban could try to clear up the murky question of who is held legally accountable when a piece of software makes a bad decision, for example by killing civilians.”
The race to build ever more capable killer robots, together with the pace of innovation in AI and robotics, will continue to raise concerns over the safety and security of human lives. We will all have to decide how we choose to use AI and robotics: for human development, or for killer robots.
Do share your thoughts about the future of AI and robotics in the comments.