Renewed attention to artificial intelligence for use in warfare (“killer robots”) has been brought to the forefront by the recent Convention on Certain Conventional Weapons (CCW) meeting held this past week in Geneva, Switzerland.
The meeting coincidentally comes just one day after a Dutch NGO, Pax for Peace, released a report on Big Tech’s involvement in supplying A.I. systems to the military. The report, “Don’t be evil?” (PDF), examined 50 companies across the world and ranked 21 of them as “high concern,” with Microsoft and Amazon leading the way. True to the document’s title, the authors concluded that Google had done enough to warrant the lower-risk ranking of “Best Practice” after it responded to public and employee outcry over the Project Maven A.I. drone program and subsequently refused to be involved with the JEDI military cloud program.
Microsoft, however, has made it quite clear in the past that it unapologetically supports the U.S. military. In December of last year, I reported that “Microsoft Declares Intent To Provide Military All Of Its Technology Because of the Military’s ‘Ethical and Honorable Tradition.’”
Microsoft is “going to provide the U.S. military with access to the best technology … all the technology we create. Full stop,” Brad Smith said Saturday during a panel at the Reagan National Defense Forum at the Ronald Reagan Presidential Library in Simi Valley. – LA Times
Ethics and honor are exactly at the heart of the warnings that I and others have been raising since the public became aware of the existence of lethal autonomous weapons systems. The Convention has been gathering since 2014 specifically to discuss this type of warfare. Over the ensuing years, many more countries have come to see the dangers the world faces, and 29 have now joined an effort to ban these systems outright. The U.S. and Russia, however, have staunchly resisted making any firm declarations, preferring a “wait and see” attitude.
Russia and the United States repeatedly rejected any references in the meeting’s final report to the need for “human control” over the use of force. Both states are investing significant funds to develop weapons systems with decreasing human control over the critical functions of selecting and engaging targets.
During the early morning negotiation of the final report, Russia said it is “premature” to discuss the potential dangers of lethal autonomous weapons systems “until they’re produced.” It also argued that autonomy is not a characteristic or central feature of lethal autonomous weapons systems. — Pressenza
The U.S. does in fact continue to roll out these systems, with recent announcements that I’ve covered including facial recognition goggles for soldiers that will connect to remote weapons for quick identification and targeting, as well as A.I. missiles that cannot be interfered with once they are launched. These, of course, join the various robots and drones already in use by the military.
Read more: Big Tech, Russia and U.S. Under Fire for A.I. Weapons Programs; Campaign to Stop Killer Robots Coalition Doubles in Size