Advanced computer systems are making more and more decisions on their own. Spam filters, for example, can automatically block unwanted or fraudulent emails.
Some medical facilities in the US are already using cameras powered by Artificial Intelligence (AI) to identify diabetic patients at risk of losing their eyesight.
Can machines be relied upon to decide who to kill?
This pertinent question was discussed in the eighth episode of the Sleepwalkers podcast. Recent advances in AI-powered technology have worried some military experts, who fear that a new generation of lethal autonomous weapons could take independent but questionable actions.
Paul Scharre, Director of Technology and National Security at the US-based think tank Center for a New American Security, warned that we are entering an age where “machines may be making some of the most important decisions on the battlefield about who lives and dies”.
Israel, China, India and South Korea already deploy the Israeli-made Harpy drone, which can automatically identify enemy radars and attack them without human intervention.
America’s rivals China and Russia have already placed AI at the centre of their future warfighting strategies.
Arati Prabhakar, former head of the US Defense Advanced Research Projects Agency (DARPA), explained the limitations of existing AI technology. She cited the real example of image-recognition software developed at Stanford that was asked to describe a photo of a baby holding an electric toothbrush. The software described it as a “small boy with a baseball bat”, a grossly inaccurate result.
Prabhakar said that from an ethical point of view, those harnessing such technologies should also discuss the potential shortcomings and pitfalls.
Former US Navy Secretary Richard Danzig urged international cooperation to build a “common understanding” of the risks of uncontrolled AI, as well as “joint planning for the contingency that these (risks) do escape”.