Autonomous weapons were deemed highly dangerous and unpredictable at a recent conference discussing how artificial intelligence might shape the world in the coming years and decades.
The annual event, known as the World Economic Forum, took place in Davos, Switzerland, between January 19 and 23, and brought together researchers, politicians and business leaders.
The ultimate purpose of the event was to discuss recent global trends, in order to identify potential challenges and means of addressing them.
One of the topics debated during the conference was the way artificial intelligence can cause irreversible transformations when it comes to military operations and warfare.
For instance, Angela Kane, the United Nations’ High Representative for Disarmament Affairs, argued that many underestimate the risks of autonomous weapons, preferring to focus only on potential benefits such as reduced civilian casualties.
According to Kane, some nations have been creating killer robots designed for combat, without fully understanding the repercussions that such inventions might have.
Such development has gone on virtually unregulated, and it may already be too late to impose stricter legislation to ensure that these machines don’t fall into the wrong hands or get used recklessly or irresponsibly.
The same opinion was shared by Stuart Russell, professor of computer science and engineering at the University of California, Berkeley.
As he explained during the panel session, the real danger is not posed by unmanned aerial vehicles (popularly known as drones), since in that case a human being still steers and supervises the machine every step of the way.
What is actually alarming, he argued, is the creation of weapons powered by artificial intelligence: autonomous systems that can identify targets and strike them without prior commands or approval from their human makers.
Since they require no pilot whatsoever, these robotic devices are completely self-reliant, and therefore far more prone to error and unpredictable behavior than UAVs. They might even struggle to distinguish civilians from soldiers, or enemies from allies, making decisions far less soundly than human beings would.
Alan Winfield, a mobile robotics researcher at the Bristol Robotics Laboratory, expressed similar views, warning that autonomous weapons could malfunction under chaotic battlefield conditions, behaving very differently from how they perform in laboratory experiments.
Last but not least, Roger Carr, chairman of the defense and security firm BAE Systems, argued that in the absence of a human pilot there would be no way to control such warfare robots, and that, devoid of any conscience or moral sense, they could become vicious, merciless killing machines, murdering innocent people indiscriminately.
The fears expressed during the World Economic Forum echo concerns already voiced by SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak and acclaimed theoretical physicist Stephen Hawking.
Back in July 2015, the three, alongside around a thousand other experts in robotics and artificial intelligence, signed an open letter, which was prominently featured at the International Joint Conference on Artificial Intelligence, held in Buenos Aires, Argentina.
The document warned that autonomous weapons could become a fixture of warfare within years, revolutionizing the field just as gunpowder and nuclear arms did in the past.
The petitioners urged lawmakers to prohibit offensive autonomous weapons, warning that otherwise an arms race akin to the nuclear one could emerge between nations, and that such weapons could eventually be acquired by extremist groups.