Noel Sharkey of the University of Sheffield said that a push toward more robotic technology used in warfare would put civilian life at grave risk.
Technology capable of distinguishing friend from foe reliably was at least 50 years away, he added.
However, he said that for the first time, US forces mentioned resolving such ethical concerns in their plans.
"Robots that can decide where to kill, who to kill and when to kill is high on all the military agendas," Professor Sharkey said at a meeting in London.
"The problem is that this is all based on artificial intelligence, and the military have a strange view of artificial intelligence based on science fiction."
'Odd way'
Professor Sharkey, a professor of artificial intelligence and robotics, has long drawn attention to the psychological distance from the horrors of war that is maintained by operators who pilot unmanned aerial vehicles (UAVs), often from thousands of miles away.
"These guys who are driving them sit there all day...they go home and eat dinner with their families at night," he said.
"It's kind of a very odd way of fighting a war - it's changing the character of war dramatically."
Related Links:
- Military's killer robots must learn warrior code
- Autonomous Military Robotics: Risk, Ethics, and Design