Saturday, August 22, 2009

If an autonomous machine kills someone, who is responsible?

The Royal Academy of Engineering has published a report exploring the social, legal and ethical implications of ceding control to autonomous systems.

Within a decade, we could be routinely interacting with machines that are truly autonomous – systems that can adapt, learn from their experience and make decisions for themselves. Free from fatigue and emotion, they would perform better than humans in tasks that are dull, dangerous or stressful.

Already, the systems we rely on in our daily lives are being given the capacity to operate autonomously. On the London Underground, Victoria line trains drive themselves between stations, with the human "driver" responsible only for spotting obstacles and closing the doors. Trains on the Copenhagen Metro run without any driver at all. While our cars can't yet drive themselves, more and more functions are being given over to the vehicle, from anti-lock brakes to cruise control. Automatic lighting and temperature control are commonplace in homes and offices.

The areas of human existence in which fully autonomous machines might be useful – and the potential benefits – are almost limitless. Within a decade, robotic surgeons may be able to perform operations much more reliably than any human. Smart homes could keep an eye on elderly people and allow them to be more independent. Self-driving cars could reduce congestion, improve fuel efficiency and minimise the number of road accidents.

But automation can create hazards as well as remove them. How reliable does a robot have to be before we trust it to do a human's job? What happens when something goes wrong? Can a machine be held responsible for its actions?

[Guardian.Co.Uk]
