I, Robot: AI robot developed that can choose to inflict pain
Posted by Charley Millard | 15.09.2018
Artificial intelligence: how will it affect us in the future? Will it enable the automation of mundane tasks, or is it something we should be seriously concerned about? In this post we look at an example of a thinking robot that can choose to cause harm to a person. Created simply to illustrate a point, it nevertheless raises an interesting issue.
“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Readers of sci-fi author Isaac Asimov will be familiar with the above statement. It’s the First Law from Asimov’s fictional Three Laws of Robotics, which all of his robots are programmed to obey. The laws also feature in the Will Smith blockbuster ‘I, Robot’, which is based on Asimov’s writing.
In Asimov’s robot-centric vision of the future, these laws are pre-programmed into every robot and cannot be bypassed: they form the fundamental make-up of all artificial intelligence, and every robot is bound to follow them.
Now, it seems that fiction is beginning to make its way into fact. Roboticist Alexander Reben of the University of California, Berkeley, has developed a robot that is capable of pricking a person’s finger to cause them pain, but is programmed to be able to choose not to do so.
Ominously nicknamed ‘The First Law’, the robot doesn’t simply prick the finger of every person who uses it. Instead, it sometimes refrains from causing pain in order to avoid getting switched off. The decision on whether or not to inflict pain is entirely down to the robot, with no human intervention.
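Reben hasn’t published the robot’s decision-making code, so the following is only a minimal Python sketch of the kind of behaviour described above. The class name, the fixed probability and the ‘stay switched on’ heuristic are illustrative assumptions, not his implementation:

```python
import random

# Hypothetical sketch of a finger-pricking robot's decision loop.
# Alexander Reben has not published his implementation; the names,
# the probability, and the "avoid being switched off" framing below
# are illustrative assumptions only.

class FirstLawRobot:
    def __init__(self, prick_probability=0.6):
        # Chance that the robot decides to inflict pain on any given trial.
        self.prick_probability = prick_probability

    def decide(self) -> bool:
        """Return True if the robot chooses to prick the finger.

        The choice is made without human intervention: no operator
        input is consulted, only the robot's own (random) policy.
        """
        return random.random() < self.prick_probability

    def act(self):
        if self.decide():
            print("Needle deployed: the robot chose to inflict pain.")
        else:
            # Refraining keeps the human happy and the robot switched on.
            print("Needle withheld: the robot chose not to inflict pain.")

if __name__ == "__main__":
    robot = FirstLawRobot()
    for trial in range(5):
        robot.act()
```

The unsettling part isn’t the randomness itself; it’s that the human at the needle has no say in, and no way to predict, which branch the machine will take.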
Watch the video of ‘The First Law’ artificial intelligence robot in action.
This robot was designed purposefully to break Asimov’s First Law. It was built as a catalyst to spark an ethical debate about the future of artificial intelligence and the potentially devastating impact it could have on the human race if left unchecked.
Reben’s concerns seem to be echoed by renowned physicist Stephen Hawking, who said: “The real risk with AI isn’t malice, but competence. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.” [Source]
Hawking has also warned about the potential for an intelligent machine to develop a survival instinct in order to ensure it completes the goals it has been programmed to accomplish. He says that “surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.”
It’s all pretty scary stuff. Reben hopes that the fallout from his design may give credence to the AI ‘kill switch’ currently being developed by DeepMind, Google’s AI research company. The idea is to create a way for humans to have the ‘upper hand’ in the event that AI robots one day learn to override their own ‘power off’ buttons.
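DeepMind’s published work in this area (Orseau and Armstrong’s 2016 paper on ‘safely interruptible agents’) treats the kill switch as a learning problem: a human must be able to interrupt the agent without the agent learning to resist, or invite, interruption. The toy Python sketch below illustrates the idea with off-policy Q-learning; the environment, reward scheme and interrupt signal are our own illustrative assumptions, not DeepMind’s code:

```python
import random
from collections import defaultdict

# Toy illustration of a "safely interruptible" agent loop, loosely in the
# spirit of Orseau & Armstrong's 2016 paper. The environment, rewards and
# interrupt policy here are illustrative assumptions.

Q = defaultdict(float)          # Q[(state, action)] value estimates
ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Hypothetical one-step environment: returns (next_state, reward)."""
    next_state = (state + (1 if action == "right" else -1)) % 10
    return next_state, (1.0 if next_state == 0 else 0.0)

def choose(state, interrupted):
    if interrupted:
        # The human "big red button": override the policy with a safe action.
        return "left"
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = 5
for t in range(10_000):
    interrupted = random.random() < 0.05     # external interrupt signal
    action = choose(state, interrupted)
    next_state, reward = step(state, action)
    # Off-policy Q-learning update: the target uses the best next action,
    # not the action the interrupt forced, so interruptions don't teach
    # the agent to fear (or seek) the button.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state
```

The design point is the off-policy update: because the value estimates don’t depend on who chose the action, the forced ‘button press’ gives the agent no incentive to learn to dodge it.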
Patrick Moorhead, an analyst from Moor Insights & Strategy, stated: “The timing is right for this to be discussed as the architectures for A.I. and autonomous machines are being laid right now. It would be like designing a car and only afterwards creating the ABS and braking system. The kill switch needs to be designed into the overall system. Otherwise, it is open to security issues and maybe even the machines trying to circumvent the kill. […] We should be concerned about A.I. systems with no kill switch. It would be like creating a bullet train without brakes.”
[Source]
Stephen Hawking even went so far as to say that “the development of full artificial intelligence could spell the end of the human race”.
Whilst that’s perhaps a long way from a finger-pricking robot, we think you’ll agree: that’s a rather sobering thought.