How to Build a Robot That Won’t Take Over the World

ISAAC ASIMOV’S FAMOUS Three Laws of Robotics—constraints on the behavior of androids and automatons meant to ensure the safety of humans—were also famously incomplete. The laws, which first appeared in his 1942 short story “Runaround” and again in classic works like I, Robot, sound airtight at first:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Of course, hidden conflicts and loopholes abound (which was Asimov’s point). Defining and implementing an airtight set of ethics for artificial intelligence has become a pressing concern.
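To see why the laws leak, it helps to write the precedence ordering down literally. The sketch below is a toy illustration only — the `Action` flags and the `permitted` function are invented for this example, not anything from Asimov or a real safety system — and the point is that the moment you try to reduce "harm" to a handful of boolean flags, the loopholes appear on their own:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # direct injury to a human
    inaction_harm: bool = False      # harm caused by failing to act
    ordered_by_human: bool = False   # a human commanded this action
    self_destructive: bool = False   # endangers the robot itself

def permitted(action: Action) -> bool:
    """Evaluate the Three Laws in strict priority order."""
    # First Law: no injury, and no harm through inaction.
    if action.harms_human or action.inaction_harm:
        return False
    # Second Law: obey human orders (already subordinate to the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.self_destructive

# The kind of loophole Asimov mined for plots: "harm" is undefined here.
# An action that harms indirectly (say, harming one human to spare five)
# fits neither flag, so the evaluator waves it through.
print(permitted(Action(ordered_by_human=True)))                    # True
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False
```

Every real-world proposal for machine ethics runs into the same problem at scale: the rules are only as airtight as the definitions they rest on.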



Mike Rawson

Mike Rawson has recently re-awoken a long-standing interest in robots and our automated future.

He lives in London with a single android – a temperamental vacuum cleaner – but is looking forward to getting more cyborgs soon.
