ISAAC ASIMOV’S FAMOUS Three Laws of Robotics—constraints on the behavior of androids and automatons meant to ensure the safety of humans—were also famously incomplete. The laws, which first appeared in his 1942 short story “Runaround” and again in classic works like *I, Robot*, sound airtight at first:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Of course, hidden conflicts and loopholes abound (which was Asimov’s point). Today, defining and implementing an airtight set of ethics for artificial intelligence is no longer a fictional exercise; it has become a pressing concern.