LEADING ARTIFICIAL-INTELLIGENCE RESEARCHERS gathered recently for the prestigious Neural Information Processing Systems conference have a new topic on their agenda.
Industrial robots have existed since the 1960s, when the first Unimate robotic arm was installed at a General Motors plant in the United States. Nearly six decades on, why don’t we have capable robots in our homes, beyond a few simple domestic gadgets?
The movie portrays a brutal future. A military firm unveils a tiny drone that hunts and kills with ruthless efficiency. But when the technology falls into the wrong hands, no one is safe.
Automated driving is a powerful tool for reducing traffic fatalities, but societies still grapple with moral challenges, as illustrated by the trolley dilemma.
HUMANITY IS ENTERING a confusing time: the Age of the Bizarrely Intelligent Robots. No longer confined to cages on factory floors, the machines are increasingly walking, rolling, and hopping among us.
We are not used to the idea of machines making ethical decisions, but the day when they will routinely do this – by themselves – is fast approaching. So how, asks the BBC’s David Edmonds, will we teach them to do the right thing?
Recently, the “trolley problem,” a decades-old thought experiment in moral philosophy, has been enjoying a second career, appearing in nightmare visions of a future in which cars make life-and-death decisions for us.
Scientists who build artificial intelligence and autonomous systems need a strong ethical understanding of the impact their work could have.
If I were to approach you brandishing a cattle prod, you might at first be amused. But, if I continued my advance with a fixed maniacal grin, you would probably retreat in shock, bewilderment and anger. As electrode meets flesh, I would expect a violent recoil.
As self-driving cars roar (silently, on electric motors) towards wide-scale use, one team is trying to answer a very difficult question: when accidents inevitably happen, where should the computer look for morality and ethics?
Self-driving cars will soon be able to make ‘life or death’ judgements, such as whether to hit an animal rather than a pedestrian. The increased likelihood of driverless vehicles has raised questions about whether they will be capable of making ethical decisions, just as humans are.