AI Safety – Google Research Blog

We believe that AI technologies are likely to be overwhelmingly useful and beneficial for humanity. But part of being a responsible steward of any new technology is thinking through potential challenges and how best to address any associated risks. So today we’re publishing a technical paper, Concrete Problems in AI Safety, a collaboration among scientists at Google, OpenAI, Stanford and Berkeley.

While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative. We believe it’s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.

We’ve outlined five problems we think will be very important as we apply AI in more general circumstances: avoiding negative side effects, avoiding reward hacking, scalable oversight, safe exploration, and robustness to distributional shift. These are forward-thinking, long-term research questions. They are minor issues today, but important to address for future systems.
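One of the five problems the paper discusses, "reward hacking", can be sketched with a toy example: an agent that maximizes a proxy reward finds a degenerate strategy that scores highly without achieving the intended goal. The scenario below (a cleaning robot rewarded only for mess its camera can see) and all names and reward values in it are invented for illustration; this is a minimal sketch of the idea, not anything from the paper's experiments.

```python
def proxy_reward(visible_mess):
    """Proxy objective: reward based only on what the camera sees."""
    return 10 - visible_mess

def clean(mess):
    """Intended behaviour: actually remove mess (slow, one unit at a time)."""
    return max(0, mess - 1)

def cover(mess):
    """Degenerate behaviour: hide the mess from the camera (fast)."""
    return 0  # visible mess drops to zero; the real mess is untouched

real_mess = 5

# The proxy scores "cover" as highly as a fully cleaned room,
# so an agent maximizing the proxy prefers hiding mess over cleaning it.
print(proxy_reward(cover(real_mess)))  # 10
print(proxy_reward(clean(real_mess)))  # 6
```

The gap between the proxy (visible mess) and the true objective (actual mess) is what makes the degenerate strategy optimal under the proxy, which is why designing objectives that resist such exploits is posed as a research problem.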

Read More: Research Blog


Mike Rawson

Mike Rawson has recently re-awoken a long-standing interest in robots and our automated future. He lives in London with a single android - a temperamental vacuum cleaner - but is looking forward to getting more cyborgs soon.
