AlphaGo, the board-game-playing AI from Google’s DeepMind subsidiary, is one of the most famous examples of deep learning – machine learning using neural networks – to date.

So it may be surprising that some of the code that led to victory was created by good old-fashioned humans.

The software, which beat Korean Go champion Lee Sedol 4–1 in March 2016, taught itself to play the ancient Asian game by running millions of games against itself.
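To give a flavour of what "playing against itself" means, here is a toy self-play loop. This is a hypothetical illustration using the simple stone-taking game Nim, not DeepMind's code: two copies of the same (here, random) policy play each other, and win/loss statistics are tallied per position and move, a miniature version of the value statistics a self-play system accumulates over millions of games.

```python
import random

def play_selfplay_game(policy, heap=7):
    """Play one game of Nim (take 1-3 stones; taking the last stone wins)
    with the same policy on both sides, recording every (state, player, move)."""
    history = []
    player = 0
    while heap > 0:
        move = policy(heap)
        history.append((heap, player, move))
        heap -= move
        player = 1 - player
    winner = 1 - player  # the player who took the last stone just moved
    return history, winner

def random_policy(heap):
    """An untrained policy: take a random legal number of stones."""
    return random.randint(1, min(3, heap))

# Accumulate win rates per (heap size, move) across many self-play games.
wins, visits = {}, {}
random.seed(0)
for _ in range(5000):
    history, winner = play_selfplay_game(random_policy)
    for heap, player, move in history:
        key = (heap, move)
        visits[key] = visits.get(key, 0) + 1
        wins[key] = wins.get(key, 0) + (player == winner)

# Taking all 3 stones from a heap of 3 ends the game in your favour,
# so its observed win rate converges to 1.0.
print(wins[(3, 3)] / visits[(3, 3)])
```

A real system would use these statistics to improve the policy and then repeat the loop; here they merely show how self-play turns raw games into training signal.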

According to Thore Graepel, research lead at DeepMind, AlphaGo's finished system was very good at working out which areas of the board to focus its thinking on, but much less good at deciding when to stop thinking and actually play a move.
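The "when to stop thinking" problem Graepel describes is a time-management question. As a hypothetical sketch (this is a common rule of thumb in game-tree search, not DeepMind's actual code): stop searching once the most-explored move is far enough ahead of the runner-up that more thinking is unlikely to change the choice, or once the clock budget runs out.

```python
def should_stop(visit_counts, time_spent, time_budget, lead_ratio=2.0):
    """Hypothetical early-stop rule for a move search.

    visit_counts maps candidate moves to how often the search explored them.
    Stop if the time budget is spent, or if the most-visited move has at
    least lead_ratio times the visits of the second-best candidate.
    """
    if time_spent >= time_budget:
        return True
    counts = sorted(visit_counts.values(), reverse=True)
    if len(counts) < 2:
        return True  # only one candidate: nothing left to compare
    return counts[0] >= lead_ratio * counts[1]

# Clear favourite: 120 visits vs 40, so stop early and play the move.
print(should_stop({"D4": 120, "Q16": 40, "C3": 10}, time_spent=5, time_budget=30))
# Still a close call: keep thinking.
print(should_stop({"D4": 50, "Q16": 40}, time_spent=5, time_budget=30))
```

The point of such a rule is exactly the trade-off in the article: search time saved on easy decisions can be banked for positions where the candidates remain close.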

Read more: AlphaGo taught itself how to win, but without humans it would have run out of time – The Guardian


Published by Mike Rawson

Mike Rawson has recently re-awoken a long-standing interest in robots and our automated future. He lives in London with a single android - a temperamental vacuum cleaner - but is looking forward to getting more cyborgs soon.

