Two recent accidents involving Tesla’s Autopilot system raise questions about how systems based on machine learning should be validated, and how they should be investigated when something goes wrong.

A fatal Tesla accident in Florida last month occurred when a Model S controlled by Autopilot crashed into a truck that the automated system failed to spot. Tesla tells drivers to pay attention to the road while using Autopilot, and explains in a disclaimer that the system may struggle in bright sunlight.

Today the National Highway Traffic Safety Administration said it was investigating another accident, in Pennsylvania last week, in which a Model X hit the barriers on both sides of a highway and overturned. The driver said his car was operating in Autopilot mode at the time.

Read more: If a Driverless Car Goes Bad We May Never Know Why

Published by Mike Rawson
