AI Has a Hallucination Problem That’s Proving Tough to Fix

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.
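The "subtle changes" mentioned above are typically computed from the model's own gradients. As a minimal sketch (not the method used in any particular paper at the conference), here is the fast gradient sign method (FGSM), a common way such adversarial perturbations are generated, applied to a toy logistic classifier so the gradient can be written analytically; in a deep network the same gradient would come from backpropagation:

```python
import numpy as np

def fgsm_perturb(x, w, b, label, eps):
    """One FGSM step: nudge each input feature by +/- eps in the
    direction that increases the classifier's loss."""
    # logistic model: p = sigmoid(w.x + b), label in {0, 1}
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad = (p - label) * w          # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad)  # bounded perturbation: |delta| <= eps

# toy example: the classifier decides by the sign of the first feature
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.4, 0.0])            # confidently classified as class 1
x_adv = fgsm_perturb(x, w, b, label=1, eps=0.5)
```

A small, bounded change to the input flips the prediction even though the two inputs look nearly identical, which is exactly the behavior that makes these attacks hard to defend against.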

In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend against or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye put up a webpage claiming to have “broken” seven of the new papers.

Read more: AI Has a Hallucination Problem That’s Proving Tough to Fix


Mike Rawson

Mike Rawson has recently re-awakened a long-standing interest in robots and our automated future. He lives in London with a single android - a temperamental vacuum cleaner - but is looking forward to getting more cyborgs soon.
