The dream of in-ear real-time translation goes back at least as far as Douglas Adams’s Babel Fish, a little alien that fits in a human ear, feeds on brain waves and, miraculously, excretes translations into the ear canal.
A team of engineers at the University of Antwerp in Belgium has developed a 3D-printed robotic arm that can act as a sign language translator for deaf people.
The past decade has brought dramatic changes in the way people lead their lives. The internet has connected people across the globe in ways that we never previously thought possible.
ARAB newspapers have a reputation for tamely taking the official line. On any given day, you might read that “a source close to the Iranian Foreign Ministry told Al-Hayat that ‘Tehran will continue to abide by the terms of the nuclear agreement as long as the other side does the same.’”
Human translators will face off against artificially intelligent (AI) machine translators next week in Seoul, South Korea. The competition, sponsored by Sejong Cyber University and the International Interpretation and Translation Association (IITA), will pit human translators against Google Translate and Naver Papago.
“I’M SORRY, Dave. I’m afraid I can’t do that.” With chilling calm, HAL 9000, the on-board computer in “2001: A Space Odyssey”, refuses to open the doors to Dave Bowman, an astronaut who has ventured outside the ship. HAL’s decision to turn on his human companion reflected a wave of fear about intelligent computers.
Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved.
The gap between human and machine translators could be narrowing as researchers find a new way to improve the learning capabilities of Google Translate’s neural network.
On the same day that Google announced its translation services were now operating with its Neural Machine Translation (NMT) system, a team of researchers released a paper on arXiv showing how the NMT system could be pushed one step further.
AI-POWERED VOICE INTERFACES have given rise to a new class of devices competing to supplant our phones. There are friendly wireless speakers in our kitchens and helpful digital passengers in our cars. Among the most promising are in-ear computers—wireless earbuds that operate as always-on, voice-controlled Internet communication gateways.
Over the past four years, readers have doubtless noticed quantum leaps in the quality of a wide range of everyday technologies. Most obviously, the speech-recognition functions on our smartphones work much better than they used to.
In fact, we are increasingly interacting with our computers by just talking to them, whether it’s Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or the many voice-responsive features of Google.