2014 in Computing: Breakthroughs in Artificial Intelligence
The past year saw progress in developing hardware and software capable of human feats of intelligence.
The holy grail of artificial intelligence—creating software that comes close to mimicking human intelligence—remains far off. But 2014 saw major strides in machine learning software that can gain abilities from experience. Companies in sectors from biotech to computing turned to these new techniques to solve tough problems or develop new products.
The most striking research results in AI came from the field of deep learning, which processes data using networks of crude simulated neurons.
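To make the "simulated neurons" idea concrete, here is a minimal sketch, not any lab's actual system, of a single layer of such neurons in Python with NumPy; the inputs, weights, and sigmoid activation are illustrative assumptions rather than details from the research described below.

```python
import numpy as np

def sigmoid(x):
    # Squash each neuron's summed input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def neuron_layer(inputs, weights, biases):
    # Each simulated "neuron" takes a weighted sum of its inputs,
    # adds a bias, and passes the result through a nonlinearity.
    return sigmoid(inputs @ weights + biases)

# Illustrative numbers only: 4 input values feeding 3 simulated neurons.
rng = np.random.default_rng(0)
inputs = rng.random(4)         # e.g., raw pixel intensities
weights = rng.random((4, 3))   # learned from examples in a real system
biases = np.zeros(3)

print(neuron_layer(inputs, weights, biases))  # 3 activation values in (0, 1)
```

Stacking many such layers, and adjusting the weights from large numbers of examples, is roughly what makes this kind of learning "deep."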
Work in deep learning often focuses on images, which are easy for humans to understand but very difficult for software to decipher. Researchers at Facebook used that approach to make a system that can tell almost as well as a human whether two different photos depict the same person. Google showed off a system that can describe scenes using short sentences.
Results like these have led leading computing companies to compete fiercely for AI researchers. Google paid more than $600 million for a machine learning startup called DeepMind at the start of the year. When MIT Technology Review caught up with the company’s founder, Demis Hassabis, later in the year, he explained how DeepMind’s work was shaped by groundbreaking research into the human brain.
The search company Baidu, nicknamed “China’s Google,” also spent big on artificial intelligence. It set up a lab in Silicon Valley to expand its existing research into deep learning, and to compete with Google and others for talent. Stanford AI researcher and onetime Google collaborator Andrew Ng was hired to lead that effort. In our feature-length profile, he explained how artificial intelligence could turn people who have never been on the Web into users of Baidu’s Web search and other services.
Machine learning was also a source of new products this year from computing giants, small startups, and companies outside the computer industry.
Microsoft drew on its research into speech recognition and language comprehension to create its virtual assistant Cortana, which is built into the mobile version of Windows. The app tries to enter a back-and-forth dialogue with people. That’s intended both to make it more endearing and to help it learn what went wrong when it makes a mistake.
Startups launched products that used machine learning for tasks as varied as helping you get pregnant, letting you control home appliances with your voice, and making plans via text message.
Some of the most interesting applications of artificial intelligence came in health care. IBM is now close to seeing a version of its Jeopardy!-winning Watson software help cancer doctors use genomic data to choose personalized treatment plans for patients. Applying machine learning to a genetic database enabled one biotech company to invent a noninvasive test that prevents unnecessary surgery.
Using artificial intelligence techniques on genetic data is likely to get a lot more common now that Google, Amazon, and other large computing companies are getting into the business of storing digitized genomes.
However, the most advanced machine learning software must be trained with large data sets, something that is very energy intensive, even for companies with sophisticated infrastructure. That's motivating work on a new type of "neuromorphic" chip modeled loosely on ideas from neuroscience. Such chips can run machine learning algorithms more efficiently.
This year, IBM began producing a prototype brain-inspired chip it says could be used in large numbers to build a kind of supercomputer specialized for learning. A more compact neuromorphic chip, developed by HRL, the research lab jointly owned by General Motors and Boeing, took flight in a tiny drone aircraft.
All this rapid progress in artificial intelligence led some people to ponder the possible downsides and long-term implications of the technology. One software engineer who has since joined Google cautioned that our instincts about privacy must change now that machines can decipher images.
Looking further ahead, biotech and satellite entrepreneur Martine Rothblatt predicted that our personal data could be used to create intelligent digital doppelgangers with a kind of life of their own. And neuroscientist Christof Koch, chief scientific officer of the Allen Institute for Brain Science in Seattle, warned that although intelligent software could never be conscious, it could still harm us if not designed correctly.
Meanwhile, a more benign view of the far future came from science fiction author Greg Egan. In a thoughtful response to the sci-fi movie Her, he suggested that conversational AI companions could make us better at interacting with other humans.