From Turing to Watson
Larry H Bernstein, MD, FCAP, Curator
LPBI
From Turing to Watson: The Long-Burning Hype of Machine Learning
Thomas Slowe, Founder & CEO, Nervve
http://www.rdmag.com/articles/2015/10/turing-watson-long-burning-hype-machine-learning?
[Machine Learning infographic: http://www.rdmag.com/sites/rdmag.com/files/MachineLearning_Infographicx500.jpg]
Each year, technology industry watchers anxiously await the release of Gartner’s Hype Cycle to see what’s rising, what’s falling and what’s completely fizzled when it comes to emerging technologies. Many observers specifically look for the “peak of inflated expectations” to see which technologies have hit their high point when it comes to media saturation, but still need several years before reaching their true potential. While there are many well-known categories on this year’s list, including 3-D printing, virtual reality and wearables, they’re joined by one that comes with a bit more mystery—machine learning.
For people in technology, the definition of machine learning is relatively simple. Machine learning is a form of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data.
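To make that definition concrete, here is a minimal sketch (a hypothetical illustration in Python using scikit-learn, not from the article): rather than being programmed with explicit rules, the model is fitted to labeled examples and then classifies data it has never seen. The toy “spam” features and counts below are invented purely for illustration.

    # Learning from examples instead of explicit rules.
    from sklearn.linear_model import LogisticRegression

    # Each row: [number of links, count of the word "free"]; label 1 = spam.
    X_train = [[0, 0], [1, 0], [5, 3], [7, 4], [0, 1], [6, 2]]
    y_train = [0, 0, 1, 1, 0, 1]

    model = LogisticRegression()
    model.fit(X_train, y_train)      # the "learning" step: no spam rule is ever written by hand
    print(model.predict([[4, 3]]))   # classifies an unseen message, e.g. [1] (likely spam)

The point is that exposing the model to new labeled data changes its behavior; the programmer never encodes the classification rule directly.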
However, if you ask someone on the street, you’re likely to get a blank stare. Some may be able to connect the dots to AI, or perhaps they can reference the technology’s most famous example to date, IBM Watson, thanks to its ability to defeat Jeopardy champions. In reality, machine learning plays a larger role in our everyday lives than most people realize. From Siri and Cortana, to our Bluetooth car entertainment systems, to visual technology that can be trained to spot a specific piece of clothing in fuzzy surveillance video, these automated technologies that many take for granted were developed thanks to a machine’s ability to “learn” from prior interactions.
While it may seem machine learning has developed overnight thanks to huge investments and breakthroughs from major players in Silicon Valley, it has in fact gone through a long and meandering history spanning more than 80 years. It has morphed from the research focus of a few dozen engineers into the power behind some of today’s most widespread consumer technologies.
In the beginning
While AT&T Bell Labs’ development of the electronic speech synthesizer in 1936 may have been the first major breakthrough for machine learning, the more well-known achievement from this early era was the Turing Test in 1950. Alan Turing introduced the test in a paper he opened with the simple words, “I propose to consider the question, ‘Can machines think?’” By arguing that a machine could be credited with intelligence if humans couldn’t reliably differentiate between a real person and a machine in basic textual conversations, Turing laid the groundwork for societal acceptance of the concept that machines can learn.
Major advancements in early machine learning weren’t limited solely to voice and text recognition. From the Rosenblatt Perceptron in 1957 to Larry Roberts’ computer vision dissertation in 1963, images also played a major role in molding future research within the artificial intelligence community.
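The perceptron itself is simple enough to sketch in a few lines. The following is a minimal from-scratch illustration of Rosenblatt’s learning rule (my own toy example, not from the article): weights are nudged toward each misclassified example until, for linearly separable data, a separating boundary emerges.

    # Rosenblatt-style perceptron: adjust weights on every misclassified example.
    def train_perceptron(samples, labels, lr=0.1, epochs=50):
        w = [0.0] * len(samples[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):        # y is +1 or -1
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                if y * activation <= 0:              # wrong side of the boundary
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    # Toy linearly separable data: the logical AND function.
    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    Y = [-1, -1, -1, 1]
    w, b = train_perceptron(X, Y)
    print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X])
    # -> [-1, -1, -1, 1]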
Relatable examples
So when did machine learning truly evolve into a real-world technology? A few major breakthroughs occurred in the 1970s that made the concept much more relatable. In 1976, a license plate recognition system was invented in the U.K. at the Police Scientific Development Branch. While license plate software didn’t become more widely used until a couple of decades later, this type of computer vision technology was the basis for the more high-tech versions used by law enforcement and intelligence agencies today.
Machine learning and robotics collided when the Stanford Cart successfully crossed a chair-filled room without human intervention in 1979. The process took more than five hours, thanks to many pauses so the cart could process what it was seeing and plan a new route. While Google’s self-driving cars certainly don’t take that long to navigate the streets of San Francisco or Austin, machine learning technology brings us closer than ever before to realizing technology that was previously only imagined in movies.
The modern era
The past two decades of AI and machine learning have seen the technology mature. The power of neural networks was first appreciated in the mid-1980s, but computers of the era were too weak to exploit it at a practical level. Geoffrey Hinton revived interest in neural networks in 2006 under the name “deep learning” when he demonstrated a system that learned to classify handwritten digits with high accuracy. Even more impressive was the system’s ability to generate novel handwritten examples it had never been shown.
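A rough modern analogue of that digit-classification demonstration (a hedged sketch, not Hinton’s actual deep belief network) can be run today in a few lines against scikit-learn’s built-in 8x8 handwritten-digit images:

    # A small multilayer neural network classifying handwritten digits.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)       # 1,797 labeled 8x8 digit images
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    net.fit(X_train, y_train)                 # learn weights from labeled examples
    print(net.score(X_test, y_test))          # held-out accuracy, typically well above 0.9

Note this sketch only classifies; generating novel digits, as Hinton’s generative model did, requires a different architecture.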

This work started a wave of major Silicon Valley investment in the technology that now powers many of the consumer-focused products we see today. From virtual assistants like Siri and Cortana, to virtual online chat agents, and even to our GPS systems, machine learning-based technology influences almost every aspect of our daily interactions.

However, one legacy company has made an outstanding and lasting impact on how the public perceives machine learning: IBM. From Deep Blue’s defeat of world chess champion Garry Kasparov in 1997, to Watson’s takedown of legendary Jeopardy champion Ken Jennings in 2011, one could argue IBM has done more to raise public awareness of a machine’s ability to learn than any other company. Watson’s achievements were particularly impressive because they rested not just on a deep store of question-and-answer knowledge, but on the ability to use natural language processing to understand Alex Trebek’s clues and signal its answers more quickly than its human opponents.
The future
As we look into the future of machine learning, some may have visions of machine overlords eventually taking over the world. However, we believe we’re still several decades away from contemplating this scenario. What we can choose to focus on instead is where the technology will generate value in the next five to 10 years.
While entirely autonomous machines are unfeasible in the near future, “person-in-the-loop” approaches have shown enormous value. From digital services, like systems that learn how to beat video games, to physical-world applications, like visually identifying possible infrastructure failures and helping doctors perform complicated surgeries, the societal possibilities for machine learning are endless. And when in doubt, just ask Siri!