Archive for the ‘Natural Language Processing (NLP)’ Category

Deep Reinforcement Learning Machine Has Taught Itself to Play Chess at Higher Levels

Reporter: Aviva Lev-Ari, PhD, RN

“Chess, after all, is special; it requires creativity and advanced reasoning. No computer could match humans at chess.” That was a plausible argument before IBM surprised the world with a chess-playing computer: in 1997, its Deep Blue machine defeated the reigning World Chess Champion, Garry Kasparov.

 

Matthew Lai picks up the story: “In the ensuing two decades, both computer hardware and AI research advanced the state-of-the-art chess-playing computers to the point where even the best humans today have no realistic chance of defeating a modern chess engine running on a smartphone.”

 

Now Lai has another surprise. His report on how a computer can teach itself chess—and not in the conventional way—is posted on arXiv under the title “Giraffe: Using Deep Reinforcement Learning to Play Chess.” Departing from the conventional approach of giving chess engines hardcoded rules, the project set out to use machine learning to figure out the game for itself, with deep learning applied throughout: “We use deep networks to evaluate positions, decide which branches to search, and order moves.”

 

As for other chess engines, Lai wrote, “almost all chess engines in existence today (and all of the top contenders) implement largely the same algorithms. They are all based on the idea of the fixed-depth minimax algorithm first developed by John von Neumann in 1928, and adapted for the problem of chess by Claude E. Shannon in 1950.”
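
For context, here is a minimal sketch of fixed-depth minimax in Python. The position interface and the evaluate() function are hypothetical placeholders for illustration, not code from Giraffe or any other engine:

    def minimax(position, depth, maximizing):
        # At the depth limit (or at a game-ending position), fall back to a
        # static evaluation of the board.
        if depth == 0 or position.is_terminal():
            return evaluate(position)  # hypothetical evaluation function
        if maximizing:
            # The side to move picks the move leading to the highest score...
            return max(minimax(position.apply(move), depth - 1, False)
                       for move in position.legal_moves())
        # ...while the opponent picks the move leading to the lowest score.
        return min(minimax(position.apply(move), depth - 1, True)
                   for move in position.legal_moves())

Real engines add refinements such as alpha-beta pruning on top of this skeleton, but the fixed-depth search-and-evaluate structure is the shared core Lai describes.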

 

Giraffe itself is a chess engine that uses self-play to discover all of its domain-specific knowledge. “Minimal hand-crafted knowledge is given by the programmer,” he said.
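
The self-play training signal is a form of temporal-difference learning (the paper describes a TD-Leaf(λ) variant). Below is a heavily simplified sketch of the underlying idea, with a plain lookup table standing in for Giraffe's deep network; every name and value is illustrative:

    ALPHA = 0.1  # learning rate (arbitrary illustrative value)

    def td_update(values, game_positions, outcome):
        # Nudge each position's value toward the value of the position that
        # followed it in the self-play game; the final position is nudged
        # toward the game's actual outcome (e.g. +1 win, 0 draw, -1 loss).
        targets = [values.get(p, 0.0) for p in game_positions[1:]] + [outcome]
        for position, target in zip(game_positions, targets):
            current = values.get(position, 0.0)
            values[position] = current + ALPHA * (target - current)

    values = {}
    td_update(values, ["opening", "middlegame", "endgame"], outcome=1.0)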

 

Results? Lai said, “The results showed that the learned system performs at least comparably to the best expert-designed counterparts in existence today, many of which have been fine tuned over the course of decades.”

 

OK, not at super-Grandmaster levels, but impressive enough. “With all our enhancements, Giraffe is able to play at the level of an FIDE [Fédération Internationale des Échecs, or World Chess Federation] International Master on a modern mainstream PC,” he stated. “While that is still a long way away from the top engines today that play at super-Grandmaster levels, it is able to defeat many lower-tier engines, most of which search an order of magnitude faster.”

 

Addressing the value of Lai’s work, MIT Technology Review stated: “In a world first, an artificial intelligence machine plays chess by evaluating the board rather than using brute force to work out every possible move.” Giraffe, the review said, taught itself to play chess by evaluating positions much more the way humans do.

 

Sourced through Scoop.it from: techxplore.com

See on Scoop.it – Cardiovascular and vascular imaging

Read Full Post »

Inside Facebook’s Quest for Software That Understands You | MIT Technology Review

Reporter: Aviva Lev-Ari, PhD, RN

 

A reincarnation of one of the oldest ideas in artificial intelligence could finally make it possible to truly converse with our computers. And Facebook has a chance to make it happen first.

Sourced through Scoop.it from: www.technologyreview.com

See on Scoop.it – Cardiovascular and vascular imaging

Read Full Post »

Can deep learning and dimensionality reduction visualize the entirety of Wikipedia?

Reporter: Aviva Lev-Ari, PhD, RN

Deep neural networks are an approach to machine learning that has revolutionized computer vision and speech recognition in the last few years, blowing the previous state of the art results out of the water. They’ve also brought promising results to many other areas, including language understanding and machine translation. Despite this, it remains challenging to understand what, exactly, these networks are doing.

 

Understanding neural networks is just scratching the surface, however, because understanding the network is fundamentally tied to understanding the data it operates on. The combination of neural networks and dimensionality reduction turns out to be a very interesting tool for visualizing high-dimensional data – a much more powerful tool than dimensionality reduction on its own.
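
As a concrete sketch of that combination, one can take the vectors produced by a trained network and project them down to two dimensions with t-SNE, for example via scikit-learn. The doc_vectors array below is a random stand-in for real learned embeddings:

    import numpy as np
    from sklearn.manifold import TSNE

    # Stand-in data: 1,000 items already embedded in 300 dimensions by some
    # trained network (word vectors, paragraph vectors, hidden activations...).
    doc_vectors = np.random.rand(1000, 300)

    # t-SNE squeezes the points into 2-D while trying to keep similar points
    # close together, which is what makes the resulting "map" readable.
    coords = TSNE(n_components=2, perplexity=30).fit_transform(doc_vectors)
    # coords is a (1000, 2) array ready to scatter-plot.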

 

Paragraph vectors, introduced by Le & Mikolov (2014), are vectors that represent chunks of text. Paragraph vectors come in a few variations but the simplest one, which we are using here, is basically some really nice features on top of a bag of words representation.

 

With word embeddings, we learn vectors in order to solve a language task involving the word. With paragraph vectors, we learn vectors in order to predict which words are in a paragraph.

 

Concretely, the neural network learns a low-dimensional approximation of word statistics for different paragraphs. In the hidden representation of this neural network, we get vectors representing each paragraph. These vectors have nice properties, in particular that similar paragraphs are close together.
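
A minimal sketch of learning paragraph vectors with the open-source gensim library's Doc2Vec implementation follows; the toy corpus and parameter values are illustrative, not the setup used in the work described here:

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Toy corpus: each "paragraph" is tokenized and tagged with an identifier.
    corpus = [
        TaggedDocument(["the", "cat", "sat", "on", "the", "mat"], ["doc0"]),
        TaggedDocument(["a", "dog", "chased", "the", "cat"], ["doc1"]),
        TaggedDocument(["stock", "markets", "fell", "sharply", "today"], ["doc2"]),
    ]

    # Learn a low-dimensional vector per paragraph by predicting its words.
    model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

    # Similar paragraphs end up close together in the learned space.
    print(model.dv.most_similar("doc0"))  # gensim 4.x; use model.docvecs in 3.x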

 

Now, Google has some pretty awesome people. Andrew Dai, Quoc Le, and Greg Corrado decided to create paragraph vectors for some very interesting data sets. One of those was Wikipedia, creating a vector for every English Wikipedia article. The result is that we get a visualization of the entirety of Wikipedia. A map of Wikipedia. A large fraction of Wikipedia’s articles fall into a few broad topics: sports, music (songs and albums), films, species, and science.

Sourced through Scoop.it from: colah.github.io

See on Scoop.it – Cardiovascular and vascular imaging

Read Full Post »

Machine-Learning Supercomputer Woven from Idle Computers to Rival Google in Power

Reporter: Aviva Lev-Ari, PhD, RN

Sentient claims to have assembled machine-learning muscle to rival Google by rounding up idle computers.

 

Recent improvements in speech and image recognition have come as companies such as Google build bigger, more powerful systems of computers to run machine-learning software. Now a relative minnow, a private company called Sentient with only about 70 employees, says it can cheaply assemble even larger computing systems to power artificial-intelligence software. The company’s approach may not be suited to all types of machine learning, a technology that has uses as varied as facial recognition and financial trading. Sentient has not published details, but says it has shown that it can put together enough computing power to produce significant results in some cases.

 

Sentient’s power comes from linking up hundreds of thousands of computers over the Internet to work together as if they were a single machine. The company won’t say exactly where all the machines it taps into are. But many are idle inside data centers, the warehouse-like facilities that power Internet services such as websites and mobile apps, says Babak Hodjat, cofounder and chief scientist at Sentient. The company pays a data-center operator to make use of its spare machines.

 

Data centers often have significant numbers of idle machines because they are built to handle surges in demand, such as a rush of sales on Black Friday. Sentient has created software that connects machines in different places over the Internet and puts them to work running machine-learning software as if they were one very powerful computer. That software is designed to keep data encrypted as much as possible so that what Sentient is working on, perhaps for a client, is kept confidential.

 

Sentient can get up to one million processor cores working together on the same problem for months at a time, says Adam Beberg, principal architect for distributed computing at the company. Google’s biggest machine-learning systems don’t reach that scale, he says. A Google spokesman declined to share details of the company’s infrastructure and noted that results obtained using machine learning are more important than the scale of the computer system behind it. Google uses machine learning widely, in areas such as search, speech recognition and ad targeting.

 

Beberg helped pioneer the idea of linking up computers in different places to work together on a problem (see “Innovators Under 35: 1999”). He was a founder of Distributed.net, a project that was one of the first to demonstrate that idea at large scale. Its technology led to efforts such as SETI@home and Folding@home, in which millions of people installed software so their PCs could help search for alien life or contribute to molecular biology research.

 

Sentient was founded in 2007 and has received over $140 million in investment funding, with just over $100 million of that received late last year. The company has so far focused on using its technology to power a machine-learning technique known as evolutionary algorithms. That involves “breeding” a solution to a problem from an initial population of many slightly different algorithms. The best performers of the first generation are used to form the basis of the next, and over successive generations the solutions get better and better.
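
A toy, self-contained example of this generate-select-breed loop (none of it is Sentient's code; the fitness function and parameters are arbitrary):

    import random

    def fitness(genome):
        # Arbitrary toy objective: maximize the sum of the genome's values.
        return sum(genome)

    def evolve(pop_size=100, genome_len=20, generations=50, mutation_rate=0.05):
        # Start from a population of random candidate solutions.
        population = [[random.random() for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the best performers of this generation...
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 5]
            # ...and breed them (crossover plus mutation) to form the next one.
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)
                child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate
                         else g
                         for g in a[:cut] + b[cut:]]
                children.append(child)
            population = children
        return max(population, key=fitness)

    best = evolve()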

 

Sentient currently earns some revenue from operating financial-trading algorithms created by running its evolutionary process for months at a time on hundreds of thousands of processors. But the company now plans to use its infrastructure to offer services targeted at industries such as health care or online commerce, says Hodjat.

 

Sourced through Scoop.it from: www.technologyreview.com

See on Scoop.it – Cardiovascular and vascular imaging

Read Full Post »

Google’s fact-checking bots are automatically building the Knowledge Vault for access to the world’s facts

Reporter: Aviva Lev-Ari, PhD, RN

The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world’s facts

GOOGLE is building the largest store of knowledge in human history – and it’s doing so without any human help.

 

Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.

 

The breadth and accuracy of this gathered knowledge is already becoming the foundation of systems that allow robots and smartphones to understand what people ask them. It promises to let Google answer questions like an oracle rather than a search engine, and even to turn a new lens on human history.

 

Knowledge Vault is a type of “knowledge base” – a system that stores information so that machines as well as people can read it. Where a database deals with numbers, a knowledge base deals with facts. When you type “Where was Madonna born” into Google, for example, the place given is pulled from Google’s existing knowledge base.
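
To make the distinction concrete, knowledge bases typically store facts as subject-predicate-object triples. A minimal illustrative sketch (not Google's internal format):

    # A fact is a (subject, predicate, object) triple.
    facts = [
        ("Madonna", "born_in", "Bay City, Michigan"),
        ("Bay City, Michigan", "located_in", "United States"),
    ]

    def query(subject, predicate):
        # Answer "Where was Madonna born?"-style questions by matching triples.
        return [obj for s, p, obj in facts if s == subject and p == predicate]

    print(query("Madonna", "born_in"))  # ['Bay City, Michigan']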

 

This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far.

 

So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.

 

Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as “confident facts”, to which Google’s model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.
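
The 90 per cent cut-off can be pictured with a toy scoring step. The combination rule and the numbers below are illustrative assumptions, not the published Knowledge Vault model:

    # Toy fusion: treat a fact as true unless every supporting source is wrong,
    # so combine the per-source error probabilities.
    def combined_confidence(source_confidences):
        p_all_wrong = 1.0
        for p in source_confidences:
            p_all_wrong *= (1.0 - p)
        return 1.0 - p_all_wrong

    sources = [0.7, 0.6, 0.8]  # made-up extractor confidences for one fact
    score = combined_confidence(sources)
    print(round(score, 3), score > 0.90)  # 0.976 True -> a "confident fact"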

 

“It’s a hugely impressive thing that they are pulling off,” says Fabian Suchanek, a data scientist at Télécom ParisTech in France. Google’s Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.

 

Knowledge Vault offers Google fast, automatic expansion of its knowledge – and it’s only going to get bigger. Beyond analysing the text on a webpage for facts to feed its knowledge base, Google can also peer under the surface of the web, hunting for hidden sources of data such as the figures that feed Amazon product pages.

 

Tom Austin, a technology analyst at Gartner in Boston, says that the world’s biggest technology companies are racing to build similar vaults. “Google, Microsoft, Facebook, Amazon and IBM are all building them, and they’re tackling these enormous problems that we would never even have thought of trying 10 years ago,” he says.

Source: www.newscientist.com

See on Scoop.it – Cardiovascular and vascular imaging

Read Full Post »
