
Archive for the ‘Deep Learning’ Category

Machines are becoming more creative than humans

Reporter: Aviva Lev-Ari, PhD, RN

 

Can machines be creative? Recent successes in AI have shown that machines can now perform at human levels in many tasks that, just a few years ago, were considered to be decades away, like driving cars, understanding spoken language, and recognizing objects. But these are all tasks where we know what needs to be done, and the machine is just imitating us. What about tasks where the right answers are not known? Can machines be programmed to find solutions on their own, and perhaps even come up with creative solutions that humans would find difficult?

 

The answer is a definite yes! There are branches of AI focused precisely on this challenge, including evolutionary computation and reinforcement learning. Like the popular deep learning methods, which are responsible for many of the recent AI successes, these branches of AI have benefitted from the million-fold increase in computing power we’ve seen over the last two decades. There are now antennas in spacecraft so complex they could only be designed through computational evolution. There are game-playing agents in Othello, Backgammon, and most recently in Go that have learned to play at the level of the best humans, and in the case of AlphaGo, even beyond the ability of the best humans. There are non-player characters in Unreal Tournament that have evolved to be indistinguishable from humans, thereby passing the Turing test, at least for game bots. And in finance, there are computational traders in the stock market evolved to make real money.

 

Many new applications have suddenly come within our reach thanks to computational creativity — even though most of us do not realize it yet. If you are facing a design problem where potential solutions can be tested automatically, chances are you could evolve those solutions automatically as well. In areas where computers are already used to draft designs, the natural next step is to harness evolutionary search. This will allow human designers to gain more traction for their ideas, such as machine parts that are easier to manufacture, stock portfolios that minimize risk, or websites that result in more conversions. In other areas, it may take some engineering effort to define the design problem for the computer, but the effort may be rewarded by truly novel designs, such as finless rockets, new video game genres, personalized preventive medicine, and safer and more efficient traffic.
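The idea of "evolving solutions automatically" can be sketched in a few lines. This is a toy illustration, not any production system: if a candidate design can be scored automatically by a fitness function, a minimal evolutionary loop can search the design space by mutating and selecting candidates.

```python
import random

def evolve(fitness, genome_length=10, pop_size=20, generations=100,
           mutation_rate=0.1, seed=0):
    """Minimal evolutionary search over bit-string 'designs'."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the better half as parents.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        # Each parent produces one mutated child.
        children = [[1 - g if rng.random() < mutation_rate else g for g in p]
                    for p in parents]
        pop = parents + children  # elitism: the best designs survive unchanged
    return max(pop, key=fitness)

# Toy "design problem": maximise the number of 1-bits (OneMax).
best = evolve(fitness=sum)
print(sum(best))
```

The only domain-specific piece is the fitness function; swapping in a manufacturability score, a portfolio-risk measure, or a conversion-rate estimate turns the same loop into a search over those designs.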

Sourced through Scoop.it from: venturebeat.com

See on Scoop.it: Cardiovascular and vascular imaging

Read Full Post »

Geoffrey Hinton, the ‘godfather’ of deep learning, on AlphaGo

Reporter: Aviva Lev-Ari, PhD, RN

The scientist who helped develop the neural networks behind Google’s AlphaGo, which beat grandmaster Lee Sedol, on the past, present and future of AI

Sourced through Scoop.it from: www.macleans.ca


Read Full Post »

The superhero of artificial intelligence: can this genius keep it in check?

 

Reporter: Aviva Lev-Ari, PhD, RN

With his company DeepMind, Londoner Demis Hassabis is leading Google’s project to build software more powerful than the human brain. But what will this mean for the future of humankind?

Sourced through Scoop.it from: www.theguardian.com


Watch Video

https://youtu.be/SUbqykXVx0A

My first encounter with Hassabis was back in the summer of 2014, a few months after the DeepMind acquisition. Since then, I’ve observed him at work in a variety of environments and have interviewed him formally for this profile on three separate occasions over the past eight months. In that time I’ve watched him evolve from Google’s AI genius to a compelling communicator who has found an effective way to describe to non-scientists like me his vastly complex work – about which he is infectiously passionate – and why it matters. Unpretentious and increasingly personable, he is very good at breaking down DeepMind’s approach: namely, their combining of old and new AI techniques – such as, in Go, pairing traditional “tree search” methods for analysing moves with modern “deep neural networks”, which approximate the web of neurons in the brain – and their methodical “marriage” of different areas of AI research.
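That division of labour between old and new techniques can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not DeepMind's code: a "policy network" prunes the candidate moves before searching, and a "value network" scores the resulting positions instead of searching them to the end.

```python
import random

def policy_net(position, moves, rng):
    """Stand-in policy network: assign each legal move a prior probability.
    In AlphaGo this is a deep network trained on expert games; here it is
    a toy random scorer."""
    priors = {m: rng.random() for m in moves}
    total = sum(priors.values())
    return {m: p / total for m, p in priors.items()}

def value_net(position):
    """Stand-in value network: score a position in [-1, 1].
    A real value network is learned from self-play."""
    return (hash(position) % 2001 - 1000) / 1000.0

def search(position, legal_moves, apply_move, top_k=3, seed=0):
    """Prune with the policy net, evaluate leaves with the value net."""
    rng = random.Random(seed)
    moves = legal_moves(position)
    priors = policy_net(position, moves, rng)
    # Old idea: search over candidate moves. New idea: let learned
    # networks decide which moves to consider and how good they are.
    candidates = sorted(moves, key=priors.get, reverse=True)[:top_k]
    # The opponent moves next, so our value is the negation of theirs.
    return max(candidates, key=lambda m: -value_net(apply_move(position, m)))

# Toy usage: a "position" is a string, a "move" appends a letter.
best = search("start", legal_moves=lambda p: ["a", "b", "c", "d"],
              apply_move=lambda p, m: p + m)
```

In AlphaGo these components sit inside a Monte Carlo tree search and the networks are deep neural nets; the sketch keeps only the structure of the combination.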

SOURCE

https://www.theguardian.com/technology/2016/feb/16/demis-hassabis-artificial-intelligence-deepmind-alphago

Read Full Post »

Best of 2015: Deep Learning Machine Beats Humans in IQ Test | MIT Technology Review

Reporter: Aviva Lev-Ari, PhD, RN

Computers have never been good at answering the type of verbal reasoning questions found in IQ tests. Now a deep learning machine unveiled in China is changing that. From June …

Sourced through Scoop.it from: www.technologyreview.com


Read Full Post »

Deep Reinforcement Learning Machine Has Taught Itself to Play Chess at Higher Levels

Reporter: Aviva Lev-Ari, PhD, RN

“Chess, after all, is special; it requires creativity and advanced reasoning. No computer could match humans at chess.” That was a common argument before IBM surprised the world with a chess-playing computer. In 1997, IBM’s Deep Blue defeated the reigning World Chess Champion, Garry Kasparov.

 

Matthew Lai records the rest: “In the ensuing two decades, both computer hardware and AI research advanced the state-of-the-art chess-playing computers to the point where even the best humans today have no realistic chance of defeating a modern chess engine running on a smartphone.”

 

Now Lai has another surprise. His report on how a computer can teach itself chess, and not in the conventional way, is on arXiv. The title of the paper is “Giraffe: Using Deep Reinforcement Learning to Play Chess.” Departing from the conventional method of teaching computers how to play chess by giving them hardcoded rules, this project set out to use machine learning to figure out how to play. Lai applied deep learning throughout the engine: “We use deep networks to evaluate positions, decide which branches to search, and order moves.”

 

As for other chess engines, Lai wrote, “almost all chess engines in existence today (and all of the top contenders) implement largely the same algorithms. They are all based on the idea of the fixed-depth minimax algorithm first developed by John von Neumann in 1928, and adapted for the problem of chess by Claude E. Shannon in 1950.”
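The fixed-depth minimax idea Lai describes fits in a few lines of Python. The game interface below is a hypothetical toy, not Giraffe's: the search looks ahead a fixed number of plies, then hands leaf positions to an evaluation function. That leaf evaluation is the hand-crafted component Giraffe replaces with a learned deep network.

```python
def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Fixed-depth minimax: search `depth` plies, then score the leaves
    with `evaluate` - in a chess engine, a hand-tuned material/position
    formula; in Giraffe, a learned deep network."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    scores = (minimax(apply_move(position, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate)
              for m in moves)
    return max(scores) if maximizing else min(scores)

# Toy game: a position is a number; a move adds 1, 2, or 3 to it.
# The maximizer adds 3, then the minimizer adds 1, so the score is 4.
score = minimax(0, depth=2, maximizing=True,
                legal_moves=lambda p: [1, 2, 3],
                apply_move=lambda p, m: p + m,
                evaluate=lambda p: p)
```

Real engines add alpha-beta pruning and search far deeper, but the skeleton is this same 1928/1950 recursion.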

 

Giraffe is a chess engine that uses self-play to discover all its domain-specific knowledge. “Minimal hand-crafted knowledge is given by the programmer,” he said.

 

Results? Lai said, “The results showed that the learned system performs at least comparably to the best expert-designed counterparts in existence today, many of which have been fine-tuned over the course of decades.”

 

OK, not at super-Grandmaster levels, but impressive enough. “With all our enhancements, Giraffe is able to play at the level of an FIDE [Fédération Internationale des Échecs, or World Chess Federation] International Master on a modern mainstream PC,” he stated. “While that is still a long way away from the top engines today that play at super-Grandmaster levels, it is able to defeat many lower-tier engines, most of which search an order of magnitude faster.”

 

Addressing the value of Lai’s paper, MIT Technology Review stated that “In a world first, an artificial intelligence machine plays chess by evaluating the board rather than using brute force to work out every possible move.” Giraffe, said the review, taught itself to play chess by evaluating positions much more like humans do.

 

Sourced through Scoop.it from: techxplore.com


Read Full Post »
