Google Simulates Brain Capable Of Detecting Cats On YouTube

Google AI Brain Finds Cats On YouTube
Image Source: Jim Wilson/The New York Times

Google researchers in the company's X Lab say they are getting computers to simulate the learning process of the human brain. Computers programmed with algorithms intended to mimic neural connections "learned" to recognize cats after being shown a sampling of YouTube videos.
Inside the secret Google X lab, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.

Now Google researchers have created one of the largest neural networks for machine learning by connecting 16,000 computer processors, then set it loose on the internet to learn on its own.

The neural network taught itself to recognize cats from YouTube images, which was no small feat. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland, and the paper has been made available online.

"We never told it during the training, 'This is a cat,' " said Dr. Jeff Dean, who originally helped Google design the software that lets it easily break programs into many tasks that can be computed simultaneously. "It basically invented the concept of a cat. We probably have other ones that are side views of cats."
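The idea Dr. Dean describes, splitting one computation into many tasks that run simultaneously, can be sketched in a few lines. This is a toy illustration using Python's standard thread pool, not Google's actual infrastructure; `score_frame` is a hypothetical per-frame computation standing in for real work:

```python
from concurrent.futures import ThreadPoolExecutor

def score_frame(pixels):
    # Hypothetical per-frame work: here, just the mean pixel intensity.
    return sum(pixels) / len(pixels)

# Eight "video frames" of 64 pixels each.
frames = [list(range(start, start + 64)) for start in range(0, 512, 64)]

# Break the job into one task per frame and run them on a pool of workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(score_frame, frames))
```

Each frame is scored independently, so the work distributes cleanly; at Google's scale the same idea spans thousands of machines rather than four threads.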

The "Google Brain" assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. Such a process is known as deep learning. The scientists said that the system appeared to be a cybernetic cousin to what takes place in the brain's visual cortex.
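As a loose illustration of the building block behind such a hierarchy, here is a minimal single-layer autoencoder in NumPy: it learns a small set of features that reconstruct its input, and stacking layers like this is one way deep learning systems extract successively more general features. This is a toy sketch on random data, not the researchers' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 "image patches" of 16 pixels each.
X = rng.normal(size=(200, 16))

# One autoencoder layer: encode 16 inputs into 4 features, then reconstruct.
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
losses = []
for _ in range(200):
    H = sigmoid(X @ W_enc)   # hidden features
    X_hat = H @ W_dec        # reconstruction from those features
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    # Gradient descent on the mean-squared reconstruction error.
    dW_dec = H.T @ err / len(X)
    dH = (err @ W_dec.T) * H * (1 - H)
    dW_enc = X.T @ dH / len(X)
    W_dec -= lr * dW_dec
    W_enc -= lr * dW_enc
```

Reconstruction error falls as the layer learns; in a deep network the hidden features of one layer become the input of the next, building the feature hierarchy described above.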


One of the neurons in the researchers' artificial neural network, trained on still frames from unlabeled YouTube videos, learned to detect cats.

While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Andrew Ng said he was cautious about drawing parallels between his software system and biological life.

"A loose and frankly awful analogy is that our numerical parameters correspond to synapses," said Dr. Ng. He noted that one difference was that despite the immense computing capacity that the scientists used, it was still dwarfed by the number of connections found in the brain.

"The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts," said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: "The scale of modeling the full human visual cortex may be within reach before the end of the decade."

Google scientists said that the research project had now moved out of the Google X laboratory and was being pursued in the division that houses the company's search business and related services. Potential applications include improvements to image search, speech recognition and machine language translation.

Despite their success, the Google researchers remained cautious about whether they had hit upon the holy grail of machines that can teach themselves.

"It'd be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don't quite have the right algorithm yet," said Dr. Ng.


According to Dr. Ng and Dr. Dean:
We’re actively working on scaling our systems to train even larger models. To give you a sense of what we mean by “larger”—while there’s no accepted way to compare artificial neural networks to biological brains, as a very rough comparison an adult human brain has around 100 trillion connections. So we still have lots of room to grow.
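The gap Dr. Ng and Dr. Dean allude to is easy to put a number on. The published network is widely reported to have on the order of one billion connections, against the roughly 100 trillion cited above; both figures are rough, so treat the result as a back-of-the-envelope estimate:

```python
# Back-of-the-envelope scale comparison from the figures above.
network_connections = 1_000_000_000      # ~1e9, as widely reported for the 2012 model
brain_connections = 100_000_000_000_000  # ~1e14, per the quote above

gap = brain_connections / network_connections
print(f"The brain has roughly {gap:,.0f}x more connections")
# → The brain has roughly 100,000x more connections
```

Even allowing for loose estimates on both sides, a five-orders-of-magnitude gap is what "lots of room to grow" means here.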

And this isn’t just about images—we’re actively working with other groups within Google on applying this artificial neural network approach to other areas such as speech recognition and natural language modeling. Someday this could make the tools you use every day work better, faster and smarter.






Sources: SMH.com.au, The New York Times

By 33rd Square

