Beyond Deep Learning

Source: VentureBeat, Apr 2017

At the recent AI By The Bay conference, Francois Chollet emphasized that deep learning is simply more powerful pattern recognition than previous statistical and machine learning methods. “The most important problem for AI today is abstraction and reasoning,” explains Chollet, an AI researcher at Google and famed inventor of the widely used deep learning library Keras. “Current supervised perception and reinforcement learning algorithms require lots of data, are terrible at planning, and are only doing straightforward pattern recognition.”

What’s beyond deep learning? 

How can we overcome the limitations of deep learning and proceed toward general artificial intelligence? Chollet’s initial plan of attack involves using “super-human pattern recognition, like deep learning, to augment explicit search and formal systems,” starting with the field of mathematical proofs. Automated Theorem Provers (ATPs) typically use brute force search and quickly hit combinatorial explosions in practical use. In the DeepMath project, Chollet and his colleagues used deep learning to assist the proof search process, simulating a mathematician’s intuitions about what lemmas (a subsidiary or intermediate theorem in an argument or proof) might be relevant.
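As a rough illustration of learned premise selection, consider the sketch below. It is not the DeepMath architecture (which embeds full formula syntax with neural networks trained on real proof corpora); the token vocabulary, the bag-of-tokens featurizer, and the random weight matrix `W` are all invented stand-ins.

```python
import numpy as np

# Hypothetical token vocabulary; DeepMath itself embeds full formula
# syntax with trained neural networks rather than counting tokens.
VOCAB = ["forall", "exists", "+", "*", "=", "<", "succ", "0"]

def featurize(statement):
    """Bag-of-tokens vector for a statement (a stand-in for a learned embedding)."""
    return np.array([statement.count(tok) for tok in VOCAB], dtype=float)

def relevance(conjecture, lemma, W):
    """Score how useful `lemma` looks for proving `conjecture`.
    A trained model would learn W; here it is random, for shape only."""
    return float(featurize(conjecture) @ W @ featurize(lemma))

rng = np.random.default_rng(0)
W = rng.normal(size=(len(VOCAB), len(VOCAB)))  # pretend-trained weights

conjecture = "forall x exists y x + succ 0 = y"
lemmas = ["forall x x + 0 = x", "forall x x * 0 = 0", "exists y y < succ 0"]

# Rank candidate lemmas so the prover explores promising ones first
# instead of brute-forcing the whole combinatorial space.
for lemma in sorted(lemmas, key=lambda l: -relevance(conjecture, lemma, W)):
    print(f"{relevance(conjecture, lemma, W):7.3f}  {lemma}")
```

The point is the division of labor: the formal system still guarantees the correctness of any proof it finds, while the learned scorer only decides where to look first.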

Another approach is to develop more explainable models. In handwriting recognition, neural nets currently need to be trained on tens to hundreds of thousands of examples to perform decent classification. Instead of looking only at pixels, however, John Launchbury of DARPA explains that generative models can be taught the strokes behind any given character and can use this physical construction information to disambiguate between similar digits, such as a 9 and a 4.
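To make the stroke idea concrete, here is a minimal sketch, not any actual DARPA system: each digit class is modeled as a hypothetical “program” of pen moves, and classification asks which generative program best explains the observed strokes. The templates and noise model are invented for illustration.

```python
import numpy as np

# Hypothetical stroke templates: each digit is a sequence of (dx, dy) pen
# moves. A real generative model would learn these from data.
STROKE_MODELS = {
    "4": np.array([(0, -1), (1, 0), (0, 1), (0, -2)], dtype=float),
    "9": np.array([(1, 0), (0, -1), (-1, 0), (0, 1), (0, -2)], dtype=float),
}

def log_likelihood(observed, template, noise=0.3):
    """Score observed pen moves under Gaussian noise around each template move."""
    if len(observed) != len(template):
        return -np.inf  # crude: a real model would align strokes of unequal length
    return -np.sum((observed - template) ** 2) / (2 * noise ** 2)

# A noisy rendition of the "4" stroke program.
observed = np.array([(0, -1), (1, 0.1), (0, 0.9), (0, -2)], dtype=float)

# Classify by asking which stroke program explains the input best.
best = max(STROKE_MODELS, key=lambda d: log_likelihood(observed, STROKE_MODELS[d]))
print("classified as", best)
```

Because the model reasons in terms of strokes rather than pixels, its decision can be explained (“the pen path matches how a 4 is drawn”) and it can get by with far fewer training examples per class.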

Yann LeCun, inventor of convolutional neural networks (CNNs) and director of AI research at Facebook, proposes “energy-based models” as a method of overcoming limits in deep learning. Typically, a neural network is trained to produce a single output, such as an image label or sentence translation. LeCun’s energy-based models instead give an entire set of possible outputs, such as the many ways a sentence could be translated, along with scores for each configuration.
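A minimal sketch of that framing, with made-up feature vectors and pretend-trained weights, reduces the model to a single compatibility function over (input, output) pairs; instead of emitting one answer, it scores every candidate configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))  # pretend-trained compatibility weights

def energy(x, y):
    """Energy of an (input, output) pairing; lower means more compatible."""
    return float(-(x @ W @ y))

x = rng.normal(size=4)  # placeholder feature vector for a source sentence
candidates = {f"translation_{i}": rng.normal(size=4) for i in range(3)}

# Unlike a single-output network, an energy-based model ranks the whole
# set of candidate outputs rather than committing to one.
for name, y in sorted(candidates.items(), key=lambda kv: energy(x, kv[1])):
    print(f"{name}: energy = {energy(x, y):.3f}")
```

Training then amounts to shaping the energy surface so that correct pairings sit lower than incorrect ones.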

Geoffrey Hinton, widely called the “father of deep learning,” wants to replace neurons in neural networks with “capsules” that he believes more accurately reflect the cortical structure of the human brain. “Evolution must have found an efficient way to adapt features that are early in a sensory pathway so that they are more helpful to features that are several stages later in the pathway,” Hinton explains. He hopes that capsule-based neural network architectures will be more resistant to the adversarial attacks that researchers like Ian Goodfellow have demonstrated.
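Hinton’s group has since published one concrete formulation of this idea, routing-by-agreement between capsules; the sketch below follows that recipe in miniature, with random numbers standing in for the lower capsules’ learned predictions. Each capsule outputs a vector whose orientation encodes pose and whose length encodes confidence.

```python
import numpy as np

def squash(v, axis=-1):
    """Capsule nonlinearity: keeps a vector's orientation, maps its length into [0, 1)."""
    norm_sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1 + norm_sq)) * v / np.sqrt(norm_sq + 1e-9)

def route(u_hat, iters=3):
    """Routing-by-agreement between capsule layers.
    u_hat holds each of I lower capsules' predictions for each of J upper
    capsules, shape (I, J, D)."""
    I, J, _ = u_hat.shape
    b = np.zeros((I, J))                                      # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over upper capsules
        s = np.einsum("ij,ijd->jd", c, u_hat)                 # weighted sum per upper capsule
        v = squash(s)                                         # upper-capsule output vectors
        b += np.einsum("ijd,jd->ij", u_hat, v)                # agreement strengthens routes
    return v

rng = np.random.default_rng(2)
u_hat = rng.normal(size=(6, 3, 8))  # 6 lower capsules predicting 3 upper ones
print(route(u_hat).shape)           # (3, 8): one pose vector per upper capsule
```

Routing-by-agreement replaces max-pooling: each lower capsule sends its output toward the higher capsule that best predicts it, which is one reason to hope such networks generalize more robustly.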

Perhaps all of these approaches to overcoming the limits of deep learning will bear fruit. Perhaps none of them will. Only time and continued investment in AI research will tell.

 
