AI Machine Learning Breakthrough Is a Twist on Brain Replay



Recently, researchers affiliated with Baylor College of Medicine, the University of Cambridge, the University of Massachusetts Amherst, and Rice University created a new way of adapting a neuroscience concept called “brain replay” to the digital realm of artificial neural networks, enabling continual learning.

From a neuroscience perspective, brain replay is analogous to a streaming service activating repeat showings from its vast archive of stored, pre-recorded content. Whether asleep or awake, the brain can replay memories by reactivating the neural activity patterns that represent prior experiences. This memory replay originates in the hippocampus, then continues in the cortex.

The research trio of Hava Siegelmann, Andreas Tolias, and Gido van de Ven published a study in Nature Communications on August 13, 2020, that demonstrates state-of-the-art performance from neural networks by deploying a new twist on mimicking brain replay.

From an educational perspective, there is a wide gap between recalling information and understanding the underlying concepts. The human brain has the remarkable ability to learn by building on prior experiences without starting from scratch each time, and without having to memorize every example. Case in point: a person can readily identify a soft-bodied ocean creature with a large bulbous head and eight tentacled arms as an octopus without having first memorized each of the roughly 300 species of octopus in existence. This ability is called generalization, and it is one in which machine learning is deficient.

The brain is the inspiration for artificial intelligence (AI) machine learning. The layers of nodes in artificial neural networks are a synthetic nod to biological neurons. Yet deep learning is inferior to the human brain in its ability to generalize concepts. In task-incremental learning for classification, it seems logical to expect that an algorithm initially trained to classify octopuses and crabs, then squids and starfishes, would be fully capable of differentiating octopuses from starfishes. However, artificial neural networks fall short here, and they often require costly retraining to learn new tasks, such as, in this example, distinguishing between an octopus and a starfish. This tendency to overwrite previously learned knowledge when training on new tasks is known as catastrophic forgetting. Additionally, training neural networks is not only resource-intensive but can also generate a significant carbon footprint.

How can the problem of artificial neural network forgetfulness be solved? One approach is exact replay: the neural network stores past data examples in memory and retrieves them during later training. But this memory-intensive machine learning method is costly and time-consuming, and industry-specific data privacy considerations may eliminate it as an option altogether.
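The exact-replay approach can be sketched as a bounded memory buffer whose stored examples are mixed into each new training batch. This is a minimal illustrative sketch, not the method from the study (which avoids storing data); the class name and the reservoir-sampling eviction policy are my own choices:

```python
import random

class ReplayBuffer:
    """Bounded store of past training examples for exact replay (illustrative)."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.examples = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling keeps a uniform random sample of all data seen so far.
        self.seen += 1
        if len(self.examples) < self.capacity:
            self.examples.append(example)
        else:
            i = self.rng.randrange(self.seen)
            if i < self.capacity:
                self.examples[i] = example

    def sample(self, k):
        # Mix these stored examples into each new training batch to reduce forgetting.
        return self.rng.sample(self.examples, min(k, len(self.examples)))
```

The memory cost is what motivates the generative alternative: the buffer must grow (or discard ever more data) as tasks accumulate.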

Another approach is to generate the data for replay. A generative replay architecture normally consists of two parts: a main neural network model that acts like the cortex, and a generator neural network that acts like the hippocampus.
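That two-part loop can be sketched as follows, using toy stand-in classes. Everything here (ToyModel, its methods, the function name) is an illustrative assumption for exposition, not the researchers’ code:

```python
import copy
import random

class ToyModel:
    """Stand-in for a neural network; it simply records what it was trained on."""
    def __init__(self):
        self.trained_on = []

    def snapshot(self):
        return copy.deepcopy(self)

    def train_step(self, batch):
        self.trained_on.extend(batch)

    def predict(self, x):
        return ("pseudo-label", x)  # placeholder for a soft target

    def generate(self, n):
        return [random.random() for _ in range(n)]  # placeholder pseudo-inputs

def train_task_with_generative_replay(main_model, generator, task_data):
    # Freeze copies of both models as trained on previous tasks;
    # the frozen pair supplies replay while the live models learn the new task.
    old_generator = generator.snapshot()
    old_main = main_model.snapshot()
    for batch in task_data:
        # "Hippocampus": dream up pseudo-examples resembling earlier tasks.
        replayed = old_generator.generate(len(batch))
        # "Cortex" as it was before this task labels the generated inputs.
        replay_batch = [(x, old_main.predict(x)) for x in replayed]
        # Interleave new data with generated replay data.
        main_model.train_step(batch + replay_batch)
        generator.train_step([x for x, _ in batch] + replayed)
```

No past data is stored here; the generator itself is the memory of earlier tasks.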

The first modification the researchers implemented was inspired by neuroanatomy. Drawing on the brain’s architecture, in which the hippocampus is embedded in the temporal lobe of the cerebral cortex, the researchers consolidated the generative model into the main neural network, rather than keeping it as a separate model, and equipped it with generative backward, or feedback, connections. Internal (hidden) representations are replayed based on the network’s context-modulated feedback connections.

In deep learning, a variational autoencoder (VAE) is a deep generative model that yields state-of-the-art machine learning results; VAEs are used in reinforcement learning and for generating images. The researchers used a variational autoencoder, but a limitation of standard VAEs is that they cannot intentionally generate examples of a particular class. To overcome this, the team drew inspiration from the human brain’s ability to control which memories are recalled. They replaced the standard normal prior with a Gaussian mixture that has a separate mode for each class, and they enabled gating based on internal context: during the generative backward pass, a different subset of neurons in each layer is inhibited for each task or class learned. Finally, during internal replay, internal or hidden representations, rather than raw inputs, are replayed.
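These two modifications, a per-class mode in the latent prior and context-dependent gating, can be sketched in isolation as below. The layer sizes, keep fraction, and function names are illustrative assumptions, not values from the study:

```python
import random

def make_context_masks(layer_size, num_contexts, keep_fraction=0.2, seed=0):
    """For each task/class, pick a random subset of units to stay active;
    all other units are inhibited during the generative backward pass."""
    rng = random.Random(seed)
    masks = []
    for _ in range(num_contexts):
        keep = set(rng.sample(range(layer_size), int(layer_size * keep_fraction)))
        masks.append([1.0 if i in keep else 0.0 for i in range(layer_size)])
    return masks

def sample_latent(class_means, class_label, sigma=1.0, rng=random):
    """Gaussian-mixture prior with one mode per class: sampling near a
    class-specific mean lets the model generate that class on purpose."""
    mean = class_means[class_label]
    return [rng.gauss(m, sigma) for m in mean]

def gated_activations(hidden, mask):
    # Context modulation: inhibited units are zeroed out.
    return [h * m for h, m in zip(hidden, mask)]
```

Together, the mixture prior chooses *what* to replay and the masks shape *how* each context’s replay flows back through the network.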

“Our method achieves state-of-the-art performance on challenging continual learning benchmarks (e.g., class-incremental learning on CIFAR-100) without storing data, and it provides a novel model for replay in the brain,” reported the researchers.

By applying the multiple disciplines of neuroscience, neuroanatomy, artificial intelligence, and data science, the research collaboration was able to improve upon the standard machine learning approach to mimicking the brain’s replay functionality. This innovation enables artificial neural networks to learn incrementally from experience in a more scalable and efficient manner.

Copyright © 2020 Cami Rosso All rights reserved.
