HRL Laboratories Joins DARPA’s Lifelong Learning Machines (L2M) Program to Develop Breakthrough Machine-Learning Architecture that Remembers Mistakes, Improves, and Evolves Over Its Lifetime.
HRL Laboratories, LLC, will help the Defense Advanced Research Projects Agency (DARPA) develop what could be a breakthrough in machine-learning architecture for autonomous systems. With funding from the Lifelong Learning Machines (L2M) program, led by DARPA's Dr. Hava Siegelmann, the proposed system will continually improve its performance and update its knowledge based on experience, without human supervision.
If successful, HRL’s Super Turing Evolving Lifelong Learning ARchitecture (STELLAR) project will achieve a significant improvement over current technology, enabling autonomous systems to rapidly adapt to unforeseen situations, remember and learn from each experience, and consolidate new tasks with previous ones. Current machine learning systems forget old tasks when they learn new ones, cannot learn online, and cannot respond to situations not presented during offline training.
Consolidating a new skill so that it does not interfere with previous learning requires the researchers to mimic how the human brain consolidates old and new learning. Tasks humans learn during the day are replayed in their brains at night. During sleep, learning is consolidated between two areas of the brain: the hippocampus, which rapidly encodes waking experience, and the neocortex, which supports long-term memory storage, semantic memory extraction, and higher-order functions such as sensory perception, decision-making, action selection, cognition, and language.
“We intend to mimic the learning consolidation process of the brain using agents augmented with a memory system that is inspired by the interactions between the neocortex and hippocampus,” said Dr. Praveen Pilly, HRL’s Principal Investigator on the STELLAR project. “We will have a machine learning algorithm that is active while awake, such as when driving an autonomous car, optimizing its ability to predict the next few time steps. Later, when the car is parked and the algorithm is essentially sleeping, it will be selectively replaying the recent experiences it has gained and consolidating them with its existing knowledge in a non-destructive manner. This will give the car a heretofore unseen ability to respond to and eventually predict some unplanned events while driving. The system will be able to respond safely to surprises such as a tire blow-out or unforeseen weather conditions without shutting down.”
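The wake/sleep cycle Dr. Pilly describes resembles the experience-replay approach used broadly in continual-learning research: store recent experiences while "awake," then interleave them with older memories during an offline consolidation phase. The sketch below illustrates only that general idea; the model, replay buffer, and training loop are illustrative assumptions, not HRL's STELLAR implementation.

```python
import random
import torch
import torch.nn as nn

# Illustrative wake/sleep consolidation with experience replay.
# The model, buffer, and data shapes are hypothetical stand-ins, not HRL's code.

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.items = []

    def add(self, x, y):
        if len(self.items) >= self.capacity:
            self.items.pop(random.randrange(len(self.items)))  # evict a random old item
        self.items.append((x, y))

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
buffer = ReplayBuffer()

def wake_step(x, y):
    """Online ('awake') learning: fit the current experience and store it for replay."""
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    buffer.add(x.detach(), y.detach())

def sleep_consolidation(recent, replay_batches=100, batch_size=32):
    """Offline ('asleep') learning: interleave recent experiences with older
    memories so new knowledge is consolidated without erasing the old."""
    for _ in range(replay_batches):
        old = buffer.sample(batch_size // 2)
        new = random.sample(recent, min(batch_size // 2, len(recent)))
        xs = torch.stack([x for x, _ in old + new])
        ys = torch.stack([y for _, y in old + new])
        opt.zero_grad()
        loss_fn(model(xs), ys).backward()
        opt.step()
```

Mixing recent and older samples in each offline batch is what keeps new learning from overwriting old knowledge, loosely mirroring the hippocampus-to-neocortex replay described above.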
“Current machine learning algorithms, including deep neural networks, are uniformly plastic and do not include mechanisms for preserving previously learned knowledge. Inspired by the neuromodulatory systems in the adult human brain, we’ll incorporate mechanisms of structural and functional plasticity that are capable of continual learning,” said Dr. Soheil Kolouri, STELLAR’s co-Principal Investigator.
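One widely used way in the continual-learning literature to make plasticity "selective" is to estimate how important each parameter was for earlier tasks and penalize changes to the important ones; elastic weight consolidation is the best-known example. The function below is a generic sketch of that idea only; the names and the importance estimates are assumptions for illustration, not HRL's actual mechanism.

```python
import torch
import torch.nn as nn

def selectively_plastic_loss(model: nn.Module, task_loss: torch.Tensor,
                             anchors: dict, importances: dict,
                             strength: float = 1.0) -> torch.Tensor:
    """EWC-style regularizer: parameters that mattered for earlier tasks
    (high importance) are held near their anchored values, so they stay
    stable while less important parameters remain free to learn the new task.

    anchors:     parameter name -> values saved after earlier tasks
    importances: parameter name -> per-parameter importance estimates
    """
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        if name in anchors:
            penalty = penalty + (importances[name] * (param - anchors[name]) ** 2).sum()
    return task_loss + strength * penalty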

The HRL STELLAR team, left to right: Charles Martin, Praveen Pilly, Soheil Kolouri, Aashish Patel, Nicholas Ketz, and Mohammad Rostami. © 2018 HRL Laboratories.
“To achieve L2M, we’ll leverage a technique called neuro-evolution, creating an evolutionary methodology in which multiple learning architectures will compete for survival through generations. An element of the system called a fitness score will measure how well each architecture learns tasks in a curriculum, as well as how safely it adapts to surprise situations. The architectures that are best at these tasks will be programmed to move on to the next generation, spawning new general-purpose flexible architectures that can better consolidate new with old learning,” said Dr. Nicholas Ketz, STELLAR’s key team member.
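In outline, the neuro-evolution loop Dr. Ketz describes keeps a population of candidate learning architectures, scores each with a fitness function that combines curriculum performance with safe adaptation to surprises, and lets the top scorers seed the next generation. The sketch below illustrates only that skeleton; the genome encoding, mutation rule, and the two evaluation functions are hypothetical placeholders, not STELLAR components.

```python
import random

def fitness(genome, eval_curriculum, eval_surprise, surprise_weight=0.5):
    """Score an architecture on curriculum learning plus safe adaptation to surprises.
    Both evaluators are hypothetical placeholders returning higher-is-better scores."""
    return eval_curriculum(genome) + surprise_weight * eval_surprise(genome)

def mutate(genome, rate=0.1):
    """Perturb a (here, real-valued) genome to create an offspring architecture."""
    return [g + random.gauss(0.0, rate) for g in genome]

def evolve(population, eval_curriculum, eval_surprise, generations=50, survivors=4):
    """Generational loop: the best-scoring architectures survive and spawn the next generation."""
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda g: fitness(g, eval_curriculum, eval_surprise),
                        reverse=True)
        parents = ranked[:survivors]                      # best architectures move on
        children = [mutate(random.choice(parents))
                    for _ in range(len(population) - survivors)]
        population = parents + children
    return max(population, key=lambda g: fitness(g, eval_curriculum, eval_surprise))
```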
STELLAR will go beyond current Turing systems by allowing its program to adapt in response to new and unforeseen situations. “The combination of selective plasticity, offline memory optimization, neurogenesis, and neuromodulation allows the creation and selection of heterogeneous plastic neural networks to accelerate online adaptation, leading to Super-Turing capability,” said Dr. Pilly.
Source: HRL