Our researcher Karan Grewal will discuss how active dendrites can help deep learning networks learn continually at the ELLIS Meetup on May 27, 2021 at 7 AM PDT (4 PM GMT+2).
These ELLIS Meetups aim to create a space where advances in the field of machine learning can be discussed among all interested parties at Radboud University in Nijmegen and beyond. Click here for more details.
Dendrites of pyramidal neurons exhibit a wide range of linear and non-linear active integrative properties, yet the vast majority of artificial neural networks (ANNs) ignore this structural complexity and use simplified point neurons. We propose that active dendrites can help ANNs learn continually, a property prevalent in biological systems but largely absent in artificial ones. (Most ANNs today suffer from catastrophic forgetting, i.e., they are unable to learn new information without erasing what they previously learned.)

Our model is inspired by two key properties: 1) the biophysics of sustained depolarization following dendritic spikes, and 2) highly sparse representations. In our model, active dendrites act as a gating mechanism: dendritic segments detect contextual patterns and modulate the firing probability of postsynaptic cells. Employing a winner-take-all mechanism at each layer, our “active dendrite networks” select highly sparse subnetworks of neurons. Different subnetworks have minimal overlap with each other, so the representations for different tasks are nearly orthogonal, which in turn minimizes interference in error signals between tasks. As a result, the network does not forget previous tasks as easily as standard networks without active dendrites.
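The gating idea above can be sketched in a few lines of NumPy. This is only an illustrative toy, not the talk's actual implementation: the layer shapes, the sigmoid gate on the best-matching dendritic segment, and the `active_dendrite_layer` helper are all assumptions made for the example, loosely following the description of context-detecting segments followed by a k-winner-take-all step.

```python
import numpy as np

rng = np.random.default_rng(0)

def active_dendrite_layer(x, context, W, b, segments, k):
    """Hypothetical sketch of one 'active dendrite' layer.

    x:        feedforward input, shape (d_in,)
    context:  context/task vector, shape (d_ctx,)
    W, b:     feedforward weights (d_out, d_in) and bias (d_out,)
    segments: dendritic segment weights, shape (d_out, n_segments, d_ctx)
    k:        number of winning units kept per layer
    """
    feedforward = W @ x + b                     # the usual point-neuron part
    # Each dendritic segment scores the context vector; the unit's gate
    # is driven by its best-matching segment (assumed design choice).
    seg_scores = segments @ context             # (d_out, n_segments)
    best = seg_scores.max(axis=1)               # (d_out,)
    gated = feedforward / (1.0 + np.exp(-best)) # sigmoid gate per unit
    # k-winner-take-all: keep only the k most active units, zero the rest,
    # yielding a sparse subnetwork selected by the context.
    out = np.zeros_like(gated)
    winners = np.argsort(gated)[-k:]
    out[winners] = gated[winners]
    return out

# Two different context vectors can select different sparse subnetworks
# of the same layer, which is what keeps task representations apart.
d_in, d_out, d_ctx, n_seg, k = 16, 32, 8, 4, 5
W = rng.standard_normal((d_out, d_in))
b = np.zeros(d_out)
segments = rng.standard_normal((d_out, n_seg, d_ctx))
x = rng.standard_normal(d_in)

ctx_a = rng.standard_normal(d_ctx)
ctx_b = rng.standard_normal(d_ctx)
active_a = set(np.flatnonzero(active_dendrite_layer(x, ctx_a, W, b, segments, k)))
active_b = set(np.flatnonzero(active_dendrite_layer(x, ctx_b, W, b, segments, k)))
print(len(active_a), len(active_b))  # each context activates exactly k units
```

With only k of d_out units active per context, the subnetworks used for different tasks overlap little, so gradient updates for one task barely touch the weights another task relies on.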