Research Papers
Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments
In this article, we investigate biologically inspired architectures as solutions to catastrophic interference. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our neural implementation marks the first time a single architecture has achieved competitive results in both multi-task and continual learning settings. Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve.
AUTHORS:
Abhiram Iyer, Karan Grewal, Akash Velu, Lucas Souza, Jeremy Forest and Subutai Ahmad
PUBLICATION:
Published in Frontiers in Neurorobotics (peer-reviewed)
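The context-dependent gating this paper describes can be sketched in a few lines: each unit computes a feedforward response, a set of dendritic segments matches the current context vector, the best-matching segment gates the unit, and a k-winner-take-all step keeps the resulting subnetwork sparse. The sketch below is a minimal, illustrative reading of that idea, not the authors' implementation; all names and shapes are assumptions, and it takes each unit's maximum segment response as the gate.

```python
import numpy as np

rng = np.random.default_rng(0)

def active_dendrites_layer(x, context, W, segments, k):
    """Illustrative context-gated layer: feedforward activity is modulated
    by each unit's best-matching dendritic segment, then k-winner-take-all
    keeps only the top-k units active (a sketch, not the paper's code)."""
    feedforward = x @ W                               # (n_units,)
    # segments: (n_units, n_segments, context_dim); each unit's strongest
    # dendritic response to the current context vector
    seg_act = np.max(segments @ context, axis=1)      # (n_units,)
    gated = feedforward / (1.0 + np.exp(-seg_act))    # sigmoid gate per unit
    # k-winner-take-all: zero out all but the k largest activations
    out = np.zeros_like(gated)
    winners = np.argsort(gated)[-k:]
    out[winners] = gated[winners]
    return out

x = rng.normal(size=8)
context = rng.normal(size=4)
W = rng.normal(size=(8, 16))
segments = rng.normal(size=(16, 3, 4))
y = active_dendrites_layer(x, context, W, segments, k=5)
```

Because different context vectors favor different segments, different tasks activate largely non-overlapping sparse subnetworks, which is the mechanism the abstract credits for reduced interference.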
Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning
In this paper, we propose that dendritic properties can help neurons learn context-specific patterns and invoke highly sparse context-specific subnetworks. Within a continual learning scenario, these task-specific subnetworks interfere minimally with each other and, as a result, the network remembers previous tasks significantly better than standard ANNs. We then show that by combining dendritic networks with Synaptic Intelligence we can achieve significant resilience to catastrophic forgetting, more than either technique can achieve on its own.
AUTHORS:
Karan Grewal, Jeremy Forest, Ben Cohen and Subutai Ahmad
PUBLICATION:
Preprint
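The Synaptic Intelligence component combined with dendritic networks here can be summarized as a quadratic surrogate loss: parameters that were important for earlier tasks are pulled back toward the values they held when those tasks were learned. A minimal sketch in the style of Zenke et al.'s method, with illustrative names and constant:

```python
import numpy as np

def si_penalty(params, anchor, omega, c=0.1):
    """Synaptic Intelligence-style surrogate loss (sketch): omega holds
    per-parameter importance accumulated over earlier tasks, anchor the
    parameter values at the end of those tasks; the penalty discourages
    important parameters from drifting."""
    return c * np.sum(omega * (params - anchor) ** 2)

# A parameter that mattered for an earlier task (omega = 2.0) is penalized
# for drifting, while an unimportant one (omega = 0.0) moves freely.
params = np.array([1.5, 1.5])
anchor = np.array([1.0, 1.0])
omega = np.array([2.0, 0.0])
penalty = si_penalty(params, anchor, omega, c=1.0)  # 2.0 * 0.25 = 0.5
```

This penalty is added to the task loss during training; the paper's finding is that it complements, rather than duplicates, the protection provided by sparse dendritic subnetworks.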
A Thousand Brains: Toward Biologically Constrained AI
This paper reviews the state of artificial intelligence (AI) and the quest to create general AI with human-like cognitive capabilities. This review argues that improvements in current AI using mathematical or logical techniques are unlikely to lead to general AI. Instead, the AI community should incorporate neuroscience discoveries about the neocortex. It further explains the limitations of current AI techniques and focuses on the biologically constrained Thousand Brains Theory, which describes the neocortex's computational principles.
AUTHORS:
Kjell J. Hole and Subutai Ahmad
PUBLICATION:
Review Paper
Hippocampal Spatial Mapping As Fast Graph Learning
This paper approaches spatial mapping as a problem of learning graphs of environment parts. We show that hippocampal modules may dynamically create graphs representing spatial arrangements, and that the proposed fast relation-graph-learning algorithm can be extended to many spatial and non-spatial tasks.
AUTHORS:
Marcus Lewis
PUBLICATION:
Research Paper
Grid Cell Path Integration For Movement-Based Visual Object Recognition
Recent proposals suggest that the brain might use similar mechanisms to understand the structure of objects in diverse sensory modalities, including vision. In machine vision, object recognition given a sequence of sensory samples of an image is a challenging problem when the sequence does not follow a consistent, fixed pattern – yet this is something humans do naturally and effortlessly. We explore how grid cell-based path integration in a cortical network can support reliable recognition of objects given an arbitrary sequence of inputs.
AUTHORS:
Niels Leadholm, Marcus Lewis and Subutai Ahmad
PUBLICATION:
Research Paper
Efficient and Flexible Representation of Higher-Dimensional Cognitive Variables with Grid Cells
This paper shows that a set of grid cell modules, each with only 2D responses, can generate unambiguous and high-capacity representations of variables in much higher-dimensional spaces.
AUTHORS:
Mirko Klukas, Marcus Lewis and Ila Fiete
PUBLICATION:
Published in PLOS Computational Biology (peer-reviewed)
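The construction described above can be sketched directly: each module projects the high-dimensional variable onto its own 2-D plane and keeps only the projection's phase modulo the module's scale. One module is ambiguous on its own, but a bank of modules with different planes and scales jointly pins the variable down with high capacity. A minimal, illustrative reading of the scheme (the random projections and scales below are assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, planes, scales):
    """Encode an n-D variable as a bank of 2-D toroidal phases: each module
    projects x onto its own 2-D plane, divides by its scale, and wraps
    modulo 1 (a sketch of the representational idea)."""
    return [(P @ x / s) % 1.0 for P, s in zip(planes, scales)]

n_dim, n_modules = 5, 6
planes = [rng.normal(size=(2, n_dim)) for _ in range(n_modules)]
scales = [1.0 + 0.3 * i for i in range(n_modules)]
phases = encode(rng.normal(size=n_dim), planes, scales)  # 6 phases in [0, 1)^2
```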