Research Papers
Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks
In this article we introduce Complementary Sparsity, a novel technique that significantly improves the performance of sparse-sparse networks on existing hardware. We demonstrate that we can achieve high performance running weight-sparse networks, and that we can multiply those speedups by incorporating activation sparsity.
AUTHORS:
Kevin Hunter, Lawrence Spracklen and Subutai Ahmad
PUBLICATION:
Published in Neuromorphic Computing and Engineering Journal (Peer-reviewed)
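The multiplicative benefit the abstract describes can be made concrete with a back-of-the-envelope count of multiply-accumulate (MAC) operations. The sketch below is not the paper’s Complementary Sparsity kernel; it is a minimal NumPy illustration, with assumed layer sizes and sparsity levels, of why weight sparsity and activation sparsity compound:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_mask(shape, density, rng):
    """Random binary mask with roughly `density` fraction of nonzeros."""
    return rng.random(shape) < density

# A layer with 90% weight sparsity (10% of weights nonzero).
W = rng.standard_normal((256, 512)) * sparse_mask((256, 512), 0.1, rng)

# An input with 90% activation sparsity.
x = rng.standard_normal(512) * sparse_mask(512, 0.1, rng)

# Dense hardware performs every one of the 256 * 512 MACs.
dense_macs = W.size

# A sparse-sparse kernel only needs the products where BOTH the weight
# and the activation are nonzero, so the two sparsities multiply.
nonzero_inputs = np.flatnonzero(x)
useful_macs = np.count_nonzero(W[:, nonzero_inputs])

print(f"dense MACs:  {dense_macs}")
print(f"useful MACs: {useful_macs}  (~{dense_macs / useful_macs:.0f}x fewer)")
```

With 90% sparsity on each side, only about 1% of the dense products contribute to the output, which is where multiplied speedups of this kind come from.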
Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments
In this article we investigate biologically inspired architectures as solutions to catastrophic interference. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our neural implementation marks the first time a single architecture has achieved competitive results in both multi-task and continual learning settings. Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve.
AUTHORS:
Abhiram Iyer, Karan Grewal, Akash Velu, Lucas Souza, Jeremy Forest and Subutai Ahmad
PUBLICATION:
Published in Frontiers in Neurorobotics Journal (Peer-reviewed)
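A minimal sketch of the gating idea the abstract describes, assuming that each neuron’s best-matching dendritic segment modulates its feedforward output and that a k-winners-take-all step stands in for local inhibition; the layer sizes, random initialization, and function names are illustrative, not the paper’s implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kwta(y, k):
    """k-winners-take-all: keep the k largest activations, zero the rest."""
    out = np.zeros_like(y)
    winners = np.argsort(y)[-k:]
    out[winners] = y[winners]
    return out

rng = np.random.default_rng(0)
n_in, n_out, n_segments, ctx_dim = 64, 32, 8, 16

W = 0.1 * rng.standard_normal((n_out, n_in))                 # feedforward weights
D = 0.1 * rng.standard_normal((n_out, n_segments, ctx_dim))  # dendritic segments

def active_dendrite_layer(x, context, k=4):
    feedforward = W @ x
    # Each neuron's best-matching dendritic segment gates its output, so
    # different contexts switch on different sparse subnetworks.
    segment_match = np.max(D @ context, axis=1)  # shape (n_out,)
    gated = feedforward * sigmoid(segment_match)
    return kwta(gated, k)

x = rng.standard_normal(n_in)
task_a, task_b = rng.standard_normal(ctx_dim), rng.standard_normal(ctx_dim)
print(np.flatnonzero(active_dendrite_layer(x, task_a)))  # subnetwork for task A
print(np.flatnonzero(active_dendrite_layer(x, task_b)))  # typically different units
```

Because the winning units depend on the context vector, each task recruits a largely separate subnetwork, which is the routing behavior the paper attributes to dendrites and local inhibition.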
Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning
In this paper, we propose that dendritic properties can help neurons learn context-specific patterns and invoke highly sparse context-specific subnetworks. Within a continual learning scenario, these task-specific subnetworks interfere minimally with each other and, as a result, the network remembers previous tasks significantly better than standard ANNs do. We then show that combining dendritic networks with Synaptic Intelligence achieves significantly more resilience to catastrophic forgetting than either technique achieves on its own.
AUTHORS:
Karan Grewal, Jeremy Forest, Ben Cohen and Subutai Ahmad
PUBLICATION:
Preprint
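The dendritic half of this combination is sketched under the previous entry; the sketch below illustrates the Synaptic Intelligence half, following the published formulation (Zenke et al., 2017): a per-weight importance is accumulated from a path integral of loss reduction, then used to penalize drift away from weights that mattered on earlier tasks. The class and its names are illustrative choices, not the paper’s code:

```python
import numpy as np

class SynapticIntelligence:
    """Minimal sketch of the Synaptic Intelligence regularizer."""

    def __init__(self, theta, c=0.1, xi=1e-3):
        self.c, self.xi = c, xi
        self.omega = np.zeros_like(theta)          # per-weight importance
        self.path_integral = np.zeros_like(theta)  # running credit per weight
        self.theta_prev_task = theta.copy()

    def accumulate(self, grad, delta_theta):
        # After each optimizer step: credit each weight with how much its
        # movement reduced the task loss (-gradient times the weight update).
        self.path_integral += -grad * delta_theta

    def consolidate(self, theta):
        # At a task boundary: convert accumulated credit into importance,
        # normalized by how far each weight actually moved.
        drift = theta - self.theta_prev_task
        self.omega += self.path_integral / (drift**2 + self.xi)
        self.path_integral[:] = 0.0
        self.theta_prev_task = theta.copy()

    def penalty_grad(self, theta):
        # Gradient of c * sum(omega * (theta - theta_prev)^2): pulls important
        # weights back toward their previous-task values while leaving
        # unimportant weights free to learn the new task.
        return 2.0 * self.c * self.omega * (theta - self.theta_prev_task)
```

One intuition for why the combination can outperform either part alone: dendritic gating keeps task subnetworks sparse and largely disjoint, while a penalty of this form protects whatever overlap remains.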
A Thousand Brains: Toward Biologically Constrained AI
This paper reviews the state of artificial intelligence (AI) and the quest to create general AI with human-like cognitive capabilities. It argues that improvements to current AI using mathematical or logical techniques are unlikely to lead to general AI, and that the AI community should instead incorporate neuroscience discoveries about the neocortex. The paper further explains the limitations of current AI techniques and focuses on the biologically constrained Thousand Brains Theory, which describes the neocortex’s computational principles.
AUTHORS:
Kjell J. Hole and Subutai Ahmad
PUBLICATION:
Review Paper
Hippocampal Spatial Mapping As Fast Graph Learning
This paper approaches spatial mapping as the problem of learning a graph of an environment’s parts. We show how hippocampal modules may dynamically create graphs representing spatial arrangements, and how the proposed fast relation-graph learning algorithm can be extended to many spatial and non-spatial tasks.
AUTHORS:
Marcus Lewis
PUBLICATION:
Research Paper
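The paper’s algorithm is grounded in hippocampal modules; the toy sketch below only illustrates the bare relation-graph idea, namely storing pairwise spatial relations from single observations (“fast” learning) and composing them to infer relations that were never observed directly. All names here are hypothetical:

```python
import numpy as np

class RelationGraph:
    """Toy graph of landmarks connected by relative displacements."""

    def __init__(self):
        self.edges = {}  # (landmark_a, landmark_b) -> displacement from a to b

    def observe(self, a, b, displacement):
        # A single observation is enough to store the relation, along with
        # its inverse for the reverse direction.
        d = np.asarray(displacement, dtype=float)
        self.edges[(a, b)] = d
        self.edges[(b, a)] = -d

    def infer(self, a, c):
        # Compose two stored relations to infer an unobserved one.
        if (a, c) in self.edges:
            return self.edges[(a, c)]
        for (x, y), d in self.edges.items():
            if x == a and (y, c) in self.edges:
                return d + self.edges[(y, c)]
        return None

g = RelationGraph()
g.observe("door", "desk", [2.0, 0.0])
g.observe("desk", "window", [0.0, 3.0])
print(g.infer("door", "window"))  # [2. 3.], never observed directly
```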
Technology Validation: Sparsity Enables 100x Performance Acceleration in Deep Learning Networks
This paper demonstrates how the application of Numenta’s brain-inspired sparse algorithms achieves a more than 100x speedup on inference tasks compared to dense networks, with no loss of accuracy.
AUTHORS:
Numenta Engineering
PUBLICATION:
Whitepaper