Machine Intelligence Research

Our neuroscience research has uncovered a number of core principles that are not reflected in today’s machine learning systems. By incorporating these principles, we believe we can address many of the known limitations of current machine learning and AI systems and build tomorrow’s intelligent machines. Below are some of the topics we are currently researching:

Current Research Projects:

Robustness

Our neuroscience research has shown that sparse representations are more robust and stable than dense representations. We have developed a cortically inspired sparse algorithm that can be applied to deep learning networks trained through backpropagation. These networks have both sparse activations and sparse connections. Sparse networks achieve accuracy competitive with state-of-the-art dense models, but are significantly more robust to noise.
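To make the idea concrete, the sketch below shows one way a layer can combine sparse connections (a fixed binary weight mask) with sparse activations (k-winners-take-all), assuming PyTorch. The class name and sparsity levels are illustrative, not the exact algorithm from our papers.

```python
# Minimal sketch (not the exact published algorithm), assuming PyTorch:
# a linear layer with sparse connections (fixed binary weight mask) and
# sparse activations (k-winners-take-all). Sparsity levels are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseLinear(nn.Module):
    def __init__(self, in_features, out_features,
                 weight_sparsity=0.5, activation_sparsity=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Sparse connections: zero out a random fraction of the weights
        # and keep that mask fixed throughout training.
        mask = (torch.rand(out_features, in_features) > weight_sparsity).float()
        self.register_buffer("mask", mask)
        # Sparse activations: keep only the top-k units per sample.
        self.k = max(1, int(activation_sparsity * out_features))

    def forward(self, x):
        y = F.linear(x, self.linear.weight * self.mask, self.linear.bias)
        # k-winners-take-all: zero all but the k largest activations.
        topk = torch.topk(y, self.k, dim=-1)
        return torch.zeros_like(y).scatter(-1, topk.indices, topk.values)
```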

Dynamic sparse networks

In the brain, cortical networks are sparsely connected and extremely dynamic. As many as 30% of the connections in the neocortex turn over every few days. We are investigating ways to create highly sparse networks that learn their structure dynamically through training.
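The sketch below illustrates the general prune-and-regrow idea behind dynamic sparse connectivity, assuming PyTorch; the rewire function and turnover fraction are placeholders rather than a specific published method.

```python
# Illustrative prune-and-regrow step for dynamic sparse connectivity,
# assuming PyTorch. The function name and turnover fraction are
# placeholders, not a specific published method.
import torch


def rewire(weight, mask, turnover=0.3):
    """Prune the weakest active connections and regrow the same number at
    random inactive positions, keeping overall sparsity constant."""
    with torch.no_grad():
        flat_w, flat_m = weight.view(-1), mask.view(-1)
        n_swap = int(turnover * flat_m.sum().item())
        if n_swap == 0:
            return mask
        # Prune: among active connections, drop the smallest magnitudes.
        scores = torch.where(flat_m.bool(), flat_w.abs(),
                             torch.full_like(flat_w, float("inf")))
        prune_idx = torch.topk(scores, n_swap, largest=False).indices
        flat_m[prune_idx] = 0.0
        # Regrow: activate an equal number of currently inactive positions.
        inactive_idx = (flat_m == 0).nonzero(as_tuple=True)[0]
        grow_idx = inactive_idx[torch.randperm(inactive_idx.numel())[:n_swap]]
        flat_m[grow_idx] = 1.0
        flat_w[grow_idx] = 0.0  # regrown weights start from zero
    return mask
```

A step like this would typically run every so many training batches, so the network keeps a constant number of connections while their locations change over time.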

Performance improvements in sparse networks

Representations in the brain are highly sparse, resulting in an extremely efficient system. For machine learning, sparsity also offers the promise of significant computational benefits, but most hardware architectures are not optimized for extreme sparsity. These limitations have hindered research into sparse models. Along with our hardware partners, we are developing methods for dramatically improving the computational efficiency of sparse neural networks.
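As a back-of-the-envelope illustration of why sparsity can help, the snippet below compares a dense weight matrix with its compressed sparse row (CSR) form, assuming NumPy and SciPy; the matrix size and 95% sparsity level are arbitrary, and actual speedups depend on hardware and kernel support for sparse operations.

```python
# A small illustration of why sparsity can pay off: a compressed sparse
# row (CSR) matrix stores and multiplies only the nonzero weights.
# Sizes and the 95% sparsity level are arbitrary; real gains depend on
# hardware and kernel support for sparse operations.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense = rng.standard_normal((1024, 1024))
dense[rng.random(dense.shape) < 0.95] = 0.0   # make ~95% of weights zero

csr = sparse.csr_matrix(dense)
x = rng.standard_normal(1024)

# The dense product touches every entry; the sparse product touches only
# the stored nonzeros (csr.nnz of them), roughly a 20x reduction in work.
print(dense.size, csr.nnz)
assert np.allclose(dense @ x, csr @ x)
```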

Continuous learning

Truly intelligent machines must be able to learn and adapt continuously, a property that is absent in today’s deep learning systems. In the brain, sparse representations combined with a more complex neuron model enable new patterns to be learned continuously in an unsupervised manner. We have shown that incorporating these ideas into artificial neural systems can enable systems that learn continuously from streaming data without manual intervention.
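A minimal sketch of the streaming setting, assuming NumPy: a simple Hebbian-style update on a sparse code, shown only to illustrate learning on every sample as it arrives. It is not our HTM algorithm or the neuron model described above.

```python
# Minimal sketch of unsupervised learning from a stream, assuming NumPy:
# a simple Hebbian-style update on a sparse code. This only illustrates
# updating on every sample with no separate training phase; it is not
# the HTM algorithm or the neuron model referenced above.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_units, k = 64, 128, 8            # k = active units per pattern
W = rng.standard_normal((n_units, n_inputs)) * 0.1


def step(x, lr=0.01):
    """Encode one input sparsely and nudge the winning units toward it."""
    scores = W @ x
    winners = np.argsort(scores)[-k:]        # sparse representation of x
    W[winners] += lr * (x - W[winners])      # online, unsupervised update
    return winners


# The model adapts on every sample as it arrives, so new patterns can be
# absorbed at any time without retraining from scratch.
for t in range(1000):
    step(rng.standard_normal(n_inputs))
```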

Future Research Projects:

Long term, our focus is on creating truly intelligent machines that understand the world. Future research projects may include:

  • Learning with much smaller training sets
  • Improving generalization
  • Building integrated sensorimotor systems that can plan, act and learn

Resources

Papers

Ahmad, S. & Scheinkman, L. (2019). How Can We Be So Dense? The Benefits of Using Highly Sparse Representations

Cui, Y., Ahmad, S. & Hawkins, J. (2016). Continuous Online Sequence Learning with an Unsupervised Neural Network Model

Ahmad, S. & Hawkins, J. (2016). How Do Neurons Operate on Sparse Distributed Representations? A Mathematical Theory of Sparsity, Neurons and Active Dendrites

Cui, Y., Ahmad, S. & Hawkins, J. (2017). The HTM Spatial Pooler: A Neocortical Algorithm for Online Sparse Distributed Coding

Videos

The Thousand Brains Theory: A Framework for Understanding the Neocortex and Building Intelligent Machines

Sparsity In The Neocortex

Applying The Thousand Brains Theory to Machine Intelligence