Episode 13: Subutai Ahmad on Applying HTM Ideas to Deep Learning

Numenta • Matt Taylor and Subutai Ahmad

In this episode, host Matt Taylor chats with Numenta VP of Research Subutai Ahmad about the company’s current research into deep learning. They discuss how neuroscience can help address inherent problems in deep learning, the different types of sparsity, and Numenta’s future research plans.

Show Notes

  • 0:51 Why is Numenta looking at deep learning?
  • 2:43 What are the inherent problems in deep learning and how can neuroscience help?
  • 3:06 Continuous learning
  • 3:48 Catastrophic forgetting – “If you have really sparse representations, a better neuron model, and a predictive learning system, you can learn continuously without forgetting the old stuff.”
  • 5:11 What does high dimensionality mean in deep learning and neural networks?
  • 6:34 Why does sparsity help? (see the overlap sketch after this list)
  • 11:23 Other types of sparsity: dendrites are tiny, sparse computing devices.
  • 14:47 Another type of sparsity: neurons are independent sparse computing devices.
  • 15:34 Numenta’s paper on sparsity: How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
  • 19:34 The surprising benefit of combining sparse activations AND sparse weights, a pairing still rare in the machine learning community (see the layer sketch after this list)
  • 20:52 Benchmarks and models we’re working with: MNIST, CIFAR-10, VGG-19
  • 24:22 What does the future of research look like at Numenta?
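For the “why does sparsity help?” discussion, one useful intuition from the paper mentioned above is that random sparse binary vectors in high-dimensional spaces almost never collide by chance. The sketch below is illustrative only; the dimensions and sparsity levels are assumed values chosen for the example, not Numenta’s published parameters. It computes the hypergeometric probability that two random binary patterns overlap in at least a given number of positions.

```python
from math import comb

def p_match(n, a, theta):
    """P(two random n-dim binary vectors, each with `a` active bits,
    overlap in at least `theta` positions)."""
    return sum(comb(a, b) * comb(n - a, a - b)
               for b in range(theta, a + 1)) / comb(n, a)

# Dense-ish vectors collide often; sparse high-dimensional ones almost never do.
print(p_match(n=100, a=50, theta=25))   # substantial chance of high overlap
print(p_match(n=2000, a=40, theta=10))  # vanishingly small
```

This is why sparse representations are robust: an active pattern is very unlikely to be confused with another pattern by random interference.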
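The sparse-activations-plus-sparse-weights combination discussed at 19:34 can be sketched as a k-winner-take-all layer with a fixed random weight mask. This is a minimal sketch under assumed layer sizes and sparsity levels (784 inputs, 256 units, 50% weight density, 10% of units active), not Numenta’s actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLinear(nn.Module):
    """Linear layer with a fixed random weight mask and k-winner-take-all output."""
    def __init__(self, in_features, out_features, weight_density=0.5, pct_on=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Sparse weights: permanently zero out a random fraction of connections.
        mask = (torch.rand(out_features, in_features) < weight_density).float()
        self.register_buffer("mask", mask)
        # Sparse activations: only the top-k units stay active for each sample.
        self.k = max(1, int(pct_on * out_features))

    def forward(self, x):
        y = F.linear(x, self.linear.weight * self.mask, self.linear.bias)
        kth = torch.topk(y, self.k, dim=1).values[:, -1:]  # k-th largest per row
        return torch.where(y >= kth, y, torch.zeros_like(y))

layer = SparseLinear(784, 256)
x = torch.randn(8, 784)           # e.g. a batch of flattened MNIST images
out = layer(x)
print((out != 0).float().mean())  # ~0.1: roughly 10% of units are active
```

Enforcing sparsity in both the weights and the activations is the combination the episode highlights as unusual in mainstream machine learning.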

Video

Download the full transcript of the podcast here.


Subscribe to Numenta On Intelligence: iTunes, Stitcher, Google Play, Spotify, RSS
