Can CPUs leverage sparsity? In this blog post, our Director of ML Architecture Lawrence Spracklen compares the performance of Numenta's sparse models on FPGAs with that of sparse models running on modern CPUs.
Both the monetary and environmental costs of AI continue to increase precipitously. In this blog post, our Director of ML Architecture Lawrence Spracklen explains Numenta's neuroscience-based approach to sparsity in machine learning and how it can deliver significant performance gains and massive energy savings.
How can we apply the ideas of the Thousand Brains Theory to pedagogy? We talked to Dr. Michael Riendeau and two of his students, Ranger Fair and Jacob Shalmi, of Eagle Hill School about how the theory can benefit educators and students.
We've been getting a lot of questions lately about the differences between Hinton's GLOM model and Numenta's Thousand Brains Theory. In this blog post, we outline the commonalities and key differences between the two models at a high level.
In this post, Karan explains why neural networks fail to learn continually, briefly discusses how the brain is thought to succeed at learning task after task, and highlights some exciting work in the machine learning community that builds on fundamental principles of neural computation to alleviate catastrophic forgetting.