I’m pleased to share several news items with you this month.
New paper on sparsity: “How Can We Be So Dense?”
We recently released a new paper on Sparse Distributed Representations (SDRs), titled “How Can We Be So Dense? The Benefits of Using Highly Sparse Representations.” At the end of 2018 I shared that our VP of Research Subutai Ahmad was leading a new effort to explore how the Thousand Brains Theory of Intelligence could positively affect machine learning and artificial intelligence. SDRs, a fundamental building block of our research, provided a natural place to start. Most artificial neural networks today rely on dense representations. However, sparse representations, which are used in the brain, have a number of desirable properties, including inherent robustness to noise and interference. The paper walks through our implementation of brain-like SDRs in practical systems as a proof of concept. We implemented a sparse layer that can be dropped into existing deep learning and convolutional networks. We then trained sparse networks with backpropagation, validated them on popular datasets (MNIST and the Google Speech Commands dataset), and tested their accuracy on noisy images and sounds. Our results demonstrate that sparse networks are not only competitive with today’s dense techniques, but also far more accurate on noisy data. To read more, you can download the paper, available on arXiv.
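For readers who want a concrete picture of what a drop-in sparse layer can look like, here is a minimal PyTorch sketch of a k-winner-take-all activation that keeps only the k largest units active in each sample. The KWinners module below is illustrative only; it assumes a plain top-k rule and omits details from the paper such as boosting, duty cycles, and sparse weights.

```python
import torch
import torch.nn as nn

class KWinners(nn.Module):
    """Keep the k largest activations per sample; zero out the rest.

    A simplified stand-in for the k-winner-take-all activation used to
    enforce sparse activity (no boosting or duty-cycle tracking here).
    """
    def __init__(self, k):
        super().__init__()
        self.k = k

    def forward(self, x):
        # Indices of the k largest units in each row of the batch.
        _, idx = x.topk(self.k, dim=1)
        mask = torch.zeros_like(x)
        mask.scatter_(1, idx, 1.0)
        return x * mask

# Drop the sparse activation into an otherwise ordinary MNIST-style net.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    KWinners(k=25),          # roughly 10% of units stay active
    nn.Linear(256, 10),
)
```

Because the mask simply zeros out the losing units, such a network trains with ordinary backpropagation, with gradients flowing only through the winning units.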
Video: The Thousand Brains Theory presentation at Microsoft Research
The Thousand Brains Theory of Intelligence is rich with novel ideas and provides a deep research roadmap for building intelligent systems inspired by the brain. The paper on sparsity represents just the beginning of applying these concepts to practical machine learning systems. Last month, Subutai and our co-founder Jeff Hawkins gave a presentation at Microsoft Research on the broader implications of our theory for AI and machine learning. In the first half of the talk, titled “The Thousand Brains Theory: A Framework for Understanding the Neocortex and Building Intelligent Machines,” Jeff walks through the Thousand Brains Theory and describes some of the key neuroscience concepts. In the second half, Subutai discusses how key elements of the theory can be introduced into today’s machine learning techniques to improve their capabilities and robustness while reducing their power requirements. I encourage you to watch the video and download the slides to learn more.
We’re hiring: Machine Learning Engineer / Scientist
To explore the potential impact of the Thousand Brains Theory on current AI and machine learning systems, we’re excited to announce that we are hiring. Do you have an excellent understanding of machine learning fundamentals and an interest in neuroscience-based principles? Or do you know someone who could help apply our neuroscience theories to extending machine learning algorithms? We have a full-time opening for a Machine Learning Engineer or Scientist to join our team. This is a rare opportunity to play a critical role in advancing machine intelligence by infusing innovative new ideas and technologies into the field. A background in neuroscience is helpful, but not required. Please review the desired qualifications and apply for the position here.
COSYNE annual report
Subutai recently returned from our fifth annual trip to the Computational and Systems Neuroscience Conference (COSYNE), where he was pleased to find a number of sessions relevant to our most recent work that may offer opportunities for future collaboration. At the event, which took place in Lisbon, Portugal, he presented a poster titled “A dendritic mechanism for dynamic routing and control in the thalamus” in collaboration with colleague and professor Carmen Varela. (See Subutai’s blog post from last year for a brief explanation of the thalamus and its importance to neocortical theory.) An experimentalist and expert on the thalamus, Professor Varela is currently at Florida Atlantic University and was a Visiting Scientist at Numenta in 2018 while a post-doc at MIT (click here to see her “Interview with a Neuroscientist” episode from last May). Their poster presents a novel theory and mechanism showing how the thalamus could implement attention and complex transformations. If you’re interested in viewing the poster, you can email us to request a copy.
New grid cell paper in collaboration with MIT
Two of our researchers, Mirko Klukas and Marcus Lewis, released a preprint of a new paper on grid cells this month in collaboration with Ila Fiete of the Department of Brain and Cognitive Sciences at MIT, titled “Flexible Representation and Memory of Higher-Dimensional Cognitive Variables with Grid Cells.” The paper shows that a set of grid cell modules, each with only 2D responses, can generate unambiguous and high-capacity representations of variables in much higher-dimensional spaces. The key idea is that each module encodes the projection of the variable onto its own low-dimensional subspace, so that together the modules pin down the full variable.
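To give a flavor of the construction, here is a minimal NumPy sketch, not the paper’s exact model: each hypothetical module reads the high-dimensional variable through its own random 2D projection and reports a 2D phase modulo its spatial period, and the combination of phases across modules distinguishes points in the full space. All the parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 6            # dimensionality of the encoded variable
n_modules = 8    # number of 2D grid cell modules
periods = rng.uniform(1.0, 3.0, size=n_modules)  # each module's spatial scale

# Each module reads the high-D variable through its own random 2D projection,
# so its response stays two-dimensional regardless of D.
projections = [rng.standard_normal((2, D)) for _ in range(n_modules)]

def encode(x):
    """Phase code: each module's 2D response to x, modulo its period."""
    return np.array([(P @ x) % p for P, p in zip(projections, periods)])

x1 = rng.standard_normal(D)
x2 = x1 + 0.5
# Distinct high-D points map to distinct phase patterns across modules,
# which is what makes the combined code unambiguous with high probability.
print(np.allclose(encode(x1), encode(x2)))  # False
```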
Numenta makes IMPACT 50 List
Lastly, we were pleased to see that Numenta was named to insideBIGDATA’s Q1 2019 list of the most impactful companies in big data, data science, machine learning, AI, and deep learning. You can view the full IMPACT 50 list here.
Thank you for continuing to follow Numenta.