Numenta Newsletter — March 28, 2016
Numenta has an ambitious mission: reverse-engineer the neocortex to understand how it works and apply those principles to software to create intelligent machines. Because neuroscience is the foundation of everything we do, it’s important to have our work published and critiqued in peer-reviewed neuroscience journals. I’m excited to announce that Jeff Hawkins and Subutai Ahmad’s recent manuscript, “Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex,”
has been accepted for publication in Frontiers in Neural Circuits, a journal devoted to research in neural circuits that serves the worldwide neuroscience community.
We believe this paper will prove to be a significant contribution to brain theory, and that it introduces algorithms that will be important for machine intelligence. The paper proposes a model of cortical neurons that explains why they have thousands of synapses, why the synapses are segregated onto different parts of the dendrites, and how neurons integrate this input in a functionally meaningful way. Many aspects of this neuron model are novel. The paper also provides a new and detailed theory of how networks of neurons throughout the neocortex learn sequences and make predictions.
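For readers who want a concrete feel for the model, here is a minimal sketch in Python of the core mechanism, under our own illustrative assumptions; the class, names, and threshold below are hypothetical, not the paper's or NuPIC's actual code:

```python
# Illustrative sketch (not Numenta's actual code) of the paper's central idea:
# each dendritic segment acts as an independent pattern detector, and
# recognition on any one segment depolarizes the neuron, putting it into a
# predictive state. All names and parameter values here are hypothetical.

SEGMENT_THRESHOLD = 15   # active synapses required for a segment to respond

class Neuron:
    def __init__(self):
        # Each segment is modeled as a set of presynaptic cell indices.
        # A real cortical neuron has many segments and thousands of synapses.
        self.segments = []

    def add_segment(self, presynaptic_cells):
        self.segments.append(set(presynaptic_cells))

    def is_predicted(self, active_cells):
        # The neuron enters a predictive (depolarized) state if ANY single
        # segment sees at least SEGMENT_THRESHOLD of its synapses active.
        active = set(active_cells)
        return any(len(seg & active) >= SEGMENT_THRESHOLD
                   for seg in self.segments)

# Example: one segment that recognizes one particular context.
n = Neuron()
n.add_segment(range(100, 120))          # 20 synapses onto cells 100-119
print(n.is_predicted(range(100, 116)))  # True: 16 of 20 synapses active
print(n.is_predicted(range(0, 50)))     # False: no overlap with the segment
```

Because each segment can recognize a different pattern of activity, a single neuron can participate in many distinct sequence contexts, which is the intuition behind the paper's answer to why neurons need thousands of synapses.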
This theory is radically different from the models used in most artificial neural networks, such as deep learning networks and LSTM (long short-term memory) sequence models. These networks use artificial neurons that are highly simplified and not biologically accurate, and many deep learning models do not incorporate sequence memory at all.
Yet sequence memory is a critical component of the brain. In fact, we believe that learning and recalling sequences of patterns is the basic operation common to all brain function.
Why is sequence memory so important? Because everything we as humans do is sequence-based. When we hear someone talk, we take in a sequence of words; when we look at something, our eyes saccade, moving rapidly over the image and producing a sequence of inputs from the retina; when we move our bodies, we output a sequence of motor commands, which in turn changes our sensory inputs. It is this understanding of how sequence memory works in the brain that allows our software to perform continuous, unsupervised learning without any training data.
In case you are not familiar with academic publishing, being published in a journal like Frontiers is not as simple as writing and submitting your paper. It involves a meticulous, multi-step review process in which independent reviewers provide feedback and request changes before accepting or rejecting the paper.
This paper is ambitious in the number of new ideas it introduces; more than a year passed between our decision to write it and its acceptance.
I invite you to read the paper, and for those who want a less technical overview, I encourage you to read this article by MIT Technology Review, which summarizes the findings of our preprint.
Speaking of papers, we also recently completed a new white paper that addresses one of the topics we are asked about most often when people are learning about Numenta's HTM theory: encoders. The paper, Encoding Data for HTM Systems, written by Scott Purdy, our Director of Engineering, describes how to encode data into Sparse Distributed Representations (SDRs) so that the data can be used in HTM systems. Scott reviews several existing encoders, which are available through our open source project, NuPIC, and explains the requirements for creating encoders for new types of data.
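To give a flavor of what the white paper covers, here is a simplified scalar encoder sketch in the spirit of the ones in NuPIC; the function name and parameters below are illustrative rather than NuPIC's exact API:

```python
import numpy as np

def encode_scalar(value, min_val=0.0, max_val=100.0, n=400, w=21):
    """Encode a scalar as a sparse binary array: n bits total, with a
    contiguous block of w active bits whose position tracks the value.
    Nearby values share active bits, so the SDRs of similar inputs
    overlap -- the key property HTM systems rely on. (Simplified;
    NuPIC's real encoders also handle periodic values, resolution, etc.)
    """
    value = min(max(value, min_val), max_val)  # clip to the valid range
    # Slide the block of w active bits across the n-bit array.
    start = int(round((value - min_val) / (max_val - min_val) * (n - w)))
    sdr = np.zeros(n, dtype=np.uint8)
    sdr[start:start + w] = 1
    return sdr

# Similar values overlap heavily; distant values do not overlap at all.
a, b, c = encode_scalar(50), encode_scalar(52), encode_scalar(90)
print(int(np.dot(a, b)))  # high overlap: 14 shared active bits
print(int(np.dot(a, c)))  # no overlap: 0 shared active bits
```

The same principle, overlapping representations for semantically similar inputs, is what the white paper generalizes to other kinds of data, such as categories, dates, and geospatial coordinates.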
Lastly, for any of our readers who will be attending Strata + Hadoop World 2016 in San Jose, CA, please stop by and say hello. We will be at Booth 540, along with representatives from our partners Cortical.io and Grok, giving demos and overviews of the company, our technology, and our latest applications.