Much of our research focuses on one fundamental observation: the neocortex is constantly predicting its inputs. We started by addressing the following question: how do networks of neurons make predictions and learn the temporal structure in streams of sensory data?
We first developed a theoretical framework for sequence learning in the cortex that proposes a new understanding of how neurons work, based on the idea that every pyramidal cell is a prediction machine. In Hawkins & Ahmad (2016), we show how a single pyramidal neuron with active dendrites can recognize hundreds of unique patterns, each a context in which it can predict its input. We then show that a network of such neurons can learn complex sequences in a surprisingly robust and flexible manner. The model learns in a completely unsupervised fashion and, as a continuously learning system, adapts rapidly to changes in its input.
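The core idea, that each dendritic segment acts as an independent pattern detector whose activation puts the cell into a predictive state, can be sketched in a few lines of Python. This is a deliberately simplified illustration; the class names, thresholds, and example patterns are ours, not taken from the model's reference implementation:

```python
class DendriteSegment:
    """One dendritic segment: a small set of synapses onto other cells."""
    def __init__(self, presynaptic_cells, threshold):
        self.synapses = set(presynaptic_cells)
        self.threshold = threshold

    def is_active(self, active_cells):
        # A segment fires (an NMDA spike, in the biology) when enough
        # of its synapses see currently active cells.
        return len(self.synapses & active_cells) >= self.threshold

class Neuron:
    """A point neuron with many independent pattern detectors (segments)."""
    def __init__(self, segments):
        self.segments = segments

    def is_predicted(self, active_cells):
        # The cell enters a depolarized, "predictive" state if any one
        # of its segments recognizes the current sparse activity pattern.
        return any(seg.is_active(active_cells) for seg in self.segments)

# Example: a neuron with two segments, each storing a different context.
context_a = {1, 2, 3, 4, 5}
context_b = {10, 11, 12, 13, 14}
neuron = Neuron([DendriteSegment(context_a, threshold=4),
                 DendriteSegment(context_b, threshold=4)])
print(neuron.is_predicted({1, 2, 3, 4, 99}))   # True: context A matched
print(neuron.is_predicted({1, 10, 20, 30}))    # False: no segment matched
```

Because each segment only needs a subthreshold overlap with a sparse pattern, a real cell with thousands of synapses spread over many segments can store hundreds of such contexts with little interference.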
We used the same framework in a subsequent paper (Hawkins, Ahmad, & Cui, 2017) to model the sensorimotor input layer. In that paper, the context for each neuron was a location signal derived from motor signals, rather than the past input. In Ahmad & Hawkins (2017), we further showed that such a layer of neurons can discriminate which type of context is most related to its input, and can thus simultaneously learn both external temporal sequences and sensorimotor sequences. Our work helps to explain why neurons need so many synapses and why dendritic spikes are so important. The theory makes a number of testable predictions, which are detailed in our papers.
Sequence learning continues to be an active area of research at Numenta. We are considering extending the model to incorporate specific time intervals between sequence elements (such as the timing of notes in a melody). We would also like to extend the neuron model to explicitly incorporate apical dendrites and feedback.
Although primarily a biological model, our sequence learning framework has practical applications in temporal prediction and anomaly detection. In Cui, Ahmad, & Hawkins (2016), we show how our sequence learning algorithm can be applied to temporal prediction on streaming data, and we compare it against a number of common machine learning techniques. Our model achieves prediction accuracy on par with state-of-the-art techniques while offering several important properties: continuous online learning, the ability to make multiple simultaneous predictions, robustness to sensor noise and faults, and good performance without task-specific tuning.
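The streaming protocol these comparisons use, where the model updates and predicts one element at a time with no separate training phase, can be illustrated with a toy predictor. The sketch below is a first-order transition-count model, far simpler than our high-order sequence memory, and is meant only to show the online-learning loop and multiple simultaneous predictions:

```python
from collections import defaultdict, Counter

class OnlineSequencePredictor:
    """A toy first-order predictor that, like the full model, learns
    continuously from a stream: update, then predict, one item at a time.
    (This is an illustrative stand-in, not the actual algorithm.)"""
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def step(self, item):
        # Learn from the transition just observed...
        if self.prev is not None:
            self.transitions[self.prev][item] += 1
        self.prev = item
        # ...then emit all candidate next items, most likely first.
        return [nxt for nxt, _ in self.transitions[item].most_common()]

predictor = OnlineSequencePredictor()
for element in "ABCABCABD":
    predictions = predictor.step(element)
# After repeatedly seeing "AB", the model predicts "B" after "A".
print(predictor.step("A"))
```

Returning a ranked list rather than a single winner is what lets a streaming predictor represent several plausible continuations at once (here, "B" after "A", but both "C" and "D" after "B").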
Our sequence learning framework is also applicable to anomaly detection on real-time streaming data. Most approaches to anomaly detection process data in batches, whereas streaming applications require processing and learning with each data point. In Ahmad, Lavin, Purdy, and Agha (2017), we compared our model against a number of anomaly detection techniques on a comprehensive benchmark dataset, where it achieved the top score.
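One way to turn a stream of raw prediction errors into anomaly scores, similar in spirit to the likelihood computation described in that paper, is to model the recent error distribution and flag values in its tail. The window size and the Gaussian tail model below are illustrative assumptions, not the paper's exact parameters:

```python
import math
from collections import deque

def anomaly_likelihood(errors, window=100):
    """Map raw prediction errors (0..1) to an anomaly likelihood by
    comparing each new error to the recent error distribution.
    Illustrative sketch; parameters are assumptions."""
    history = deque(maxlen=window)
    for err in errors:
        if len(history) < 2:
            likelihood = 0.5  # not enough history yet to judge
        else:
            mean = sum(history) / len(history)
            var = sum((e - mean) ** 2 for e in history) / len(history)
            std = math.sqrt(var) or 1e-6  # avoid division by zero
            # Gaussian tail probability of seeing an error this large.
            z = (err - mean) / std
            likelihood = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        history.append(err)
        yield likelihood

scores = list(anomaly_likelihood([0.1, 0.1, 0.12, 0.1, 0.11, 0.9]))
# The final, much larger error stands out against the recent
# distribution and yields a likelihood near 1.0.
```

Thresholding a likelihood like this, rather than the raw error itself, is what makes the detector adapt to noisy streams: a sensor that is always slightly mispredicted raises its own baseline instead of alarming continuously.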
Hawkins, J. & Ahmad, S. (2016). Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex. Front. Neural Circuits, 10, 1–13.
Hawkins, J., Ahmad, S., & Cui, Y. (2017). A Theory of How Columns in the Neocortex Enable Learning the Structure of the World. Front. Neural Circuits, 11, 81.
Ahmad, S. & Hawkins, J. (2017). Untangling Sequences: Behavior vs. External Causes. bioRxiv, 190678.
Cui, Y., Ahmad, S., & Hawkins, J. (2016). Continuous online sequence learning with an unsupervised neural network model. Neural Computation, 28, 2474–2504.
Ahmad, S., Lavin, A., Purdy, S., & Agha, Z. (2017). Unsupervised real-time anomaly detection for streaming data. Neurocomputing, 262, 134–147.