This paper analyzes HTM sequence memory applied to a range of sequence learning and prediction problems. Written from a machine learning perspective, it includes comparisons to statistical and deep learning techniques.
This paper shows that a set of grid cell modules, each with only 2D responses, can generate unambiguous and high-capacity representations of variables in much higher-dimensional spaces.
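As a hedged illustration of the capacity argument (not the paper's actual model), a simplified 1D analogy: if each module reports position modulo its own period, then modules with pairwise coprime periods jointly identify positions uniquely up to the product of the periods, by the Chinese remainder theorem. The periods below are arbitrary example values.

```python
# Toy 1D analogy for grid cell modules: each module encodes position
# only modulo its period, yet the combined code is unambiguous over
# a range equal to the product of coprime periods.
PERIODS = [5, 7, 11]  # example coprime periods (hypothetical values)

def encode(x, periods):
    """Joint code: the position as seen by each periodic module."""
    return tuple(x % p for p in periods)

capacity = 1
for p in PERIODS:
    capacity *= p  # 5 * 7 * 11 = 385 distinct joint codes

# Every position in [0, capacity) gets a distinct joint code.
codes = {encode(x, PERIODS) for x in range(capacity)}
print(len(codes), capacity)  # prints "385 385"
```

Each individual module is highly ambiguous (a period-5 module cannot distinguish positions 3 and 8), but the joint code resolves the ambiguity, which is the intuition behind low-dimensional modules representing a high-capacity space.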
This paper describes a cortical model for untangling sensorimotor sequences from external sequences. It shows how a single neural mechanism can learn and recognize both types of sequence: sequences where sensory inputs change due to external factors, and sequences where inputs change due to our own behavior (sensorimotor sequences).
This paper describes a mathematical model for quantifying the benefits and limitations of sparse representations in neurons and cortical networks.
An earlier version of the submission above, this paper applies our mathematical model of sparse representations to practical HTM systems.