The paper introduces a methodology for synthesizing HTMs on the Automata Processor, a "configurable silicon implementation of nondeterministic finite automata, designed for massively parallel pattern matching." The authors also demonstrate three prediction applications on their model, reporting potential performance gains of 137-446X over CPU implementations.
The paper covers a key learning component of Hierarchical Temporal Memory (HTM): the spatial pooler. The spatial pooling algorithm is inspired by how the brain processes information and makes predictions on spatiotemporal data. The paper lays out a mathematical framework for the spatial pooler and introduces methods for applying it to classification and dimensionality reduction. It also provides empirical evidence that the spatial pooler can be used for feature learning.
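To make the summary above concrete, here is a minimal sketch of the spatial pooling idea: columns compute an overlap with a binary input through their connected synapses, a fixed number of winners survive inhibition, and winners adapt their synapse permanences Hebbian-style. All parameter names and values here are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes/thresholds for illustration only.
INPUT_SIZE = 64       # bits in the binary input vector
NUM_COLUMNS = 128     # spatial pooler columns
ACTIVE_COLUMNS = 5    # k winners kept after global inhibition
PERM_THRESHOLD = 0.5  # permanence above this => connected synapse
PERM_INC, PERM_DEC = 0.05, 0.02

# Each column holds a potential synapse to every input bit,
# initialized with a random permanence.
permanences = rng.uniform(0.0, 1.0, size=(NUM_COLUMNS, INPUT_SIZE))

def spatial_pool(input_bits, learn=True):
    """Return the indices of the k most-overlapping columns (a sparse code)."""
    connected = permanences >= PERM_THRESHOLD
    overlaps = connected @ input_bits                  # overlap score per column
    winners = np.argsort(overlaps)[-ACTIVE_COLUMNS:]  # global k-winners inhibition
    if learn:
        # Hebbian-style update: winning columns strengthen synapses to
        # active input bits and weaken synapses to inactive ones.
        for c in winners:
            permanences[c] += np.where(input_bits == 1, PERM_INC, -PERM_DEC)
            np.clip(permanences[c], 0.0, 1.0, out=permanences[c])
    return np.sort(winners)

x = (rng.random(INPUT_SIZE) < 0.2).astype(int)  # sparse binary input
sdr = spatial_pool(x)
print(len(sdr))  # output sparsity is fixed at ACTIVE_COLUMNS
```

Note the design point the paper emphasizes: the output sparsity is fixed by the inhibition step regardless of input density, which is what makes the pooler usable for downstream classification and dimensionality reduction.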
This paper looks at Hierarchical Temporal Memory (HTM) and the characteristics that make it suitable for time-series anomaly detection: continuous learning, tolerance to noise, and generalization. The authors evaluate HTM on real and artificial data sets to show that it discovers anomalies in time-series data.
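The core quantity in HTM-based anomaly detection is a raw anomaly score: the fraction of currently active columns that the model failed to predict at the previous timestep. A minimal sketch (the column sets here are hypothetical; the paper layers further statistics on top of this raw score):

```python
def anomaly_score(active, predicted):
    """Raw HTM anomaly score: fraction of currently active columns
    that were NOT predicted at the previous timestep.
    0.0 => fully predicted input, 1.0 => completely unexpected input."""
    active, predicted = set(active), set(predicted)
    if not active:
        return 0.0
    return 1.0 - len(active & predicted) / len(active)

# Hypothetical column indices for illustration.
print(anomaly_score({1, 2, 3, 4}, {1, 2}))      # 0.5: half the input was unexpected
print(anomaly_score({1, 2}, {1, 2, 3, 4, 5}))   # 0.0: everything active was predicted
```

Because the score is computed from the model's own continuously updated predictions, detection adapts online without a separate training phase, which is the property the authors highlight.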
This Master’s thesis presents a scalable flash-based storage processor unit called Flash-HTM (FHTM), which utilizes Hierarchical Temporal Memory learning algorithms. FHTM is designed to scale with increasing model complexity. As validation, the author evaluates a mathematical model of the hardware against the MNIST dataset, yielding a 91.98% classification accuracy.
This paper explores a hardware implementation of the HTM sequence memory (the paper uses an older term, the "Cortical Learning Algorithm," to refer to the sequence memory). The authors propose a structure that is not application dependent and performs fully unsupervised continuous learning.