This paper presents an extension of the BrainScaleS accelerated analog neuromorphic hardware model to support multicompartment models and non-linear dendrites. Using a 65 nm prototype Application Specific Integrated Circuit (ASIC), the system emulates several spike types observed in cortical pyramidal neurons: NMDA plateau potentials, calcium spikes, and sodium spikes. Dendritic NMDA spikes are an important component of HTM theory, and the authors cite our paper as a motivation for developing the hardware.
The paper introduces a methodology for synthesizing HTMs on the Automata Processor, a “configurable silicon implementation of nondeterministic finite automata, designed for massively parallel pattern matching.” The authors demonstrate three prediction applications on this architecture and report potential performance gains of 137–446X over CPU implementations.
The paper covers a key learning component of Hierarchical Temporal Memory (HTM): the spatial pooler. The spatial pooling algorithm is inspired by how the brain processes information and makes predictions on spatiotemporal data. This paper lays out a mathematical framework for the spatial pooler and introduces methods for applying it to classification and dimensionality reduction. It also provides empirical evidence that it can be used for feature learning.
This paper examines Hierarchical Temporal Memory (HTM) and the characteristics that make it suitable for time-series anomaly detection: continuous learning, noise tolerance, and generalization. The authors evaluate HTM on real and artificial datasets to show that it discovers anomalies in time-series data.
This Master’s thesis presents a scalable flash-based storage processor unit, Flash-HTM (FHTM), which implements Hierarchical Temporal Memory learning algorithms and is designed to scale with increasing model complexity. As validation, the author evaluates a mathematical model of the hardware on the MNIST dataset, achieving 91.98% classification accuracy.