In this Master's thesis from KTH Royal Institute of Technology, the author developed a machine learning model based on Numenta's HTM technology to detect and classify anomalies in system logs from Ericsson's component-based architecture applications. The results are compared to those of an existing classifier, Linnaeus, which uses classical machine learning methods. The HTM model shows promising classification of system log sequences, with results comparable to the Linnaeus model.
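To give a rough feel for this kind of pipeline, here is a minimal sketch of HTM anomaly scoring over a stream of log tokens, assuming the community htm.core package. The hash-based token encoder and all parameter values are illustrative assumptions, not the thesis's actual setup.

```python
# Minimal sketch: HTM anomaly scoring over a stream of log tokens.
# Assumes the community htm.core package; the hash-based token encoder
# and all parameters are illustrative, not taken from the thesis.
import hashlib

from htm.bindings.sdr import SDR
from htm.algorithms import SpatialPooler, TemporalMemory

INPUT_SIZE = 1024   # bits in the token encoding (assumption)
ACTIVE_BITS = 20    # active bits per token (assumption)
COLUMNS = 2048      # spatial pooler output columns (assumption)

def encode_token(token: str) -> SDR:
    """Deterministically hash a log token into a sparse binary SDR."""
    sdr = SDR([INPUT_SIZE])
    raw = hashlib.shake_256(token.encode()).digest(4 * ACTIVE_BITS)
    # Derive stable pseudo-random bit positions from the hash.
    bits = {int.from_bytes(raw[i:i + 4], "big") % INPUT_SIZE
            for i in range(0, 4 * ACTIVE_BITS, 4)}
    sdr.sparse = sorted(bits)
    return sdr

sp = SpatialPooler(
    inputDimensions=[INPUT_SIZE],
    columnDimensions=[COLUMNS],
    potentialRadius=INPUT_SIZE,
    globalInhibition=True,
    localAreaDensity=0.02,
    seed=1,
)
tm = TemporalMemory(columnDimensions=[COLUMNS], cellsPerColumn=16, seed=1)

log_stream = ["start", "connect", "auth_ok", "transfer", "disconnect"] * 50
log_stream += ["start", "connect", "auth_fail"]  # novel sequence at the end

active = SDR([COLUMNS])
for token in log_stream:
    sp.compute(encode_token(token), True, active)  # learn spatial patterns
    tm.compute(active, learn=True)                 # learn temporal context
    # tm.anomaly is 1.0 for fully unexpected input, 0.0 for fully predicted.
    print(f"{token:12s} anomaly={tm.anomaly:.2f}")
```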
In this Master's thesis from the University of Agder, the authors examine the use of Hierarchical Temporal Memory (HTM) to predict future customer purchase events in a non-contractual setting. HTM was chosen because the authors wanted to investigate algorithms that may be useful in churn analysis by exploiting the temporal structure of the data in prediction-based models. The authors also compare HTM's results to those of other techniques.
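To make the "temporal structure" idea concrete, the sketch below (again assuming the community htm.core package) trains a TemporalMemory on a toy purchase-event sequence and reads its predictive cells to guess the next event. The event encoding and parameters are illustrative assumptions; the thesis's actual feature pipeline and baselines are not reproduced here.

```python
# Minimal sketch: next-purchase prediction with htm.core's TemporalMemory.
# The toy encoder and all parameters are illustrative assumptions.
import random

from htm.bindings.sdr import SDR
from htm.algorithms import TemporalMemory

COLUMNS, CELLS = 1024, 8
EVENTS = ["browse", "add_to_cart", "purchase", "return"]

rng = random.Random(0)
# Assign each event a fixed random set of active columns (a toy encoder).
encoding = {e: sorted(rng.sample(range(COLUMNS), 20)) for e in EVENTS}

tm = TemporalMemory(columnDimensions=[COLUMNS], cellsPerColumn=CELLS, seed=0)

def step(event: str, learn: bool = True) -> None:
    cols = SDR([COLUMNS])
    cols.sparse = encoding[event]
    tm.compute(cols, learn=learn)

# Train on a repeating behavioural sequence.
for _ in range(100):
    for event in ["browse", "add_to_cart", "purchase"]:
        step(event)
    tm.reset()  # sequence boundary between sessions

# Replay a prefix, then decode which event the predictive cells point to.
step("browse", learn=False)
step("add_to_cart", learn=False)
predicted_cols = {cell // CELLS for cell in tm.getPredictiveCells().sparse}
scores = {e: len(predicted_cols & set(bits)) for e, bits in encoding.items()}
print(max(scores, key=scores.get))  # expected: "purchase"
```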
The authors present a full-scale Hierarchical Temporal Memory (HTM) architecture covering both the spatial pooler and temporal memory, and verify it on two data sets: MNIST and the European number plate font (EUNF), with and without noise. The results suggest that the proposed architecture, which uses a novel form of synthetic synapses, can serve as a digital core for building HTM in hardware.
Structural plasticity is required by HTM models but is not addressed in many hardware architectures. This paper presents a model of topographic map formation implemented on the SpiNNaker neuromorphic platform, running in real time with point neurons and making use of both synaptic rewiring and spike-timing-dependent plasticity (STDP). The authors cite our papers as one of the motivations for working on this architecture.
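For readers unfamiliar with STDP, the following sketch shows the standard pair-based exponential update rule in its generic textbook form. This is not the paper's SpiNNaker implementation, and the constants are illustrative assumptions.

```python
# Minimal sketch of pair-based STDP: potentiate when the presynaptic spike
# precedes the postsynaptic spike, depress otherwise. Generic textbook rule,
# not the paper's SpiNNaker implementation; all constants are illustrative.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # learning rates (assumption)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (assumption)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post before (or with) pre: depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (30.0, 28.0), (50.0, 52.0)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)  # keep weight bounded
    print(f"pre={t_pre} post={t_post} -> w={w:.4f}")
```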
This project combined Numenta's HTM algorithms with Temporal Difference (TD) learning to solve simple tasks defined in a browser environment. Analysis of the results shows that the agent is capable of learning simple tasks and that the sparse representations carry an inherent potential for generalization.
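As a hint of how an HTM state could plug into TD learning, here is a generic linear TD(0) value update over a sparse binary feature vector, such as a set of active HTM cells. This is not the project's actual code; the environment, names, and parameters are all illustrative assumptions.

```python
# Minimal sketch: linear TD(0) over a sparse binary state vector, e.g. the
# set of active HTM cells. Generic TD update, not the project's actual code.
import numpy as np

N_CELLS = 2048           # size of the sparse state representation (assumption)
ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount (assumptions)

theta = np.zeros(N_CELLS)  # linear value function V(s) = theta . phi(s)

def value(active_cells: np.ndarray) -> float:
    return theta[active_cells].sum()  # dot product with a sparse binary phi

def td_update(s: np.ndarray, reward: float, s_next: np.ndarray) -> None:
    """TD(0): move V(s) toward reward + gamma * V(s')."""
    delta = reward + GAMMA * value(s_next) - value(s)
    theta[s] += ALPHA * delta  # credit only the active bits of the state

rng = np.random.default_rng(0)
s = rng.choice(N_CELLS, size=40, replace=False)       # toy state
s_next = rng.choice(N_CELLS, size=40, replace=False)  # toy successor state
td_update(s, reward=1.0, s_next=s_next)
print(value(s))  # value of s has moved toward the observed return
```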