In this poster, we propose a neural mechanism for sequence learning: HTM Sequence Memory, in which 1) neurons learn to recognize hundreds of patterns; 2) recognition of a pattern acts as a prediction; 3) a network of such neurons forms a powerful sequence memory; and 4) sparse representations lead to highly robust recognition.
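The robustness claim in point 4 can be illustrated with a minimal sketch of sparse distributed representations: with only ~2% of bits active, a noisy copy of a stored pattern still overlaps it far above threshold, while an unrelated pattern barely overlaps at all. The sizes N, W, and the threshold below are assumptions chosen to mirror typical HTM sparsity, not parameters from the poster.

```python
import random

random.seed(0)

N = 2048      # total bits in the representation
W = 40        # active bits (~2% sparsity)
THETA = 20    # overlap threshold for "recognized"

def sparse_pattern():
    """A random sparse binary pattern, stored as the set of active bit indices."""
    return set(random.sample(range(N), W))

def corrupt(pattern, n_flips):
    """Turn off n_flips active bits and turn on n_flips previously inactive bits."""
    dropped = set(random.sample(sorted(pattern), n_flips))
    kept = pattern - dropped
    replacements = set()
    while len(replacements) < n_flips:
        b = random.randrange(N)
        if b not in pattern:
            replacements.add(b)
    return kept | replacements

stored = sparse_pattern()
noisy = corrupt(stored, 10)          # 25% of the active bits flipped
unrelated = sparse_pattern()

print(len(stored & noisy))           # 30 of the original 40 bits survive: well above THETA
print(len(stored & unrelated))      # expected overlap is under 1 bit: no false match
```

Because active bits are so sparse relative to N, the expected overlap between two unrelated patterns is only W*W/N (under one bit here), which is why a generous noise tolerance does not cause false matches.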
In this poster, we describe a network model of cortical circuits that learns sensorimotor representations of objects. Extending previous work, the cortical circuit network integrates motor representations and feed-forward sensory information to build predictive models of objects.
We propose that cortical columns learn 3D sensorimotor models of the world by combining sensory inputs with allocentric location. We found that a simulated robot hand can grasp and recognize any object, and that each cortical column can store more objects, and recognize them faster, by using cross-column connections.
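The cross-column mechanism in this abstract can be sketched as a voting scheme: each column narrows its own set of candidate objects from local sensory input, and lateral connections intersect those candidate sets across columns, converging faster than any single column could alone. The object and feature names below are invented for illustration and are not taken from the poster.

```python
from functools import reduce

# Hypothetical objects, each defined by a set of local features.
objects = {
    "cup":  {"curved", "handle", "hollow"},
    "pot":  {"curved", "handle", "flat_bottom"},
    "ball": {"curved"},
}

def candidates(feature):
    """Objects consistent with one sensed feature (one column's local view)."""
    return {name for name, feats in objects.items() if feature in feats}

# A single column sensing only "curved" remains ambiguous:
print(candidates("curved"))            # three candidate objects

# Three columns touching different locations at once vote by
# intersecting their candidate sets over cross-column connections:
sensed = ["curved", "handle", "hollow"]
result = reduce(set.intersection, (candidates(f) for f in sensed))
print(result)                          # converges to a single object
```

A lone column would need several sequential touches to reach the same conclusion; intersecting candidates across columns resolves the ambiguity in one step, which is the intuition behind faster recognition with cross-column connections.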
This poster explains HTM Sequence Memory, a neural mechanism for sequence learning that is ubiquitous in the cortex and has the following characteristics: 1) neurons learn to recognize patterns; 2) pattern recognition acts as prediction; 3) a network of neurons forms a sequence memory; and 4) sparse representations lead to robust recognition.
This poster introduces a theory of sequence memory in the neocortex called HTM Sequence Memory. The three characteristics of HTM Sequence Memory are: 1) neurons learn to recognize hundreds of patterns; 2) pattern recognition acts as prediction; and 3) a network of neurons forms a powerful sequence memory.