In this poster, we propose a neural mechanism for determining allocentric locations of sensed features. We show how cortical columns can use multiple independent moving sensors to identify and locate objects. We lay out a model inspired by grid cell modules that describes how the brain computes and represents locations.
In this poster, we propose a neural mechanism for sequence learning, HTM Sequence Memory, in which 1) neurons learn to recognize hundreds of patterns; 2) recognition of a pattern acts as a prediction; 3) a network of such neurons forms a powerful sequence memory; and 4) sparse representations lead to highly robust recognition.
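Point 4 can be illustrated with a toy experiment. The sketch below is not the HTM implementation; it only shows, under assumed parameters (a hypothetical 2048-bit representation with 40 active bits), why sparse binary representations tolerate substantial noise: a corrupted pattern still overlaps a stored pattern far more than an unrelated one does.

```python
import random

random.seed(0)

N, W = 2048, 40  # hypothetical representation size and number of active bits

def random_sdr():
    # A sparse pattern: W active bits out of N.
    return set(random.sample(range(N), W))

def corrupt(sdr, flips):
    # Drop `flips` active bits and replace them with random noise bits.
    kept = set(random.sample(sorted(sdr), W - flips))
    while len(kept) < W:
        kept.add(random.randrange(N))
    return kept

stored = random_sdr()
noisy = corrupt(stored, 10)      # 25% of the active bits corrupted
unrelated = random_sdr()

print(len(stored & noisy))       # high overlap: still recognized
print(len(stored & unrelated))   # near zero: false matches are very unlikely
```

With 40-of-2048 sparsity, two unrelated patterns share less than one bit on average, so even a recognition threshold as low as half the active bits essentially never fires on the wrong pattern.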
In this poster, we describe a network model of cortical circuits that learns sensorimotor representations of objects. Extending previous work, the cortical circuit network integrates motor representations and feed-forward sensory information to build predictive models of objects.
We propose that cortical columns learn 3D sensorimotor models of the world by combining sensory inputs with allocentric location signals. We found that a simulated robot hand can grasp and recognize objects, and that cortical columns can store more objects, and recognize them faster, by using cross-column connections.
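The recognition-speedup idea can be sketched in miniature. This is a hypothetical toy, not the model itself: each column stores an object as a set of (location, feature) pairs, and sensations from multiple columns intersect their candidate sets, so several columns touching an object together disambiguate it faster than one column sensing alone. The object names and features below are invented for illustration.

```python
# Hypothetical object memory: object -> set of (location, feature) pairs.
objects = {
    "cup":  {(0, "rim"), (1, "handle"), (2, "base")},
    "bowl": {(0, "rim"), (1, "curve"),  (2, "base")},
    "box":  {(0, "edge"), (1, "face"),  (2, "corner")},
}

def candidates(sensation):
    # Objects consistent with one column's (location, feature) sensation.
    return {name for name, pairs in objects.items() if sensation in pairs}

def vote(sensations):
    # Columns share candidates and intersect them; recognition occurs
    # when a single object remains consistent with every sensation.
    remaining = set(objects)
    for s in sensations:
        remaining &= candidates(s)
    return remaining

print(vote([(0, "rim")]))                 # ambiguous between cup and bowl
print(vote([(0, "rim"), (1, "handle")])) # resolved to cup
```

One sensation leaves the identity ambiguous; a second sensed point from another column immediately resolves it, which is the intuition behind faster recognition via cross-column connections.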
This poster explains HTM Sequence Memory, a neural mechanism for sequence learning, which is ubiquitous in the cortex and has the following characteristics: 1) neurons learn to recognize patterns; 2) pattern recognition acts as prediction; 3) a network of neurons forms a sequence memory; and 4) sparse representations lead to robust recognition.