The Blind Spot


blind spot

noun

1. the point of entry of the optic nerve on the retina, insensitive to light.

We all have one, yet we rarely need to think about it: a blind spot, or scotoma, in scientific language. A blind spot is a gap in our vision, created by our own anatomy. At the point where the optic nerve attaches to the back of the eyeball is a patch of retina that cannot detect light. Yet, we don’t walk around perceiving gaps in our vision. We don’t see dark spots everywhere we look. So what’s going on?

It has long been assumed that the brain is simply “filling in the gap.” Scientists have suggested that the brain accounts for this single blind spot by filling it in with images from the parts of the eye that can detect light.[1][2]

Our recent research proposes that perhaps something entirely different is going on here. In our latest manuscript, Why Does the Neocortex Have Layers and Columns, A Theory of Learning the 3D Structure of the World, we introduced a theory for how the brain creates 3D models of objects. This theory offers an explanation for how the brain learns through movement. And it flips the entire idea of a blind spot on its head. Your brain isn’t filling in gaps. It’s simply never looking at something as a whole.

To illustrate this point, I’ll use an example involving touch, but the same concept holds true for all sensory modalities. I want you to imagine a coffee mug. For the purpose of this thought experiment, imagine that your eyes are closed, so you cannot rely on your vision. How would you sense the mug? You might reach your hand out and run your fingers over the rim. You could pick it up by the handle and take a sip out of it. You could hold it by the bottom in the palm of your hand. While there is no limit to the number of ways you can touch this mug, you will never touch the entire thing at once. Yet, no matter how you touch it, your brain knows this object only one way: in its entirety, as a complete mug.

So what is the brain actually doing here? Think of each finger as a sensor, receiving its own input based on where it is touching the mug. Each finger individually infers what it is touching, either on its own as it receives additional input from touching different parts of the mug, or by conferring with the other fingers. Each finger is building a complete model of the mug. That’s why when we grab the handle, we are not confused as to where the opening of the cup is, or whether it has a bottom. We don’t need to touch the entire thing to know what it is. We don’t need the brain to “fill in the gaps” for the parts we aren’t touching.
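
To make that idea a bit more concrete, here is a toy sketch in Python. It is not the algorithm from our paper, and the objects, locations, and features are invented for illustration; it only shows the gist: a sensor holds complete models of objects, and each new sensation at a new location rules out the models that don’t fit.

```python
# A toy illustration (not the algorithm from the paper): each object is stored
# as a complete model mapping locations on the object to the feature sensed there.
# The object names, locations, and features below are made up for the example.
OBJECT_MODELS = {
    "coffee mug": {"rim": "thin curved edge", "handle": "curved loop", "bottom": "flat disc"},
    "bowl": {"rim": "thin curved edge", "bottom": "flat disc"},
    "pencil": {"tip": "sharp point", "shaft": "hexagonal rod"},
}

def infer_by_touch(sensations, models=OBJECT_MODELS):
    """Narrow down candidate objects from a sequence of (location, feature)
    sensations, the way a single finger senses a mug as it moves over it."""
    candidates = set(models)
    for location, feature in sensations:
        # Keep only the objects whose complete model has this feature at this location.
        candidates = {name for name in candidates if models[name].get(location) == feature}
    return candidates

# One touch is ambiguous; a second touch at a new location settles it.
print(infer_by_touch([("rim", "thin curved edge")]))
print(infer_by_touch([("rim", "thin curved edge"), ("handle", "curved loop")]))
```

The first touch alone could be the mug or the bowl; the second touch, at a new location, settles it, and at no point does the finger need to touch, or fill in, the rest of the object.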

The same holds true with vision. With touch, our fingers act as sensors; the retina contains sensors that work the same way. Imagine if I handed you a straw and asked you to look through it and view a chair in front of you. You wouldn’t be able to perceive the entire chair through the straw, so you would move the straw to scan over the chair. Each of those views through a straw is akin to a finger touching an object. In the same way you have multiple fingers touching an object at the same time, imagine multiple sensors in your eye, each taking in its own unique, straw-like view simultaneously. Your brain builds a model of the chair by viewing it in the same way it builds a model of the mug by touching it.

Because your brain builds a model and not an image, the chair exists in your brain as a whole object regardless of what part you are observing. So it doesn’t matter if part of the object is obscured by your blind spot or if you can only see the half of the object that is facing you; your partial observations of the object will invoke your brain’s representation for the object in its entirety. If you’re looking at the top of the chair and the bottom is not within your view, you’re not confused as to whether the chair has legs. You haven’t built a model in your brain of a legless chair. You’ve built a model of the chair as it exists in the world, in its entirety.

The brain is not a single unit looking at a single thing. It comprises multiple sensors, represented by multiple independent columns. Each column receives its own sensory and location input and builds models of objects. The columns work together to infer an object given the partial information contributed by each column. The brain doesn’t have to fill in for a blind spot, because each column has a complete model of the object and the columns collaborate to infer the object from their collective sensations.
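
Here is a second toy sketch, again hypothetical rather than the model described in the paper, extending the example above to columns working together: each column holds its own complete object models, receives only one (location, feature) pair, and the columns agree on an object by intersecting their candidate lists.

```python
# A toy illustration of columns working together (hypothetical, not the paper's model).
# Every column stores the same complete object models, mapping locations to features.
MODELS = {
    "coffee mug": {"rim": "thin curved edge", "handle": "curved loop", "bottom": "flat disc"},
    "bowl": {"rim": "thin curved edge", "bottom": "flat disc"},
    "plate": {"rim": "thin curved edge", "center": "flat surface"},
}

def column_candidates(location, feature, models=MODELS):
    """One column's guess from its own partial input: every object whose
    complete model has this feature at this location."""
    return {name for name, model in models.items() if model.get(location) == feature}

def columns_infer(per_column_inputs, models=MODELS):
    """Columns collaborate by intersecting their candidate sets. No column
    senses the whole object, yet together they converge on the whole object."""
    candidates = set(models)
    for location, feature in per_column_inputs:
        candidates &= column_candidates(location, feature, models)
    return candidates

# Three columns, each sensing a different part of the same object at the same time.
print(columns_infer([("rim", "thin curved edge"),
                     ("bottom", "flat disc"),
                     ("handle", "curved loop")]))
```

The intersection stands in for the columns conferring with one another. Because every column already carries a complete model, there is nothing to fill in for the parts no column happens to be sensing.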

As we continue to advance our understanding of how the brain works, we remain excited about the implications. We believe this work will have profound and lasting effects on how people think about the brain. But it’s fun to see it unfolding in everyday occurrences like this, offering new explanations for familiar concepts, like blind spots, that we may never think about the same way again.

Footnotes and Citations


[1] Awater, H., Kerlin, J. R., Evans, K. K., & Tong, F. (2005). Cortical representation of space around the blind spot. Journal of Neurophysiology. doi:10.1152/jn.01330.2004. https://www.ncbi.nlm.nih.gov/pubmed/16033933


[2] Pessoa, L., Thompson, E., & Noë, A. (1998). Finding out about filling-in: A guide to perceptual completion for visual science and the philosophy of perception. Behavioral and Brain Sciences. https://www.ncbi.nlm.nih.gov/pubmed/10191878

Authors
Christy Maver • VP of Marketing