On Intelligence

by Jeff Hawkins & Sandra Blakeslee

In On Intelligence, his first book, Jeff introduces what a theory of the brain would look like. He proposes the key idea that the brain learns a model of the world and uses this model to predict the future, and argues that to be intelligent, an AI system has to learn a model of the world.

Please contact press@numenta.com for any inquiries.

Translations Available

Chinese, Finnish, French, German, Hebrew, Indonesian, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Vietnamese.

About the Authors

Jeff Hawkins

Jeff is an engineer, serial entrepreneur, scientist, inventor, and author. His lifelong interest in neuroscience and theories of the neocortex has driven his passion for building technology based on neocortical theory. Previously, he founded two mobile computing companies, Palm and Handspring, and is the architect of many computing products such as the PalmPilot and the Treo smartphone. In 2002, he founded the Redwood Neuroscience Institute, a scientific institute focused on understanding how the neocortex processes information; the institute is currently located at U.C. Berkeley. In 2004 Jeff wrote the book On Intelligence, which outlines Hierarchical Temporal Memory (HTM) and describes progress on understanding the neocortex.

Jeff earned his B.S. in Electrical Engineering from Cornell University in 1979. He was elected to the National Academy of Engineering in 2003.

Sandra Blakeslee

Sandra Blakeslee is a science correspondent for the New York Times who specializes in the neurosciences. “When Jeff Hawkins first called me and described his theory of how the brain works, I was enchanted,” Blakeslee said. “I realized instantly that he had found a Rosetta Stone for explaining countless mysteries of human behavior. Working with him was irresistible and, I’m proud to say, a great honor.”

Blakeslee graduated from the University of California, Berkeley, in 1965, where she majored in political science. After serving as a Peace Corps volunteer in Borneo, she returned to the United States and joined the New York Times United Nations bureau as a news assistant. In 1968 she became a staff writer in the science department in New York. In the early 1970s, Blakeslee moved to California and began freelancing while raising two children. After spending several years with her family in Africa and Europe, she returned to California and worked briefly for the Los Angeles Times.

In 1983, Blakeslee rejoined the New York Times as a science correspondent and has continued writing for the paper since then on special contract. She is now based in Santa Fe, New Mexico.

In 1995, Blakeslee and George Johnson, a New York Times colleague who also lives in Santa Fe, began the Santa Fe Science Writing Workshop. “We bring top science writers to town each year to be our faculty and help students learn what the field is all about,” she said. “Everyone goes home charged up. It’s a wonderful experience to mentor new writers.”

Blakeslee is coauthor of several books with Dr. Judith Wallerstein on the effects of divorce on children: Second Chances, The Unexpected Legacy of Divorce, and What About the Kids, as well as a book on what makes marriage work, The Good Marriage. She also coauthored Phantoms in the Brain with psychologist and neurologist Dr. V. S. Ramachandran of UCSD.

Reviews & Press

Jeff Hawkins and On Intelligence are featured in Fortune Magazine in “How Do You Think the Brain Works?”, written by David Stipp.

“I’ve read dozens of books about the human brain and how it works. On Intelligence … is far and away the best.” – Lynn Yarris, Senior Science Writer, The San Jose Mercury News

“On Intelligence will have a big impact; everyone should read it. In the same way that Erwin Schrödinger’s 1943 classic What is Life? made how molecules store genetic information the big problem for biology at the time, On Intelligence lays out the framework for understanding the brain.” – James D. Watson, president, Cold Spring Harbor Laboratory, and Nobel laureate in Physiology or Medicine

“A landmark book. On Intelligence is the first clear exposition of what could be the long-awaited ‘great general theory’ of human brain function. Loaded with intelligence, insight and wisdom, it’s a wonderfully readable account of the fundamental principles of the brain by a great American original.” – Mike Merzenich, professor of neuroscience, University of California, San Francisco

“Brilliant and imbued with startling clarity. On Intelligence is the most important book in neuroscience, psychology, and artificial intelligence in a generation.” – Malcolm Young, professor of biology and provost, University of Newcastle

“Read this book. Burn all the others. It is original, inventive, and thoughtful, from one of the world’s foremost thinkers. Jeff Hawkins will change the way the world thinks about intelligence and the prospect of intelligent machines.” – John Doerr, partner, Kleiner Perkins Caufield & Byers

“Jeff Hawkins has written an original, thought-provoking and, with the help of Sandra Blakeslee, remarkably readable book that presents a new theory of the functions of the cerebral cortex in perception, cognition, action and intelligence. What is distinctive about his theory is the original way existing ideas about the cerebral cortex and its architecture have been combined and elaborated based on an extensive knowledge of how the brain works – what Hawkins calls Real Intelligence in contrast to computer-based Artificial Intelligence. As a result, this book is a must-read for everyone who is curious about the brain and wonders how it works. Many sections of this book, especially those on intelligence, creativity and minds of silicon, are so thoughtful and original that they are likely to be required reading for college undergraduates for years to come.” 

Eric R. Kandel, professor, Columbia University, senior investigator, Howard Hughes Medical Institute, and 2000 Nobel laureate in Physiology or Medicine

“On Intelligence is a brilliantly presented and innovative hypothesis of how the brain works. Hawkins makes a convincing case that human perception is based upon expectations…that our minds predict what we will experience before we experience it, based on our memory of similar circumstances. We then pay attention to the differences…the sensory experiences that are not part of our expectations.

Hawkins also proposes an innovative and credible mechanism for how this anticipatory thinking is implemented by the brain’s circuitry, including a guide to what the experimental proof will look like.

The theory provides insight as to why eyewitnesses to the same event will often differ in their recollection of what they saw, based on their different life experiences. It also gives a logical reason for human prejudice and why people get addicted to gambling.

I believe the book will have a great impact on neuroscience and lay a foundation for improving human communication, mutual understanding, and education.”

Pat McGovern, Founder and Chairman, International Data Group (IDG), and Chairman of the McGovern Institute for Brain Research at MIT

“The brain is an astonishing machine endowed with intelligence. How does it work? What are the essential differences that distinguish it from computers? These fundamental questions pass through everyone’s mind at least once, and Jeff Hawkins is no exception; in his case, the desire to find the answers evolved into a lifelong quest. Trained as an electrical engineer and employed by Intel, a very successful computer company, Hawkins was engaged in the development of computer chips, far from the world of brain research, yet his desire to understand the brain was ever present.

In a letter to the President of Intel, Hawkins proposed “a research group to understand how the brain works. It will be a big business one day.” Intel, however, did not agree. Following a visit to the Artificial Intelligence Laboratory at MIT, Hawkins attempted to pursue a PhD studying brain-based intelligent machines; MIT also showed no interest. Frustrated, he landed in Silicon Valley and started two remarkably successful companies, Palm Computing and Handspring. They were so successful that he was able to found the Redwood Neuroscience Institute with the profits to develop brain-based intelligent machines.

In On Intelligence, he proposes a theory to explain how intelligence emerges in the brain. Unencumbered by the biological and information-processing details of the brain, Hawkins first shows how the fundamental principles of the brain differ from those of computers, and then how intelligent machines could be developed within the next ten years. Told by a successful computer architect, the complete story is very convincing.

According to Hawkins, intelligence emerges in the cerebral cortex, a folded sheet of six layers that is divided into many regions. A hierarchical structure governs the functional principles of each cortical region. What, then, is the fundamental magic of the brain? Dynamic predictions based on incoming information. These predictions start in the lower levels of the hierarchy and are sent to the upper levels of each region. When predictions are correct, the situation is correctly understood; incorrect predictions require additional global interpretation and prediction at higher levels. Prediction results from higher levels are sent back down to lower levels to help the interpretation of incoming information. The system is memory-based: the associative memory of recurrent connections enables new predictions based on past information. In short, Hawkins says that intelligence emerges in the brain from the ability to make predictions using associative memory with invariant representations.

Current approaches to AI cannot produce intelligent machines, argues Hawkins, because they try to mimic basic human behaviors that can be imitated without intelligence. Research on neural networks is also criticized, because feedforward networks such as perceptrons cannot generate intelligence; intelligence lies in the ability to make predictions. This is a top-down approach, in sharp contrast to the traditional bottom-up approaches that study the details of the parts first. Most neuroscientists adopt the bottom-up approach; engineers, designers, and artists, however, need a top-down approach to complete their work.

As I was the first to propose that the associative memory of recurrent neural networks can recall pattern sequences, Hawkins’ argument is a delight to read. My paper, published in 1972 in IEEE Transactions on Computers, has long been forgotten, even after Hopfield proposed the same idea of associative memory for pattern completion. Here, Hawkins presents a grand design for intelligent machines. As the director of the RIKEN Brain Science Institute and a researcher engaged in mathematical neuroscience for many years, I am pleased to read this exciting story of the brain. I enthusiastically look forward to the development of brain-based intelligent machines.”

Shun-ichi Amari, Director of the RIKEN Brain Science Institute, and Laboratory Head of the Mathematical Neuroscience Laboratory, Hirosawa, Wako-shi, Saitama, Japan
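
Amari’s summary above describes the core loop of the memory-prediction framework: each cortical region stores sequences, predicts its next input, and passes only unexpected input up the hierarchy. The sketch below is a minimal toy illustration of that idea in Python, not Hawkins’ actual model or Numenta’s HTM code; the Region class, its one-step transition memory, and the surprise-forwarding rule are simplifying assumptions made for this example, and the top-down feedback path described in the book is omitted for brevity.

```python
# Toy sketch of a memory-prediction hierarchy (illustrative only, not HTM).
# Each region remembers which pattern tends to follow which, predicts the next
# input, and forwards only surprising (mispredicted) input to the region above.

class Region:
    def __init__(self, name):
        self.name = name
        self.memory = {}      # associative memory: previous pattern -> expected next pattern
        self.previous = None  # last pattern this region saw

    def process(self, pattern):
        """Return True if the pattern was predicted, False if it was a surprise."""
        predicted = self.memory.get(self.previous)
        if self.previous is not None:
            # Learn (or update) the observed transition.
            self.memory[self.previous] = pattern
        self.previous = pattern
        return predicted == pattern


def run_hierarchy(regions, stream):
    """Feed a stream of patterns to the lowest region; pass surprises upward."""
    for pattern in stream:
        for region in regions:
            if region.process(pattern):
                # Prediction succeeded: the input is handled locally and
                # nothing propagates further up the hierarchy.
                print(f"{pattern!r}: understood at {region.name}")
                break
        else:
            # No region predicted the pattern; every level has now stored it.
            print(f"{pattern!r}: novel at every level")


if __name__ == "__main__":
    hierarchy = [Region("lower region"), Region("higher region")]
    # A repeating sequence becomes predictable; the odd pattern stays a surprise.
    run_hierarchy(hierarchy, ["A", "B", "A", "B", "A", "X", "A", "B"])
```

Running the example shows the intended behavior: once a transition has been seen, repeated patterns are handled at the lowest region, while novel patterns escalate through every level before being stored.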

“Jeff Hawkins’ On Intelligence is an important book. It lays out a dramatic new unified theory of how the brain works, in a way that even lay readers can easily understand. And it predicts an exciting technology future filled with truly intelligent machines that go far beyond today’s computers and crude robots.” – Walt S. Mossberg, Personal Technology Columnist, The Wall Street Journal

“A remarkable synthesis of ideas. I predict it will have a major impact on neuroscience and neuroscientists.” – Mriganka Sur, Sherman Fairchild Professor of Neuroscience and Head of the Department of Brain and Cognitive Sciences at MIT

“Not since McCulloch and Pitts has there been a brain model so interesting to computer science as Hawkins’ memory-prediction model. This is a brain model computer scientists can sink their “algorithmic teeth” into. Computer scientists can help refine the memory-prediction model by formalizing it, and they can explore various aspects of its behavior by simulation. This is something we can and should start now; it provides a basis for partnering with neuroscience. The book also takes a fresh look at several often-discussed but poorly understood topics. I found the brief comments on consciousness more interesting than Daniel Dennett’s 1991 book, Consciousness Explained, and the discussion of awareness and the thought experiment of erasing memory rival the insights from Damasio’s 1999 book, The Feeling of What Happens. The memory-prediction model is more compelling than Roger Penrose’s insistence on the need for quantum mechanical explanations of consciousness, and it offers a more compelling thesis about language than Chomsky’s famous claim that there is a “language organ” in the brain. I recommend that all computer scientists and computer engineers read this fascinating book.”

Robert L. Constable, Dean of the Faculty of Computing and Information Science, Cornell University