The precursors to the machine learning (ML) and deep learning (DL) of today have been in place for more than 400 years. It all started with humans' quest to model the real world in a way that could be understood.
Ravi Chityala, senior software development engineer and instructor with the UCSC Silicon Valley Extension Database and Data Analytics program, will begin the evening by walking through the history of, and progress we have made in, modeling the real world: from the era of solving differential equations to the current applications of ML and DL.
We will then look to biology for inspiration in creating intelligent systems. We understand the brain today much better than we did 50 years ago. What new lessons can help us move beyond the limitations of deep learning?
Matt Taylor, open source community manager with Numenta, will present some key components of intelligence as it is currently understood at Numenta, including continuous learning, sparsity, and distributed semantics.
We’ll talk about the predictive capacity of pyramidal neurons, how a single representation can encode many things in many contexts, and how sparsity and distributed semantics are enforced within some layers of the neocortex. We’ll even explore how to apply these ideas to today’s deep networks, such as CNNs.
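To make the sparsity idea concrete, here is a minimal sketch (not Numenta's actual implementation) of two ingredients the talk touches on: a k-winners step that keeps only the k strongest activations in a layer, and an overlap score that compares two sparse codes by their shared active bits. The function names `k_winners` and `overlap` are illustrative, not from any specific library.

```python
import numpy as np

def k_winners(x, k):
    """Enforce sparsity: keep the k largest activations, zero the rest."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]  # indices of the k strongest units
    out[top] = x[top]
    return out

def overlap(a, b):
    """Similarity of two sparse codes: count of shared active bits."""
    return int(np.sum((a > 0) & (b > 0)))

# A 100-unit layer forced to 5% sparsity.
activations = np.random.rand(100)
sparse = k_winners(activations, 5)
print(np.count_nonzero(sparse))  # 5
```

With only a few active units out of many, two codes almost never overlap by chance, which is one reason sparse distributed representations are robust to noise.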
Finally, we’ll discuss the breakthroughs needed to move AGI forward in the lab and toward practical applications.
This event is co-sponsored by Numenta and UCSC Silicon Valley Extension.