How Do We Learn? Stevens Mathematician and Team Investigate

How do we learn? It's a simple, yet vexing, question — and one that remains largely unanswered.

A psychologist might say we accumulate experiences, changing our behavior as we go, prioritizing and acting from memory banks. Mathematicians and neuroscientists take a different tack, focusing on the dynamical systems and rules that govern both the individual and group behavior of the nearly 100 billion neurons within each human brain. For them, the question of how we learn essentially boils down to one of how a dynamical system processes information. Despite tremendous recent progress in the modeling of neuronal dynamics, the actual computational properties of neurons remain largely a mystery.
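To make the idea of a neuron as a dynamical system concrete, consider the classic leaky integrate-and-fire model from the computational neuroscience literature: the membrane voltage decays toward a resting value, integrates incoming current, and emits a spike when it crosses a threshold. The Python sketch below is purely illustrative (it is not the team's model), and the parameter values are chosen only for demonstration.

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r=1e7):
    """Simulate a leaky integrate-and-fire neuron.

    i_input: array of input currents (amperes), one per time step.
    Returns the membrane-voltage trace and a list of spike times (s).
    """
    v = v_rest
    voltages, spike_times = [], []
    for step, i_t in enumerate(i_input):
        # Voltage decays toward rest while integrating the input current.
        dv = (-(v - v_rest) + r * i_t) / tau
        v += dv * dt
        if v >= v_thresh:                  # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset                    # then reset the membrane voltage
        voltages.append(v)
    return np.array(voltages), spike_times

# Constant 2 nA input for 100 ms yields regular, repetitive spiking.
currents = np.full(1000, 2e-9)
v_trace, spikes = simulate_lif(currents)
print(f"{len(spikes)} spikes in 100 ms")
```

Even a model this simple exhibits the hallmark of a dynamical system: its output depends not just on the current input but on the state accumulated from everything that came before.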

Now a Stevens-Texas A&M team hopes to shed new light on the rules and actions of neurons and neuronal networks as they rapidly process changing information and learn. The team will focus on understanding the principles that govern neuronal information processing, rather than on replicating the biology of neuronal dynamics in ever greater detail.

"It's that aha moment," says Michael Zabarankin, a Stevens mathematician who is one of four principal researchers in a new initiative to investigate and model neuronal networks. "When does the comprehension of a set of basic facts, whatever they are — directions, perceptions, recognitions of threats — magically assemble into knowledge, awareness, understanding and adaptation? How and where does it happen? Finding answers to these questions is no doubt an extraordinarily ambitious goal, but that's what we hope to learn more about."

The team proposes to use information-processing principles to derive neuronal dynamics and the synaptic update 'rules' that govern the strengthening of connections among brain neurons, and to understand the ways in which neuronal networks self-organize and optimize the brain's information processing.
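One widely studied family of synaptic update rules is Hebbian plasticity, in which a connection strengthens when the neurons on both of its ends are active together. The toy sketch below, assuming a simple rate-based Hebbian rule with weight decay, shows only the general form such rules take; the team's goal is to derive the actual rules from information-processing principles rather than assume them.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """One Hebbian step: w[i, j] grows when presynaptic neuron j and
    postsynaptic neuron i are active together; a small decay term
    keeps the weights from growing without bound."""
    return w + lr * np.outer(post, pre) - decay * w

rng = np.random.default_rng(seed=0)
w = rng.normal(scale=0.1, size=(4, 3))   # 3 input neurons -> 4 output neurons
for _ in range(100):
    pre = rng.random(3)                  # presynaptic firing rates
    post = w @ pre                       # linear postsynaptic response
    w = hebbian_update(w, pre, post)
print("final weight matrix:\n", w)
```

Repeatedly co-active input-output pairs end up with the strongest weights, which is the textbook intuition behind "neurons that fire together wire together."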

By matching the increasingly intelligent behavior of laboratory mice solving spatial problems, such as repeatedly running a maze, to the observed firing patterns of the mice's grid cells — specialized neurons in the brain's medial entorhinal cortex (MEC) that are central to forming cognitive maps of spatial environments in mice and humans alike — the team hopes to link neuronal network optimization to successful real-world 'learning' for the first time.
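Grid cells owe their name to a striking firing pattern: each cell fires at the vertices of a hexagonal lattice tiling the animal's environment. A standard idealization from the literature represents that rate map as a sum of three cosine plane waves oriented 60 degrees apart. The sketch below illustrates that textbook model, not the team's analysis; the spacing and phase values are arbitrary.

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.3, phase=(0.0, 0.0)):
    """Idealized grid-cell rate map: summing three cosine plane waves
    oriented 60 degrees apart yields the hexagonal firing lattice
    recorded from MEC grid cells. Positions are in meters."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)      # wave number for this spacing
    rate = 0.0
    for angle in (0.0, np.pi / 3, 2 * np.pi / 3):
        # Project position onto the wave's direction and add its contribution.
        proj = np.cos(angle) * (x - phase[0]) + np.sin(angle) * (y - phase[1])
        rate = rate + np.cos(k * proj)
    return np.maximum(rate, 0.0)                # firing rates are nonnegative

# Sample the rate map over a 1 m x 1 m arena on a 50 x 50 grid.
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
rate_map = grid_cell_rate(xs, ys)
print("peak rate (arbitrary units):", rate_map.max())
```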

Such insights, says Zabarankin, could pave the way toward a deeper understanding of how humans form and remember mental maps of their environment, and could shed more light on the nature of neurological diseases such as Alzheimer's and Parkinson's. As these insights are refined into computational models, they could also yield more intelligent software architectures and might eventually be implemented or replicated in circuitry, a process known as neuromorphic engineering.