I study how brains and computers differ in the way they compute. These differences arise in large part from basic physical differences in their hardware. I research brain-like computation using "neural networks," simplifications of the complexities of real brains. Besides offering clues to brain function as models, neural networks have some practical applications. For example, I applied a simple neural network model, originally derived to explain how humans form concepts from experience, to the problem of "understanding" a complex radar environment.

More recently, I have been working on a set of models for the intermediate-level organization of the nervous system. Scientists know a great deal about individual neurons. We also know a good deal about behavior and the functioning of very large groups of neurons. But we know almost nothing about how groups of a hundred, a thousand, or even a hundred thousand neurons cooperate to compute, perceive, think, or behave. One question I am now trying to answer concerns scaling: under what conditions can similar computational functions be performed by networks of greatly differing size?
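The scaling question can be made concrete with a toy sketch. The particulars here are my own illustrative assumptions, not the author's models: a single-layer perceptron trained with the delta rule to compute the majority function of three bits, where each input bit is presented redundantly by many units. Networks of very different sizes (3 units vs. 300) then learn the same computational function.

```python
import random

def majority(bits):
    # Target function: 1 if at least two of the three bits are on.
    return 1 if sum(bits) >= 2 else 0

def train_perceptron(n_copies, epochs=100, lr=0.1, seed=0):
    """Train a threshold unit on majority-of-3, with each input bit
    duplicated across n_copies units (a redundant population code)."""
    rng = random.Random(seed)
    n_inputs = 3 * n_copies
    w = [0.0] * n_inputs
    bias = 0.0
    patterns = [((a, b, c), majority((a, b, c)))
                for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    for _ in range(epochs):
        rng.shuffle(patterns)
        for bits, target in patterns:
            # Redundant coding: each logical bit drives n_copies units.
            x = [bit for bit in bits for _ in range(n_copies)]
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
            err = target - out
            if err:
                # Delta rule: nudge weights toward the correct response.
                for i, xi in enumerate(x):
                    w[i] += lr * err * xi
                bias += lr * err
    def net(bits):
        x = [bit for bit in bits for _ in range(n_copies)]
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
    return net

small = train_perceptron(n_copies=1)    # 3 input units
large = train_perceptron(n_copies=100)  # 300 input units
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
print(all(small(p) == majority(p) for p in inputs))  # True
print(all(large(p) == majority(p) for p in inputs))  # True
```

Because majority-of-3 is linearly separable, the perceptron convergence theorem guarantees both sizes succeed; the interesting research questions begin where such guarantees run out.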
I became interested in brains and computers as a graduate student in psychology and neuroscience at M.I.T. I wanted to know how I worked. And if we learn enough about brains, perhaps we will eventually be able to build machines that work the way we do.