The lab also uses computational modeling to develop and refine theories of the architectural mechanisms that underlie cognition.
One project involved a computational study of how the brain represents mental concepts. Representations in sensory cortices are organized topographically: auditory cortex is organized tonotopically, somatosensory cortex is organized somatotopically, and visual cortex is organized retinotopically. Substantial progress has been made in understanding how topography develops at a neurocomputational level, particularly in the early and middle stages of processing in the visual system. We extended this work to investigate how higher-level semantic representations could develop based on topographic input from sensory maps in the ventral visual pathway.
We constructed a hierarchical model in which the receptive fields of cells in downstream layers corresponded to loci of activity within topographically organized earlier layers. Using this model, we found that meaningful semantic representations at increasing levels of abstraction naturally emerged as a result of exposure to a set of visual stimuli. For example, when presented with a set of simple visual features (color, texture, size, and shape), the model developed semantic representations that distinguish basic-level categories (e.g., dogs, tables, cars), superordinate categories (e.g., animals, furniture, vehicles), and living versus nonliving things (Newman & Polk, 2007).
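The flavor of this kind of self-organization can be conveyed with a toy one-dimensional Kohonen-style self-organizing map (a generic sketch of topographic learning, not the actual Newman & Polk model). After training, inputs with different feature profiles come to be represented by distinct units on the map. The feature vectors and all parameter values are illustrative choices of ours:

```python
import math
import random

def bmu(weights, x):
    """Index of the best-matching unit (smallest squared distance to x)."""
    return min(range(len(weights)),
               key=lambda i: sum((wi - xi) ** 2 for wi, xi in zip(weights[i], x)))

def train_som(data, n_units=10, epochs=100, seed=0):
    """Train a 1-D Kohonen map: the winner and its neighbors move toward each input."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = 0.5 * (1.0 - t / epochs)           # learning rate decays over time
        sigma = 0.5 + 3.0 * (1.0 - t / epochs)  # neighborhood width shrinks
        for x in data:
            b = bmu(weights, x)
            for i in range(n_units):
                # Gaussian neighborhood centered on the winning unit
                g = math.exp(-((i - b) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    weights[i][d] += lr * g * (x[d] - weights[i][d])
    return weights

# Two hypothetical feature prototypes (stand-ins for visual features):
furry = [1.0, 1.0, 0.0, 0.0]
metallic = [0.0, 0.0, 1.0, 1.0]
weights = train_som([furry, metallic] * 3)
# The two categories come to be represented by different units on the map:
print(bmu(weights, furry), bmu(weights, metallic))
```

The key design choice is the shrinking neighborhood: early on, broad updates drag whole stretches of the map toward each input, which is what yields topographic ordering rather than isolated winner units.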
This work therefore offers a computationally explicit hypothesis about how semantic representations could emerge in the brain. We’ve also shown how similar self-organizing neural networks can detect patterns of functional connectivity in fMRI data (Peltier et al., 2003).
We’ve also investigated neural network architectures with massive recurrent connectivity, a correlation-based (Hebbian) learning rule, and distributed representations. These principles all have substantial empirical support in the neuroscience literature, and although they obviously abstract away from most of the details of the operation of individual neurons, it turns out that they are sufficient to give rise to some very interesting emergent computational properties. Specifically, such networks act as an associative memory: if you repeatedly expose the network to a specific set of inputs, it will remember those inputs. There’s a sense in which these networks are “attracted” to the stored inputs; indeed, they are often called attractor nets and the stored inputs are called attractors. Given that most of neocortex satisfies the basic assumptions underlying attractor nets, there is good reason to believe that their emergent properties may be relevant in explaining cognitive phenomena in a wide range of domains.
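These ingredients can be sketched in a miniature Hopfield-style attractor net (a generic toy example of ours, not a model from the work cited here): a correlation-based Hebbian rule stores patterns in the weights, and a degraded cue is then pulled back to the nearest stored pattern:

```python
def hebbian_weights(patterns):
    """Correlation-based storage: the weight between two units grows
    when the units are co-active across the stored patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def update(w, s):
    """One synchronous update: each unit takes the sign of its summed input."""
    return [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
            for i in range(len(s))]

p1 = [+1] * 8 + [-1] * 8   # two stored patterns (chosen to be orthogonal)
p2 = [+1, -1] * 8
w = hebbian_weights([p1, p2])

cue = list(p1)
cue[0], cue[1] = -cue[0], -cue[1]   # degrade two units of the cue
print(update(w, cue) == p1)         # True: pulled back into the p1 attractor
```

Nothing here explicitly implements "memory retrieval"; pattern completion simply falls out of the correlation-based weights, which is the sense in which the stored inputs act as attractors.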
Consistent with this intuition, we’ve found that the neural principles underlying attractor nets are able to provide insight into some basic questions about verbal working memory. For example, how is information actually maintained? Why does it decay over time? Why do similar items interfere with each other? And what are the neural computations that give rise to these phenomena? We studied an attractor-based model that suggests answers to these questions. In particular, attractor nets naturally maintain information via reverberatory activity, information decays if the reverberatory activity is insufficient to maintain itself, and similar patterns do indeed interfere with each other (Jones & Polk, 2002).
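The interference question in particular can be illustrated with a self-contained toy attractor net (our own sketch, not the Jones & Polk model): the same Hebbian storage and recall machinery cleans up a degraded cue correctly when the two stored patterns are dissimilar, but misrecalls the cue as its near neighbor when the stored patterns are similar:

```python
def store(patterns):
    """Hebbian storage: connections strengthen between co-active units."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, cue, sweeps=10):
    """Asynchronous sign updates until the state stops changing."""
    s = list(cue)
    for _ in range(sweeps):
        changed = False
        for i in range(len(s)):
            new = 1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            break  # reached a fixed point (an attractor)
    return s

# Control: two dissimilar (orthogonal) stored patterns.
p1 = [+1] * 16
p2 = [+1, -1] * 8
cue = list(p1); cue[1] = -cue[1]     # degrade one unit of p1
ok = recall(store([p1, p2]), cue)    # cleaned up correctly -> p1

# Interference: two stored patterns that differ in only two units.
q1 = [+1] * 16
q2 = list(q1); q2[0], q2[1] = -1, -1
cue = list(q1); cue[1] = -cue[1]     # degrade one distinguishing unit
bad = recall(store([q1, q2]), cue)   # captured by the similar neighbor -> q2
print(ok == p1, bad == q2)           # True True
```

The failure mode is not an added assumption: when two stored patterns overlap heavily, their contributions to the weights overlap too, so a cue that has lost a distinguishing feature falls into the wrong basin of attraction.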
Attractor nets may also provide a way to link higher-level symbolic phenomena with lower-level subsymbolic phenomena. For example, we’ve shown how these models could explain and predict asymmetries in similarity judgments (Polk et al., 2002). We’ve also developed a mapping from symbolic rules onto subsymbolic attractor nets (Simen & Polk, 2010). This work offers the hope of shedding light on one of the central questions about cognition: How could a subsymbolic, parallel system like the brain give rise to the symbolic, sequential behavior that is characteristic of thought? We exploited this idea to develop a neural network model of executive control in higher cognition (Polk et al., 2002).
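To give a feel for how a symbolic rule might ride on attractor dynamics, here is a deliberately simplified sketch in the spirit of, but not taken from, Simen & Polk (2010): adding an asymmetric Hebbian term from pattern A to pattern B turns the rule "if A then B" into a directed transition between attractor states. The patterns and the gain parameter are our own illustrative choices:

```python
n = 16
A = [+1] * 8 + [-1] * 8   # "condition" pattern
B = [+1, -1] * 8          # "action" pattern, orthogonal to A
gamma = 2.0               # strength of the directed rule link A -> B

# The symmetric Hebbian terms store A and B; the asymmetric gamma term
# adds a one-way pull from A toward B, strong enough to carry the
# network out of state A and into state B.
w = [[(A[i] * A[j] + B[i] * B[j] + gamma * B[i] * A[j]) / n if i != j else 0.0
      for j in range(n)] for i in range(n)]

def step(w, s):
    """One synchronous update of every unit."""
    return [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
            for i in range(len(s))]

s = step(w, list(A))             # start the network in state A
print(s == B, step(w, s) == B)   # True True: the rule fires, then B is stable
```

Chaining such directed links (A to B, B to C, and so on) is one way a parallel subsymbolic system could produce the kind of sequential, step-by-step trajectory through states that symbolic rule-following exhibits.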