43 research outputs found
On Cognitive Neuroscience
Stephen M. Kosslyn is Professor of Psychology at Harvard University and an Associate Psychologist in the Department of Neurology at the Massachusetts General Hospital. He received his B.A. in 1970 from UCLA and his Ph.D. from Stanford University in 1974, both in psychology, and taught at Johns Hopkins, Harvard, and Brandeis Universities before joining the Harvard faculty as Professor of Psychology in 1983. His work focuses on the nature of visual mental imagery and high-level vision, as well as applications of psychological principles to visual display design. He has published over 125 papers on these topics, co-edited five books, and authored or co-authored five books. His books include Image and Mind (1980), Ghosts in the Mind's Machine (1983), Wet Mind: The New Cognitive Neuroscience (with O. Koenig, 1992), Elements of Graph Design (1994), and Image and Brain: The Resolution of the Imagery Debate (1994). Dr. Kosslyn has received numerous honors, including the National Academy of Sciences Initiatives in Research Award, is currently on the editorial boards of many professional journals, and has served on several National Research Council committees advising the government on new technologies.
Fear Selectively Modulates Visual Mental Imagery and Visual Perception
Emotions have been shown to modulate low-level visual processing of simple stimuli. In this study, we investigate whether emotions only modulate processing of visual representations created from direct visual inputs or whether they also modulate representations that underlie visual mental images. Our results demonstrate that when participants visualize or look at the global shape of written words (low-spatial-frequency visual information), the prior brief presentation of fearful faces enhances processing, whereas when participants visualize or look at details of written words (high-spatial-frequency visual information), the prior brief presentation of fearful faces impairs processing. This study demonstrates that emotions have similar effects on low-level processing of visual percepts and of internal representations created on the basis of information stored in long-term memory.
Is Cognitive Neuropsychology Plausible? The Perils of Sitting on a One-Legged Stool
We distinguish between strong and weak cognitive neuropsychology, with the former attempting to provide direct insights into the nature of information processing and the latter having the more modest goal of providing constraints on such theories. We argue that strong cognitive neuropsychology, although possible, is unlikely to succeed and that researchers will fare better by combining behavioral, computational, and neural investigations. Arguments offered by Caramazza (1992) in defense of strong neuropsychology are analyzed, and examples are offered to illustrate the power of alternative points of view.
Dissociation Between Visual Attention and Visual Mental Imagery
Visual mental imagery (which involves generating and transforming visual mental representations, i.e., seeing with the mind's eye) and visual attention appear to be distinct processes. However, some researchers have claimed that imagery effects can be explained by appeal to attention (and thus, that imagery is nothing more than a form of attention). In this study, we used a size manipulation to demonstrate that imagery and attention are distinct processes. We reasoned that if participants are asked to perform each function (imagery and attention) using stimuli of two different sizes (large and small), and stimulus size affects the two functions differently, then we could conclude that imagery and attention are distinct cognitive processes. Our analyses showed that participants performed the imagery task more easily with large stimuli, whereas they performed the attention task more easily with small stimuli. This finding demonstrates that imagery and attention are distinct cognitive processes.
Why are “What” and “Where” Processed by Separate Cortical Visual Systems? A Computational Investigation
In the primate visual system, the identification of objects and the processing of spatial information are accomplished by different cortical pathways. The computational properties of this “two-systems” design were explored by constructing simplifying connectionist models. The models were designed to simultaneously classify and locate shapes that could appear in multiple positions in a matrix, and the ease of forming representations of the two kinds of information was measured. Some networks were designed so that all hidden nodes projected to all output nodes, whereas others had the hidden nodes split into two groups, with some projecting to the output nodes that registered shape identity and the remainder projecting to the output nodes that registered location. The simulations revealed that splitting processing into separate streams for identifying and locating a shape led to better performance only under some circumstances. Provided that enough computational resources were available in both streams, split networks were able to develop more efficient internal representations, as revealed by detailed analyses of the patterns of connection weights.
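The split-versus-unsplit contrast described above can be sketched in a few lines. This is a minimal illustration only: the layer sizes, random weights, and the logistic hidden layer are assumptions for demonstration, not the study's actual networks. The split architecture is implemented by masking the hidden-to-output weight matrix so that each half of the hidden layer projects to only one output group.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN = 25      # 5x5 input matrix (hypothetical size)
N_HID = 12     # hidden units
N_WHAT = 3     # output units registering shape identity
N_WHERE = 25   # output units registering location (one per cell)

# Unsplit network: every hidden unit projects to every output unit.
W_out_unsplit = rng.normal(size=(N_HID, N_WHAT + N_WHERE))

# Split network: the first half of the hidden units projects only to
# the "what" outputs, the second half only to the "where" outputs.
mask = np.zeros((N_HID, N_WHAT + N_WHERE))
mask[: N_HID // 2, :N_WHAT] = 1.0   # "what" stream
mask[N_HID // 2 :, N_WHAT:] = 1.0   # "where" stream
W_out_split = W_out_unsplit * mask

def forward(x, W_in, W_out):
    """One hidden layer with a logistic nonlinearity."""
    h = 1.0 / (1.0 + np.exp(-(x @ W_in)))
    return h @ W_out

W_in = rng.normal(size=(N_IN, N_HID))
x = rng.normal(size=N_IN)
y = forward(x, W_in, W_out_split)   # 28 outputs: 3 "what" + 25 "where"
```

During training, the mask would simply be reapplied after each weight update, so the cross-stream connections stay absent; everything else about the two networks is identical, which is what makes the comparison informative.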
Inspecting Visual Mental Images: Can People "See" Implicit Properties as Easily in Imagery and Perception?
Can people "see" previously unnoticed properties in objects that they visualize, or are they locked into the organization of the pattern that was encoded during perception? To answer this question, we first asked a group to describe letters of the alphabet and found that some properties (such as the presence of a diagonal line) are often mentioned, whereas others (such as symmetry) are rarely if ever mentioned. Then we showed not only that other participants could correctly detect both kinds of properties in visualized letters, but also that the relative differences in the ease of detecting these two types of properties are highly similar in perception (when the letters are actually visible) and imagery (when the letters are merely visualized). These findings support the view that images can be reinterpreted in much the same way as occurs during perception, and they speak to the long-standing debate about the format of mental images.
Mental Rotation is Not Easily Cognitively Penetrable
When participants take part in mental imagery experiments, are they using their "tacit knowledge" of perception to mimic what they believe should occur in the corresponding perceptual task? Two experiments were conducted to examine whether such an account can be applied to mental imagery in general. These experiments both examined tasks that required participants to "mentally rotate" stimuli. In Experiment 1, instructions led participants to believe that they could re-orient shapes in one step or avoid re-orienting the shapes altogether. Regardless of instruction type, response times increased linearly with increasing rotation angles. In Experiment 2, participants first observed novel objects rotating at different speeds, and then performed a mental rotation task with those objects. The speed of perceptually demonstrated rotation did not affect the speed of mental rotation. We argue that tacit knowledge cannot explain mental imagery results in general, and that in particular the mental rotation effect reflects the nature of the underlying internal representation and processes that transform it, rather than participants’ pre-existing knowledge.
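The linear increase of response time with rotation angle is the classic signature of mental rotation. The toy data below (invented numbers, not the experiment's data) show the kind of relationship the abstract describes and how its slope, in milliseconds per degree of rotation, would be estimated with a simple linear fit:

```python
import numpy as np

# Hypothetical response times that grow linearly with rotation angle
# (illustrative values only; the slope of 2.5 ms/degree is made up).
angles = np.array([0, 45, 90, 135, 180])   # rotation angle in degrees
rt_ms = 500 + 2.5 * angles                 # response time in milliseconds

# Fit a line: slope is the estimated rotation rate, intercept the
# angle-independent component (encoding + response time).
slope, intercept = np.polyfit(angles, rt_ms, 1)
print(round(slope, 2), round(intercept, 1))   # 2.5 500.0
```

The tacit-knowledge account predicts that instructions or prior perceptual exposure should change this slope; the experiments found that it persisted regardless, which is the basis of the argument.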
Receptive Field Characteristics That Allow Parietal Lobe Neurons to Encode Spatial Properties of Visual Input: A Computational Analysis
A subset of visually sensitive neurons in the parietal lobe apparently can encode the locations of stimuli, whereas visually sensitive neurons in the inferotemporal cortex (area IT) cannot. This finding is puzzling because both sorts of neurons have large receptive fields, and yet location can be encoded in one case, but not in the other. The experiments reported here investigated the hypothesis that a crucial difference between the IT and parietal neurons is the spatial distribution of their response profiles. In particular, IT neurons typically respond maximally when stimuli are presented at the fovea, whereas parietal neurons do not. We found that a parallel-distributed-processing network could map a point in an array to a coordinate representation more easily when a greater proportion of its input units had response peaks off the center of the input array. Furthermore, this result did not depend on potentially implausible assumptions about the regularity of the overlap in receptive fields or the homogeneity of the response profiles of different units. Finally, the internal representations formed within the network had receptive fields resembling those found in area 7a of the parietal lobe.
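The intuition behind the off-center-peak hypothesis can be shown with a one-dimensional sketch (an illustration under simplifying assumptions: Gaussian response profiles and made-up peak positions, not the study's actual units). When every unit's response peaks at the fovea, two mirror-image stimulus locations produce identical population responses, so no downstream network can recover location; scattering the peaks off-center breaks that symmetry.

```python
import numpy as np

def responses(stim_pos, peaks, width=5.0):
    """Population response of units with large Gaussian receptive
    fields whose peaks sit at `peaks` along a 1-D input array."""
    return np.exp(-((stim_pos - peaks) ** 2) / (2 * width**2))

# "IT-like" population: every receptive field peaks at the fovea (0).
# "Parietal-like" population: peaks scattered off-center.
foveal_peaks = np.zeros(8)
offcenter_peaks = np.linspace(-6, 6, 8)

left, right = -3.0, 3.0   # two mirror-image stimulus locations

r_fov_L = responses(left, foveal_peaks)
r_fov_R = responses(right, foveal_peaks)
r_par_L = responses(left, offcenter_peaks)
r_par_R = responses(right, offcenter_peaks)

# Foveal-peaked codes for the two locations are indistinguishable;
# the off-center codes differ, so location can be read out.
print(np.allclose(r_fov_L, r_fov_R))   # True
print(np.allclose(r_par_L, r_par_R))   # False
```

Note that the receptive fields in both populations are equally large; only the distribution of response peaks differs, which is exactly the contrast the computational analysis isolates.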
Two Forms of Spatial Imagery: Neuroimaging Evidence
Spatial imagery may be useful in such tasks as interpreting graphs and solving geometry problems, and even in performing surgery. This study provides evidence that spatial imagery is not a single faculty; rather, visualizing spatial location and mentally transforming location rely on distinct neural networks. Using 3-T functional magnetic resonance imaging, we tested 16 participants (8 male, 8 female) in each of two spatial imagery tasks: one that required visualizing location and one that required mentally rotating stimuli. The same stimuli were used in the two tasks. The location-based task engendered more activation near the occipito-parietal sulcus, medial posterior cingulate, and precuneus, whereas the transformation task engendered more activation in superior portions of the parietal lobe and in the postcentral gyrus. These differences in activation provide evidence that there are at least two different types of spatial imagery.