
    A New Conceptualization of Human Visual Sensory-Memory

    Memory is an essential component of cognition, and disorders of memory have significant individual and societal costs. The Atkinson-Shiffrin "modal model" forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory; Short-Term Memory (STM; also called working memory, WM); and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of iconic memory remained largely unchanged: a high-capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected onto the retina. The fundamental shortcoming of iconic-memory models is that, because contents are encoded in retinotopic coordinates, iconic memory cannot hold any useful information under normal viewing conditions, when objects or the subject are in motion. Hence, half a century after its formulation, it remains an unresolved problem whether and how the first stage of the modal model serves any useful function, and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference frame consists of motion-grouping-based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory.

    Color and motion: which is the tortoise and which is the hare?

    Recent psychophysical studies have been interpreted to indicate that the perception of motion temporally either lags or is synchronous with the perception of color. These results appear to be at odds with neurophysiological data, which show that the average response-onset latency is shorter in the cortical areas responsible for motion (e.g., MT and MST) than for color processing (e.g., V4). The purpose of this study was to compare the perceptual asynchrony between motion and color on two psychophysical tasks. In the color correspondence task, observers indicated the predominant color of an 18°×18° field of colored dots when they moved in a specific direction. On each trial, the dots periodically changed color from red to green and moved cyclically at 15, 30 or 60 deg/s in two directions separated by 180°, 135°, 90° or 45°. In the temporal order judgment task, observers indicated whether a change in color occurred before or after a change in motion, within a single cycle of the moving-dot stimulus. In the color correspondence task, we found that the perceptual asynchrony between color and motion depends on the difference in directions within the motion cycle, but does not depend on the dot velocity. In the temporal order judgment task, the perceptual asynchrony is substantially shorter than for the color correspondence task, and does not depend on the change in motion direction or the dot velocity. These findings suggest that it is inappropriate to interpret previous psychophysical results as evidence that motion perception generally lags color perception. We discuss our data in the context of a "two-stage sustained-transient" functional model for the processing of various perceptual attributes.

    Perception of rigidity in three- and four-dimensional spaces

    Our brain employs mechanisms to adapt to changing visual conditions. In addition to natural changes in our physiology and in the environment, our brain is also capable of adapting to "unnatural" changes, such as the inverted visual input generated by inverting prisms. In this study, we examined the brain's capability to adapt to hyperspaces. We generated four-dimensional spatial stimuli in virtual reality and tested the ability to distinguish between rigid and non-rigid motion. We found that observers are able to differentiate rigid from non-rigid motion of hypercubes (4D) with performance comparable to that obtained using cubes (3D). Moreover, observers' performance improved when they were provided with a more immersive 3D experience but remained robust against increasing shape variations. At this juncture, we characterize our findings as "3 1/2 D perception": while we show the ability to extract and use 4D information, we do not yet have evidence of a complete phenomenal 4D experience.
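    The rigid 4D motion described above can be illustrated numerically. The sketch below (an assumption about the general approach, not the study's actual stimulus code) rotates a hypercube's 16 vertices in one of the six rotation planes of 4D space and perspective-projects them into 3D; rigidity means all pairwise 4D distances are preserved by the motion.

```python
import math
from itertools import product

def hypercube_vertices():
    # The 16 vertices of a 4D hypercube centered at the origin
    return [list(v) for v in product((-1.0, 1.0), repeat=4)]

def rotate_xw(v, theta):
    # Rigid rotation in the x-w plane (one of six rotation planes in 4D)
    x, y, z, w = v
    c, s = math.cos(theta), math.sin(theta)
    return [c * x - s * w, y, z, s * x + c * w]

def project_to_3d(v, d=3.0):
    # Simple perspective projection from 4D to 3D along the w axis
    x, y, z, w = v
    f = d / (d - w)
    return [f * x, f * y, f * z]

def dist(a, b):
    # Euclidean distance between two 4D points
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

verts = hypercube_vertices()
rotated = [rotate_xw(v, math.pi / 6) for v in verts]
projected = [project_to_3d(v) for v in rotated]

# Rigidity check: pairwise 4D distances are unchanged by the rotation
d_before = dist(verts[0], verts[5])
d_after = dist(rotated[0], rotated[5])
print(abs(d_before - d_after) < 1e-9)  # True: the motion is rigid
```

    A non-rigid stimulus would follow the same pipeline but perturb vertex positions (or rotation angles) independently, so that 4D inter-vertex distances change over time.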

    Misperceptions in the Trajectories of Objects undergoing Curvilinear Motion

    Trajectory perception is crucial for scene understanding and action. A variety of trajectory misperceptions have been reported in the literature. In this study, we quantify earlier observations of distortions in the perceived shape of bilinear trajectories and in the perceived positions of their deviation points. Our results show that bilinear trajectories with deviation angles smaller than 90 deg are perceived as smoothed, while those with deviation angles larger than 90 deg are perceived as sharpened. The sharpening effect is weaker in magnitude than the smoothing effect. We also found a correlation between the distortion of perceived trajectories and the perceived shift of their deviation points. Finally, using a dual-task paradigm, we found that reducing the attentional resources allocated to the moving target increases the perceived shift of the trajectory's deviation point. We interpret these results in the context of interactions between the motion and position systems.
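    As a minimal illustration of the geometry involved (a hypothetical sketch, not the study's stimulus code), a bilinear trajectory consists of two linear segments joined at a deviation point, with the deviation angle giving the change of heading at that point:

```python
import math

def bilinear_trajectory(deviation_deg, n=20, step=1.0):
    # Generate 2*n sample points along a bilinear trajectory:
    # the first n points move along a straight line, then the heading
    # changes by deviation_deg at the deviation point.
    pts = []
    x, y, heading = 0.0, 0.0, 0.0
    for i in range(2 * n):
        if i == n:  # deviation point: heading changes abruptly
            heading += math.radians(deviation_deg)
        pts.append((x, y))
        x += step * math.cos(heading)
        y += step * math.sin(heading)
    return pts

traj = bilinear_trajectory(45.0)  # deviation angle < 90 deg
```

    In these terms, "smoothing" means the perceived trajectory bends more gradually around the deviation point than the generated one, and "sharpening" means the perceived bend is more abrupt.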

    Unmasking saccadic uncrowding

    Stimuli that are briefly presented around the time of saccades are often perceived with spatiotemporal distortions. These distortions do not always have deleterious effects on the visibility and identification of a stimulus. Recent studies reported that when a stimulus is the target of an intended saccade, it is released from both masking (De Pisapia, Kaunitz, & Melcher, 2010) and crowding (Harrison, Mattingley, & Remington, 2013). Here, we investigated pre-saccadic changes in single and crowded letter recognition performance in the absence (Experiment 1) and the presence (Experiment 2) of backward masks to determine the extent to which saccadic "uncrowding" and "unmasking" mechanisms are similar. Our results show that pre-saccadic improvements in letter recognition performance are mostly due to the presence of masks and/or stimulus transients which occur after the target is presented. More importantly, we did not find any decrease in crowding strength before impending saccades. A simplified version of a dual-channel neural model, originally proposed to explain masking phenomena, with several saccadic add-on mechanisms, could account for our results in Experiment 1. However, this model falls short in explaining how saccades drastically reduced the effect of backward masking (Experiment 2). The addition of a remapping mechanism that alters the relative spatial positions of stimuli was needed to fully account for the improvements observed when backward masks followed the letter stimuli. Taken together, our results (i) are inconsistent with saccadic uncrowding, (ii) strongly support saccadic unmasking, and (iii) suggest that pre-saccadic letter recognition is modulated by multiple perisaccadic mechanisms with different time courses.