55 research outputs found

    Perception, cognition, and action in hyperspaces: implications on brain plasticity, learning, and cognition

    We live in a three-dimensional (3D) spatial world; however, our retinas receive a pair of 2D projections of the 3D environment. By using multiple cues, such as disparity, motion parallax, and perspective, our brains construct 3D representations of the world from the 2D projections on our retinas. These 3D representations underlie our 3D perceptions of the world and are mapped into our motor systems to generate accurate sensorimotor behaviors. Three-dimensional perceptual and sensorimotor capabilities emerge during development: the physiology of the growing baby changes, necessitating an ongoing re-adaptation of the mapping between 3D sensory representations and motor coordinates. This adaptation continues in adulthood and is general enough to deal successfully with joint-space changes (longer arms due to growth), changes in skull and eye size (while maintaining accurate eye movements), and so on. A fundamental question is whether our brains are inherently limited to 3D representations of the environment because we live in a 3D world, or whether, alternatively, our brains have the inherent capability and plasticity to represent arbitrary dimensions, with 3D representations emerging simply because our development and learning take place in a 3D world. Here, we review research related to the inherent capabilities and limitations of brain plasticity in terms of its spatial representations and discuss whether, with appropriate training, humans can build perceptual and sensorimotor representations of 4D spatial environments, and how the presence or absence of a solid and direct 4D representation can reveal underlying neural representations of space.
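
    The depth cues mentioned above can be made concrete with the simplest of them, binocular disparity. Below is a minimal sketch, assuming a pinhole stereo model with illustrative eye-geometry values (not taken from the paper), of how depth follows from disparity by similar triangles.

    ```python
    # Minimal sketch: recovering depth from binocular disparity under a
    # simple pinhole stereo model. The eye-geometry defaults (63 mm
    # interocular baseline, 17 mm focal length) are illustrative only.

    def depth_from_disparity(disparity_m, baseline_m=0.063, focal_m=0.017):
        """Depth Z of a point whose images are offset by `disparity_m`
        on the two retinas: Z = f * B / d (similar triangles)."""
        if disparity_m <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_m * baseline_m / disparity_m

    # A retinal disparity of ~0.5 mm with a 63 mm interocular distance
    # corresponds to a point roughly 2 m away.
    print(f"{depth_from_disparity(0.0005):.2f} m")  # ≈ 2.14 m
    ```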

    Stream specificity and asymmetries in feature binding and content-addressable access in visual encoding and memory

    Human memory is content addressable—i.e., the contents of memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue–report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue than other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.
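
    For concreteness, a cross-feature cuing design of this kind can be sketched as an enumeration of cue–report pairings crossed with processing stage. The sketch below is our own illustration, not the authors' experimental code; only the three features and three stages come from the abstract.

    ```python
    # Illustrative sketch: enumerating the conditions of a cross-feature
    # cuing design. Each trial cues one feature of a stored item and asks
    # the observer to report a different bound feature of the same item.
    from itertools import permutations

    FEATURES = ["position", "color", "direction"]  # direction: dorsal; color: ventral
    STAGES = ["encoding", "iconic memory", "visual short-term memory"]

    conditions = [
        {"cue": cue, "report": report, "stage": stage}
        for cue, report in permutations(FEATURES, 2)  # 6 ordered cue–report pairs
        for stage in STAGES
    ]
    print(len(conditions))  # 6 pairings x 3 stages = 18 conditions
    ```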

    A New Conceptualization of Human Visual Sensory-Memory

    Memory is an essential component of cognition, and disorders of memory have significant individual and societal costs. The Atkinson-Shiffrin "modal model" forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory; Short-Term Memory (STM; also called working memory, WM); and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of iconic memory have remained largely unchanged: a high-capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic-memory models is that, because contents are encoded in retinotopic coordinates, iconic memory cannot hold any useful information under normal viewing conditions, when objects or the subject are in motion. Hence, a half-century after its formulation, it remains unresolved whether and how the first stage of the modal model serves any useful function and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference frame consists of motion-grouping-based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory.
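
    The retinotopic shortcoming, and the motion-grouping remedy, can be illustrated numerically. The sketch below is our own toy construction, not the paper's model: when an object translates, its parts' retinotopic positions change on every frame, whereas subtracting the common (group) motion yields a stable object-centered frame in which features can be stored and bound.

    ```python
    # Toy illustration: why a retinotopic store fails for moving objects,
    # and how motion-grouping coordinates fix it. Positions are in degrees
    # of visual angle; all values are made up for illustration.
    import numpy as np

    frames = np.arange(5)                               # 5 successive frames
    group_velocity = np.array([2.0, 0.0])               # group drifts 2 deg/frame
    part_offsets = np.array([[0, 0], [1, 0], [0, 1]])   # parts of one object

    # Retinotopic coordinates: every part's position changes each frame, so a
    # retinotopically indexed memory never accumulates evidence at one location.
    retinotopic = frames[:, None, None] * group_velocity + part_offsets

    # Motion-grouping coordinates: subtract the common (group) motion; the
    # parts are now stationary and can be stored and bound stably.
    grouped = retinotopic - frames[:, None, None] * group_velocity

    print(retinotopic[:, 1])  # part 1 sweeps across the retina
    print(grouped[:, 1])      # part 1 stays at (1, 0) in the object frame
    ```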

    Color and motion: which is the tortoise and which is the hare?

    Recent psychophysical studies have been interpreted to indicate that the perception of motion temporally either lags or is synchronous with the perception of color. These results appear to be at odds with neurophysiological data, which show that the average response-onset latency is shorter in the cortical areas responsible for motion (e.g., MT and MST) than in those responsible for color processing (e.g., V4). The purpose of this study was to compare the perceptual asynchrony between motion and color on two psychophysical tasks. In the color correspondence task, observers indicated the predominant color of an 18°×18° field of colored dots when they moved in a specific direction. On each trial, the dots periodically changed color from red to green and moved cyclically at 15, 30, or 60 deg/s in two directions separated by 180°, 135°, 90°, or 45°. In the temporal order judgment task, observers indicated whether a change in color occurred before or after a change in motion within a single cycle of the moving-dot stimulus. In the color correspondence task, we found that the perceptual asynchrony between color and motion depends on the difference in directions within the motion cycle but not on the dot velocity. In the temporal order judgment task, the perceptual asynchrony is substantially shorter than in the color correspondence task and depends on neither the change in motion direction nor the dot velocity. These findings suggest that it is inappropriate to interpret previous psychophysical results as evidence that motion perception generally lags color perception. We discuss our data in the context of a “two-stage sustained-transient” functional model for the processing of various perceptual attributes.
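
    To make the stimulus concrete, the sketch below computes the two velocity vectors of each motion cycle. Only the speeds and direction separations come from the abstract; the symmetric placement of the two directions about the horizontal axis is our assumption for illustration.

    ```python
    # Sketch of the moving-dot stimulus geometry: within each cycle the dots
    # alternate between two motion directions separated by `separation_deg`
    # (and between red and green), at a fixed speed in deg/s.
    import numpy as np

    def direction_vectors(separation_deg, speed_deg_per_s=30.0):
        """Velocity vectors for the two half-cycles of one motion cycle,
        placed symmetrically about the horizontal axis (an assumption)."""
        half = np.deg2rad(separation_deg) / 2.0
        v1 = speed_deg_per_s * np.array([np.cos(+half), np.sin(+half)])
        v2 = speed_deg_per_s * np.array([np.cos(-half), np.sin(-half)])
        return v1, v2

    for sep in (180, 135, 90, 45):  # direction separations used in the study
        v1, v2 = direction_vectors(sep)
        print(sep, np.round(v1, 1), np.round(v2, 1))
    ```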

    A Model for Non-Retinotopic Processing

    We formulate a model for object-centered motion processing that explains non-retinotopic motion percepts observed psychophysically.

    Perception of rigidity in three- and four-dimensional spaces

    Our brain employs mechanisms to adapt to changing visual conditions. In addition to natural changes in our physiology and in the environment, our brain is also capable of adapting to “unnatural” changes, such as the inverted visual inputs generated by inverting prisms. In this study, we examined the brain’s capability to adapt to hyperspaces. We generated four-dimensional (4D) spatial stimuli in virtual reality and tested observers’ ability to distinguish between rigid and non-rigid motion. We found that observers are able to differentiate rigid and non-rigid motion of hypercubes (4D) with performance comparable to that obtained using cubes (3D). Moreover, observers’ performance improved when they were provided with a more immersive 3D experience but remained robust against increasing shape variations. At this juncture, we characterize our findings as “3 1/2 D perception” since, while we show the ability to extract and use 4D information, we do not yet have evidence of a complete phenomenal 4D experience.
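
    The rigid/non-rigid distinction at the heart of this task can be checked computationally. The sketch below is our own construction, not the authors' stimulus code: a 4D rotation preserves all pairwise vertex distances of a hypercube (rigid motion), while a shear does not (non-rigid), and an orthographic projection drops the fourth axis for display.

    ```python
    # Rigid vs. non-rigid motion of a hypercube: rotations preserve all
    # pairwise distances; shears do not. Angles and shear amount are
    # arbitrary illustrative values.
    import numpy as np
    from itertools import product

    verts = np.array(list(product([-1.0, 1.0], repeat=4)))  # 16 hypercube vertices

    def pairwise_distances(x):
        """All pairwise Euclidean distances between rows of x."""
        return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

    def rotation_4d(theta, i=0, j=3):
        """Rotation by `theta` in the plane spanned by axes i and j."""
        R = np.eye(4)
        R[i, i] = R[j, j] = np.cos(theta)
        R[i, j], R[j, i] = -np.sin(theta), np.sin(theta)
        return R

    rigid = verts @ rotation_4d(0.4).T                    # rigid motion
    shear = verts @ (np.eye(4) + 0.3 * np.eye(4, k=1)).T  # non-rigid motion

    print(np.allclose(pairwise_distances(rigid), pairwise_distances(verts)))  # True
    print(np.allclose(pairwise_distances(shear), pairwise_distances(verts)))  # False

    projection_3d = rigid[:, :3]  # orthographic projection for 3D display
    ```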

    Misperceptions in the Trajectories of Objects undergoing Curvilinear Motion

    Trajectory perception is crucial in scene understanding and action. A variety of trajectory misperceptions have been reported in the literature. In this study, we quantify earlier observations that reported distortions in the perceived shape of bilinear trajectories and in the perceived positions of their deviation points. Our results show that bilinear trajectories with deviation angles smaller than 90 deg are perceived as smoothed, while those with deviation angles larger than 90 deg are perceived as sharpened. The sharpening effect is weaker in magnitude than the smoothing effect. We also found a correlation between the distortion of perceived trajectories and the perceived shift of their deviation points. Finally, using a dual-task paradigm, we found that reducing the attentional resources allocated to the moving target increases the perceived shift of the trajectory’s deviation point. We interpret these results in the context of interactions between motion and position systems.
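
    A bilinear trajectory of the kind studied here can be generated as two linear segments joined at a deviation point. The sketch below uses our own parameterization (the paper's exact angle convention and segment lengths may differ): `deviation_deg` is the angle by which the second segment's heading departs from the first's.

    ```python
    # Illustrative generator for a bilinear (two-segment) trajectory.
    # Segment length, sampling density, and the angle convention are our
    # assumptions, not the authors' stimulus parameters.
    import numpy as np

    def bilinear_trajectory(deviation_deg, seg_len=5.0, n=50):
        """Points along a two-segment trajectory (degrees of visual angle)."""
        t = np.linspace(0, seg_len, n)
        first = np.column_stack([t, np.zeros(n)])   # first segment heads along +x
        a = np.deg2rad(deviation_deg)
        heading = np.array([np.cos(a), np.sin(a)])
        second = first[-1] + t[:, None] * heading   # second segment deviates by `a`
        return np.vstack([first, second])

    # Deviations under 90 deg tend to be perceived as smoothed;
    # those over 90 deg as (more weakly) sharpened.
    smooth_regime = bilinear_trajectory(45.0)
    sharp_regime = bilinear_trajectory(120.0)
    ```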