125,127 research outputs found

    Spatial integration of optic flow information in direction of heading judgments

    While we know that humans are extremely sensitive to optic flow information about direction of heading, we do not know how they integrate information across the visual field. We adapted the standard cue perturbation paradigm to investigate how young adult observers integrate optic flow information from different regions of the visual field to judge direction of heading. First, subjects judged direction of heading when viewing a three-dimensional field of random dots simulating linear translation through the world. We independently perturbed the flow in one visual field quadrant to indicate a different direction of heading relative to the other three quadrants. We then used subjects' judgments of direction of heading to estimate the relative influence of flow information in each quadrant on perception. Human subjects behaved similarly to the ideal observer in terms of integrating motion information across the visual field, with one exception: subjects overweighted information in the upper half of the visual field. The upper-field bias was robust under several different stimulus conditions, suggesting that it may represent a physiological adaptation to the uneven distribution of task-relevant motion information in our visual world.
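    The weight-estimation step can be made concrete with a small sketch, assuming perceived heading is modeled as a weighted average of the headings signaled by the four quadrants; the trial structure, weights, and noise level below are invented for illustration and are not the study's.

```python
import numpy as np

# Hypothetical sketch: perceived heading modeled as a weighted average of the
# headings signaled by the four visual field quadrants (UL, UR, LL, LR).
rng = np.random.default_rng(0)
n_trials = 400

base = rng.uniform(-10, 10, n_trials)            # unperturbed heading (deg)
quadrants = np.tile(base, (4, 1)).T              # per-quadrant signaled heading
perturbed = rng.integers(0, 4, n_trials)         # which quadrant is perturbed
quadrants[np.arange(n_trials), perturbed] += rng.choice([-8.0, 8.0], n_trials)

true_w = np.array([0.35, 0.35, 0.15, 0.15])      # assumed upper-field overweighting
judged = quadrants @ true_w + rng.normal(0, 1.0, n_trials)  # simulated responses

# Recover the relative influence of each quadrant by least squares
w_hat, *_ = np.linalg.lstsq(quadrants, judged, rcond=None)
print("estimated quadrant weights:", np.round(w_hat, 2))
```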

    The evolutionary neuroscience of tool making

    The appearance of the first intentionally modified stone tools over 2.5 million years ago marked a watershed in human evolutionary history, expanding the human adaptive niche and initiating a trend of technological elaboration that continues to the present day. However, the cognitive foundations of this behavioral revolution remain controversial, as do its implications for the nature and evolution of modern human technological abilities. Here we shed new light on the neural and evolutionary foundations of human tool making skill by presenting functional brain imaging data from six inexperienced subjects learning to make stone tools of the kind found in the earliest archaeological record. Functional imaging of this complex, naturalistic task was accomplished through positron emission tomography with the slowly decaying radiological tracer (18)fluoro-2-deoxyglucose. Results show that simple stone tool making is supported by a mosaic of primitive and derived parietofrontal perceptual-motor systems, including recently identified human specializations for representation of the central visual field and perception of three-dimensional form from motion. In the naive tool makers reported here, no activation was observed in prefrontal executive cortices associated with strategic action planning or in inferior parietal cortex thought to play a role in the representation of everyday tool use skills. We conclude that uniquely human capacities for sensorimotor adaptation and affordance perception, rather than abstract conceptualization and planning, were central factors in the initial stages of human technological evolution.

    The human egomotion network.

    All volitional movement in a three-dimensional space requires multisensory integration, in particular of visual and vestibular signals. Where and how the human brain processes and integrates self-motion signals remains enigmatic. Here, we applied visual and vestibular self-motion stimulation using fast and precise whole-brain neuroimaging to delineate and characterize the entire cortical and subcortical egomotion network in a substantial cohort (n=131). Our results identify a core egomotion network consisting of areas in the cingulate sulcus (CSv, PcM/pCi), the cerebellum (uvula), and the temporo-parietal cortex, including area VPS and an unnamed region in the supramarginal gyrus. Based on its cerebral connectivity pattern and anatomical localization, we propose that this region represents the human homologue of macaque area 7a. Whole-brain connectivity and gradient analyses imply an essential role of the connections between the cingulate sulcus and the cerebellar uvula in egomotion perception, possibly via feedback loops involved in updating visuo-spatial and vestibular information. The unique functional connectivity patterns of PcM/pCi hint at a central role in the multisensory integration essential for the perception of self-referential spatial awareness. All cortical egomotion hubs showed modular functional connectivity with other visual, vestibular, somatosensory and higher-order motor areas, underlining their mutual function in general sensorimotor integration.
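    As a rough illustration of how such functional connectivity is typically quantified (not the study's actual pipeline), connectivity between candidate hubs can be summarized as Pearson correlations between ROI time series; the data below are simulated stand-ins, with shared signal injected between CSv and the uvula to mimic their coupling.

```python
import numpy as np

# ROI labels are taken from the abstract; time series are simulated.
rois = ["CSv", "PcM/pCi", "uvula", "VPS", "SMG (putative 7a homologue)"]
rng = np.random.default_rng(1)
ts = rng.normal(size=(len(rois), 300))     # ROI x timepoints
shared = rng.normal(size=300)
ts[0] += shared                            # CSv
ts[2] += shared                            # cerebellar uvula

fc = np.corrcoef(ts)                       # ROI x ROI functional connectivity
print(f"CSv-uvula connectivity: {fc[0, 2]:.2f}")
```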

    3D Motion Analysis via Energy Minimization

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized by the technical term machine visual kinesthesia: the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters deal with motion perception, analyzing the apparent movement of pixels in image sequences for both monocular and binocular camera setups. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing the input images by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing a combined energy consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straightforward approach to computing the apparent image motion vector field, and it currently yields the most accurate motion estimation results in the literature. Much as this is an engineering approach of fine-tuning precision to the last detail, it helps to gain better insight into the problem of motion estimation and contributes to state-of-the-art research in motion analysis, in particular facilitating the use of motion estimation in a wide range of applications. In Chapter 5, scene flow is rethought. Scene flow stands for the three-dimensional motion vector field for every image pixel, computed from a stereo image sequence. Again, decoupling the commonly coupled estimation of three-dimensional position and three-dimensional motion yields a scene flow approach with more accurate results and a considerably lower computational load. It results in a dense scene flow field and enables additional applications based on the dense three-dimensional motion vector field, which are to be investigated in the future. One such application is the segmentation of moving objects in an image sequence; detecting moving objects within the scene is one of the most important features to extract from image sequences of a dynamic environment. This is presented in Chapter 6. Scene flow and the segmentation of independently moving objects are only first steps towards machine visual kinesthesia. Throughout this work, I present possible future work to improve the estimation of optical flow and scene flow. Chapter 7 additionally presents an outlook on future research for driver assistance applications. But there is much more to the full understanding of the three-dimensional dynamic scene. This work is meant to inspire the reader to think outside the box and contribute to the vision of building perceiving machines.
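    The combined data-plus-smoothness energy mentioned for Chapter 4 is the classical variational formulation of optical flow. As a point of reference, here is a minimal Horn-Schunck-style sketch of that coupled baseline; it is not the thesis's Refinement Optical Flow method, and the parameter values are arbitrary.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
    """Classical coupled optical flow energy, minimized by Jacobi iteration:
    E(u, v) = sum[(Ix*u + Iy*v + It)^2] + alpha^2 * (|grad u|^2 + |grad v|^2).
    Illustrative baseline only; not the thesis's Refinement Optical Flow."""
    Ix = np.gradient(I1, axis=1)           # spatial image derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                           # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def avg(f):                            # 4-neighbour average (smoothness term)
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)
        # The data term pulls the smoothed flow back onto the
        # brightness-constancy constraint line at each pixel.
        t = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = ub - Ix * t, vb - Iy * t
    return u, v
```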

    From local constraints to global binocular motion perception

    Humans and many other predators have two eyes that are set a short distance apart so that an extensive region of the world is seen simultaneously by both eyes from slightly different points of view. Although the images of the world are essentially two-dimensional, we vividly see the world as three-dimensional. This is true for static as well as dynamic images. We discuss local constraints for the perception of three-dimensional binocular motion in a geometric-probabilistic framework. It is shown that Bayesian models of binocular 3D motion can explain perceptual bias under uncertainty and predict perceived velocity under ambiguity. The models exploit biologically plausible constraints of local motion and disparity processing in a binocular viewing geometry. Results from psychophysical experiments and an fMRI study support the idea that local constraints of motion and disparity processing are combined late in the visual processing hierarchy to establish perceived 3D motion direction. The methods and results reported here are likely to stimulate computational, psychophysical, and neuroscientific research because they address the fundamental issue of how 3D motion is represented in the human visual system.
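    The bias-under-uncertainty claim can be illustrated with the standard Gaussian case: a noisy velocity measurement combined with a zero-mean "slow motion" prior yields a posterior mean biased toward zero, and the bias grows with measurement uncertainty. The numbers below are invented and only stand in for the models' actual constraints.

```python
# Posterior mean of a Gaussian likelihood combined with a zero-mean
# Gaussian prior (precision-weighted average); all values invented.
v_measured = 5.0                   # measured speed (arbitrary units)
sigma_prior = 2.0                  # spread of the slow-motion prior
for sigma_noise in (0.5, 2.0, 4.0):
    v_perceived = v_measured * sigma_prior**2 / (sigma_prior**2 + sigma_noise**2)
    print(f"noise {sigma_noise}: perceived speed {v_perceived:.2f}")
```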

    A First- and Second-Order Motion Energy Analysis of Peripheral Motion Illusions Leads to Further Evidence of “Feature Blur” in Peripheral Vision

    Anatomical and physiological differences between the central and peripheral visual systems are well documented. Recent findings have suggested that vision in the periphery is not just a scaled version of foveal vision, but rather is relatively poor at representing spatial and temporal phase and other visual features. Shapiro, Lu, Huang, Knight, and Ennis (2010) have recently examined a motion stimulus (the “curveball illusion”) in which the shift from foveal to peripheral viewing results in a dramatic spatial/temporal discontinuity. Here, we apply a similar analysis to a range of other spatial/temporal configurations that create perceptual conflict between foveal and peripheral vision. To elucidate how the differences between foveal and peripheral vision affect super-threshold vision, we created a series of complex visual displays that contain opposing sources of motion information. The displays (referred to as the peripheral escalator illusion, peripheral acceleration and deceleration illusions, rotating reversals illusion, and disappearing squares illusion) create dramatically different perceptions when viewed foveally versus peripherally. We compute the first-order and second-order directional motion energy available in the displays using a three-dimensional Fourier analysis in (x, y, t) space. The peripheral escalator, acceleration and deceleration illusions and rotating reversals illusion all show a similar trend: in the fovea, the first-order and second-order motion energy can be perceptually separated from each other; in the periphery, the perception seems to correspond to a combination of the multiple sources of motion information. The disappearing squares illusion shows that the ability to assemble the features of Kanizsa squares becomes slower in the periphery. The results lead us to hypothesize “feature blur” in the periphery (i.e., the peripheral visual system combines features that the foveal visual system can separate). Feature blur is of general importance because humans frequently bring information from the periphery to the fovea and vice versa.
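    The first-order motion-energy measurement can be sketched compactly. The example below reduces the abstract's three-dimensional (x, y, t) Fourier analysis to (x, t) for brevity and uses an invented drifting grating; it is not the authors' analysis code.

```python
import numpy as np

# A rightward-drifting grating puts its spectral power in the two quadrants
# where spatial and temporal frequency have opposite signs; comparing
# quadrant energies yields the net first-order directional motion energy.
nx, nt = 64, 64
x = np.arange(nx)
t = np.arange(nt)[:, None]
stimulus = np.sin(2 * np.pi * (0.125 * x - 0.0625 * t))   # drifts rightward

power = np.abs(np.fft.fftshift(np.fft.fft2(stimulus))) ** 2
ct, cx = nt // 2, nx // 2                                  # zero-frequency bins
rightward = power[:ct, cx + 1:].sum() + power[ct + 1:, :cx].sum()
leftward = power[:ct, :cx].sum() + power[ct + 1:, cx + 1:].sum()
print("net rightward motion energy:", rightward - leftward)
```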

    Motion sequence analysis in the presence of figural cues

    Published in final edited form as: Neurocomputing, 2015 January 5; 147: 485–491.
    The perception of 3-D structure in dynamic sequences is believed to be subserved primarily through the use of motion cues. However, real-world sequences contain many figural shape cues besides the dynamic ones. We hypothesize that if figural cues are perceptually significant during sequence analysis, then inconsistencies in these cues over time would lead to percepts of non-rigidity in sequences showing physically rigid objects in motion. We develop an experimental paradigm to test this hypothesis and present results from two patients with impairments in motion perception due to focal neurological damage, as well as two control subjects. Consistent with our hypothesis, the data suggest that figural cues strongly influence the perception of structure in motion sequences, even to the extent of inducing non-rigid percepts in sequences where motion information alone would yield rigid structures. Beyond helping to probe the issue of shape perception, our experimental paradigm might also serve as a perceptual assessment tool in a clinical setting.
    The authors wish to thank all observers who participated in the experiments reported here. This research and the preparation of this manuscript were supported by the National Institutes of Health grant RO1 NS064100 to LMV. Accepted manuscript.

    Occlusion-related lateral connections stabilize kinetic depth stimuli through perceptual coupling

    Local sensory information is often ambiguous, forcing the brain to integrate spatiotemporally separated information for stable conscious perception. Lateral connections between clusters of similarly tuned neurons in the visual cortex are a potential neural substrate for the coupling of spatially separated visual information. Ecological optics suggests that perceptual coupling of visual information is particularly beneficial in occlusion situations. Here we present a novel neural network model and a series of human psychophysical experiments that together explain the perceptual coupling of kinetic depth stimuli through activity-driven lateral information sharing in the far depth plane. Our most striking finding is the perceptual coupling of an ambiguous kinetic depth cylinder with a coaxially presented, disparity-defined cylinder backside, while a similar frontside fails to evoke coupling. Altogether, our findings are consistent with the idea that clusters of similarly tuned far-depth neurons share spatially separated motion information in order to resolve local perceptual ambiguities. The classification of far depth in the facilitation mechanism results from a combination of absolute and relative depth, which suggests a functional role for these lateral connections in the perception of partially occluded objects.
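    A toy illustration of activity-driven lateral coupling (not the authors' network model): two bistable units exchange excitation through a lateral weight, and the coupling raises the probability that both ambiguous stimuli are perceived as rotating in the same direction. All dynamics and parameters below are invented.

```python
import numpy as np

# Two bistable units (one per cylinder) with noisy rotation-direction
# signals; an excitatory lateral weight biases them toward a shared percept.
rng = np.random.default_rng(2)

def fraction_coupled(coupling, n_steps=5000):
    s = rng.choice([-1.0, 1.0], size=2)      # current percepts (CW / CCW)
    agree = 0
    for _ in range(n_steps):
        for i in (0, 1):
            drive = coupling * s[1 - i] + 0.5 * s[i] + rng.normal(0, 1.5)
            s[i] = 1.0 if drive > 0 else -1.0
        agree += s[0] == s[1]
    return agree / n_steps

print("same-direction percepts, no coupling:  ", fraction_coupled(0.0))
print("same-direction percepts, with coupling:", fraction_coupled(1.0))
```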