
    Brain activation during dichoptic presentation of optic flow stimuli

    The processing of optic flow fields in motion-sensitive areas in human visual cortex was studied with BOLD (blood oxygen level dependent) contrast in functional magnetic resonance imaging (fMRI). Subjects binocularly viewed optic flow fields in plane (monoptic) or in stereo depth (dichoptic) with various degrees of disparity and increasing radial speed. By varying the directional properties of the stimuli (expansion, spiral motion, random), we explored whether the BOLD effect reflected neuronal responses to these different forms of optic flow. The results suggest that BOLD contrast as assessed by fMRI methods reflects the neural processing of optic flow information in motion-sensitive cortical areas. Furthermore, small but replicable disparity-selective responses were found in parts of Brodmann's area 19.

    Generalized Dynamic Object Removal for Dense Stereo Vision Based Scene Mapping using Synthesised Optical Flow

    Mapping an ever-changing urban environment is a challenging task, as we are generally interested in mapping the static scene and not the dynamic objects, such as cars and people. We propose a novel approach to the problem of dynamic object removal within stereo-based scene mapping that is both independent of the underlying stereo approach in use and applicable to varying object and camera motion. By leveraging stereo odometry, to recover camera motion in scene space, and stereo disparity, to recover synthesised optic flow over the same pixel space, we isolate regions of inconsistency in depth and image intensity. This allows us to illustrate robust dynamic object removal within the stereo mapping sequence. We show results covering objects with a range of motion dynamics and sizes typical of those observed in an urban environment.
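The core idea in this abstract, comparing observed optic flow against flow synthesised from depth and known camera motion, can be illustrated with a minimal sketch. This assumes a pinhole camera translating forward along its optical axis and flow magnitudes in pixels; the paper's actual stereo formulation, odometry, and thresholds are not reproduced here, and all names and parameters are illustrative:

```python
import numpy as np

def synthesised_flow_residual(depth, observed_flow, cam_translation, thresh=2.0):
    """Flag pixels whose observed optic flow magnitude disagrees with the
    flow predicted from scene depth and known camera motion.
    Hypothetical simplification: pure forward translation, pinhole camera,
    so expected radial flow grows with distance from the image centre
    and shrinks with depth."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = w / 2.0, h / 2.0
    r = np.hypot(xs - cx, ys - cy)            # radial distance from centre
    predicted = cam_translation * r / np.maximum(depth, 1e-6)
    residual = np.abs(observed_flow - predicted)
    return residual > thresh                  # True where motion is inconsistent
```

Pixels flagged True move inconsistently with the rigid static scene and are candidates for removal as dynamic objects; the real system additionally checks image-intensity consistency.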

    Bioinspired engineering of exploration systems for NASA and DoD

    A new approach called bioinspired engineering of exploration systems (BEES) and its value for solving pressing NASA and DoD needs are described. Insects (for example, honeybees and dragonflies) cope remarkably well with their world, despite possessing a brain containing less than 0.01% as many neurons as the human brain. Although most insects have immobile eyes with fixed-focus optics and lack stereo vision, they use a number of ingenious, computationally simple strategies for perceiving their world in three dimensions and navigating successfully within it. We are distilling selected insect-inspired strategies to obtain novel solutions for navigation, hazard avoidance, altitude hold, stable flight, terrain following, and gentle deployment of payload. Such functionality provides potential solutions for future autonomous robotic space and planetary explorers. A BEES approach to developing lightweight low-power autonomous flight systems should be useful for flight control of such biomorphic flyers for both NASA and DoD needs. Recent biological studies of mammalian retinas confirm that representations of multiple features of the visual world are systematically parsed and processed in parallel. Features are mapped to a stack of cellular strata within the retina. Each of these representations can be efficiently modeled in semiconductor cellular nonlinear network (CNN) chips. We describe recent breakthroughs in exploring the feasibility of the unique blending of insect strategies of navigation with mammalian visual search, pattern recognition, and image understanding into hybrid biomorphic flyers for future planetary and terrestrial applications. We describe a few future mission scenarios for Mars exploration, uniquely enabled by these newly developed biomorphic flyers.

    The association between retinal vein ophthalmodynamometric force change and optic disc excavation

    Aim: Retinal vein ophthalmodynamometric force (ODF) is predictive of future optic disc excavation in glaucoma, but it is not known whether variation in ODF affects prognosis. We aimed to assess whether a change in ODF provides additional prognostic information. Methods: 135 eyes of 75 patients with glaucoma or suspected glaucoma had intraocular pressure (IOP), visual fields, stereo optic disc photography and ODF measured on an initial visit and a subsequent visit a mean of 82 (SD 7.3) months later. Corneal thickness and blood pressure were recorded on the latter visit. When venous pulsation was spontaneous, the ODF was recorded as 0 g. Change in ODF was calculated. Flicker stereochronoscopy was used to determine the occurrence of optic disc excavation, which was modelled against the measured variables using multiple mixed effects logistic regression. Results: Change in ODF (p=0.046) was associated with increased excavation. Average IOP (p=0.66) and the other variables were not. The odds ratio for increased optic disc excavation was 1.045 per gram of ODF change (95% CI 1.001 to 1.090). Conclusion: Change in retinal vein ODF may provide additional information to assist with glaucoma prognostication and implies a significant relationship between venous change and glaucoma pathophysiology.
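To put the reported effect size in perspective: logistic-regression odds ratios compound multiplicatively on the log-odds scale, so a per-gram odds ratio of 1.045 scales up over larger ODF changes. A small illustrative calculation (the function name is ours, not the paper's):

```python
# Reported odds ratio for increased excavation: 1.045 per gram of ODF change.
def cumulative_odds_ratio(or_per_unit, delta_units):
    """Odds ratios multiply on the log-odds scale, so a change of
    delta_units scales the odds by or_per_unit ** delta_units."""
    return or_per_unit ** delta_units

# A hypothetical 10 g increase in ODF would correspond to roughly a
# 55% rise in the odds of increased excavation.
print(round(cumulative_odds_ratio(1.045, 10), 2))  # prints 1.55
```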

    View-based approaches to spatial representation in human vision

    In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image, and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.

    General Dynamic Scene Reconstruction from Multiple View Video

    This paper introduces a general approach to dynamic scene reconstruction from multiple moving cameras without prior knowledge or limiting constraints on the scene structure, appearance, or illumination. Existing techniques for dynamic scene reconstruction from multiple wide-baseline camera views primarily focus on accurate reconstruction in controlled environments, where the cameras are fixed and calibrated and the background is known. These approaches are not robust for general dynamic scenes captured with sparse moving cameras. Previous approaches for outdoor dynamic scene reconstruction assume prior knowledge of the static background appearance and structure. The primary contributions of this paper are twofold: an automatic method for initial coarse dynamic scene segmentation and reconstruction without prior knowledge of background appearance or structure; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes from multiple wide-baseline static or moving cameras. Evaluation is performed on a variety of indoor and outdoor scenes with cluttered backgrounds and multiple dynamic non-rigid objects such as people. Comparison with state-of-the-art approaches demonstrates improved accuracy in both multiple view segmentation and dense reconstruction. The proposed approach also eliminates the requirement for prior knowledge of scene structure and appearance.

    For efficient navigational search, humans require full physical movement but not a rich visual scene

    During navigation, humans combine visual information from their surroundings with body-based information from the translational and rotational components of movement. Theories of navigation focus on the role of visual and rotational body-based information, even though experimental evidence shows they are not sufficient for complex spatial tasks. To investigate the contribution of all three sources of information, we asked participants to search a computer-generated “virtual” room for targets. Participants were provided with either only visual information, or visual information supplemented with body-based information for all movement (walk group) or for rotational movement only (rotate group). The walk group performed the task with near-perfect efficiency, irrespective of whether a rich or impoverished visual scene was provided. The visual-only and rotate groups were significantly less efficient, and frequently searched parts of the room at least twice. This suggests full physical movement plays a critical role in navigational search, but only moderate visual detail is required.

    Depth Image Processing for Obstacle Avoidance of an Autonomous VTOL UAV

    We describe a new approach for stereo-based obstacle avoidance. This method analyzes the images of a stereo camera in real time and searches for a safe target point that can be reached without collision. The obstacle avoidance system is used by our unmanned helicopter ARTIS (Autonomous Rotorcraft Testbed for Intelligent Systems) and its simulation environment. It is optimized for this UAV, but not limited to aircraft systems.
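The idea of searching a depth image for a collision-free target point can be sketched as a brute-force window scan over a metric depth map: accept any window in which every pixel is farther than a clearance distance, and prefer the one closest to the current heading (image centre). This is a hypothetical simplification; the actual ARTIS implementation, window size, and clearance threshold are not described in the abstract:

```python
import numpy as np

def find_safe_target(depth, win=5, clearance=10.0):
    """Return the (row, col) centre of the win x win depth-image window
    that is entirely farther than `clearance` metres and closest to the
    image centre, or None if no such window exists.
    All parameters are illustrative, not the ARTIS values."""
    h, w = depth.shape
    cy, cx = h / 2.0, w / 2.0
    best, best_d = None, float("inf")
    for y in range(0, h - win + 1):
        for x in range(0, w - win + 1):
            if depth[y:y + win, x:x + win].min() > clearance:
                ty, tx = y + win // 2, x + win // 2      # window centre
                d = (ty - cy) ** 2 + (tx - cx) ** 2      # distance to heading
                if d < best_d:
                    best, best_d = (ty, tx), d
    return best
```

A real-time system would replace the exhaustive scan with an incremental minimum filter, but the acceptance criterion (no pixel inside the window nearer than the clearance) is the essential safety test.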
