
    The role of human ventral visual cortex in motion perception.

    Visual motion perception is fundamental to many aspects of vision. Motion perception has long been associated with the dorsal (parietal) pathway, and the involvement of the ventral 'form' (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and in complex structure-from-motion was assessed, across a wide range of motion speeds and with all stimuli centrally displayed, in five patients with a circumscribed lesion to either the right or the left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary, even for the perception of non-form motion.

    Sparse 3D Point-cloud Map Upsampling and Noise Removal as a vSLAM Post-processing Step: Experimental Evaluation

    Monocular vision-based simultaneous localization and mapping (vSLAM) is one of the most challenging problems in mobile robotics and computer vision. In this work, we study post-processing techniques applied to sparse 3D point-cloud maps obtained by feature-based vSLAM algorithms. Map post-processing is split into two major steps: (1) noise and outlier removal and (2) upsampling. We evaluate different combinations of known algorithms for outlier removal and upsampling on datasets of real indoor and outdoor environments and identify the most promising combination. We further use it to convert a point-cloud map, obtained by a real UAV performing an indoor flight, into a 3D voxel grid (OctoMap) potentially suitable for path planning.
    Comment: 10 pages, 4 figures; camera-ready version of the paper for "The 3rd International Conference on Interactive Collaborative Robotics (ICR 2018)"
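    As a rough illustration of this kind of two-step post-processing (not the authors' implementation), the sketch below removes statistical outliers from a sparse map and voxelizes the result using the Open3D library; the file names, neighbour count, and voxel size are placeholder assumptions.

```python
# Hypothetical vSLAM map post-processing sketch using Open3D.
import open3d as o3d

# Load a sparse point-cloud map exported by a feature-based vSLAM system
# (file name is a placeholder).
pcd = o3d.io.read_point_cloud("sparse_map.ply")

# Step 1: noise/outlier removal -- drop points whose mean distance to their
# neighbours deviates strongly from the global average.
clean, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Step 2 (upsampling) would go here; Open3D offers no moving-least-squares
# upsampling, so a PCL-style MLS step is assumed at this point.

# Convert the cleaned cloud into a voxel grid as a stand-in for the
# octree-based occupancy map (OctoMap) used for path planning.
voxels = o3d.geometry.VoxelGrid.create_from_point_cloud(clean, voxel_size=0.1)
o3d.io.write_voxel_grid("map_voxels.ply", voxels)
```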

    Visible light communications-based indoor positioning via compressed sensing

    This paper presents an approach to visible light communication-based indoor positioning using compressed sensing. We consider a large number of light-emitting diodes (LEDs) simultaneously transmitting their positional information and a user device equipped with a photo-diode. By casting the LED signal separation problem into an equivalent compressed sensing framework, the user device is able to detect the set of nearby LEDs using sparse signal recovery algorithms. From this set, and using the proximity method, position estimation is proposed based on the concept that, if signal separation is possible, overlapping light beam regions lead to a decrease in positioning error owing to an increase in the number of reference points. The proposed method is evaluated in an LED-illuminated, large-scale indoor open-plan office space scenario. The positioning accuracy is compared against the positioning-error lower bound of the proximity method for various system parameters.
    Comment: to appear in IEEE Communications Letters
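    The following is a minimal, self-contained sketch of the general idea (not the paper's algorithm): a greedy sparse recovery step separates the few active LED signatures from a single mixed measurement, and a simple proximity estimate takes the centroid of the detected LEDs. The measurement matrix, problem sizes, and threshold are all assumptions made for illustration.

```python
# Toy compressed-sensing LED separation + proximity positioning sketch.
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: pick k columns of A explaining y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_ls
    x = np.zeros(A.shape[1])
    x[support] = x_ls
    return x

rng = np.random.default_rng(0)
n_leds, n_meas, k_active = 100, 30, 3

# A mixes the LED signatures at the photo-diode; led_xy holds the known
# 2-D LED positions (both assumed given in this toy setup).
A = rng.standard_normal((n_meas, n_leds)) / np.sqrt(n_meas)
led_xy = rng.uniform(0, 20, size=(n_leds, 2))

# Simulate a sparse activation: only a few nearby LEDs contribute.
x_true = np.zeros(n_leds)
x_true[rng.choice(n_leds, size=k_active, replace=False)] = 1.0
y = A @ x_true                         # photo-diode measurement (noise-free)

x_hat = omp(A, y, k_active)
detected = np.flatnonzero(x_hat > 0.5)
position = led_xy[detected].mean(axis=0)   # proximity estimate: LED centroid
```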

    Autonomous Navigation and Mapping using Monocular Low-Resolution Grayscale Vision

    Vision has been a powerful tool for the navigation of intelligent and man-made systems ever since the cybernetics revolution of the 1970s. There have been two basic approaches to the navigation of computer-controlled systems: the self-contained, bottom-up development of sensorimotor abilities, namely perception and mobility; and the top-down approach, namely artificial intelligence, reasoning, and knowledge-based methods. The three-fold goal of autonomous exploration, mapping, and localization of a mobile robot, however, needs to be developed within a single framework. An algorithm is proposed to address the challenges of autonomous corridor navigation and mapping by a mobile robot equipped with a single forward-facing camera. Using a combination of corridor ceiling lights, visual homing, and entropy, the robot is able to perform straight-line navigation down the center of an unknown corridor. Turning at the end of a corridor is accomplished using Jeffrey divergence and time-to-collision, while deflection from dead ends and blank walls uses a scalar entropy measure of the entire image. When combined, these metrics allow the robot to navigate in both textured and untextured environments. The robot can autonomously explore an unknown indoor environment, recovering from difficult situations like corners, blank walls, and an initial heading toward a wall. While exploring, the algorithm constructs a Voronoi-based topo-geometric map with nodes representing distinctive places like doors, water fountains, and other corridors. Because the algorithm is based entirely upon low-resolution (32 × 24) grayscale images, processing occurs at over 1000 frames per second.
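    A minimal sketch of two of the image statistics mentioned above: the scalar entropy of a grayscale frame (low for blank walls) and the Jeffrey divergence between the intensity histograms of two frames. The bin count is an illustrative assumption, not a value taken from this work.

```python
# Illustrative image statistics for low-resolution grayscale navigation cues.
import numpy as np

def image_entropy(gray, bins=32):
    """Shannon entropy of the intensity histogram of a grayscale image."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def jeffrey_divergence(gray_a, gray_b, bins=32, eps=1e-12):
    """Symmetric (Jeffrey) KL divergence between two intensity histograms."""
    pa, _ = np.histogram(gray_a, bins=bins, range=(0, 255))
    pb, _ = np.histogram(gray_b, bins=bins, range=(0, 255))
    pa = pa / pa.sum() + eps
    pb = pb / pb.sum() + eps
    return float(np.sum(pa * np.log(pa / pb) + pb * np.log(pb / pa)))
```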

    Building an enhanced vocabulary of the robot environment with a ceiling pointing camera

    Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating such robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of an indoor robot environment. Vision-based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion when creating the visual words. To address this challenging task, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1) tracking information into the vocabulary construction process and (2) geometric cues into the appearance descriptors. Motivated by long-term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration in which the robot camera points at the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit from the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available dataset.
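    For context, the sketch below shows the standard BoW vocabulary construction that this paper improves upon (it does not include the proposed tracking or geometric extensions). It uses OpenCV ORB features and k-means clustering; the vocabulary size and feature count are placeholder assumptions.

```python
# Baseline bag-of-words pipeline: local features -> k-means vocabulary ->
# per-image visual-word histograms.
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create(nfeatures=500)

def describe(image_paths):
    """Extract ORB descriptors from each image in the sequence."""
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(gray, None)
        if desc is not None:
            per_image.append(desc.astype(np.float32))
    return per_image

def build_vocabulary(per_image, n_words=256):
    """Quantize the descriptor space into n_words visual words."""
    all_desc = np.vstack(per_image)
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)

def bow_histogram(desc, vocab):
    """Represent one image as a normalized histogram of visual-word counts."""
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / hist.sum()
```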