7,776 research outputs found

    Lifting GIS Maps into Strong Geometric Context for Scene Understanding

    Contextual information can have a substantial impact on the performance of visual tasks such as semantic segmentation, object detection, and geometric estimation. Data stored in Geographic Information Systems (GIS) offers a rich source of contextual information that has been largely untapped by computer vision. We propose to leverage such information for scene understanding by combining GIS resources with large sets of unorganized photographs using Structure from Motion (SfM) techniques. We present a pipeline to quickly generate strong 3D geometric priors from 2D GIS data using SfM models aligned with minimal user input. Given an image resectioned against this model, we generate robust predictions of depth, surface normals, and semantic labels. We show that the predicted geometry is substantially more accurate than that of other single-image depth estimation methods. We then demonstrate the utility of these contextual constraints for re-scoring pedestrian detections, and use these GIS contextual features alongside object detection score maps to improve a CRF-based semantic segmentation framework, boosting accuracy over baseline models.
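
The abstract does not spell out the re-scoring rule; the following is a minimal sketch of one way a geometric depth prior can re-score a pedestrian detection, assuming a pinhole camera and a Gaussian prior on standing-person height. The function name and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def rescore_detection(score, bbox_height_px, depth_m, focal_px,
                      person_height_m=1.7, sigma_m=0.15):
    """Re-weight a pedestrian detection score by how well the box height
    matches the height a person would project to at the prior depth.

    Under a pinhole model, a person of height H at depth Z projects to
    roughly f * H / Z pixels, so the box implies a real-world height of
    bbox_height_px * Z / f. Deviation from the prior mean height is
    penalized with a Gaussian factor."""
    implied_height_m = bbox_height_px * depth_m / focal_px
    err = implied_height_m - person_height_m
    geometric_prior = math.exp(-0.5 * (err / sigma_m) ** 2)
    return score * geometric_prior

# A 170 px box at 10 m depth with f = 1000 px implies a 1.7 m person,
# so the score is kept; a 300 px box at the same depth implies a 3 m
# "person" and is heavily down-weighted.
```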

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
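
The de facto standard formulation the survey refers to is maximum a posteriori estimation over a factor graph. In the usual notation (a sketch, with $X$ the robot and landmark variables, $z_k$ the measurements, $h_k$ the measurement models, and $\Omega_k$ the measurement information matrices):

```latex
X^{\star} \;=\; \operatorname*{argmax}_{X}\, p(X \mid Z)
\;=\; \operatorname*{argmin}_{X}\, \sum_{k} \left\lVert h_k(X_k) - z_k \right\rVert^{2}_{\Omega_k}
```

i.e., under Gaussian noise assumptions the MAP estimate reduces to a nonlinear least-squares problem, which is what modern SLAM back-ends solve.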

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
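
Passive stereo is one of the optical techniques such reviews cover; for a rectified stereo pair the depth-from-disparity relation is a one-liner. The sketch below assumes a calibrated, rectified rig; the numeric values in the example are illustrative, not from the paper.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point from a rectified stereo pair: Z = f * b / d,
    where f is the focal length in pixels, b the baseline in metres,
    and d the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 40 px disparity, f = 800 px, 5 mm stereo baseline -> 0.1 m depth.
```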

    Damage localization map using electromechanical impedance spectrums and inverse distance weighting interpolation: Experimental validation on thin composite structures

    Piezoelectric sensors are widely used in structural health monitoring. In particular, electromechanical impedance techniques offer simple and low-cost solutions for detecting damage in composite structures. The method proposed in this article generates a damage localization map based on both indicators computed from electromechanical impedance spectra and inverse distance weighting interpolation. The weights for the interpolation have a physical meaning and are computed according to an exponential law of the measured attenuation of acoustic waves. One of the main advantages of this so-called data-driven method is that only experimental data are used as inputs to the algorithm; it does not rely on any model. The proposed method has been validated on both one-dimensional and two-dimensional composite structures.
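
The article gives the weighting idea but not its exact coefficients; the following is a minimal sketch of inverse-distance-style interpolation with exponentially decaying weights, as the abstract describes. The function name and the attenuation coefficient `alpha` are illustrative assumptions (in the paper the decay rate comes from measured acoustic attenuation).

```python
import numpy as np

def damage_map(sensor_xy, indicators, grid_xy, alpha=2.0):
    """Interpolate per-sensor damage indicators over a grid of points.

    Each grid point gets a weighted average of the sensor indicators,
    with weights decaying exponentially with distance, mimicking the
    attenuation of acoustic waves in the structure."""
    sensor_xy = np.asarray(sensor_xy, float)    # shape (n_sensors, 2)
    indicators = np.asarray(indicators, float)  # shape (n_sensors,)
    grid_xy = np.asarray(grid_xy, float)        # shape (n_points, 2)
    # Pairwise grid-to-sensor distances via broadcasting.
    d = np.linalg.norm(grid_xy[:, None, :] - sensor_xy[None, :, :], axis=2)
    w = np.exp(-alpha * d)                      # exponential attenuation law
    return (w * indicators).sum(axis=1) / w.sum(axis=1)

# Example: two sensors at (0,0) and (1,0) with indicators 1 and 0;
# a point midway between them interpolates to 0.5, while a point at
# the first sensor stays close to 1.
```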

    Localization, Mapping and SLAM in Marine and Underwater Environments

    The use of robots in marine and underwater applications is growing rapidly. These applications share the common requirement of modeling the environment and estimating the robots' pose. Although there are several mapping, SLAM, target detection, and localization methods, marine and underwater environments have several challenging characteristics, such as poor visibility, water currents, communication issues, sonar inaccuracies, or unstructured environments, that have to be considered. The purpose of this Special Issue is to present the current research trends in the topics of underwater localization, mapping, SLAM, and target detection and localization. To this end, we have collected seven articles from leading researchers in the field, and present the different approaches and methods currently being investigated to improve the performance of underwater robots.

    Virtual Sound Localization by Blind People

    The paper demonstrates that blind people localize sounds more accurately than sighted people by using monaural and/or binaural cues. In the experiment, blind people participated in two tests; the first one took place in the laboratory and the second one in a real environment under different noise conditions. A simple click sound was employed and processed with non-individual head-related transfer functions. The sounds were delivered by a system with a maximum azimuth of 32° to the left side and 32° to the right side of the participant's head, at a distance ranging from 0.3 m up to 5 m. The present paper describes the experimental methods and results of virtual sound localization by blind people through the use of a simple electronic travel aid based on an infrared laser pulse and the time-of-flight distance measurement principle. The lack of vision is often compensated by other perceptual abilities, such as the tactile or hearing ability. The results show that blind people easily perceive and localize binaural sounds and assimilate them with sounds from the environment.
    Dunai Dunai, L.; Lengua Lengua, I.; Peris Fajarnes, G.; Brusola Simón, F. (2015). Virtual Sound Localization by Blind People. Archives of Acoustics, 40(4), 561-567. doi:10.1515/aoa-2015-0055
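
The travel aid described measures distance by laser-pulse time of flight; the underlying relation is simply d = c·t/2, halving the round trip. The helper below is an illustrative sketch of that principle, not the device's actual firmware.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, in vacuum (close enough in air)

def tof_distance(round_trip_s):
    """Target distance from a laser pulse's round-trip time of flight.

    The pulse travels to the target and back, so the one-way distance
    is half the total path: d = c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# Example: a target 2 m away returns the pulse in about 13.3 ns,
# which illustrates the timing precision such a sensor needs.
```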