14,396 research outputs found

    Range sensor based model construction by sparse surface adjustment

    In this paper, we propose an approach to construct highly accurate 3D object models from range data. The main advantage of sensor-based model acquisition over manual CAD model construction is the short time needed per object. The usual drawbacks of sensor-based reconstruction are sensor noise and errors in the sensor positions, which typically lead to less accurate models. Our method drastically reduces this problem by applying a physical model of the underlying range sensor and utilizing a graph-based optimization technique. We present our approach and evaluate it on data recorded in different real-world environments with an RGB-D camera and a laser range scanner. The experimental results demonstrate that our method provides more accurate maps than standard SLAM methods and that it also compares favorably with the moving least squares method. © 2011 IEEE
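
    To make the optimization idea above concrete, here is a minimal sketch, not the authors' implementation, of refining a single 2D sensor pose by nonlinear least squares with residuals weighted by an assumed Gaussian range-noise model; the full method optimizes a graph of many poses and surface points jointly.

```python
# Hedged sketch: align one simulated range scan to a reference by estimating a
# 2D pose (x, y, theta), weighting residuals by an assumed noise sigma.
import numpy as np
from scipy.optimize import least_squares

def transform(pose, points):
    """Apply a 2D rigid-body transform (x, y, theta) to Nx2 points."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T + np.array([x, y])

def residuals(pose, scan, reference, sigma):
    """Noise-weighted point-to-point residuals between the scan and the reference."""
    return ((transform(pose, scan) - reference) / sigma).ravel()

# Synthetic data: the reference is the scan moved by an unknown pose plus noise.
rng = np.random.default_rng(0)
scan = rng.uniform(-1.0, 1.0, size=(50, 2))
true_pose, sigma = np.array([0.2, -0.1, 0.05]), 0.01
reference = transform(true_pose, scan) + rng.normal(0.0, sigma, size=scan.shape)

result = least_squares(residuals, x0=np.zeros(3), args=(scan, reference, sigma))
print("estimated pose:", result.x)  # close to [0.2, -0.1, 0.05]
```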

    Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion (SfM) methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles (MAVs) as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstructions of a large-scale object using SfM, there are many critical constraints on the quality of the image data, which often become sources of inaccuracy because current 3D reconstruction pipelines do not help users determine the fidelity of the input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback about quality parameters such as Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We also propose a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes and show that our interactive pipeline, combined with a multi-scale camera network approach, provides compelling accuracy in multi-view reconstruction tasks when compared against state-of-the-art methods. Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
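
    As a side note on one of the feedback quantities mentioned, Ground Sampling Distance for a nadir-looking camera is commonly approximated as altitude times pixel pitch divided by focal length; the sketch below illustrates that textbook relation with assumed example values, not figures from the paper.

```python
# Hedged sketch of the common nadir GSD approximation; all values are illustrative.
def ground_sampling_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Metres of ground covered by one pixel: altitude * pixel pitch / focal length."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Example: 30 m altitude, 4.5 um pixels, 8 mm focal length -> about 0.017 m/px.
print(ground_sampling_distance(30.0, 4.5e-6, 8e-3))
```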

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
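
    For reference, the "de facto standard formulation" the survey refers to is usually written as maximum-a-posteriori estimation over a factor graph; a common form (notation assumed here, not necessarily the paper's exact symbols) is:

```latex
% MAP estimation of the variables X given measurements Z; under Gaussian noise
% it reduces to nonlinear least squares over the measurement functions h_k.
\[
\mathcal{X}^{\star}
  = \operatorname*{arg\,max}_{\mathcal{X}} \, p(\mathcal{X}\mid\mathcal{Z})
  = \operatorname*{arg\,max}_{\mathcal{X}} \, p(\mathcal{X}) \prod_{k=1}^{m} p(z_k \mid \mathcal{X}_k)
\]
\[
\mathcal{X}^{\star}
  = \operatorname*{arg\,min}_{\mathcal{X}} \sum_{k=1}^{m}
    \lVert h_k(\mathcal{X}_k) - z_k \rVert_{\Omega_k}^{2}
\]
```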

    Towards multiple 3D bone surface identification and reconstruction using few 2D X-ray images for intraoperative applications

    This article discusses a possible method to use a small number, e.g. five, of conventional 2D X-ray images to reconstruct multiple 3D bone surfaces intraoperatively. Each bone's edge contours in the X-ray images are automatically identified. Sparse 3D landmark points of each bone are automatically reconstructed by pairing the 2D X-ray images. The reconstructed landmark point distribution on a surface is approximately optimal, covering the main characteristics of the surface. A statistical shape model, the dense point distribution model (DPDM), is then used to fit the reconstructed optimal landmark vertices and reconstruct a full surface of each bone separately. The reconstructed surfaces can then be visualised and manipulated by surgeons or used by surgical robotic systems.
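
    A minimal sketch of the fitting step described above, assuming a generic linear point-distribution model (mean shape plus principal modes) rather than the authors' DPDM: the sparse reconstructed landmarks constrain the mode weights, which then predict the dense surface. In practice the fit is usually regularized by the modes' eigenvalues and interleaved with rigid alignment; both are omitted here for brevity.

```python
# Hedged sketch: fit mode weights b of a linear shape model to sparse landmarks
# by linear least squares, then return the predicted dense surface.
import numpy as np

def fit_shape_model(mean_shape, modes, landmark_idx, landmarks):
    """
    mean_shape  : (N, 3) mean surface vertices.
    modes       : (K, N, 3) principal modes of variation.
    landmark_idx: (L,) indices of vertices matched to reconstructed landmarks.
    landmarks   : (L, 3) sparse 3D landmark points from the X-ray pairs.
    """
    A = modes[:, landmark_idx, :].reshape(modes.shape[0], -1).T   # (3L, K)
    r = (landmarks - mean_shape[landmark_idx]).ravel()            # (3L,)
    b, *_ = np.linalg.lstsq(A, r, rcond=None)                     # mode weights
    return mean_shape + np.tensordot(b, modes, axes=1)            # dense (N, 3) surface
```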

    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse earth observation data. As these data show different features regarding resolution, accuracy, coverage, and spectral imaging ability, fusion techniques are required to integrate the different properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery such as that provided by TerraSAR-X can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated, confirming the potential of 3D reconstruction from SAR-optical image pairs over urban areas. Comment: This is the pre-acceptance version; to read the final version, please go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect
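
    To illustrate why the epipolarity constraint matters, here is a deliberately simple local matcher (a sum-of-absolute-differences scan, not the semi-global matching used in the paper) that searches only along one row of an epipolarly resampled image pair instead of over a 2D window; the function name, window size and disparity range are assumptions for the example.

```python
# Hedged sketch: 1D correspondence search along an (assumed) epipolar row.
import numpy as np

def match_along_epipolar_line(left, right, row, col, win=5, max_disp=64):
    """Return the disparity with the lowest SAD cost for the pixel (row, col)."""
    half = win // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        c0 = col - d
        if c0 - half < 0:                      # candidate window leaves the image
            break
        cand = right[row - half:row + half + 1, c0 - half:c0 + half + 1].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```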

    A novel cooperative opportunistic routing scheme for underwater sensor networks

    Increasing attention has recently been devoted to underwater sensor networks (UWSNs) because of their capabilities in ocean monitoring and resource discovery. UWSNs face different challenges, the most notable of which is perhaps how to efficiently deliver packets while taking into account all of the constraints of the available acoustic communication channel. Opportunistic routing provides a reliable solution with the aid of intermediate nodes' collaboration to relay a packet toward the destination. In this paper, we propose a new routing protocol, called opportunistic void avoidance routing (OVAR), to address the void problem and also the energy-reliability trade-off in forwarding set selection. OVAR takes advantage of distributed beaconing, constructs the adjacency graph at each hop, and selects a forwarding set that holds the best trade-off between reliability and energy efficiency. The unique feature of OVAR of selecting candidate nodes in the vicinity of each other leads to the resolution of the hidden node problem. OVAR is also able to select the forwarding set in any direction from the sender, which increases its flexibility to bypass any kind of void area with minimum deviation from the optimal path. The results of our extensive simulation study show that OVAR outperforms other protocols in terms of packet delivery ratio, energy consumption, end-to-end delay, hop count and traversed distance.
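
    The reliability-versus-energy trade-off in forwarding set selection can be sketched as a greedy selection over neighbour link statistics; the snippet below only illustrates that trade-off, it is not the OVAR selection rule, and its node fields and threshold are assumptions.

```python
# Hedged sketch: grow a forwarding set until the combined delivery probability
# reaches a target, preferring neighbours with the best reliability per energy.
def select_forwarding_set(neighbors, target_reliability=0.95):
    """neighbors: list of (node_id, delivery_prob, energy_cost) tuples."""
    ranked = sorted(neighbors, key=lambda n: n[1] / n[2], reverse=True)
    chosen, p_fail = [], 1.0
    for node_id, p, _cost in ranked:
        chosen.append(node_id)
        p_fail *= 1.0 - p                  # lost only if every relay misses it
        if 1.0 - p_fail >= target_reliability:
            break
    return chosen, 1.0 - p_fail

print(select_forwarding_set([("a", 0.6, 1.0), ("b", 0.5, 2.0), ("c", 0.4, 0.5)]))
```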

    Methods and strategies of object localization

    An important property of an intelligent robot is the ability to determine the location of an object in 3-D space. A general object localization system structure is proposed, some important issues of localization are discussed, and an overview is given of currently available object localization algorithms and systems. The algorithms reviewed are characterized by their feature extraction and matching strategies, their range-finding methods, the types of objects they can locate, and their mathematical formulation methods.
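
    One recurring building block in the localization algorithms such a survey covers is recovering an object's pose from matched 3D feature points; below is a minimal Kabsch (SVD) sketch of that step, with synthetic data standing in for real matches, not a method taken from the article.

```python
# Hedged sketch: least-squares rigid transform (R, t) mapping model points onto
# scene points via the Kabsch algorithm.
import numpy as np

def estimate_pose(model_pts, scene_pts):
    mu_m, mu_s = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # exclude reflections
    R = Vt.T @ D @ U.T
    return R, mu_s - R @ mu_m

# Synthetic check: a rotation about z plus a translation is recovered exactly.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
model = np.random.default_rng(2).normal(size=(20, 3))
R, t = estimate_pose(model, model @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```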