89 research outputs found

    Multi-camera simultaneous localization and mapping

    Get PDF
    In this thesis, we study two aspects of simultaneous localization and mapping (SLAM) for multi-camera systems: minimal solution methods for the scaled motion of non-overlapping and partially overlapping two-camera systems, and enabling online, real-time mapping of large areas by exploiting the parallelism inherent in the visual SLAM (VSLAM) problem. We present the only existing minimal solution method for six-degree-of-freedom structure and motion estimation using a non-overlapping, rigid two-camera system with known intrinsic and extrinsic calibration. One example application of our method is the three-dimensional reconstruction of urban scenes from video. Because our method does not require the cameras' fields of view to overlap, we are able to maximize coverage of the scene and avoid processing redundant, overlapping imagery. Additionally, we developed a minimal solution method for partially overlapping stereo camera systems that overcomes degeneracies inherent to non-overlapping two-camera systems while retaining a wide total field of view. The method takes two stereo images as its input. It uses one feature visible in all four views and three features visible across two temporal view pairs to constrain the camera system's motion. We show in synthetic experiments that our method produces rotation and translation estimates that are more accurate than those of the perspective three-point method as the overlap in the stereo camera's fields of view is reduced. The final part of this thesis is the development of an online, real-time visual SLAM system that achieves real-time speed by exploiting the parallelism inherent in the VSLAM problem. We show that feature tracking, relative pose estimation, and global mapping operations such as loop detection and loop correction can be effectively parallelized.
    Additionally, we demonstrate that a combination of short-baseline, differentially tracked corner features, which can be tracked at high frame rates, and wide-baseline-matchable but slower-to-compute features, such as the scale-invariant feature transform (SIFT), can facilitate high-speed visual odometry while simultaneously supporting location recognition for loop detection and global geometric error correction.
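The front-end/back-end split described in this abstract can be sketched as a producer-consumer pipeline: a fast tracking thread hands keyframes to a slower mapping thread. This is a minimal illustrative sketch only; the function names and the toy "pose" computation are placeholders, not the thesis's implementation.

```python
import queue
import threading

def track_features(frame):
    """Front-end: cheap, per-frame work (e.g., corner tracking).
    Here the 'pose' is a fake placeholder value."""
    return {"frame": frame, "pose": frame * 0.1}

def update_map(keyframe, global_map):
    """Back-end: expensive work (e.g., loop detection and correction)."""
    global_map.append(keyframe["pose"])

def run_vslam(frames):
    keyframes = queue.Queue()
    global_map = []

    def mapper():
        while True:
            kf = keyframes.get()
            if kf is None:          # sentinel: no more keyframes
                break
            update_map(kf, global_map)

    backend = threading.Thread(target=mapper)
    backend.start()
    # The front-end runs at frame rate, handing keyframes to the back-end,
    # so slow global mapping never blocks per-frame tracking.
    for frame in frames:
        keyframes.put(track_features(frame))
    keyframes.put(None)
    backend.join()
    return global_map
```

Because the queue decouples the two stages, the back-end can fall behind temporarily without stalling tracking, which is the property the thesis exploits for real-time operation.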

    Modeling and Calibrating the Distributed Camera

    Get PDF
    Structure-from-Motion (SfM) is a powerful tool for computing 3D reconstructions from images of a scene and has wide applications in computer vision, scene recognition, and augmented and virtual reality. Standard SfM pipelines make strict assumptions about the capturing devices in order to simplify the process for estimating camera geometry and 3D structure. Specifically, most methods require monocular cameras with known focal length calibration. When considering large-scale SfM from internet photo collections, EXIF calibrations cannot be used reliably. Further, the requirement of single-camera systems limits the scalability of SfM. This thesis proposes to remove these constraints by instead considering the collection of cameras as a distributed camera that encapsulates the image and geometric information of all cameras simultaneously. First, I provide full generalizations of the relative camera pose and absolute camera pose problems. These generalizations are more expressive and extend the traditional single-camera problems to distributed cameras, forming the basis for a novel hierarchical SfM pipeline that exhibits state-of-the-art performance on large-scale datasets. Second, I describe two efficient methods for estimating camera focal lengths for the distributed camera when calibration is not available. Finally, I show how removing these constraints enables a simpler, more scalable SfM pipeline that is capable of handling uncalibrated cameras at scale.
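The distributed-camera idea can be made concrete with a small data structure: instead of pixels in a single pinhole camera, each observation is a 3D ray with an origin (the contributing camera's center) and a direction, all expressed in a common frame. The class below is a hypothetical sketch to illustrate that representation, not the thesis's actual pipeline.

```python
import numpy as np

class DistributedCamera:
    """A generalized camera: a bag of observation rays from many
    physical cameras, expressed in one common coordinate frame."""

    def __init__(self):
        self.origins = []
        self.directions = []

    def add_observation(self, camera_center, direction):
        d = np.asarray(direction, dtype=float)
        self.origins.append(np.asarray(camera_center, dtype=float))
        self.directions.append(d / np.linalg.norm(d))  # store unit rays

    def point_to_ray_distance(self, X):
        """Perpendicular distance from a 3D point to each observation ray;
        a generalized reprojection-style residual for pose estimation."""
        X = np.asarray(X, dtype=float)
        dists = []
        for o, d in zip(self.origins, self.directions):
            v = X - o
            dists.append(np.linalg.norm(v - np.dot(v, d) * d))
        return np.array(dists)
```

With this representation, relative and absolute pose problems no longer care whether two rays came from the same physical camera, which is exactly what makes the generalizations in the thesis more expressive than their single-camera counterparts.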

    UAV-Enabled Surface and Subsurface Characterization for Post-Earthquake Geotechnical Reconnaissance

    Full text link
    Major earthquakes continue to cause significant damage to infrastructure systems and loss of life (e.g., 2016 Kaikoura, New Zealand; 2016 Muisne, Ecuador; 2015 Gorkha, Nepal). Following an earthquake, costly human-led reconnaissance studies are conducted to document structural or geotechnical damage and to collect perishable field data. Such efforts face many daunting challenges, including safety, resource limitations, and inaccessibility of sites. Unmanned aerial vehicles (UAVs) represent a transformative tool for mitigating these challenges and generating spatially distributed, higher-quality data compared to current manual approaches. UAVs enable multi-sensor data collection and offer a computational decision-making platform that could significantly influence post-earthquake reconnaissance approaches. As demonstrated in this research, UAVs can be used to document earthquake-affected geosystems by creating 3D geometric models of target sites, to generate 2D and 3D imagery outputs for geomechanical assessments of exposed rock masses, and to characterize subsurface field conditions using techniques such as in situ seismic surface wave testing. UAV-camera systems were used to collect images of geotechnical sites and model their 3D geometry using Structure-from-Motion (SfM). Key lessons learned from applying UAV-based SfM to reconnaissance of earthquake-affected sites are presented. The results of 3D modeling and the input imagery were used to assess the mechanical properties of landslides and rock masses. Automatic and semi-automatic 2D fracture detection methods were developed and integrated with a 3D SfM imaging framework. A UAV was then integrated with seismic surface wave testing to estimate the shear wave velocity of subsurface materials, a critical input parameter in the seismic response of geosystems.
The UAV was outfitted with a payload release system to autonomously deliver an impulsive seismic source to the ground surface for multichannel analysis of surface waves (MASW) tests. The UAV was found to offer a mobile, higher-energy source than conventional seismic surface wave techniques and is the foundational component of a framework for fully autonomous in situ shear wave velocity profiling.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/145793/1/wwgreen_1.pd
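The core measurement in MASW dispersion analysis is the phase velocity of a surface wave at a given frequency, which can be estimated from the cross-spectrum phase difference between two receivers a known distance apart. The sketch below demonstrates this on synthetic traces; `phase_velocity` is an illustrative helper under the assumption of a single plane wave and an unwrapped phase lag (|Δφ| < π), not the thesis's processing chain.

```python
import numpy as np

def phase_velocity(s1, s2, fs, dx, f):
    """Estimate phase velocity at frequency f (Hz) from two geophone
    traces sampled at fs, separated by dx meters. Valid only while the
    phase lag over dx stays below pi (no phase wrapping)."""
    n = len(s1)
    k = int(round(f * n / fs))                 # FFT bin of frequency f
    cross = np.fft.fft(s1)[k] * np.conj(np.fft.fft(s2)[k])
    dphi = np.angle(cross)                     # phase lag accumulated over dx
    return 2 * np.pi * f * dx / dphi

# Synthetic check: a 50 Hz wave travelling at 400 m/s past two
# receivers 2 m apart (phase lag = pi/2, safely unwrapped).
fs, f, dx, v = 1000.0, 50.0, 2.0, 400.0
t = np.arange(1000) / fs
s1 = np.sin(2 * np.pi * f * t)
s2 = np.sin(2 * np.pi * f * (t - dx / v))      # delayed arrival at receiver 2
v_est = phase_velocity(s1, s2, fs, dx, f)
```

Repeating this estimate across frequencies yields the dispersion curve that is then inverted for a shear wave velocity profile.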

    Parallel Tracking and Mapping for Manipulation Applications with Golem Krang

    Get PDF
    We implement a simultaneous localization and mapping (SLAM) system and an image semantic segmentation method on a mobile manipulator. The SLAM system is applied to navigating among obstacles in unknown environments. The object detection method will be integrated for future manipulation tasks such as grasping. This work will be demonstrated on a real robotic hardware system in the lab.

    Towards Plug-n-Play robot guidance: Advanced 3D estimation and pose estimation in Robotic applications

    Get PDF