
    Accurate rough terrain estimation with space-carving kernels


    Optimization Based Coverage Path Planning for Autonomous 3D Data Acquisition

    The demand for 3D models that represent real-world objects such as structures and buildings has increased in recent years. It is becoming increasingly important that the reconstructions are not only visually convincing but also feature high geometric accuracy. This includes, for example, the fields of civil engineering, terrestrial surveying, and archeology, where precise measurements are made in the models for documentation and analysis purposes. There are different approaches to creating such a reconstruction. The photogrammetric method Structure from Motion and laser scanning are among the most widely used, as they do not require a complicated setup and can be applied to scenarios from small to large scale. Recent developments are enabling unmanned robotic systems, especially sensor-mounted UAVs, to assist in the recording of areas that are otherwise difficult to observe. The demand for high geometric accuracy, however, comes at the cost of high computational complexity, with reconstruction times of up to several days. Real-time reconstruction is therefore infeasible, and the recording and reconstruction procedures must be executed consecutively. The resulting model quality, i.e. completeness and accuracy, is only assessable afterwards. Since it is often difficult or even impossible to improve these models with additional measurements later, methods that ensure reliable acquisition of sufficient data are required. In this thesis we develop new methods and theory that address this problem for the mentioned sensor types. For both, a probabilistic description of the expected surface reconstruction error is maintained cost-efficiently as an estimate of model quality during the recording procedure. For image sensors this is realized by incrementally constructing confidence ellipsoids that describe the information obtained from all views. With depth sensors the surface quality is described by the variance of a Gaussian process implicit surface regression fit to point cloud data using polyharmonic kernel functions. Sensor poses are then assessed by the information they add to the subsequent reconstruction up to a desired geometric accuracy, using a formulation motivated by Optimal Experimental Design. This quantity is further used in an iterative next-best-view selection framework as a subproblem of a coverage path planning problem. The general formulations presented in this thesis enable a wide range of applications, such as offline and online view planning, for various autonomous robot systems under dynamic and geometric constraints. We present the first multi-view coverage path planning approach specifically targeted at autonomous Structure from Motion data acquisition. Its correctness is validated in simulation using the physics simulator Gazebo. Furthermore, we lay a foundation for similar applications with depth sensors. All presented algorithms were developed with scalability in mind and show promising results regarding real-time usability.
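    The view-assessment step above can be made concrete with a small sketch. Below is a minimal, hedged illustration of greedy next-best-view selection under an A-optimality criterion (reduction in the trace of the posterior covariance), in the spirit of the Optimal Experimental Design formulation the abstract mentions; the observation model, noise level, and all names are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch: greedy next-best-view (NBV) selection by scoring each
# candidate pose with the information it adds (A-optimality). All names
# and the toy observation model are illustrative assumptions.
import numpy as np

def posterior_information(info, jacobians, noise_var=1e-2):
    """Accumulate Fisher information from a view's measurement
    Jacobians: I' = I + sum_j J_j^T J_j / sigma^2."""
    out = info.copy()
    for J in jacobians:
        out += J.T @ J / noise_var
    return out

def a_optimality_gain(info, view_jacobians):
    """Expected reduction in trace of the covariance (A-optimality)
    if the candidate view were taken."""
    before = np.trace(np.linalg.inv(info))
    after = np.trace(np.linalg.inv(posterior_information(info, view_jacobians)))
    return before - after

def next_best_view(info, candidates):
    """Greedily pick the candidate view with the largest gain.
    `candidates` maps a view id to its list of point Jacobians."""
    return max(candidates, key=lambda v: a_optimality_gain(info, candidates[v]))

# Toy usage: a 3-parameter point with weak prior information and two
# candidate views observing it along different directions.
prior = 1e-3 * np.eye(3)
candidates = {
    "view_a": [np.array([[1.0, 0.0, 0.0]])],  # constrains x only
    "view_b": [np.array([[0.0, 1.0, 0.0]]),
               np.array([[0.0, 0.0, 1.0]])],  # constrains y and z
}
print(next_best_view(prior, candidates))  # -> "view_b"
```

    In an iterative coverage planner, this scoring step would be repeated after each selected view, with the accumulated information matrix carried forward.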

    Lane estimation for autonomous vehicles using vision and LIDAR

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 109-114).
    Autonomous ground vehicles, or self-driving cars, require a high level of situational awareness in order to operate safely and efficiently in real-world conditions. A system able to quickly and reliably estimate the location of the roadway and its lanes based upon local sensor data would be a valuable asset both to fully autonomous vehicles and to driver assistance technologies. To be most useful, the system must accommodate a variety of roadways, a range of weather and lighting conditions, and highly dynamic scenes with other vehicles and moving objects. Lane estimation can be modeled as a curve estimation problem, where sensor data provides partial and noisy observations of curves. The number of curves to estimate may be initially unknown, and many of the observations may be outliers and false detections (e.g., due to tree shadows or lens flare). The challenge is to detect lanes when and where they exist, and to update the lane estimates as new observations are received. This thesis describes algorithms for feature detection and curve estimation, as well as a novel curve representation that permits fast and efficient estimation while rejecting outliers. Locally observed road paint and curb features are fused together in a lane estimation framework that detects and estimates all nearby travel lanes. The system handles roads with complex geometries and makes no assumptions about the position and orientation of the vehicle with respect to the roadway. Early versions of these algorithms successfully guided a fully autonomous Land Rover LR3 through the 2007 DARPA Urban Challenge, a 90 km urban race course, at speeds up to 40 km/h amidst moving traffic. We evaluate these and subsequent versions with a ground truth dataset containing manually labeled lane geometries for every moment of vehicle travel in two large and diverse datasets that include more than 300,000 images and 44 km of roadway. The results illustrate the capabilities of our algorithms for robust lane estimation in the face of challenging conditions and unknown roadways. By Albert S. Huang. Ph.D.
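    As a rough illustration of the curve-estimation-with-outliers problem described above, the sketch below fits a polynomial lane boundary to noisy detections with RANSAC. The thesis uses its own curve representation and estimation machinery; this stand-in only shows the general idea, and every parameter and name here is an assumption.

```python
# Hedged sketch: fit a lane-boundary curve to noisy feature detections
# while rejecting outliers (e.g. shadow-induced false positives) with
# RANSAC. This is a generic stand-in, not the thesis's method.
import numpy as np

def fit_lane_ransac(x, y, degree=2, n_iters=200, inlier_tol=0.15, seed=0):
    """RANSAC: repeatedly fit a polynomial y(x) to a minimal sample,
    keep the model with the most inliers, then refit on those inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        idx = rng.choice(len(x), size=degree + 1, replace=False)
        coeffs = np.polyfit(x[idx], y[idx], degree)
        residuals = np.abs(np.polyval(coeffs, x) - y)
        inliers = residuals < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit using all inliers of the best model.
    return np.polyfit(x[best_inliers], y[best_inliers], degree), best_inliers

# Toy usage: points along a gently curving lane, plus gross outliers.
x = np.linspace(0, 30, 60)
y = 0.01 * x**2 + 0.1 * x + np.random.default_rng(1).normal(0, 0.05, 60)
y[::10] += 3.0                      # simulated false detections
coeffs, inliers = fit_lane_ransac(x, y)
print(coeffs, inliers.sum())
```

    A recursive filter over the surviving inlier curves would then maintain and update the lane estimates as new observations arrive, as the abstract describes.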

    3D Scene Reconstruction with Micro-Aerial Vehicles and Mobile Devices

    Scene reconstruction is the process of building an accurate geometric model of one's environment from sensor data. We explore the problem of real-time, large-scale 3D scene reconstruction in indoor environments using small laser range-finders and low-cost RGB-D (color plus depth) cameras. We focus on computationally constrained platforms such as micro-aerial vehicles (MAVs) and mobile devices. These platforms present a set of fundamental challenges: estimating the state and trajectory of the device as it moves within its environment, and utilizing lightweight, dynamic data structures to hold the representation of the reconstructed scene. The system needs to be computationally and memory-efficient, so that it can run in real time, onboard the platform. In this work, we present three scene reconstruction systems. The first system uses a laser range-finder and operates onboard a quadrotor MAV. We address the issues of autonomous control, state estimation, path planning, and teleoperation. We propose the multi-volume occupancy grid (MVOG), a novel data structure for building 3D maps from laser data, which provides a compact, probabilistic scene representation. The second system uses an RGB-D camera to recover the 6-DoF trajectory of the platform by aligning sparse features observed in the current RGB-D image against a model of previously seen features. We discuss our work on camera calibration and the depth measurement model. We apply the system onboard an MAV to produce occupancy-based 3D maps, which we utilize for path planning. Finally, we present our contributions to a scene reconstruction system for mobile devices with built-in depth sensing and motion-tracking capabilities. We demonstrate reconstructing and rendering a global mesh on the fly, using only the mobile device's CPU, in very large (300 square meter) scenes, at a resolution of 2-3 cm. To achieve this, we divide the scene into spatial volumes indexed by a hash map. Each volume contains the truncated signed distance function for that area of space, as well as the mesh segment derived from the distance function. This approach allows us to focus computational and memory resources only on areas of the scene that are currently observed, and to leverage parallelization techniques for multi-core processing.
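    The spatially hashed TSDF layout described in the last paragraph can be sketched compactly. The following is a minimal, hedged illustration of a hash map from volume indices to per-volume TSDF grids, allocating volumes lazily as measurements arrive; the volume size, resolution, and integration rule are assumptions, not the system's actual parameters.

```python
# Hedged sketch: scene split into fixed-size volumes indexed by a hash
# map, each holding a truncated signed distance function (TSDF) grid.
# Parameters and the integration rule are illustrative assumptions.
import numpy as np
from collections import defaultdict

VOXELS = 16          # voxels per volume side
VOXEL_SIZE = 0.02    # 2 cm resolution, matching the abstract
TRUNC = 0.08         # truncation distance in meters

def make_volume():
    """A fresh volume: TSDF values (init to +TRUNC) and weights."""
    return {"tsdf": np.full((VOXELS,) * 3, TRUNC, dtype=np.float32),
            "weight": np.zeros((VOXELS,) * 3, dtype=np.float32)}

volumes = defaultdict(make_volume)   # hash map: volume index -> grid

def integrate_point(p, sensor_origin):
    """Update voxels near a measured surface point p: signed distance
    along the viewing ray, truncated and averaged by running weight."""
    ray = p - sensor_origin
    ray /= np.linalg.norm(ray)
    # Touch voxels within the truncation band around the surface point;
    # t > 0 lies between sensor and surface (free space, positive SDF).
    for t in np.arange(-TRUNC, TRUNC, VOXEL_SIZE):
        q = p - t * ray                       # point in the band
        voxel = np.floor(q / VOXEL_SIZE).astype(int)
        vol_idx = tuple(voxel // VOXELS)      # which volume (hash key)
        local = tuple(voxel % VOXELS)         # voxel within the volume
        vol = volumes[vol_idx]                # allocates on first touch
        sdf = np.clip(t, -TRUNC, TRUNC)       # truncated signed distance
        w = vol["weight"][local]
        vol["tsdf"][local] = (w * vol["tsdf"][local] + sdf) / (w + 1)
        vol["weight"][local] = w + 1

integrate_point(np.array([1.0, 0.5, 0.3]), np.array([0.0, 0.0, 0.0]))
print(len(volumes), "volume(s) allocated")
```

    Because only touched volumes ever exist in the hash map, memory and mesh extraction work stay proportional to the observed surface rather than to the scene's bounding box, which is the property the abstract leverages.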