
    Development of a Computer Vision-Based Three-Dimensional Reconstruction Method for Volume-Change Measurement of Unsaturated Soils during Triaxial Testing

    Problems associated with unsaturated soils are ubiquitous in the U.S., where expansive and collapsible soils are among the most widely distributed and costly geologic hazards. Mitigating these widespread geohazards requires a fundamental understanding of the constitutive behavior of unsaturated soils. Over the past six decades, the suction-controlled triaxial test has been established as the standard approach to characterizing the constitutive behavior of unsaturated soils. However, this type of test requires costly equipment and time-consuming testing procedures. To overcome these limitations, a photogrammetry-based method was recently developed to measure the global and localized volume changes of unsaturated soils during triaxial tests. That method, however, relies on software to detect coded targets, which often requires tedious manual correction of incorrectly detected targets. To address this limitation, this study developed a photogrammetric computer vision-based approach for automatic target recognition and 3D reconstruction for volume-change measurement of unsaturated soils in triaxial tests. A deep learning method was used to improve the accuracy and efficiency of coded target recognition. A photogrammetric computer vision method and a ray tracing technique were then developed and validated to reconstruct three-dimensional models of the soil specimen.
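The ray tracing step mentioned above must account for refraction, since camera rays bend when passing from air into the triaxial cell wall and fluid. A minimal sketch of that building block, Snell's law applied to a ray direction vector, is shown below (the function name, vector convention, and refractive indices are illustrative assumptions, not the paper's implementation):

```python
import math

def refract(d, n, n1, n2):
    """Refract a unit ray direction d at a surface with unit normal n
    (n points back toward the incident medium), using Snell's law.

    n1, n2: refractive indices of the incident and transmitting media.
    Returns the refracted unit direction, or None on total internal reflection.
    """
    eta = n1 / n2
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    k = eta * cos_i - cos_t
    # Transmitted direction: eta * d + (eta * cos_i - cos_t) * n
    return tuple(eta * d[i] + k * n[i] for i in range(3))

# Example: a 45-degree ray entering acrylic (index ~1.49) from air.
s = math.sqrt(0.5)
bent = refract((s, 0.0, -s), (0.0, 0.0, 1.0), 1.0, 1.49)
```

In a full reconstruction pipeline this refraction would be applied at each interface (air/cell wall, wall/confining fluid) before triangulating the 3D position of each detected target.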

    A LiDAR Point Cloud Generator: from a Virtual World to Autonomous Driving

    3D LiDAR scanners are playing an increasingly important role in autonomous driving because they can generate depth information about the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation, which hinders the efficient development of supervised deep learning algorithms, which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, using the falsifying examples to make the network more robust through retraining. In addition, scene images can be captured simultaneously to support sensor fusion tasks, and a method is proposed for automatic calibration between the point clouds and the captured scene images. We show a significant improvement in point cloud segmentation accuracy (+9%) from augmenting the training dataset with the generated synthetic data. Our experiments also show that, by testing and retraining the network with point clouds from user-configured scenes, weaknesses and blind spots of the neural network can be fixed.
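The point-cloud-to-image calibration described above ultimately produces a projection that maps each 3D LiDAR point onto a camera pixel. A minimal pinhole-projection sketch of that mapping follows; the function name and parameter layout are illustrative assumptions, not the paper's API:

```python
def project_points(points, K, R, t):
    """Project 3-D LiDAR points into a camera image via x = K (R X + t).

    points: iterable of (x, y, z) in the LiDAR frame.
    K: 3x3 camera intrinsics; R: 3x3 rotation; t: translation
    (R, t are the LiDAR-to-camera extrinsics that calibration estimates).
    Returns (u, v) pixel coordinates, skipping points behind the camera.
    """
    pixels = []
    for X in points:
        # Transform the point into the camera frame.
        Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
        if Xc[2] <= 0:
            continue  # behind the image plane; not visible
        # Perspective divide and apply intrinsics (zero skew assumed).
        u = (K[0][0] * Xc[0] + K[0][2] * Xc[2]) / Xc[2]
        v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / Xc[2]
        pixels.append((u, v))
    return pixels
```

With ground-truth poses available from the game engine, such a projection can be checked exactly, which is what makes automatic calibration between the synthetic point clouds and rendered images tractable.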