
    A 3D Reconstruction Algorithm for the Location of Foundations in Demolished Buildings

    The location of foundations in a demolished building can be accomplished by undertaking a Ground Penetrating Radar (GPR) survey and then using the GPR data to generate, via image reconstruction, 3D isosurfaces of what lay beneath the soil surface. SIMCA ('SIMulated Correlation Algorithm') is a technique based on a comparison between the trace that would be returned by an ideal point reflector in the soil conditions at the site and the actual trace. During an initialization phase, SIMCA carries out a radar simulation using the design parameters of the radar and the soil properties. The trace that would be returned by a target under these conditions is then used to form a kernel. SIMCA then takes the raw data as the radar is scanned over the ground and removes clutter using a clutter-removal technique. The system correlates the kernel with the data by carrying out volume correlation and produces 3D images of the surfaces of the subterranean objects detected. The 3D isosurfaces are generated using MATLAB. The algorithm was validated by comparing the 3D isosurfaces produced by the SIMCA algorithm, the Scheers algorithm and the commercial REFLEXW software. The depth and the positions in the x and y directions obtained in MATLAB for each case were then compared with the corresponding approximate values from the original architect's drawings of the buildings.
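    The volume-correlation step described above can be sketched as follows. This is an illustrative reconstruction, not the SIMCA implementation: the Ricker-style kernel, the toy data volume, and all parameter values are invented for the example.

```python
import numpy as np

def simulate_kernel(n_samples, peak, width):
    # Idealized point-reflector response (Ricker-like wavelet) -- a
    # stand-in for the radar/soil simulation SIMCA performs up front.
    t = np.arange(n_samples) - peak
    return (1 - 2 * (np.pi * t / width) ** 2) * np.exp(-(np.pi * t / width) ** 2)

def correlate_volume(volume, kernel):
    # Correlate the kernel with every A-scan (depth trace) of the
    # clutter-removed GPR volume; high values mark likely targets.
    out = np.empty_like(volume)
    for ix in range(volume.shape[0]):
        for iy in range(volume.shape[1]):
            out[ix, iy] = np.correlate(volume[ix, iy], kernel, mode="same")
    return out

# Toy volume with one synthetic reflector embedded at depth sample ~40.
rng = np.random.default_rng(0)
vol = 0.05 * rng.standard_normal((8, 8, 128))
kernel = simulate_kernel(21, peak=10, width=6)
vol[4, 4, 30:51] += kernel                 # embed the target response
corr = correlate_volume(vol, kernel)
ix, iy, iz = np.unravel_index(np.argmax(corr), corr.shape)
print(ix, iy, iz)                          # correlation peak at the target
```

    Thresholding `corr` at, say, half its maximum would give the binary volume from which isosurfaces are extracted.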

    BIM-based mixed-reality application for bridge inspection and maintenance

    Purpose – The purpose of this study is to develop a BIM-based mixed reality (MR) application to enhance and facilitate the process of managing bridge inspection and maintenance works remotely from the office. It aims to address the ineffective decision-making on maintenance tasks under the conventional method, which relies on documents and 2D drawings produced from visual inspection. This study targets two key issues: creating a BIM-based model for bridge inspection and maintenance, and developing this model in an MR platform based on Microsoft Hololens. Design/methodology/approach – A literature review is conducted to determine the limitations of MR technology in the construction industry and to identify the gaps in the integration of BIM and MR for bridge inspection works. A new framework for greater adoption of integrated BIM and Hololens is proposed. It consists of a bridge information model for inspection and a newly developed Hololens application named “HoloBridge”. This application contains functional modules that allow users to check and update the progress of inspection and maintenance. The application has been implemented for an existing bridge in South Korea as the case study. Findings – The results from the pilot implementation show that inspection information management can be enhanced because the inspection database can be systematically captured, stored and managed through BIM-based models. The interpretation and visualization of 3D models in the MR environment have been improved because of the intuitive interaction in real-time simulation. Originality/value – The proposed framework, through the “HoloBridge” application, explores the potential of integrating BIM and MR technology by using Hololens. It provides new possibilities for remote inspection of bridge conditions.
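    The kind of inspection database described above can be sketched minimally as follows; the class and field names are invented for illustration and are not HoloBridge's actual data model, which is not described at this level of detail in the abstract.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InspectionRecord:
    # One inspection entry, linked to a BIM element by its identifier.
    element_id: str
    defect: str
    severity: int          # e.g. 1 (minor) .. 5 (critical)
    inspected_on: date
    resolved: bool = False

@dataclass
class BridgeInspectionLog:
    records: list = field(default_factory=list)

    def add(self, record):
        self.records.append(record)

    def open_defects(self):
        # Records still awaiting maintenance, worst first -- the sort
        # an inspector checking progress would want to see.
        return sorted((r for r in self.records if not r.resolved),
                      key=lambda r: -r.severity)

log = BridgeInspectionLog()
log.add(InspectionRecord("elem-001", "spalling", 2, date(2020, 5, 1)))
log.add(InspectionRecord("elem-002", "crack", 4, date(2020, 5, 1)))
worst = log.open_defects()[0]
print(worst.defect)        # -> crack
```

    Keying records on the BIM element identifier is what lets the MR view overlay each defect on the corresponding 3D element.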

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in vivo by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
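    The vehicle-in-the-loop idea above — real dynamics from a motion-capture feed, synthetic exteroceptive sensing rendered per pose — can be sketched as a loop. All class and method names here are invented mocks, not the FlightGoggles API.

```python
import numpy as np

class MockMotionCapture:
    # Stand-in for the motion-capture feed; in the real system the
    # pose comes from a vehicle flying in the capture facility.
    def pose(self, t):
        # circular trajectory: (x, y, z, yaw)
        return np.array([np.cos(t), np.sin(t), 1.0, t])

class MockRenderer:
    # Stand-in for the photorealistic renderer: maps a pose to an
    # "image" (here just a tiny array keyed on the pose).
    def render(self, pose):
        img = np.zeros((4, 4))
        img[0, 0] = pose[:3].sum()
        return img

def vehicle_in_the_loop(steps=5, dt=0.1):
    # Core pattern: dynamics are measured, not modeled; only the
    # exteroceptive sensing is synthesized, once per pose update.
    mocap, renderer = MockMotionCapture(), MockRenderer()
    frames = []
    for k in range(steps):
        pose = mocap.pose(k * dt)             # in-vivo dynamics
        frames.append(renderer.render(pose))  # in-silico sensing
    return frames

frames = vehicle_in_the_loop()
print(len(frames))  # 5
```

    In the real framework the rendered frames would feed the vehicle's perception stack, closing the loop.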

    MeshPipe: a Python-based tool for easy automation and demonstration of geometry processing pipelines

    The popularization of inexpensive 3D scanning, 3D printing, 3D publishing and AR/VR display technologies has renewed interest in open-source tools providing the geometry processing algorithms required to clean, repair, enrich, optimize and modify point-based and polygon-based models. Nowadays, there is a large variety of such open-source tools, whose user community includes 3D experts but also 3D enthusiasts and professionals from other disciplines. In this paper we present a Python-based tool that addresses two major caveats of current solutions: the lack of easy-to-use methods for the creation of custom geometry processing pipelines (automation), and the lack of a suitable visual interface for quickly testing, comparing and sharing different pipelines, supporting rapid iterations and providing dynamic feedback to the user (demonstration). From the user's point of view, the tool is a 3D viewer with an integrated Python console from which internal or external Python code can be executed. We provide an easy-to-use but powerful API for element selection and geometry processing. Key algorithms are provided by a high-level C library exposed to the viewer via Python-C bindings. Unlike competing open-source alternatives, our tool has a minimal learning curve and typical pipelines can be written in a few lines of Python code.
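    The few-lines-of-Python pipeline style described above can be illustrated generically; the step functions below are invented for the example and are not MeshPipe's actual API.

```python
import numpy as np

def pipeline(*steps):
    # Compose geometry-processing steps into a single callable, in
    # the spirit of scripting a processing pipeline in a few lines.
    def run(points):
        for step in steps:
            points = step(points)
        return points
    return run

def drop_outliers(points, max_dist=2.0):
    # Remove points far from the (robust) median center -- a toy
    # stand-in for a real point-cloud cleaning step.
    d = np.linalg.norm(points - np.median(points, axis=0), axis=1)
    return points[d <= max_dist]

def recenter(points):
    # Translate the model so its centroid sits at the origin.
    return points - points.mean(axis=0)

clean = pipeline(drop_outliers, recenter)
pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [50, 50, 50]])
out = clean(pts)
print(out.shape)  # the outlier is dropped -> (3, 3)
```

    Composing plain functions this way is what makes such pipelines trivial to save, share and re-run from an embedded console.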