
    Toward automated earned value tracking using 3D imaging tools


    Appearance-based material classification after occlusion removal for operation-level construction progress monitoring

    Today, the availability of a large number of smart devices on construction sites has significantly increased the popularity of appearance-based methods for automated construction progress monitoring using site photographs. These methods, however, face a number of technical challenges that limit their applicability, including the low spatial resolution of images and the static and dynamic occlusions caused by construction progress and moving resources (equipment, workers, scaffolding, etc.). To address these limitations, this paper extends an existing model-driven, appearance-based material classification method for construction progress monitoring using 4D BIM and site photologs. Specifically, it introduces a robust occlusion removal algorithm that lowers false positives in material recognition. The method leverages depth information from the 4D BIM as well as the 3D point cloud created through Structure from Motion procedures. Once the occluded regions are removed, square patches are extracted from the back-projection of the BIM elements onto site images. These improved image patches are then used in the material recognition pipeline to create a vector-quantized histogram over all material classes. The material class with the highest frequency is chosen as the material type for the element, and this appearance information is used to infer the most up-to-date state of progress for the element. For validation, four existing incomplete and noisy point cloud models from real-world construction site images and their corresponding BIMs were used. An extended version of the Construction Material Library (CML), developed at the University of Illinois at Urbana-Champaign's Real-time and Automated Monitoring and Control (RAAMAC) lab, was used to train the material classifiers, and the experimental results show an average accuracy of 90.9%. The occlusion removal and subsequent classification for the four datasets resulted in an accuracy of 92.2%, compared to 89.9% for the existing method, demonstrating a definite improvement. By predicting the material present in an element, the status of that element can be identified as "in progress" or "completed" and compared with the schedule. Since static occlusions are detected, analyzed, and removed, this method has the potential to be effective for appearance-based progress monitoring and can result in higher-accuracy material classification.
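    As a rough illustration of the element-level voting step described in this abstract, the Python sketch below classifies unoccluded patches back-projected from a BIM element, accumulates a histogram over material classes, and compares the winning class with the material expected from the 4D BIM. The class list, the classify_patch callable, and the progress rule are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of element-level material voting. classify_patch() stands in
# for the trained CML patch classifier; the material labels and the expected
# material from the 4D BIM are illustrative assumptions.

MATERIALS = ["concrete", "formwork", "rebar", "brick", "other"]

def element_material(patches, classify_patch):
    """Vote over unoccluded patches back-projected from one BIM element."""
    histogram = np.zeros(len(MATERIALS), dtype=int)
    for patch in patches:
        label = classify_patch(patch)          # e.g. returns "concrete"
        histogram[MATERIALS.index(label)] += 1
    return MATERIALS[int(np.argmax(histogram))], histogram

def progress_state(observed_material, expected_material):
    """Infer a coarse progress state by comparing the observation with the 4D BIM."""
    return "completed" if observed_material == expected_material else "in progress"
```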

    Towards an automated photogrammetry-based approach for monitoring and controlling construction site activities

    The construction industry has a poor productivity record, which is predominantly ascribed to inadequate monitoring of how a project is progressing at any given time. Most available approaches do not offer key stakeholders a shared understanding of project performance in real time and, as a result, fail to identify slippage against the original schedule. This paper reports on the development of a novel automatic system for monitoring, updating, and controlling construction site activities in real time. The proposed system seeks to harness advances in close-range photogrammetry to deliver an original approach capable of continuous monitoring of construction activities, with progress status determined, at any given time, throughout the construction lifecycle. The approach has the potential to identify any deviation from the as-planned construction schedule, so prompt action can be taken through an automatic notification system that informs decision-makers via email and SMS. The system was rigorously tested in a real-life case study of an in-progress construction site. The findings revealed that the proposed system achieved a high level of accuracy and automation, and was relatively cheap and easy to operate.
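    The schedule-deviation check and notification step described above might look roughly like the following Python sketch. The linear as-planned progress model, the tolerance value, and the notify stub are assumptions made for illustration; the actual system derives as-built progress from close-range photogrammetry and sends email/SMS alerts.

```python
from datetime import date

# Hedged sketch of a schedule-deviation check with a stubbed notification.
# Activity data, thresholds, and notify() are illustrative assumptions.

def planned_percent(activity, today):
    """Linear as-planned progress between the scheduled start and finish dates."""
    total = (activity["finish"] - activity["start"]).days
    elapsed = (today - activity["start"]).days
    return max(0.0, min(1.0, elapsed / total)) if total > 0 else 1.0

def check_deviation(activity, as_built_percent, today, tolerance=0.05):
    slip = planned_percent(activity, today) - as_built_percent
    if slip > tolerance:
        notify(f"{activity['name']} is {slip:.0%} behind schedule")

def notify(message):
    # Placeholder for the email/SMS notification service.
    print("ALERT:", message)

check_deviation(
    {"name": "Slab pour, level 3", "start": date(2018, 3, 1), "finish": date(2018, 3, 20)},
    as_built_percent=0.40,
    today=date(2018, 3, 15),
)
```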

    3D Modeling of the Milreu Roman Heritage with UAVs

    In this paper we present a methodology to build a 3D model of a Roman heritage site in the south of Portugal, known as Milreu, covering a region of about one hectare. Today's Milreu ruins, a national heritage site, were once part of a 4th-century, luxurious villa-style manor house, which was subsequently converted into a thriving farm. Due to its relevance, it is important to make a 3D model of the Milreu ruins available for exploration on the Web and for virtual and augmented reality applications on mobile devices. This paper demonstrates the use of UAVs for the reconstruction of 3D models of the ruins from vertical and oblique aerial photographs. To enhance model quality and precision, terrestrial photographs were also incorporated into the workflow. The model is georeferenced, which makes it possible to automatically determine accurate measurements of the Roman structures.
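    As a small illustration of how a georeferenced model enables direct measurement, the sketch below converts two WGS84 coordinates into a metric projected system (ETRS89 / Portugal TM06, EPSG:3763) with pyproj and computes their planar distance. The coordinates are made-up sample values, not points from the Milreu model, and the paper's own measurement workflow is not reproduced here.

```python
from math import dist
from pyproj import Transformer

# Hedged sketch: measure between two georeferenced points by projecting
# WGS84 (lon, lat) into a metric CRS for mainland Portugal (EPSG:3763).

to_metric = Transformer.from_crs("EPSG:4326", "EPSG:3763", always_xy=True)

def ground_distance_m(lonlat_a, lonlat_b):
    """Planar distance in metres between two (lon, lat) points."""
    return dist(to_metric.transform(*lonlat_a), to_metric.transform(*lonlat_b))

# e.g. two nearby sample points (illustrative coordinates near the site)
print(ground_distance_m((-7.896, 37.093), (-7.8963, 37.0932)))
```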

    Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns

    We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials that enhances the understanding people and ubiquitous technologies have of their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures. A deep neural network classifies these textures into material types. This approach works effectively without the need for ambient light sources or direct contact with materials. Furthermore, the use of a deep learning network removes the need to handcraft the set of features for different materials. We evaluated the performance of the system by training it to recognise 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% on 14,860 images of 15 indoor materials and above 89% on 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions. Comment: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.
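    A minimal sketch of the kind of network such an approach could use is shown below: a small PyTorch CNN that maps single-channel thermal texture patches to material-class scores. The architecture, patch resolution, and class count are illustrative assumptions; the paper's actual model is not reproduced here.

```python
import torch
import torch.nn as nn

# Hedged sketch of a small CNN for thermal texture patches. Layer sizes,
# the 64x64 patch size, and the 32-class output are illustrative assumptions.

class ThermalTextureNet(nn.Module):
    def __init__(self, num_classes=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):                     # x: (batch, 1, 64, 64) thermal patches
        return self.classifier(self.features(x).flatten(1))

logits = ThermalTextureNet()(torch.randn(8, 1, 64, 64))   # -> (8, 32) class scores
```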

    Towards vision based robots for monitoring built environments

    In construction, projects are typically behind schedule and over budget, largely due to the difficulty of progress monitoring. Once a structure (e.g. a bridge) is built, inspection becomes an important yet dangerous and costly job. We can provide a solution to both problems if we can simplify or automate visual data collection, monitoring, and analysis. In this work, we focus specifically on improving autonomous image collection, building 3D models from the images, and recognizing materials for progress monitoring using the images and 3D models. Image capture can be done manually, but the process is tedious and better suited for autonomous robots. Robots follow a set trajectory to collect data from a site, but it is unclear whether 3D reconstruction will succeed using the images captured along this trajectory. We introduce a simulator that synthesizes feature tracks for 3D reconstruction to predict whether images collected along a planned path will result in a successful 3D reconstruction. This can save time, money, and frustration because robot paths can be altered prior to the real image capture. When executing a planned trajectory, the robot needs to understand and navigate the environment autonomously. Robot navigation algorithms struggle in environments with few distinct features. We introduce a new fiducial marker that can be added to these scenes to increase the number of distinct features and a new detection algorithm that detects the marker with negligible computational overhead. Adding markers prior to data collection does not guarantee that the algorithms for 3D model generation will succeed. In fact, out of the box, these algorithms do not take advantage of the unique characteristics of markers. Thus, we introduce an improved structure from motion approach that takes advantage of marker detections when they are present. We also create a dataset of challenging indoor image collections with markers placed throughout and show that previous methods often fail to produce accurate 3D models. However, our approach produces complete, accurate 3D models for all of these new image collections. Recognizing materials on construction sites is useful for monitoring usage and tracking construction progress. However, it is difficult to recognize materials in real-world scenes because shape and appearance vary considerably. Our solution is to introduce the first dataset of material patches that includes both image data and 3D geometry. We then show that both independent and joint modeling of geometry are useful alongside image features to improve material recognition. Lastly, we use our material recognition with material priors from building plans to accurately identify progress on construction sites.
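    The path-evaluation idea in this abstract, predicting whether a planned trajectory will yield a usable reconstruction, can be sketched as projecting candidate scene points into each planned camera pose and counting how many would be observed in enough views to triangulate. The pinhole model, pose format, and thresholds below are assumptions for illustration, not the thesis' actual simulator.

```python
import numpy as np

# Hedged sketch: score a planned camera path by how many candidate scene
# points would be visible in at least `min_views` of the planned images.
# Poses are assumed to be 3x4 world-to-camera matrices; K is the intrinsics.

def visible(points_w, pose, K, width, height):
    """Boolean mask of world points projecting inside the image, in front of the camera."""
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])   # homogeneous coords
    cam = (pose @ pts_h.T).T                                     # world -> camera frame
    in_front = cam[:, 2] > 0.1
    pix = (K @ cam.T).T
    pix = pix[:, :2] / np.clip(pix[:, 2:3], 1e-9, None)          # perspective divide
    in_image = (pix[:, 0] >= 0) & (pix[:, 0] < width) & (pix[:, 1] >= 0) & (pix[:, 1] < height)
    return in_front & in_image

def track_coverage(points_w, poses, K, width=1920, height=1080, min_views=3):
    """Fraction of points observed in at least `min_views` planned images."""
    counts = sum(visible(points_w, pose, K, width, height).astype(int) for pose in poses)
    return float(np.mean(counts >= min_views))
```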