Computer vision-based structural assessment exploiting large volumes of images
Visual assessment is the process of understanding the state of a structure based on evaluations originating from visual information. Recent advances in computer vision exploring new sensors, sensing platforms, and high-performance computing have shed light on the potential of vision-based visual assessment for civil engineering structures. The use of low-cost, high-resolution visual sensors in conjunction with mobile and aerial platforms can overcome the spatial and temporal limitations typically associated with other forms of sensing in civil structures. In addition, GPU-accelerated and parallel computing offer unprecedented speed and performance, accelerating the processing of the collected visual data. However, despite the enormous efforts of past research to implement such technologies, many practical challenges remain before these techniques can be applied successfully in real-world situations. A major challenge lies in dealing with a large volume of unordered and complex visual data collected under uncontrolled circumstances (e.g., lighting, cluttered regions, and varying environmental conditions), of which only a tiny fraction is useful for conducting the actual assessment. This difficulty induces an undesirably high rate of false-positive and false-negative errors, reducing the trustworthiness and efficiency of such implementations. To overcome the inherent challenges in using such images for visual assessment, high-level computer vision algorithms must be integrated with relevant prior knowledge and guidance, aiming for performance similar to that of humans conducting visual assessment. Moreover, the techniques must be developed and validated in the realistic context of a large volume of real-world images, which is likely to contain numerous practical challenges. In this dissertation, the novel use of computer vision algorithms is explored to address two promising applications of vision-based visual assessment in civil engineering: visual inspection, and visual data analysis for post-disaster evaluation. For both applications, powerful techniques are developed to enable reliable and efficient visual assessment of civil structures, and they are demonstrated using a large volume of real-world images collected from actual structures. State-of-the-art computer vision techniques, such as structure-from-motion and convolutional neural networks, facilitate these tasks. The core techniques derived from this study are scalable and expandable to many other applications in vision-based visual assessment, and will serve to close the existing gaps between past research efforts and real-world implementations.
Image Scale Estimation Using Surface Textures for Quantitative Visual Inspection
In this study, a learning-based scale estimation technique is proposed to enable quantitative evaluation of inspection regions. The underlying idea is that the surface texture of structures (e.g., bridges or buildings) captured in images contains the scale information of the corresponding images, represented in pixels per physical dimension (e.g., mm or inch). This allows training a regression model that provides a relationship between surface textures in images and their corresponding scales. A deep convolutional neural network is used to extract scale-related features from the texture patches and estimate their scales. The trained model can then be exploited to estimate scales for all images captured from structural surfaces with similar textures. The capability of the proposed technique is fully demonstrated using data collected from the surface textures of three different structures, achieving an overall average scale estimation error of less than 15%.
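As a rough illustration of the regression idea described above, the following minimal sketch assumes PyTorch; the architecture, the ScaleRegressor name, and all hyperparameters are placeholders rather than the model actually used in the study.

# Minimal sketch of a texture-to-scale regression model (assumes PyTorch).
# All names and hyperparameters here are illustrative, not from the paper.
import torch
import torch.nn as nn

class ScaleRegressor(nn.Module):
    """CNN that maps a surface-texture patch to a scale value (pixels per mm)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)  # regress a single scale value

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f)

model = ScaleRegressor()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of 224x224 texture patches.
patches = torch.randn(8, 3, 224, 224)   # stand-in for real texture crops
scales = torch.rand(8, 1) * 10          # ground-truth pixels-per-mm labels
loss = criterion(model(patches), scales)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Regressing a single scalar per patch keeps such a model applicable to arbitrary crops of new images taken from similar surfaces.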
Automated Image Classification for Post-Earthquake Reconnaissance Images
In the aftermath of earthquake events, many reconnaissance teams are dispatched to collect as much data as possible, moving quickly to capture the damage and failures of our built environment before they are repaired. Unfortunately, only a tiny portion of these images is shared, curated, and utilized. There is a pressing need for a viable visual data organizing or categorizing tool that requires minimal manual effort. In this study, we aim to build a system to automate the classification and analysis of large volumes of post-disaster visual data. Our system, called the Automated Reconnaissance Image Organizer (ARIO), is a web-based tool that automatically categorizes reconnaissance images using a deep convolutional neural network and generates a summary report combined with useful metadata. Automated classifiers trained on our ground-truth visual database classify images into various categories that are useful for readily analyzing and documenting reconnaissance images from post-disaster buildings in the field.
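A minimal sketch of the classification step, assuming PyTorch/torchvision and a fine-tuned ImageNet backbone; the category list and the classify helper are hypothetical and not ARIO's actual interface.

# Illustrative sketch of deep CNN image categorization (assumes torchvision).
# Category names and model choice are placeholders, not ARIO's actual ones.
import torch
import torch.nn as nn
from torchvision import models, transforms

CATEGORIES = ["building_overview", "column_damage", "wall_crack", "irrelevant"]  # hypothetical

# Start from an ImageNet-pretrained backbone and replace the classifier head,
# which would then be fine-tuned on a ground-truth reconnaissance database.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CATEGORIES))

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(image):
    """Return the predicted category for a PIL image."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return CATEGORIES[int(logits.argmax(dim=1))]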
Real-time Quantitative Visual Inspection using Extended Reality
In this study, we propose a technique for quantitative visual inspection that can quantify structural damage using extended reality (XR). The XR headset can display and overlay graphical information on the physical space and process data from its built-in camera and depth sensor. The device also permits accessing and analyzing image and video streams in real time, and utilizing 3D meshes of the environment together with camera pose information. By leveraging these features of the XR headset, we build a workflow and graphical interface to capture images, segment damage regions, and evaluate the physical size of the damage. A deep learning-based interactive segmentation algorithm called f-BRS is deployed to precisely segment damage regions through the XR headset. A ray-casting algorithm is implemented to obtain the 3D locations corresponding to the pixel locations of the damage region in the image. The size of the damage region is then computed from the 3D locations of its boundary. The performance of the proposed method is demonstrated through a field experiment at an in-service bridge with spalling damage at its abutment. The experiment shows that the proposed method provides sub-centimeter accuracy in size estimation.
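The pipeline in the abstract (segment a damage region, obtain 3D points for its boundary pixels, and measure physical size) can be sketched as below. Note this simplified version back-projects pixels through the camera intrinsics using per-pixel depth rather than casting rays against the environment mesh, and all names and numbers are illustrative.

# Minimal sketch of the size-estimation step, assuming per-pixel depth and
# camera intrinsics are available (as an XR headset can provide).
import numpy as np

def backproject(pixels, depths, fx, fy, cx, cy):
    """Cast rays through pixel coordinates and return 3D points (camera frame)."""
    u, v = pixels[:, 0], pixels[:, 1]
    x = (u - cx) / fx * depths
    y = (v - cy) / fy * depths
    return np.stack([x, y, depths], axis=1)

def damage_extent(boundary_pixels, depths, fx, fy, cx, cy):
    """Approximate physical size as the largest distance between boundary points."""
    pts = backproject(boundary_pixels, depths, fx, fy, cx, cy)
    diffs = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diffs, axis=-1).max()

# Dummy example: a segmented spall boundary of 4 pixels with measured depths (m).
boundary = np.array([[320, 200], [380, 205], [375, 260], [318, 255]], float)
depth_m = np.array([1.52, 1.50, 1.49, 1.51])
size = damage_extent(boundary, depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(f"Estimated damage extent: {size * 100:.1f} cm")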
Image-Based Collection and Measurements for Construction Pay Items
Prior to each payment to contractors and suppliers, measurements are made to document the actual amount of pay items placed at the site. This manual process poses substantial risk to personnel, and could be made more efficient and less prone to human error. In this project, a customized software tool package has been developed to address these concerns. The Pay Item Measurement (PIM) tool package comprises two complementary tools, the Orthophoto Generation Tool and the Graphical Measurement Tool. PIM has been developed in close cooperation with the advisory committee and field engineers from INDOT, and is specifically designed to incorporate the typical actions that INDOT personnel follow. PIM is intended to generate orthophotos for measurements on a planar surface. User guidelines explain how to collect suitable high-quality images, a critical step for successful orthophoto construction. INDOT will also use PIM to identify features, make annotations, and readily compute distances, perimeters, and areas for documentation and record keeping. This customized tool package will be most useful, and most accurate, when the user guidelines are followed. Several examples are included to demonstrate the characteristics of high-quality image sets that will succeed, along with examples of sets that would fail. Step-by-step instructions demonstrate the correct use of the tool. An instructional video and sample digital image sets complement this report and tool package.
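To illustrate how measurements follow from an orthophoto once its scale is known, here is a minimal sketch assuming a known ground sample distance (meters per pixel); it uses the shoelace formula and is not PIM's actual code.

# Illustrative measurement of a pay-item region annotated on an orthophoto.
# The ground sample distance (gsd_m_per_px) and coordinates are assumptions.
import math

def polygon_measurements(vertices_px, gsd_m_per_px):
    """Perimeter (m) and area (m^2) of a polygon given in pixel coordinates."""
    n = len(vertices_px)
    perimeter = 0.0
    shoelace = 0.0
    for i in range(n):
        x1, y1 = vertices_px[i]
        x2, y2 = vertices_px[(i + 1) % n]
        perimeter += math.hypot(x2 - x1, y2 - y1)
        shoelace += x1 * y2 - x2 * y1
    area_px = abs(shoelace) / 2.0
    return perimeter * gsd_m_per_px, area_px * gsd_m_per_px ** 2

# Example: a rectangular region traced on an orthophoto with 5 mm per pixel.
region = [(100, 100), (500, 100), (500, 300), (100, 300)]
perim, area = polygon_measurements(region, gsd_m_per_px=0.005)
print(f"perimeter = {perim:.2f} m, area = {area:.2f} m^2")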
Structural Health Monitoring
In structural health monitoring, crack identification using ultrasonic waves scattered from a crack is one of the most active research areas. Crack size estimation is important for judging the severity of the damage. If measurements are performed frequently as the crack grows, a better estimate of crack size may be possible by analyzing sensor signals for the same crack location at different sizes. The objective of this article is to explore the relationship between sensor signal amplitude and crack size, through experiments and simulation, for the purpose of estimating the size. Cracks are machined into an aluminum plate, and measurements are carried out under ultrasonic excitation using piezoelectric transducer arrays that alternate their roles as actuators and sensors. Initially, a hole of 2.5 mm diameter is drilled in the plate, and it is gradually machined into a crack with a size of up to 50 mm. Signal amplitude is measured from the sensor arrays. The migration technique is used to image the crack and find its location. Simulation shows that the maximum received signal amplitude varies linearly with crack size, and this agrees with measurements for crack sizes up to 30 mm. The deviation between simulation and experiment increases as the crack grows.
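As a toy illustration of exploiting the reported near-linear amplitude-size relationship, the sketch below fits a line to invented calibration pairs and inverts it to estimate crack size; all data values are placeholders, not measurements from the article.

# Toy sketch: linear fit of amplitude vs. crack size, then inversion.
import numpy as np

# Hypothetical calibration pairs: crack size (mm) vs. max received amplitude.
sizes_mm = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
amplitudes = np.array([0.11, 0.21, 0.30, 0.42, 0.50, 0.61])

# Least-squares linear fit: amplitude = slope * size + intercept.
slope, intercept = np.polyfit(sizes_mm, amplitudes, deg=1)

def estimate_size(amplitude):
    """Invert the fitted line; valid only in the near-linear regime (< ~30 mm)."""
    return (amplitude - intercept) / slope

print(f"Estimated crack size: {estimate_size(0.36):.1f} mm")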