
    A Systematic Review of Convolutional Neural Network-Based Structural Condition Assessment Techniques

    With recent advances in non-contact sensing technology such as cameras and unmanned aerial and ground vehicles, the structural health monitoring (SHM) community has witnessed prominent growth in deep learning-based condition assessment techniques for structural systems. These methods rely primarily on convolutional neural networks (CNNs), which are trained on large datasets for various types of damage and anomaly detection and for post-disaster reconnaissance. The trained networks are then used to analyze new data to detect the type and severity of damage, enhancing the capabilities of non-contact sensors in developing autonomous SHM systems. In recent years, researchers have developed a broad range of CNN architectures to accommodate varying lighting and weather conditions, image quality, background and foreground noise, and multiclass damage in structures. This paper presents a detailed literature review of existing CNN-based techniques in the context of infrastructure monitoring and maintenance. The review is organized into multiple classes depending on the specific application and the development of CNNs applied to data obtained from a wide range of structures. The challenges and limitations of the existing literature are discussed in detail at the end, followed by a brief conclusion on potential future research directions for CNNs in structural condition assessment.
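    To make the core operation behind these techniques concrete, the sketch below (not from the review; a minimal illustration with hypothetical values) shows the convolution-plus-ReLU step that underlies the surveyed CNN variants, applied to a toy image containing a crack-like vertical intensity edge:

    ```python
    import numpy as np

    def conv2d(image, kernel):
        """Single-channel 'valid' 2-D cross-correlation, the core CNN operation."""
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    # Toy image with a sharp vertical intensity step (a crude stand-in for a crack edge).
    image = np.zeros((5, 5))
    image[:, 3:] = 1.0
    # A dark-to-bright vertical-edge kernel; ReLU keeps only positive responses.
    kernel = np.array([[-1., 0., 1.],
                       [-1., 0., 1.],
                       [-1., 0., 1.]])
    response = np.maximum(conv2d(image, kernel), 0.0)  # strong values mark the edge
    ```

    In a trained CNN the kernel weights are learned rather than hand-set, and many such filters are stacked in layers, but each filter computes exactly this sliding dot product.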

    Intelligent Feature Extraction, Data Fusion and Detection of Concrete Bridge Cracks: Current Development and Challenges

    As a common appearance defect of concrete bridges, cracks are important indices for bridge structural health assessment. Although there has been much research on crack identification, research on the evolution mechanism of bridge cracks is still far from practical application. In this paper, the state-of-the-art research on intelligent theories and methodologies for feature extraction, data fusion and crack detection based on data-driven approaches is comprehensively reviewed. The research is discussed from three aspects: the feature extraction level of the multimodal parameters of bridge cracks, the description level, and the diagnosis level of bridge crack damage states. We focus on previous research concerning the quantitative characterization of multimodal parameters of bridge cracks and their implementation in crack identification, while highlighting major drawbacks. In addition, current challenges and potential future research directions are discussed. (Published in Intelligence & Robotics; copyright belongs to the authors.)
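    The quantitative characterization the review discusses can be illustrated with a toy calculation. The function below is a hypothetical sketch, not the paper's method: it estimates length and mean width of a roughly vertical crack from a binary segmentation mask, given an assumed millimeters-per-pixel scale.

    ```python
    import numpy as np

    def crack_metrics(mask, mm_per_px):
        """Crude length/width estimates for a roughly vertical crack
        from a binary mask (1 = crack pixel)."""
        rows = mask.sum(axis=1)                 # crack pixels in each row
        length_px = np.count_nonzero(rows)      # number of rows the crack spans
        width_px = rows[rows > 0].mean()        # mean crack thickness per row
        return length_px * mm_per_px, width_px * mm_per_px

    mask = np.zeros((6, 6), dtype=int)
    mask[1:5, 2:4] = 1                          # a 4-px-long, 2-px-wide crack
    length_mm, width_mm = crack_metrics(mask, mm_per_px=0.5)
    ```

    Real pipelines skeletonize the mask and measure width perpendicular to the crack axis, but the pixel-to-physical-unit conversion step is the same idea.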

    Human-Robot Collaboration for Effective Bridge Inspection in the Artificial Intelligence Era

    Advancements in sensor, Artificial Intelligence (AI), and robotic technologies have formed a foundation for a transformation from traditional engineering systems to complex adaptive systems. This paradigm shift will bring exciting changes to civil infrastructure systems and their builders, operators, and managers. Funded by the INSPIRE University Transportation Center (UTC), Dr. Qin's group investigated the holism of an AI-robot-inspector system for bridge inspection. Dr. Qin will discuss the need for close collaboration among the constituent components of the AI-robot-inspector system. In drone-based bridge inspection, the mobile robotic inspection platform rapidly collects large volumes of inspection video data that must be processed prior to element-level inspections. She will illustrate how human intelligence and artificial intelligence can collaborate to create an AI model both efficiently and effectively. Obtaining a large amount of expert-annotated data for model training is undesirable, if not unrealistic, in bridge inspection. This INSPIRE project addressed the annotation challenge by developing a semi-supervised self-learning (S3T) algorithm that uses a small amount of time and guidance from inspectors to help the model achieve excellent performance. The project evaluated the improvement in job efficacy produced by the developed AI model. The presentation will conclude by introducing ongoing work to achieve the desired adaptability of AI models to new or revised bridge inspection tasks, as the National Bridge Inventory includes over 600,000 bridges of various materials, shapes, and ages.
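    The S3T algorithm itself is not detailed in this abstract. The sketch below illustrates the generic pseudo-labeling idea behind semi-supervised self-training, using a deliberately simple 1-D nearest-centroid model; the function name, confidence measure, and thresholds are illustrative assumptions, not the project's implementation:

    ```python
    import numpy as np

    def self_train(x_lab, y_lab, x_unl, rounds=3, conf=0.8):
        """Pseudo-labeling loop: a nearest-centroid model labels its own
        confident predictions on unlabeled data, then retrains on the union."""
        for _ in range(rounds):
            c0 = x_lab[y_lab == 0].mean()           # centroid of class 0
            c1 = x_lab[y_lab == 1].mean()           # centroid of class 1
            if len(x_unl) == 0:
                break
            d0, d1 = np.abs(x_unl - c0), np.abs(x_unl - c1)
            p1 = d0 / (d0 + d1 + 1e-12)             # crude confidence for class 1
            sure = (p1 > conf) | (p1 < 1 - conf)    # keep only confident pseudo-labels
            x_lab = np.concatenate([x_lab, x_unl[sure]])
            y_lab = np.concatenate([y_lab, (p1[sure] > 0.5).astype(int)])
            x_unl = x_unl[~sure]
        return c0, c1

    x_lab = np.array([0.0, 1.0, 9.0, 10.0])         # small expert-labeled set
    y_lab = np.array([0, 0, 1, 1])
    x_unl = np.array([0.5, 9.5, 5.2])               # 5.2 stays unlabeled: too ambiguous
    c0, c1 = self_train(x_lab, y_lab, x_unl)
    ```

    The key design choice is the confidence gate: ambiguous samples are left for a human inspector rather than risking self-reinforcing label errors.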

    The Use of a Convolutional Neural Network in Detecting Soldering Faults from a Printed Circuit Board Assembly

    Automatic Optical Inspection (AOI) is any method of detecting defects during the Printed Circuit Board (PCB) manufacturing process. Early AOI methods were based on classic image-processing algorithms using a reference PCB, and these traditional methods require very complex and inflexible preprocessing stages. With recent advances in deep learning, especially Convolutional Neural Networks (CNNs), many computer-vision tasks can now be automated. Limited research has been carried out on using CNNs for AOI, and the present systems are inflexible, requiring many preprocessing steps or a complex illumination system to improve accuracy. This paper studies the effectiveness of using a CNN to detect soldering bridge faults in a PCB assembly and presents a method for designing an optimized CNN architecture for this task. The proposed CNN architecture is compared with a state-of-the-art object detection architecture, YOLO, with respect to detection accuracy, processing time, and memory requirements. Our experiments show that the proposed CNN architecture achieves 3.0% better average precision, has 50% fewer parameters, and infers in half the time of YOLO. The experimental results demonstrate the effectiveness of using a CNN for AOI on images of a PCB assembly without any reference image, complex preprocessing stage, or complex illumination system. Doi: 10.28991/HIJ-2022-03-01-01
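    Average precision, the headline comparison metric above, can be computed from confidence-ranked detections as follows. This is a generic textbook formulation, not the paper's evaluation code:

    ```python
    def average_precision(scores, labels):
        """AP as the mean of precision values at each true-positive rank,
        with detections sorted by descending confidence score."""
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        tp, ap = 0, 0.0
        for rank, i in enumerate(order, start=1):
            if labels[i]:
                tp += 1
                ap += tp / rank          # precision at this recall step
        return ap / sum(labels)

    # Four detections: two true faults (1) and two false alarms (0).
    ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])
    ```

    Object-detection benchmarks additionally require an IoU threshold to decide which detections count as true positives, but the ranking-based precision averaging is the same.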

    An Approach to Semantically Segmenting Building Components and Outdoor Scenes Based on Multichannel Aerial Imagery Datasets

    As-is building modeling plays an important role in energy audits and retrofits. However, to understand the source(s) of energy loss, researchers must know the semantic information of the buildings and outdoor scenes. Thermal information can potentially be used to distinguish objects that have similar surface colors but are composed of different materials. To utilize both the red-green-blue (RGB) color model and thermal information for the semantic segmentation of buildings and outdoor scenes, we deployed and adapted various pioneering deep convolutional neural network (DCNN) tools that combine RGB information with thermal information to improve the semantic and instance segmentation processes. When both types of information are available, the resulting DCNN models achieve better segmentation performance. In three case studies, we experimented with our proposed DCNN framework on datasets of building components and outdoor scenes and tested whether the models' segmentation performance improved. In our observation, the fusion of RGB and thermal information helps the segmentation task in specific cases, but it can also make the neural networks harder to train or deteriorate their prediction performance in other cases. Additionally, different algorithms perform differently in semantic and instance segmentation.
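    One common way to combine the two modalities is early fusion: the thermal map is normalized and appended to the RGB image as a fourth input channel. The sketch below is an illustrative assumption about the input pipeline, not the paper's actual framework:

    ```python
    import numpy as np

    def fuse_rgb_thermal(rgb, thermal):
        """Early fusion: append a min-max-normalized thermal map as a 4th channel."""
        t = (thermal - thermal.min()) / (thermal.max() - thermal.min() + 1e-8)
        return np.concatenate([rgb, t[..., None]], axis=-1)

    rng = np.random.default_rng(0)
    rgb = rng.random((8, 8, 3))                 # H x W x 3, values in [0, 1]
    thermal = rng.random((8, 8)) * 40.0 + 10.0  # e.g. surface temperature in deg C
    x = fuse_rgb_thermal(rgb, thermal)          # H x W x 4 network input
    ```

    A network consuming this input only needs its first convolution layer widened from 3 to 4 input channels; normalization matters because raw temperatures are on a very different scale from RGB values, which is one plausible reason fusion can also make training harder.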

    LMBiS-Net: A Lightweight Multipath Bidirectional Skip Connection based CNN for Retinal Blood Vessel Segmentation

    Blinding eye diseases are often correlated with altered retinal morphology, which can be clinically identified by segmenting retinal structures in fundus images. However, current methodologies often fall short in accurately segmenting delicate vessels. Although deep learning has shown promise in medical image segmentation, its reliance on repeated convolution and pooling operations can hinder the representation of edge information, ultimately limiting overall segmentation accuracy. In this paper, we propose a lightweight pixel-level CNN named LMBiS-Net for the segmentation of retinal vessels with an exceptionally low number of learnable parameters (only 0.172 M). The network uses multipath feature extraction blocks and incorporates bidirectional skip connections for information flow between the encoder and decoder. Additionally, we optimized the efficiency of the model by carefully selecting the number of filters to avoid filter overlap, which significantly reduces training time and enhances computational efficiency. To assess the robustness and generalizability of LMBiS-Net, we performed comprehensive evaluations on various aspects of retinal images. Specifically, the model was rigorously tested on accurately segmenting retinal vessels, which play a vital role in ophthalmological diagnosis and treatment. By focusing on the retinal blood vessels, we were able to thoroughly analyze the performance and effectiveness of the LMBiS-Net model. Our results demonstrate that LMBiS-Net is not only robust and generalizable but also maintains high segmentation accuracy. These characteristics highlight the potential of LMBiS-Net as an efficient tool for high-speed and accurate segmentation of retinal images in various clinical applications.
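    Parameter budgets such as the 0.172 M figure are sums of per-layer counts. The helper below shows the standard count for one 2-D convolution layer; the example layer sizes are hypothetical, not LMBiS-Net's actual configuration:

    ```python
    def conv_params(c_in, c_out, k, bias=True):
        """Learnable parameters of one 2-D convolution layer:
        k * k * c_in * c_out weights, plus c_out biases."""
        return k * k * c_in * c_out + (c_out if bias else 0)

    # Hypothetical layer sizes, purely to show how small budgets stay small.
    first = conv_params(3, 16, 3)       # 3x3 conv, RGB in, 16 filters out
    deeper = conv_params(16, 16, 3)     # 3x3 conv, 16 -> 16 channels
    ```

    Because the count grows with c_in * c_out, keeping filter counts narrow (as the abstract describes) is the dominant lever for a sub-megabyte model.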