3,923 research outputs found

    Deep CNN-Based Automated Optical Inspection for Aerospace Components

    Get PDF
    ABSTRACT The defect detection problem is of utmost importance in high-tech industries such as aerospace manufacturing and is widely addressed using automated industrial quality control systems. In the aerospace manufacturing industry, composite materials are extensively applied as structural components in civilian and military aircraft. To ensure product quality and high reliability, manual inspection and traditional automatic optical inspection have been employed to identify defects throughout production and maintenance. These inspection techniques have several limitations: they are tedious, time-consuming, inconsistent, subjective, labor-intensive, and expensive. To make the operation effective and efficient, modern automated optical inspection is preferred. In this dissertation work, automatic defect detection techniques are tested on three levels using a novel aerospace composite materials image dataset (ACMID). First, classical machine learning models, namely Support Vector Machine and Random Forest, are employed on both datasets. Second, deep CNN-based models, such as improved ResNet50 and MobileNetV2 architectures, are trained on the ACMID datasets. Third, an efficient defect detection technique that combines the features of deep learning and a classical machine learning model is proposed for the ACMID dataset. To assess the aerospace composite components, all the models are trained and tested on ACMID datasets of distinct sizes. In addition, this work investigates the scenario in which defective and non-defective samples are scarce and imbalanced. To overcome the problems of imbalanced and scarce datasets, oversampling techniques and data augmentation using an improved deep convolutional generative adversarial network (DCGAN) are considered. Furthermore, the proposed models are also validated on a benchmark steel surface defects (SSD) dataset.
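
    The hybrid technique described above (deep CNN features feeding a classical classifier) can be illustrated with a short sketch. Everything below is an assumption for illustration: the MobileNetV2 backbone used as a frozen feature extractor, the RBF-kernel SVM, and the placeholder ACMID file lists are not taken from the dissertation.

```python
# Hedged sketch: deep features from a pretrained CNN fed into a classical
# classifier. Backbone, classifier, and data paths are illustrative only.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

# Frozen MobileNetV2 used purely as a 1280-dimensional feature extractor.
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()   # drop the ImageNet head
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_paths):
    """Return an (N, 1280) array of MobileNetV2 features for a list of images."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(x).squeeze(0).numpy())
    return np.stack(feats)

# train_paths / train_labels stand in for a (hypothetical) ACMID split:
# X_train = extract_features(train_paths)
# clf = SVC(kernel="rbf", class_weight="balanced").fit(X_train, train_labels)
# y_pred = clf.predict(extract_features(test_paths))
```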

    Enhanced Concrete Bridge Assessment Using Artificial Intelligence and Mixed Reality

    Get PDF
    Conventional methods for visual assessment of civil infrastructure have certain limitations, such as the subjectivity of the collected data, long inspection times, and high labor cost. Although some new technologies (i.e., robotic techniques) currently in practice can collect objective, quantified data, the inspector's own expertise is still critical in many instances, since these technologies are not designed to work interactively with a human inspector. This study aims to create a smart, human-centered method that offers significant contributions to infrastructure inspection, maintenance, management practice, and safety for bridge owners. By developing a smart Mixed Reality (MR) framework, which can be integrated into a wearable holographic headset device, a bridge inspector, for example, can automatically analyze a defect such as a crack seen on an element and display its dimension information in real time along with the condition state. Such systems can potentially decrease the time and cost of infrastructure inspections by accelerating essential tasks of the inspector, such as defect measurement, condition assessment, and data processing for management systems. The human-centered artificial intelligence (AI) will help the inspector collect more quantified and objective data while incorporating the inspector's professional judgment. This study explains in detail the described system and the related methodologies for implementing attention-guided semi-supervised deep learning into mixed reality technology, which interacts with the human inspector during assessment. Thereby, the inspector and the AI collaborate and communicate for improved visual inspection.
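
    One step the abstract describes, turning a detected crack into dimension information for real-time display, can be sketched as follows. The binary mask input, the pixel-to-millimetre scale, and the crude bounding-box measurement are illustrative assumptions, not the study's actual attention-guided method.

```python
# Toy sketch: estimate crack dimensions from a binary segmentation mask.
# The scale and the simple extent-based measurement are assumptions.
import numpy as np

def crack_dimensions(mask: np.ndarray, mm_per_pixel: float):
    """Estimate rough crack extent, width, and area from a binary mask (1 = crack)."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # no crack pixels detected
    length_mm = max(xs.max() - xs.min(), ys.max() - ys.min()) * mm_per_pixel
    area_mm2 = mask.sum() * mm_per_pixel ** 2
    # A crude average width: area divided by the dominant extent.
    width_mm = area_mm2 / max(length_mm, 1e-6)
    return {"length_mm": float(length_mm),
            "width_mm": float(width_mm),
            "area_mm2": float(area_mm2)}

# Example with a synthetic 3-pixel-wide horizontal crack at 0.5 mm/pixel.
demo_mask = np.zeros((100, 100), dtype=np.uint8)
demo_mask[40:43, 10:90] = 1
print(crack_dimensions(demo_mask, mm_per_pixel=0.5))
```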

    Potential applications of deep learning in automatic rock joint trace mapping in a rock mass

    Get PDF
    In blasted rock slopes and underground openings, rock joints are visible in different forms. Rock joints are often exposed as planes confining rock blocks and visible as traces on a well-blasted, smooth rock mass surface. A realistic rock joint model should include both visual forms of joints in a rock mass, i.e., both joint traces and joint planes. Image-based 2D semantic segmentation using deep learning via Convolutional Neural Networks (CNNs) has shown promising results in extracting joint traces in a rock mass. In 3D analysis, research studies using deep learning have demonstrated outperforming results in automatically extracting joint planes from an unstructured 3D point cloud compared to state-of-the-art methods. In this paper, we discuss a pilot study using 3D true-colour point clouds and their source and derived 2D images. In the study, we aim to implement and compare various CNN-based networks found in the literature for the automatic extraction of joint traces from laser scanning and photogrammetry data. Extracted joint traces can then be clustered and connected to potential joint planes as joint objects in a discrete joint model. This can contribute to a more accurate estimation of rock joint persistence. The goal of the study is to compare the efficiency and accuracy of using 2D images versus 3D point clouds as input data. Data are collected from two infrastructure projects with blasted rock slopes and tunnels in Norway.
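
    A minimal sketch of the 2D route mentioned above, per-pixel segmentation of joint traces with a CNN, is given below. The FCN-ResNet50 architecture, the two-class setup, and the argmax decision are assumptions chosen for illustration; the study compares several CNN-based networks rather than prescribing this one.

```python
# Hedged sketch: a segmentation CNN labelling each pixel of a rock-face image
# as "joint trace" or "background". Architecture and classes are assumptions.
import torch
import torchvision.transforms as T
from torchvision.models.segmentation import fcn_resnet50

# Two output classes: 0 = background, 1 = joint trace. Untrained weights here.
model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=2)
model.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def predict_trace_mask(image):
    """Return a per-pixel 0/1 mask of predicted joint traces for a PIL image."""
    x = preprocess(image).unsqueeze(0)          # (1, 3, H, W)
    logits = model(x)["out"]                    # (1, 2, H, W)
    return logits.argmax(dim=1).squeeze(0)      # (H, W) class indices

# The predicted mask could then be skeletonised and clustered into candidate
# joint planes, as outlined in the abstract.
```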

    A Training Framework of Robotic Operation and Image Analysis for Decision-Making in Bridge Inspection and Preservation

    Get PDF
    This project aims to create a framework for training engineers and policy makers in robotic operation and image analysis for the inspection and preservation of transportation infrastructure. Specifically, it develops methods for collecting camera-based bridge inspection data and algorithms for data processing and pattern recognition, and it creates tools for assisting users in visually analyzing the processed image data and recognized patterns for inspection and preservation decision-making. The project first developed a Siamese Neural Network to support bridge engineers in analyzing big video data. The network was initially trained by one-shot learning and is fine-tuned iteratively with a human in the loop. Bridge engineers define the region of interest initially; the algorithm then retrieves all related regions in the video, which allows the engineers to inspect the bridge without exhaustively checking every frame of the video. Our neural network was evaluated on three bridge inspection videos with promising performance. Then, the project developed an assistive intelligence system to help inspectors efficiently and accurately detect and segment multiclass bridge elements from inspection videos. A Mask Region-based Convolutional Neural Network was transferred to the studied problem with a small initial training dataset labeled by the inspector. Then, temporal coherence analysis was used to recover false negative detections of the transferred network. Finally, self-training with guidance from experienced inspectors was used to iteratively refine the network. Results from a case study have demonstrated that the proposed method uses just a small amount of time and guidance from experienced inspectors to successfully build the assistive intelligence system with excellent performance.
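
    The one-shot retrieval idea behind the Siamese network can be sketched roughly as follows: the inspector-defined region of interest is embedded once, and candidate patches from later frames are matched against it by similarity. The ResNet18 branch, cosine similarity, and the 0.8 threshold below are illustrative assumptions, not the project's trained network.

```python
# Hedged sketch of Siamese-style one-shot matching between an inspector-defined
# ROI and candidate video patches, using a shared CNN embedding branch.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()   # shared embedding branch, 512-d output
encoder.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(patch):
    """Embed a PIL image patch with the shared branch (L2-normalised)."""
    return F.normalize(encoder(preprocess(patch).unsqueeze(0)), dim=1)

@torch.no_grad()
def matches_roi(roi_patch, candidate_patch, threshold=0.8):
    """True if the candidate patch is similar enough to the inspector's ROI."""
    sim = (embed(roi_patch) * embed(candidate_patch)).sum().item()  # cosine sim
    return sim >= threshold
```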

    Digital reality: a model-based approach to supervised learning from synthetic data

    Get PDF
    Hierarchical neural networks with large numbers of layers are the state of the art for most computer vision problems, including image classification, multi-object detection, and semantic segmentation. While the computational demands of training such deep networks can be addressed using specialized hardware, the availability of training data in sufficient quantity and quality remains a limiting factor. The main reasons are that measurement or manual labelling are prohibitively expensive, ethical considerations can limit data generation, or the phenomenon in question has been predicted but not yet observed. In this position paper, we present the Digital Reality concept as a structured approach to generating training data synthetically. The central idea is to simulate measurements based on scenes that are generated by parametric models of the real world. By investigating the parameter space defined by such models, training data can be generated in a more controlled way than data captured from real-world situations. We propose the Digital Reality concept and demonstrate its potential in different application domains, including industrial inspection, autonomous driving, smart grids, and microscopy research in materials science and engineering.
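
    The workflow the abstract proposes, sampling the parameter space of a scene model to produce synthetic images together with perfect labels, can be illustrated with a toy example. The "scratch on a plate" scene and all of its parameters below are invented for illustration and do not come from the paper.

```python
# Toy sketch: a parametric scene model whose sampled parameters yield synthetic
# images plus exact ground-truth masks. Scene and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def render_scene(width=64, height=64, scratch=True):
    """Render a noisy grey plate, optionally with a bright scratch, plus its mask."""
    img = rng.normal(loc=0.5, scale=0.05, size=(height, width))   # plate texture
    mask = np.zeros((height, width), dtype=np.uint8)
    if scratch:
        row = rng.integers(5, height - 5)        # sampled scratch position
        thickness = rng.integers(1, 3)           # sampled scratch width in pixels
        img[row:row + thickness, :] += 0.4       # brighter streak = defect
        mask[row:row + thickness, :] = 1         # perfect label, for free
    return np.clip(img, 0.0, 1.0), mask

# Sample the parameter space to build a labelled synthetic training set.
dataset = [render_scene(scratch=bool(rng.integers(0, 2))) for _ in range(1000)]
```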

    Deep Industrial Image Anomaly Detection: A Survey

    Full text link
    The recent rapid development of deep learning has marked a milestone in industrial Image Anomaly Detection (IAD). In this paper, we provide a comprehensive review of deep learning-based image anomaly detection techniques from the perspectives of neural network architectures, levels of supervision, loss functions, metrics, and datasets. In addition, we extract a new setting from industrial manufacturing and review the current IAD approaches under this proposed setting. Moreover, we highlight several open challenges for image anomaly detection. The merits and downsides of representative network architectures under varying levels of supervision are discussed. Finally, we summarize the research findings and point out future research directions. More resources are available at https://github.com/M-3LAB/awesome-industrial-anomaly-detection.