54 research outputs found

    Dead End Body Component Inspections With Convolutional Neural Networks Using UAS Imagery

    Get PDF
    This work presents a novel system that uses previously developed convolutional neural network (CNN) architectures to help automate maintenance inspections of the dead-end body component (DEBC) on high-tension power lines. To maximize the resolution of inspection images gathered via unmanned aerial systems (UAS), two CNNs were developed: one to detect and crop the DEBC from an image, and a second to classify the likelihood that the component in question contains a crack. The DEBC detection CNN used a Python implementation of Faster R-CNN fine-tuned for three classes on 270 inspection photos collected during UAS inspections, alongside 111 images of provided simulated imagery. These data were augmented to produce 2,707 training images. Detection was tested with 111 UAS inspection images; the resulting CNN achieved 97.8% accuracy in detecting and cropping DEBC welds. To train the classification CNN to judge whether the weld region cropped by the detection CNN was cracked, 1,149 manually cropped images, drawn from the simulated imagery as well as from photographs of previously replaced components taken both inside and outside a warehouse, were augmented to provide a training set of 4,632 images. The crack detection network was built on the VGG16 model implemented in the Caffe framework. Training and testing of the crack detection CNN's performance used a random 5-fold cross-validation strategy, yielding an overall accuracy of 98.8%. Testing the combined object detection and crack classification networks on the same 5-fold cross-validation test images resulted in an average accuracy of 73.79%.
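    The random 5-fold cross-validation strategy used for evaluation above can be sketched as follows. This is a minimal illustration, not the authors' code; the seed and the way folds are interleaved are assumptions.

```python
import random

def five_fold_splits(n_samples, seed=0):
    """Shuffle sample indices and partition them into 5 disjoint folds.

    Each fold serves once as the test set while the remaining four
    folds form the training set, as in the evaluation described above.
    """
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    folds = [indices[i::5] for i in range(5)]
    splits = []
    for k in range(5):
        test = folds[k]
        train = [i for f in folds if f is not folds[k] for i in f]
        splits.append((train, test))
    return splits

# 4,632 is the size of the augmented crack-classification training set
splits = five_fold_splits(4632)
```

Each of the five (train, test) pairs covers the full index set exactly once, so every image is tested exactly once across the five folds.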

    Deep learning technology for weld defects classification based on transfer learning and activation features

    Get PDF
    Weld defect detection using X-ray images is an effective method of nondestructive testing. Conventionally, this work relies on qualified human experts, whose personal intervention is required to extract and classify heterogeneities. Many approaches based on machine learning (ML) and image processing tools have been developed to solve these tasks. Although detection and classification have improved with regard to the problems of low contrast and poor quality, the results are still unsatisfying. Unlike previous research based on ML, this paper proposes a novel classification method based on a deep learning network. An original approach built on the pretrained AlexNet architecture aims to classify weld defects and increase the rate of correct recognition on our dataset. Transfer learning is used as the methodology, with the pretrained AlexNet model. Deep learning applications require a large amount of X-ray images, but there are few datasets of pipeline welding defects. For this reason, we enhanced our dataset, focusing on two types of defects, and augmented it using data augmentation (random image transformations such as translation and reflection). Finally, a fine-tuning technique is applied to classify the welding images and is compared with deep convolutional activation features (DCFA) and several pretrained DCNN models, namely VGG-16, VGG-19, ResNet50, ResNet101, and GoogLeNet. The main objective of this work is to explore the capacity of AlexNet and other pretrained architectures with transfer learning for the classification of X-ray images. The accuracy achieved with our model is thoroughly presented. The experimental results obtained on the weld dataset with our proposed model are validated using the GDXray database.
The results obtained on the validation test set are also compared with those of the other DCNN models, and show better performance in less time; this can be seen as evidence of the strength of our proposed classification model. This work has been partially funded by the Spanish Government through Project RTI2018-097088-B-C33 (MINECO/FEDER, UE).
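    The augmentation step described above (random translations and reflections of the radiographs) can be sketched with a few array operations. A minimal numpy sketch, not the authors' pipeline; the shift amounts and zero-padding of vacated pixels are assumptions.

```python
import numpy as np

def augment(image, shifts=(-5, 5)):
    """Generate augmented copies of a grayscale image by reflection
    and horizontal translation (zero-padded), mirroring the random
    transformations mentioned above."""
    out = [np.fliplr(image), np.flipud(image)]  # reflections
    for s in shifts:
        shifted = np.zeros_like(image)
        if s > 0:
            shifted[:, s:] = image[:, :-s]   # shift right
        else:
            shifted[:, :s] = image[:, -s:]   # shift left
        out.append(shifted)
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
copies = augment(img)  # four augmented views of one image
```

In practice the transformations would be sampled randomly per epoch rather than enumerated, but the effect on the dataset size is the same.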

    Using deep learning for defect classification on a small weld X-ray image dataset

    Get PDF
    This document provides a comparative evaluation of the performance of a deep learning network for different combinations of parameters and hyper-parameters. Although numerous studies report the performance of deep learning networks on ordinary data sets, their performance on small data sets is much less evaluated. The objective of this work is to demonstrate that a challenging small data set, such as a welding X-ray image data set, can be trained and evaluated with high precision, and that this is possible thanks to data augmentation. In fact, this article shows that data augmentation, a typical technique in any learning process on a large data set, combined with replacing two image channels, B (blue) and G (green), with the Canny edge map and a binary image produced by an adaptive Gaussian threshold, respectively, gives the network an accuracy increase of approximately 3%. In summary, the objective of this work is to present the methodology used and the results obtained in estimating the classification accuracy for three main classes of welding defects on a small set of welding X-ray image data. The authors want to acknowledge the work of the rest of the participants in this project, namely: J.A. López-Alcantud, P. Rubio-Ibañez, Universidad Politécnica de Cartagena, J.A. Díaz-Madrid, Centro Universitario de la Defensa - UPCT, and T.J. Kazmierski, University of Southampton. This work has been partially funded by the Spanish government through project RTI2018-097088-B-C33 (MINECO/FEDER, UE).
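    The channel-replacement idea above, keeping the original radiograph in one channel and filling the other two with derived images, can be sketched as follows. This sketch uses simplified stand-ins: a gradient-magnitude edge map instead of OpenCV's `cv2.Canny`, and a global-mean threshold instead of `cv2.adaptiveThreshold` with a Gaussian window; the real paper uses the latter pair.

```python
import numpy as np

def recombine_channels(gray):
    """Build a 3-channel input where R keeps the original radiograph,
    G holds an edge map, and B holds a binary threshold image,
    following the channel-replacement idea described above."""
    g = gray.astype(np.float32)
    gy, gx = np.gradient(g)                       # simple edge response
    edges = np.sqrt(gx ** 2 + gy ** 2)
    edges = np.uint8(255 * edges / max(edges.max(), 1e-6))
    binary = np.where(g > g.mean(), 255, 0).astype(np.uint8)  # crude threshold
    return np.stack([gray, edges, binary], axis=-1)

gray = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
rgb = recombine_channels(gray)  # (32, 32, 3) network input
```

The point of the recombination is that all three channels now carry complementary information about the same weld region instead of three copies of one grayscale image.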

    Flaw Detection in Ultrasonic Data Using Deep Learning

    Get PDF

    VGG16 Transfer Learning Architecture for Salak Fruit Quality Classification

    Get PDF
    Purpose: This study aims to differentiate the quality of salak fruit with machine learning. Salak is classified into two classes, good and bad. Design/methodology/approach: The algorithm used in this research is transfer learning with the VGG16 architecture. The dataset used in this research consists of 370 images of salak: 190 from the good class and 180 from the bad class. Each image is preprocessed by resizing it and normalizing its pixel values. The preprocessed images are split into 80% training data and 20% testing data. The training data are used to fine-tune a pretrained VGG16 model. The parameters varied during training are the number of epochs, the momentum, and the learning rate. The resulting model is then used for testing, and accuracy, precision, and recall are monitored to determine the best model for classifying the images. Findings/result: The highest accuracy obtained in this study is 95.83%, achieved with a learning rate of 0.0001 and a momentum of 0.9. The precision and recall for this model are 97.2% and 94.6%, respectively. Originality/value/state of the art: The use of transfer learning to classify salak, which has not been done before.
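    The preprocessing and 80/20 split described above can be sketched as follows. A minimal numpy sketch, not the authors' code; the random seed is a placeholder, and resizing to VGG16's 224x224 input is omitted.

```python
import numpy as np

def prepare(images, train_frac=0.8, seed=42):
    """Normalize pixel values to [0, 1] and split into 80% training /
    20% testing data, as in the preprocessing described above."""
    x = np.asarray(images, dtype=np.float32) / 255.0   # pixel normalization
    idx = np.random.default_rng(seed).permutation(len(x))
    cut = int(train_frac * len(x))
    return x[idx[:cut]], x[idx[cut:]]

# 370 stand-in images matching the dataset size in the abstract
images = np.random.default_rng(0).integers(0, 256, (370, 16, 16, 3))
train, test = prepare(images)
```

With 370 images this yields 296 training and 74 testing samples, matching the stated 80/20 proportion.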

    Machine Learning Assisted Gait Analysis for the Determination of Handedness in Able-bodied People

    Get PDF
    This study has investigated the potential application of machine learning to video analysis, with a view to creating a system that can determine a person's hand laterality (handedness) from the way they walk (their gait). To this end, the convolutional neural network model VGG16 underwent transfer learning in order to classify videos under two 'activities': "walking left-handed" and "walking right-handed". This saw varying degrees of success across five transfer-learning-trained models: Everything, the entire dataset; FiftyFifty, the dataset with enough right-handed samples removed to produce parity between activities; Female, only the female samples; Male, only the male samples; and Uninjured, samples declaring no injury within the last year. The initial phase of this study involved a data collection scheme, as no suitable pre-existing dataset could be found. This data collection yielded 45 participants (7 left-handed, 38 right-handed, and 0 identifying as ambidextrous), which resulted in 180 sample videos for use in transfer learning and in testing the five produced models. The video samples recorded the volunteers' walking pattern, head to toe, in profile rather than head on, so that the models could obtain as much information as possible about arm and leg movement during analysis. The findings of this study showed that accurate models could be produced, although accuracy varied substantially depending on the specific sub-dataset selected. Using the entire dataset presented the least accuracy (as did the subset that removed any volunteers reporting injury within the last year), resulting in a system that classified all samples as 'Right'. In contrast, the models trained on the female volunteers (the gender which also provided the highest number of left-handed data samples) were consistently accurate, with a mean accuracy of 75.44%.
The course of this study has shown that training such a model to give an accurate result is possible, yet difficult to achieve with such a small sample size containing such a small population of left-handed individuals. From the results obtained, it appears that a population needs to be >~21% left-handed in order to begin to see accuracy in laterality determination. These limited successes show that there is promise in such a study, although a larger, more widespread undertaking would be necessary to demonstrate this definitively.

    Transfer Learning Based Fault Detection for Suspension System Using Vibrational Analysis and Radar Plots

    Get PDF
    The suspension system is of paramount importance in any automobile; thanks to it, every journey benefits from pleasant rides, stable driving, and precise handling. However, the suspension system is prone to faults that can significantly impact the driving quality of the vehicle, making it essential to find, diagnose, and rectify any suspension faults immediately. Numerous techniques have been used to identify and diagnose suspension faults, each with drawbacks. The suspension fault detection system proposed in this paper aims to detect these faults using deep transfer learning techniques instead of time-consuming and expensive conventional methods. Pretrained networks such as AlexNet, ResNet-50, GoogLeNet, and VGG16 were used to identify faults from radar plots of the vibration signals generated by the suspension system. The vibration data were acquired using an accelerometer and a data acquisition system mounted on a test rig under eight different test conditions (seven faulty, one good). The deep learning model with the highest accuracy in identifying and detecting faults among the four was chosen and adopted to find defects. The results show that VGG16 produced the highest classification accuracy, at 96.70%.
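    A radar plot of the kind described above places one axis per signal feature evenly around a circle and joins the values into a closed polygon; that geometric construction can be sketched as follows. This is a sketch of the plot geometry only, with made-up feature values; the paper renders such plots as images (e.g., via matplotlib) before feeding them to the CNNs.

```python
import numpy as np

def radar_vertices(features):
    """Convert a vector of vibration features into the (x, y) vertices
    of a closed radar-plot polygon: one axis per feature, spaced
    evenly around the circle starting at angle 0."""
    values = np.asarray(features, dtype=np.float64)
    angles = np.linspace(0, 2 * np.pi, len(values), endpoint=False)
    x = values * np.cos(angles)
    y = values * np.sin(angles)
    # close the polygon by repeating the first vertex
    return np.column_stack([np.append(x, x[0]), np.append(y, y[0])])

# eight illustrative feature values, one per test condition axis
verts = radar_vertices([1.0, 0.8, 0.5, 0.9, 0.7, 0.6, 1.2, 0.4])
```

The closed polygon's shape changes with the feature vector, which is what lets an image classifier distinguish the eight suspension conditions.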

    Machine Learning Modeling for Image Segmentation in Manufacturing and Agriculture Applications

    Get PDF
    Doctor of Philosophy, Department of Industrial & Manufacturing Systems Engineering, Shing I Chang. This dissertation focuses on applying machine learning (ML) modelling to image segmentation tasks in various applications such as additive manufacturing monitoring, agricultural soil cover classification, and laser scribing quality control. The proposed ML framework uses various ML models, such as a gradient boosting classifier and deep convolutional neural networks, to improve and automate image segmentation tasks. In recent years, supervised ML methods have been widely adopted for image processing applications in various industries. Cameras installed in production processes have generated a vast amount of image data that can potentially be used for process monitoring. Specifically, deep supervised machine learning models have been successfully implemented to build automatic tools for filtering and classifying useful information for process monitoring. However, successful implementation of deep supervised learning algorithms depends on several factors, such as the distribution and size of the training data, the selected ML models, and consistency in the target domain distribution, which may change with environmental conditions over time. The proposed framework takes advantage of general-purpose, trained supervised learning models and applies them to process monitoring applications in manufacturing and agriculture. In Chapter 2, a layer-wise framework is proposed to monitor the quality of 3D-printed parts based on top-view images. The proposed statistical process monitoring method starts with self-starting control charts that require only two successful initial prints. Unsupervised machine learning methods can be used for problems in which high accuracy is not required, but statistical process monitoring usually demands high classification accuracy to avoid Type I and II errors.
To address the challenges that lighting poses for unsupervised image processing methods, a supervised gradient boosting classifier (GBC) with 93 percent accuracy is adopted to classify each printed layer from the printing bed. Although GBC and other decision-tree-based ML models are comparable to unsupervised ML models, their capability is limited in terms of accuracy and running time for complex classification problems such as soil cover classification. In Chapter 3, a deep convolutional neural network (DCNN) for semantic segmentation is trained to quantify and monitor soil coverage in agricultural fields. The trained model is capable of accurately quantifying green canopy cover, counting plants, and classifying stubble. Due to the wide variety of scenarios in a real agricultural field, 3,942 high-resolution images were collected and labeled for the training and test data sets. The difficulty of collecting, cleaning, and labeling this dataset motivated the search for a better approach to alleviate the data-wrangling burden of ML model training. One of the most influential factors is the need for a high volume of labeled data from the exact problem domain in terms of feature space and class distributions. Image data preparation for deep learning model training is expensive in terms of labelling time due to tedious manual processing. Multiple human labelers can work simultaneously, but inconsistent labeling generates a training data set that often compromises model performance. In addition, training an ML model for a complicated problem from scratch demands vast computational power. One potential approach to alleviating data-wrangling challenges is transfer learning (TL). In Chapter 4, a TL approach was adopted to monitor three laser scribing characteristics, namely scribe width, straightness, and debris, to answer these challenges.
The proposed transfer deep convolutional neural network (TDCNN) model can reduce the time-consuming and costly processing of data preparation. The proposed framework leverages a deep learning model already trained for a similar problem and uses only 21 images gleaned from the problem domain. The proposed TDCNN overcame the data challenge by leveraging the DCNN model VGG16, already trained on basic geometric features using more than two million pictures. Appropriate image processing techniques were provided to measure scribe width and line straightness, as well as total scribe and debris area, using classified images with 96 percent accuracy. In addition to the fact that the TDCNN functions with fewer trainable parameters (i.e., 5 million versus 15 million for VGG16), increasing the training size to 154 images did not provide significant improvement in accuracy, which shows that the TDCNN does not need a high volume of data to be successful. Finally, Chapter 5 summarizes the proposed work and lays out topics for future research.
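    The transfer learning setup above hinges on freezing VGG16's pretrained convolutional base so that only a small head is trained. The size of that frozen base can be tallied directly from the published 13-layer configuration (a back-of-the-envelope sketch assuming 3x3 kernels with biases; it counts only the convolutional layers, not the dense head):

```python
# Channel widths of VGG16's 13 convolutional layers (all 3x3 kernels).
VGG16_CONV = [64, 64, 128, 128, 256, 256, 256,
              512, 512, 512, 512, 512, 512]

def conv_params(widths, in_channels=3, kernel=3):
    """Count weights + biases in a stack of conv layers: each layer
    has (kernel*kernel*c_in + 1) parameters per output channel."""
    total = 0
    c_in = in_channels
    for c_out in widths:
        total += (kernel * kernel * c_in + 1) * c_out
        c_in = c_out
    return total

# Parameters that stay fixed when the conv base is frozen for TL.
frozen = conv_params(VGG16_CONV)  # 14,714,688
```

Freezing these roughly 14.7 million convolutional parameters is what allows the TDCNN's trainable parameter count to stay small, which in turn is why 21 labeled images can suffice.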