
    A study of Hough transform for weld extraction

    The process of joining metals is called welding. At times, selecting a poor-quality material or improper use of welding technologies may cause defects in welded joints. Some of these welded joints have to be tested nondestructively, because their failure can cause a great deal of damage, for instance in power plants. Radiography is a very common method for non-destructive testing of welds. It is carried out by certified weld inspectors with knowledge of weld flaws, who examine the radiograph of the welded joint with the naked eye. The judgment of the weld inspector can be biased and subjective because it depends on his or her experience, and this manual method can also be very time-consuming. Many researchers explored computer-aided examination of radiographic images in the early 1990s. With advances in computer vision and image processing, these technologies are being used to find more effective ways of automating weld inspection, and fuzzy-based methods are now widely used in this area as well. The first step in automatic weld inspection is to locate the welds, i.e., to find a Region of Interest (ROI) in the radiographic image [7]. In this thesis, a Standard Hough Transform (SHT) based methodology is developed for weld extraction. First, the image is binarized to remove the background and non-welds; the optimal binary threshold is found by a metaheuristic, simulated annealing. Second, SHT is used to generate the Hough transform matrix of all non-zero points in the binary image. Third, two different strategies are explored to find a meaningful set of lines in the binarized image that correspond to welds. Finally, these lines are verified as welds using a weld-peak detection procedure, which also helps to remove any remaining non-welds. The method is tested on 25 digitized radiographic images containing 100 welds in terms of true detection and false alarm rate.
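    A minimal sketch of this binarize-then-Hough pipeline using OpenCV is shown below. The thesis finds the binarization threshold with simulated annealing; here Otsu's method is substituted for brevity, and the threshold value for the Hough accumulator is an illustrative placeholder, not a value from the thesis.

```python
import cv2
import numpy as np

def extract_weld_lines(radiograph_path):
    img = cv2.imread(radiograph_path, cv2.IMREAD_GRAYSCALE)

    # Step 1: binarize to suppress background and non-weld regions
    # (the thesis searches for the optimal threshold with simulated annealing;
    # Otsu's method is used here only as a stand-in).
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Step 2: Standard Hough Transform over all non-zero pixels of the binary image.
    lines = cv2.HoughLines(binary, rho=1, theta=np.pi / 180, threshold=200)

    # Step 3: return the strongest line candidates as weld hypotheses;
    # a weld-peak check on the intensity profile would verify them (Step 4).
    return [] if lines is None else [(float(r), float(t)) for r, t in lines[:, 0]]
```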

    Vision Based For Classification Of MIG Butt Welding Joint Defect Using Occurrence Matrices And Gray Absolute Histogram

    This paper introduces a new vision-based approach that is able to overcome problems in vision inspection systems. The system uses a 2D gray-level co-occurrence matrix and a gray absolute histogram of edge amplitude as input features extracted from MIG butt welding joints. Images of the welding surfaces are captured using a CCD camera mounted on top, parallel to the workbench. The images are segmented, and the 2D gray-level co-occurrence matrix features (energy, correlation, homogeneity and contrast) and the absolute histogram features are calculated from them. The same process is applied to the image zoomed by a factor of 0.5 to calculate the next set of feature values. Finally, both sets of feature values are used as inputs to GMM and MLP classifiers to classify the welds into three categories: good weld, excess weld and insufficient weld. Results taken from 18 MIG butt welding joint samples show an overall recognition accuracy of 94.4% for the MLP and 83.3% for the GMM. In terms of total computation time, the MLP takes 1.96 m/s and the GMM 1.175 m/s.
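    A hedged sketch of the co-occurrence feature extraction stage described above, using scikit-image, follows; the classifier stage (MLP/GMM), the histogram features, and the segmentation step are omitted, and the 0.5 zoom is approximated with a simple rescale. The input is assumed to be an 8-bit grayscale image.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.transform import rescale

def glcm_features(gray_img):
    # Co-occurrence matrix over 8-bit gray levels, horizontal neighbour offset.
    glcm = graycomatrix(gray_img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("energy", "correlation", "homogeneity", "contrast")]

def feature_vector(gray_img):
    # Features from the original image plus the image zoomed by a factor of 0.5.
    half = (rescale(gray_img, 0.5, anti_aliasing=True) * 255).astype(np.uint8)
    return glcm_features(gray_img) + glcm_features(half)
```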

    Characteristics of butt welding imperfections joint using co-occurrence matrix

    The goal of this paper is to study the characteristics of butt joint imperfections with different joint shapes (curved, straight and saw-tooth workpieces) according to their class categories (good welds, excess welds, insufficient welds and no welds). The workpiece is placed in a central position on the workbench. The distance between the camera and the workpiece is set to 300 mm during the welding-imperfection process, and the entire workpiece image is taken from the same distance to maintain accuracy. The input feature vector is determined from the co-occurrence matrix, consisting of energy, correlation, homogeneity and contrast, both unscaled and scaled by 0.5. Results show that the no-weld category exhibits higher homogeneity than the other categories, because the homogeneity value depends on bright and dark parts of a certain size and also includes some changes from dark to bright. Meanwhile, the insufficient-weld category produced larger contrast values, but the good-weld category recorded even higher contrast values.
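    For reference, the four descriptors named above (energy, contrast, homogeneity, correlation) can be computed directly from a normalized co-occurrence matrix; the small numpy illustration below follows the standard Haralick-style definitions and is not taken from the paper itself.

```python
import numpy as np

def cooccurrence_descriptors(P):
    P = P / P.sum()                        # normalize to a joint probability
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "energy": (P ** 2).sum(),
        "contrast": ((i - j) ** 2 * P).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j),
    }
```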

    Using deep learning for defect classification on a small weld X-ray image dataset

    This document provides a comparative evaluation of the performance of a deep learning network for different combinations of parameters and hyper-parameters. Although there are numerous studies that report on the performance of deep learning networks on ordinary data sets, their performance on small data sets is much less evaluated. The objective of this work is to demonstrate that a challenging small data set, such as a welding X-ray image data set, can be trained and evaluated with high precision, and that this is possible thanks to data augmentation. In fact, this article shows that data augmentation, a typical technique in any learning process on a large data set, combined with replacing two image channels, B (blue) and G (green), with the Canny edge map and a binary image produced by an adaptive Gaussian threshold, respectively, gives the network an increase in accuracy of approximately 3%. In summary, the objective of this work is to present the methodology used and the results obtained to estimate the classification accuracy of three main classes of welding defects on a small set of welding X-ray image data. The authors want to acknowledge the work of the rest of the participants in this project, namely: J.A. López-Alcantud, P. Rubio-Ibañez, Universidad Politécnica de Cartagena, J.A. Díaz-Madrid, Centro Universitario de la Defensa - UPCT and T.J. Kazmierski, University of Southampton. This work has been partially funded by the Spanish government through project RTI2018-097088-B-C33 (MINECO/FEDER, UE).
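    The channel-replacement idea described above can be sketched with OpenCV as shown below: the B and G channels of each X-ray image are replaced by a Canny edge map and an adaptive Gaussian threshold map before the 3-channel image is fed to the network. The specific Canny and threshold parameters here are placeholders, not values from the paper.

```python
import cv2

def build_three_channel_input(gray_img):
    edges = cv2.Canny(gray_img, 50, 150)
    binary = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    rgb = cv2.cvtColor(gray_img, cv2.COLOR_GRAY2BGR)
    rgb[:, :, 0] = edges    # B channel -> Canny edge map
    rgb[:, :, 1] = binary   # G channel -> adaptive Gaussian threshold map
    return rgb              # remaining channel keeps the original radiograph
```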

    Industrial X-ray Image Analysis with Deep Neural Networks Robust to Unexpected Input Data

    X-ray inspection is often an essential part of quality control within quality-critical manufacturing industries. Within such industries, X-ray image interpretation is resource intensive and typically conducted by humans. An increased level of automation would be preferable, and recent advances in artificial intelligence (e.g., deep learning) have been proposed as solutions. However, such solutions are typically overconfident when subjected to new data far from the training data, so-called out-of-distribution (OOD) data. We claim that safe automatic interpretation of industrial X-ray images, as part of quality control of critical products, requires a robust confidence estimation with respect to OOD data. We explored whether such a confidence estimation, an OOD detector, can be achieved by explicit modeling of the training data distribution, i.e., the accepted images. For this, we derived an autoencoder model trained unsupervised on a public dataset of X-ray images of metal fusion welds and on synthetic data. We explicitly demonstrate the dangers of a conventional supervised learning-based approach and compare it to the OOD detector. We achieve true positive rates of around 90% at false positive rates of around 0.1% on samples similar to the training data and correctly detect some example OOD data.
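    A minimal sketch of reconstruction-error-based OOD scoring with an autoencoder, in the spirit of the approach above, is given below using Keras; the architecture, input size, and the idea of thresholding the reconstruction error are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(input_dim=64 * 64):
    # Tiny dense autoencoder over flattened grayscale patches (illustrative only).
    inp = layers.Input(shape=(input_dim,))
    code = layers.Dense(128, activation="relu")(inp)
    out = layers.Dense(input_dim, activation="sigmoid")(code)
    ae = models.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    return ae

def ood_score(ae, images):
    # High reconstruction error suggests the input lies far from the training data.
    recon = ae.predict(images, verbose=0)
    return np.mean((images - recon) ** 2, axis=1)
```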

    Detecting Defects in Digital Radiographic Images

    It has been noticed that, in digital X-ray images, faulty welds in pipes tend to be darker than the rest of the image. Rather than simple thresholding, in this work a light pixel is converted to white if there are light pixels within its immediate neighborhood. The effect is that the flaw appears black and the background appears white, thus enabling the flaw to be easily detected. However, this method also erodes any rough edges on the flaw, i.e., black pixels that stick out from the main body of the flaw. The method works well for large flaws, but not for fine cracks.
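    One possible reading of that neighborhood rule is sketched below with numpy/scipy: a light pixel becomes white when its immediate 3x3 neighborhood also contains light pixels, leaving the darker flaw black. The "light" threshold and neighbor count are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def highlight_flaws(gray_img, light_level=160, min_light_neighbours=1):
    light = (gray_img >= light_level).astype(np.uint8)
    kernel = np.ones((3, 3), dtype=np.uint8)
    kernel[1, 1] = 0                                   # exclude the pixel itself
    neighbour_count = convolve(light, kernel, mode="constant")
    out = np.zeros_like(gray_img)
    # Light pixels with light neighbours become white; everything else stays black.
    out[(light == 1) & (neighbour_count >= min_light_neighbours)] = 255
    return out  # the flaw (dark region) stays black, the background becomes white
```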

    Feature Extraction and Classification of Flaws in Radio Graphical Weld Images Using ANN

    In this paper, a novel approach for the detection and classification of flaws in weld images is presented. Computer-based weld image analysis is a significant method for this task. The method has been applied to detect and discriminate flaws in the weld that may correspond to false alarms or to any of nine possible types of weld defects (Slag inclusion, Wormhole, Porosity, Incomplete penetration, Undercuts, Cracks, Lack of fusion, Weaving fault, Slag line). It was successfully tested on 80 radiographic images obtained from EURECTEST, International Scientific Association, Brussels, Belgium, and on 24 radiographs of ship welds provided by Technic Control Co. (Poland), obtained from Ioannis Valavanis, Greece. The procedure to detect all types of flaws and to extract features is implemented by a segmentation algorithm that can overcome the computational complexity problem. The work focuses on high-performance classification by optimizing the feature set with various selection algorithms such as sequential forward search (SFS), sequential backward search (SBS) and sequential forward floating search (SFFS), as sketched below. Features are measurable parameters that are important for understanding the image; 23 geometric features and 14 texture features are introduced. Experimental results show that the proposed method performs well on radiographic images.
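    The sketch below illustrates the sequential forward search (SFS) step with scikit-learn; the classifier, the feature matrix X (geometric plus texture features), the labels y, and the number of features to keep are placeholders, not the paper's actual configuration.

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

def select_features_sfs(X, y, n_features=10):
    # Greedy forward selection: add one feature at a time, keeping the subset
    # that gives the best cross-validated classifier score.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
    sfs = SequentialFeatureSelector(clf, n_features_to_select=n_features,
                                    direction="forward", cv=5)
    sfs.fit(X, y)
    return sfs.get_support(indices=True)   # indices of the selected features
```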

    Deep learning technology for weld defects classification based on transfer learning and activation features

    Weld defect detection using X-ray images is an effective method of nondestructive testing. Conventionally, this work relies on qualified human experts, requiring their personal intervention for the extraction and classification of heterogeneities. Many approaches using machine learning (ML) and image processing tools have been developed to solve these tasks. Although detection and classification have been improved with respect to the problems of low contrast and poor quality, the results are still unsatisfying. Unlike previous research based on ML, this paper proposes a novel classification method based on a deep learning network. In this work, an original approach based on the pretrained AlexNet architecture aims at classifying weld defects and increasing the correct recognition rate on our dataset. Transfer learning is used as the methodology with the pretrained AlexNet model. Deep learning applications require a large number of X-ray images, but there are few datasets of pipeline welding defects. For this reason, we have enhanced our dataset, focusing on two types of defects, and augmented it using data augmentation (random image transformations over the data such as translation and reflection). Finally, a fine-tuning technique is applied to classify the welding images and is compared to deep convolutional activation features (DCFA) and several pretrained DCNN models, namely VGG-16, VGG-19, ResNet50, ResNet101, and GoogLeNet. The main objective of this work is to explore the capacity of AlexNet and different pretrained architectures with transfer learning for the classification of X-ray images. The accuracy achieved with our model is thoroughly presented. The experimental results obtained on the weld dataset with our proposed model are validated using the GDXray database. The results obtained on the validation test set are also compared to those offered by the other DCNN models, and show better performance in less time. This can be seen as evidence of the strength of our proposed classification model. This work has been partially funded by the Spanish Government through Project RTI2018-097088-B-C33 (MINECO/FEDER, UE).
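    A minimal PyTorch sketch of this kind of transfer-learning setup follows: a pretrained AlexNet whose final layer is replaced to match the weld-defect classes and then fine-tuned. The number of classes and the optimizer settings are placeholders, not the paper's hyperparameters.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_weld_classifier(num_classes=2):
    # ImageNet-pretrained AlexNet (torchvision >= 0.13 weights API).
    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    # Replace the last fully connected layer of the classifier head.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    return model

model = build_weld_classifier()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# Fine-tuning then iterates over batches of (augmented) X-ray images:
#   loss = criterion(model(images), labels); loss.backward(); optimizer.step()
```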

    Computer-aided weld inspection by fuzzy modeling with selected features

    This thesis develops a computer-aided weld inspection methodology based on fuzzy modeling with selected features. The proposed methodology employs several filter feature selection methods for selecting input variables and then builds fuzzy models with the selected features. The fuzzy modeling method is based on a fuzzy c-means (FCM) variant for the generation of fuzzy term sets. The implemented FCM variant differs from the original FCM method in two aspects: (1) the two end terms take the maximum and minimum domain values as their centers, and (2) all fuzzy terms are forced to be convex. The optimal number of terms and the optimal shape of the membership function associated with each term are determined based on the mean squared error criterion. The fuzzy model serves as the rule base of the implemented fuzzy-reasoning-based expert system. In this implementation, the fuzzy rules are first extracted from feature data one feature at a time based on the FCM variant; the total number of fuzzy rules is the product of the numbers of fuzzy terms over the features. The performance of these fuzzy rule sets is then tested with unseen data in terms of accuracy rates and computational time. To evaluate the goodness of each selected feature subset, the selected combination is used as the input to the proposed fuzzy model. The accuracy of each selected feature subset, along with the average error of the selected filter technique, is reported. For comparison, the results of all possible combinations of the specified set of feature subsets are also obtained.
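    A compact numpy sketch of the standard FCM update loop that the described variant builds on is given below, for one-dimensional feature data (one feature at a time, as in the thesis). The variant's additional constraints, fixing the two end-term centers to the domain extremes and enforcing convex terms, are omitted; the number of clusters, fuzzifier and iteration count are placeholders.

```python
import numpy as np

def fcm_1d(data, c=3, m=2.0, n_iter=100, seed=0):
    # data: 1-D array of feature values; returns term centers and memberships.
    rng = np.random.default_rng(seed)
    U = rng.random((c, data.shape[0]))
    U /= U.sum(axis=0)                                   # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ data) / Um.sum(axis=1)           # weighted means
        dist = np.abs(centers[:, None] - data[None, :]) + 1e-9
        U = dist ** (-2.0 / (m - 1))                     # membership update
        U /= U.sum(axis=0)
    return centers, U
```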