
    Software Defect Prediction Using Neural Network Based SMOTE

    Software defect prediction is a practical approach to improving the quality of software testing and the efficiency of its time and costs by focusing on defect-prone modules. Software defect prediction datasets naturally suffer from a class imbalance problem, with very few defective modules compared to non-defective modules, and class imbalance can reduce classification performance. In this study, we apply a Neural Network based on the Synthetic Minority Over-sampling Technique (SMOTE) to overcome class imbalance in six NASA datasets. The Neural Network based on SMOTE combines a Neural Network with SMOTE, with the hyperparameters of each optimized using random search. Results using nested 5-fold cross-validation show an increase in Bal of 25.48% and in Recall of 45.99% compared to the original Neural Network. We also compare the Neural Network based on SMOTE with SMOTE combined with traditional machine learning algorithms; the Neural Network based on SMOTE takes first place in the average rank.
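
    As a rough illustration of this setup, the sketch below combines SMOTE oversampling with a feed-forward neural network and tunes both with random search inside stratified cross-validation. The libraries (imbalanced-learn, scikit-learn), the parameter ranges, and the names X_train/y_train are illustrative assumptions, not details taken from the paper.

        # Hedged sketch: SMOTE + neural network, hyperparameters tuned by random
        # search inside stratified cross-validation. Parameter ranges are illustrative.
        from imblearn.over_sampling import SMOTE
        from imblearn.pipeline import Pipeline
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

        pipeline = Pipeline([
            ("smote", SMOTE(random_state=0)),      # oversample defective modules in training folds only
            ("mlp", MLPClassifier(max_iter=500)),  # feed-forward neural network classifier
        ])

        param_distributions = {
            "smote__k_neighbors": [3, 5, 7],
            "mlp__hidden_layer_sizes": [(16,), (32,), (64, 32)],
            "mlp__alpha": [1e-4, 1e-3, 1e-2],
            "mlp__learning_rate_init": [1e-3, 1e-2],
        }

        search = RandomizedSearchCV(
            pipeline,
            param_distributions,
            n_iter=20,
            scoring="recall",  # finding defective modules matters most
            cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
            random_state=0,
        )
        # search.fit(X_train, y_train)  # outer loop of the nested cross-validation omitted

    Placing SMOTE inside the pipeline ensures synthetic minority samples are generated only from the training folds, so the evaluation folds stay untouched.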

    DefectNET: multi-class fault detection on highly-imbalanced datasets

    As a data-driven method, the performance of deep convolutional neural networks (CNNs) relies heavily on training data. The predictions of traditional networks are biased toward larger classes, which tend to be the background in semantic segmentation tasks. This becomes a major problem for fault detection, where the targets appear very small in the images and vary in both type and size. In this paper we propose a new network architecture, DefectNet, that offers multi-class (including but not limited to) defect detection on highly imbalanced datasets. DefectNet consists of two parallel paths, a fully convolutional network and a dilated convolutional network, which detect large and small objects respectively. We propose a hybrid loss that maximises the combined usefulness of a dice loss and a cross-entropy loss, and we employ the leaky rectified linear unit (ReLU) to deal with the rare occurrence of some targets in training batches. The prediction results show that DefectNet outperforms state-of-the-art networks for detecting multi-class defects, with an average accuracy improvement of approximately 10% on a wind turbine dataset.
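
    A minimal sketch of such a hybrid loss is shown below, assuming PyTorch and dense pixel-wise labels; the equal weighting of the two terms and the smoothing constant are assumptions, not the paper's exact formulation.

        # Hedged sketch of a hybrid dice + cross-entropy loss for multi-class segmentation.
        import torch
        import torch.nn.functional as F

        def hybrid_loss(logits, target, ce_weight=0.5, eps=1e-6):
            """logits: (N, C, H, W) raw scores; target: (N, H, W) integer class labels."""
            ce = F.cross_entropy(logits, target)

            probs = torch.softmax(logits, dim=1)                      # (N, C, H, W)
            one_hot = F.one_hot(target, num_classes=logits.shape[1])  # (N, H, W, C)
            one_hot = one_hot.permute(0, 3, 1, 2).float()             # (N, C, H, W)

            # Soft dice per class, then averaged, so small defect classes count
            # as much as the dominant background class.
            intersection = (probs * one_hot).sum(dim=(0, 2, 3))
            union = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
            dice_loss = 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()

            return ce_weight * ce + (1.0 - ce_weight) * dice_loss

    The dice term rewards overlap regardless of class frequency, while the cross-entropy term keeps gradients well behaved on the frequent background pixels.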

    Accelerating Defect Predictions in Semiconductors Using Graph Neural Networks

    Here, we develop a framework for the prediction and screening of native defects and functional impurities in a chemical space of Group IV, III-V, and II-VI zinc blende (ZB) semiconductors, powered by crystal graph-based neural networks (GNNs) trained on high-throughput density functional theory (DFT) data. Using an innovative approach of sampling partially optimized defect configurations from DFT calculations, we generate one of the largest computational defect datasets to date, containing many types of vacancies, self-interstitials, anti-site substitutions, impurity interstitials and substitutions, as well as some defect complexes. We apply three established GNN techniques, namely the Crystal Graph Convolutional Neural Network (CGCNN), the Materials Graph Network (MEGNET), and the Atomistic Line Graph Neural Network (ALIGNN), to rigorously train models for predicting the defect formation energy (DFE) in multiple charge states and chemical potential conditions. We find that ALIGNN yields the best DFE predictions, with root mean square errors around 0.3 eV, which represents a prediction accuracy of 98% given the range of values within the dataset and improves significantly on the state of the art. Models are tested for different defect types as well as for defect charge transition levels. We further show that GNN-based defective structure optimization can take us close to DFT-optimized geometries at a fraction of the cost of full DFT. DFT-GNN models enable prediction and screening across thousands of hypothetical defects based on both unoptimized and partially optimized defective structures, helping identify electronically active defects in technologically important semiconductors.
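
    For context, the quoted ~98% figure is consistent with reading accuracy as one minus the RMSE divided by the spread of DFE values in the dataset; the ~15 eV range below is an assumed value chosen to reproduce that figure, not a number reported in the abstract.

        # Back-of-the-envelope check: an RMSE of ~0.3 eV relative to an assumed
        # ~15 eV spread of defect formation energies gives ~98% accuracy.
        rmse_ev = 0.3
        dfe_range_ev = 15.0  # assumed spread of DFE values in the dataset
        accuracy = 1.0 - rmse_ev / dfe_range_ev
        print(f"approximate accuracy: {accuracy:.1%}")  # -> 98.0%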

    Machine Prognosis with Full Utilization of Truncated Lifetime Data

    Intelligent machine fault prognostics estimates how soon, and how likely, a failure will occur, with little human expert judgement. It minimizes production downtime, spares inventory and maintenance labour costs. Prognostic models, especially probabilistic methods, require numerous historical failure instances. In practice, however, industrial and military communities rarely allow their engineering assets to run to failure. It is only known that the machine component survived up to the time of repair or replacement; there is no information as to when the component would have failed if left undisturbed. Data of this sort are called truncated data. This paper proposes a novel model, the Intelligent Product Limit Estimator (iPLE), which utilizes truncated data to perform adaptive long-range prediction of a machine component's remaining lifetime. It takes advantage of statistical models' ability to provide a useful representation of survival probabilities, and of neural networks' ability to recognise nonlinear relationships between a machine component's future survival condition and a given series of prognostic data features. Progressive bearing degradation data were simulated and used to train and validate the proposed model. The results support our hypothesis that the iPLE can perform better than similar prognostic models that neglect truncated data.
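
    The statistical backbone referenced here is the product-limit (Kaplan-Meier) estimator, which keeps right-censored records in the at-risk counts instead of discarding them. The sketch below shows that estimator on made-up lifetimes; it says nothing about how the iPLE's neural network consumes the resulting survival probabilities.

        # Hedged sketch: product-limit (Kaplan-Meier) survival estimate from
        # right-censored ("truncated") lifetimes; the toy data are illustrative.
        import numpy as np

        def kaplan_meier(durations, event_observed):
            """Return distinct failure times and the estimated survival curve S(t)."""
            durations = np.asarray(durations, dtype=float)
            event_observed = np.asarray(event_observed, dtype=bool)

            times = np.sort(np.unique(durations[event_observed]))  # distinct failure times
            survival, s = [], 1.0
            for t in times:
                at_risk = np.sum(durations >= t)                    # units still running at t
                failed = np.sum((durations == t) & event_observed)  # failures observed at t
                s *= 1.0 - failed / at_risk                         # product-limit update
                survival.append(s)
            return times, np.array(survival)

        # Components repaired or replaced before failing are censored (0) but still
        # contribute to the at-risk counts rather than being thrown away.
        hours = [120, 150, 150, 200, 240, 300, 310]
        observed = [1, 0, 1, 1, 0, 1, 0]
        print(kaplan_meier(hours, observed))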