
    Information Theory and Its Application in Machine Condition Monitoring

    Condition monitoring of machinery is one of the most important aspects of many modern industries. With the rapid advancement of science and technology, machines are becoming increasingly complex. Moreover, an exponential increase in demand is leading to an increasing requirement for machine output. As a result, in most modern industries, machines have to work 24 hours a day. All these factors cause machine health to deteriorate at a higher rate than before. Breakdown of key components of a machine, such as bearings, gearboxes or rollers, can have catastrophic effects in terms of both financial and human costs. From this perspective, it is important not only to detect a fault at its earliest point of inception but also to design the overall monitoring process, including fault classification, fault severity assessment and remaining useful life (RUL) prediction, for better planning of the maintenance schedule. Information theory is one of the pioneering contributions of modern science and has evolved into various forms and algorithms over time. Due to its ability to address the non-linearity and non-stationarity of machine health deterioration, it has become a popular choice among researchers, and it is an effective technique for extracting features of machines under different health conditions. In this context, this book discusses the potential applications, research results and latest developments of information theory-based condition monitoring of machinery.
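
    As a minimal illustration of an information-theoretic feature (not taken from the book itself), the sketch below computes the Shannon entropy of the normalized power spectrum of a vibration segment; the signals and noise levels are made-up placeholders, and a broader, noisier spectrum, as might accompany a developing fault, typically yields higher entropy.

    import numpy as np

    def spectral_entropy(signal, eps=1e-12):
        """Shannon entropy of the normalized power spectrum of a 1-D signal."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        p = spectrum / (spectrum.sum() + eps)          # normalize to a probability distribution
        return float(-np.sum(p * np.log2(p + eps)))    # entropy in bits

    # Hypothetical usage: compare a healthy segment with a simulated degraded one
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 2048, endpoint=False)
    healthy = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)
    faulty = healthy + 0.5 * rng.standard_normal(t.size)   # broadband noise mimics wear
    print(spectral_entropy(healthy), spectral_entropy(faulty))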

    Intelligent and Improved Self-Adaptive Anomaly based Intrusion Detection System for Networks

    With the advent of digital technology, computer networks have developed rapidly at an unprecedented pace, contributing tremendously to social and economic development. They have become the backbone of all critical sectors and of the top multinational companies. Unfortunately, security threats to computer networks have increased dramatically over the last decade, becoming much more brazen and bold. Intrusions or attacks on computers and networks are activities or attempts to jeopardize the main system security objectives, namely confidentiality, integrity and availability. They mostly lead to great financial losses and massive sensitive data leaks, thereby decreasing the efficiency and quality of productivity of an organization. There is a great need for an effective Network Intrusion Detection System (NIDS), a security tool designed to detect intrusion attempts in incoming network traffic and thereby provide a solid line of protection against inside and outside intruders. In this work, we propose to optimize a very popular soft computing tool prevalently used for intrusion detection, namely the Back Propagation Neural Network (BPNN), using a novel machine learning framework called “ISAGASAA”, based on an Improved Self-Adaptive Genetic Algorithm (ISAGA) and a Simulated Annealing Algorithm (SAA). ISAGA is our variant of the standard Genetic Algorithm (GA), developed by improving GA through an Adaptive Mutation Algorithm (AMA) and optimization strategies. The optimization strategies used are Parallel Processing (PP) and Fitness Value Hashing (FVH), which reduce execution time and convergence time and save processing power, while SAA was incorporated into ISAGA to optimize its heuristic search. Experimental results based on the Kyoto University benchmark dataset (version 2015) demonstrate that our optimized BPNN-based NIDS, called “ANID BPNN-ISAGASAA”, outperforms several state-of-the-art approaches in terms of detection rate and false positive rate. Moreover, improving GA through FVH and PP saves processing power and execution time. Thus, our model is well suited for network anomaly detection.
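
    The exact ISAGASAA pipeline is not reproduced here; the sketch below only illustrates the Fitness Value Hashing idea, caching fitness evaluations keyed by the genome so repeated individuals are not re-evaluated, inside a toy GA loop with a crudely decaying mutation rate. All function names, the placeholder objective, and the parameter values are illustrative assumptions, not the paper's implementation.

    import random

    fitness_cache = {}  # genome (as a tuple) -> cached fitness value

    def evaluate(genome):
        """Stand-in for an expensive evaluation, e.g. training/validating a BPNN configuration."""
        return sum(genome)  # placeholder objective; the paper optimizes detection vs. false positives

    def fitness(genome):
        key = tuple(genome)
        if key not in fitness_cache:          # Fitness Value Hashing: evaluate only unseen genomes
            fitness_cache[key] = evaluate(key)
        return fitness_cache[key]

    def mutate(genome, rate):
        return [g if random.random() > rate else random.randint(0, 9) for g in genome]

    # Tiny GA loop; the decaying rate is only a crude stand-in for adaptive mutation
    population = [[random.randint(0, 9) for _ in range(8)] for _ in range(20)]
    mutation_rate = 0.3
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        children = [mutate(random.choice(parents), mutation_rate) for _ in range(10)]
        population = parents + children
        mutation_rate = max(0.05, mutation_rate * 0.95)
    print(max(map(fitness, population)))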

    Selection of contributing factors for predicting landslide susceptibility using machine learning and deep learning models

    Landslides are a common natural disaster that can cause casualties, threats to property, and economic losses. Therefore, it is important to understand or predict the probability of landslide occurrence at potentially risky sites. A commonly used means is to carry out a landslide susceptibility assessment based on a landslide inventory and a set of landslide contributing factors. This can be readily achieved using machine learning (ML) models such as logistic regression (LR), support vector machine (SVM), random forest (RF) and extreme gradient boosting (XGBoost), or deep learning (DL) models such as convolutional neural network (CNN) and long short-term memory (LSTM). As input data for these models, landslide contributing factors have varying influences on landslide occurrence. Therefore, it is logically feasible to select the more important contributing factors and eliminate less relevant ones, with the aim of increasing the prediction accuracy of these models. However, selecting the more important factors is still a challenging task and there is no generally accepted method. Furthermore, the effects of factor selection using various methods on the prediction accuracy of ML and DL models are unclear. In this study, the impact of the selection of contributing factors on the accuracy of landslide susceptibility predictions using ML and DL models was investigated. Five methods for selecting contributing factors were considered for all the aforementioned ML and DL models: Information Gain Ratio (IGR), Recursive Feature Elimination (RFE), Particle Swarm Optimization (PSO), Least Absolute Shrinkage and Selection Operator (LASSO) and Harris Hawks Optimization (HHO). In addition, autoencoder-based factor selection methods for DL models were also investigated. To assess their performances, an exhaustive approach was adopted,... (Comment: Stochastic Environmental Research and Risk Assessment)
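
    As one concrete illustration of the factor-selection step (RFE is among the methods listed above), the sketch below wraps scikit-learn's Recursive Feature Elimination around a random forest before fitting a susceptibility model. The data, feature count and parameter values are placeholders, not the study's landslide inventory.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFE

    # Placeholder data: rows are mapping units, columns are contributing factors
    # (slope, aspect, lithology, rainfall, ...); y marks landslide / non-landslide.
    rng = np.random.default_rng(42)
    X = rng.random((500, 12))
    y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(500) > 0.8).astype(int)

    selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
                   n_features_to_select=6)       # keep the 6 most informative factors
    selector.fit(X, y)
    print("selected factor indices:", np.where(selector.support_)[0])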

    Extensive Huffman-tree-based neural network for the imbalanced dataset and its application in accent recognition

    To classify datasets featuring a large number of heavily imbalanced classes, this thesis proposes an Extensive Huffman-Tree Neural Network (EHTNN), which fabricates multiple component neural-network-enabled classifiers (e.g., CNN or SVM) using an extensive Huffman tree. Any given node in an EHTNN can have an arbitrary number of children. Compared with the Binary Huffman-Tree Neural Network (BHTNN), an EHTNN may have a smaller tree height, involve fewer component neural networks, and demonstrate more flexibility in handling data imbalance. Using a 16-class exponentially imbalanced audio dataset as the benchmark, the proposed EHTNN was strictly assessed through comparisons with alternative methods such as BHTNN and a single-layer CNN. The experimental results demonstrated the promise of EHTNN in terms of Gini index, entropy value, and the accuracy derived from a hierarchical multiclass confusion matrix.
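
    The EHTNN itself is not reproduced here; the sketch below only shows the structural idea behind it, building an n-ary Huffman tree from (imbalanced) class frequencies with heapq so that rare classes sit deeper in the tree. The class names, counts and arity are made up, and each internal node is where a component classifier would be attached.

    import heapq
    import itertools

    def nary_huffman(freqs, arity=3):
        """Build an arity-ary Huffman tree from {label: frequency}; returns the nested root node."""
        counter = itertools.count()                       # tie-breaker so heapq never compares dicts
        heap = [(f, next(counter), {"label": l}) for l, f in freqs.items()]
        # Pad with zero-frequency dummies so every merge can take exactly `arity` nodes.
        while (len(heap) - 1) % (arity - 1) != 0:
            heap.append((0, next(counter), {"label": None}))
        heapq.heapify(heap)
        while len(heap) > 1:
            group = [heapq.heappop(heap) for _ in range(arity)]
            merged = {"children": [node for _, _, node in group]}
            heapq.heappush(heap, (sum(f for f, _, _ in group), next(counter), merged))
        return heap[0][2]

    # 16 classes with exponentially imbalanced counts, mirroring the benchmark described above
    class_counts = {f"class_{i}": 2 ** i for i in range(16)}
    root = nary_huffman(class_counts, arity=3)
    print(len(root["children"]))   # each internal node would host one component classifier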

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, and from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medicine research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics to decision support for healthcare professionals through big data analytics, and to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and less costly. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that roughly span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.

    Computational Optimizations for Machine Learning

    The present book contains the 10 articles finally accepted for publication in the Special Issue “Computational Optimizations for Machine Learning” of the MDPI journal Mathematics, which cover a wide range of topics connected to the theory and applications of machine learning, neural networks and artificial intelligence. These topics include, among others, various classes of machine learning, such as supervised, unsupervised and reinforcement learning, deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVM, K-means clustering, Q-learning, temporal difference, deep adversarial networks and more. It is hoped that the book will be interesting and useful to those developing mathematical algorithms and applications in the domain of artificial intelligence and machine learning, as well as to those with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, which has nowadays permeated almost all sectors of human life and activity.

    NOVELTY DETECTION FOR PREDICTIVE MAINTENANCE

    Since the advent of Industry 4.0, significant research has been conducted to apply machine learning to the vast array of Internet of Things (IoT) data produced by industrial machines. One such topic is predictive maintenance. Unlike some other machine learning domains, such as NLP and computer vision, predictive maintenance is a relatively new area of focus. Most of the published work demonstrates the effectiveness of supervised classification for predictive maintenance, and some of the challenges highlighted in the literature are the cost and difficulty of obtaining labelled samples for training. Novelty detection is a branch of machine learning that, after being trained on normal operation, detects whether new data come from the same process or a different one, eliminating the requirement to label data points. This thesis applies novelty detection to both a public dataset and one that was specifically collected to demonstrate its application to predictive maintenance. The Local Outlier Factor showed better performance than a One-Class SVM on the public data. It was then applied to data from a 3-D printer and was able to detect faults it had not been trained on, showing a slight lift over a random classifier.
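
    A minimal sketch of this novelty-detection setup, assuming scikit-learn's LocalOutlierFactor (in novelty mode) and OneClassSVM as the two detectors; the sensor features below are simulated stand-ins, not the thesis's public or 3-D printer data, and the parameter values are illustrative.

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(1)
    normal_train = rng.normal(0.0, 1.0, size=(500, 4))    # features from normal machine operation
    normal_test = rng.normal(0.0, 1.0, size=(100, 4))
    faulty_test = rng.normal(3.0, 1.5, size=(100, 4))      # simulated, previously unseen fault

    # Train only on normal operation; no fault labels are required.
    lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(normal_train)
    ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(normal_train)

    for name, model in [("LOF", lof), ("One-Class SVM", ocsvm)]:
        pred_normal = model.predict(normal_test)   # +1 = inlier, -1 = novelty
        pred_fault = model.predict(faulty_test)
        print(name, "false alarms:", (pred_normal == -1).mean(),
              "fault detection rate:", (pred_fault == -1).mean())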

    A systematic mapping of the advancing use of machine learning techniques for predictive maintenance in the manufacturing sector

    The increasing availability of data, gathered by sensors and intelligent machines, is changing the way decisions are made in the manufacturing sector. In particular, based on a predictive approach and facilitated by the growing capabilities of hardware, cloud-based solutions and new learning approaches, maintenance can be scheduled, via cell engagement and resource monitoring, when required, thereby minimizing (or managing) unexpected equipment failures, improving uptime through less aggressive maintenance schedules, shortening unplanned downtime, reducing excess (direct and indirect) costs, reducing long-term damage to machines and processes, and improving safety plans. With access to increased levels of data (and through learning mechanisms), companies have the capability to conduct statistical tests using machine learning algorithms in order to uncover root causes of previously unknown problems. This study analyses the maturity level and contributions of machine learning methods for predictive maintenance. An upward trend in publications on predictive maintenance using machine learning techniques was identified, with the USA and China leading. A mapping study, based on a steady set of publications until early 2019, was employed as a formal and well-structured method to synthesize material and to report on pervasive areas of research. Types of equipment, sensors and data are mapped to assist new researchers in positioning new research activities in the domain of smart maintenance. Hence, in this paper, we focus on data-driven methods for predictive maintenance (PdM) with a comprehensive survey of applications and methods up to early 2019, for the sake of commenting on stable proposals. An equal repartition between evaluation and validation studies was identified, a symptom of an immature but growing research area. In addition, the type of contribution is mainly in the form of models and methodologies. Vibration signals were identified as the most used data for diagnosis in manufacturing machinery monitoring; furthermore, supervised learning is reported as the most used predictive approach (with ensemble learning growing fast). Neural networks, followed by random forests and support vector machines, were identified as the most applied methods, encompassing 40% of publications, of which 67% related to deep neural networks with a predominance of long short-term memory. Notwithstanding, there is no robust approach (none reported optimal performance over different case tests) that works best for every problem. We finally conclude that research in this area is moving fast, warranting a separate, focused analysis of the last two years once stable implementations appear.

    Deep Learning Paradigm and Its Bias for Coronary Artery Wall Segmentation in Intravascular Ultrasound Scans: A Closer Look

    Background and motivation: Coronary artery disease (CAD) has the highest mortality rate; therefore, its diagnosis is vital. Intravascular ultrasound (IVUS) is a high-resolution imaging solution that can image coronary arteries, but diagnostic software based on wall segmentation and quantification is still evolving. In this study, a deep learning (DL) paradigm was explored along with its bias. Methods: Using a PRISMA model, the 145 best UNet-based and non-UNet-based methods for wall segmentation were selected and analyzed for their characteristics and their scientific and clinical validation. This study computed coronary wall thickness by estimating the inner and outer borders of coronary artery IVUS cross-sectional scans. Further, the review explored, for the first time, bias in DL systems for wall segmentation in IVUS scans. Three bias-assessment methods, namely (i) ranking, (ii) radial, and (iii) regional area, were applied and compared using a Venn diagram. Finally, the study presented explainable AI (XAI) paradigms in the DL framework. Findings and conclusions: UNet provides a powerful paradigm for the segmentation of coronary walls in IVUS scans due to its ability to extract automated features at different scales in encoders, reconstruct the segmented image using decoders, and embed the variants in skip connections. Most of the research was hampered by a lack of motivation for XAI and pruned AI (PAI) models. None of the UNet models met the criteria for bias-free design. For clinical assessment and deployment, it is necessary to move from paper to practice.
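
    The review's thickness-computation pipeline is not given here; the sketch below is only one hedged way to estimate wall thickness radially from binary lumen (inner) and media-adventitia (outer) masks of a cross-sectional scan. The masks are synthetic circles, and the radial-ray approach, ray count and pixel units are all assumptions for illustration.

    import numpy as np

    def radial_wall_thickness(inner_mask, outer_mask, n_angles=360):
        """Approximate wall thickness as outer-minus-inner boundary radius along radial rays."""
        cy, cx = np.argwhere(inner_mask).mean(axis=0)          # centroid of the lumen
        h, w = inner_mask.shape
        thickness = []
        for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
            radii = np.arange(0, min(h, w) // 2)
            ys = np.clip((cy + radii * np.sin(theta)).astype(int), 0, h - 1)
            xs = np.clip((cx + radii * np.cos(theta)).astype(int), 0, w - 1)
            inner_r = radii[inner_mask[ys, xs]].max(initial=0)  # last radius still inside the lumen
            outer_r = radii[outer_mask[ys, xs]].max(initial=0)  # last radius inside the vessel wall
            thickness.append(max(outer_r - inner_r, 0))
        return np.asarray(thickness)

    # Synthetic circular masks standing in for segmented IVUS borders
    yy, xx = np.mgrid[:256, :256]
    r = np.hypot(yy - 128, xx - 128)
    inner = r < 40          # lumen (inner border)
    outer = r < 60          # media-adventitia (outer border)
    print(radial_wall_thickness(inner, outer).mean())   # about 20 pixels for this toy case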

    A novel face recognition system in unconstrained environments using a convolutional neural network

    The performance of most face recognition systems (FRS) in unconstrained environments is widely noted to be sub-optimal. One reason for this poor performance may be the lack of highly effective image pre-processing approaches, which are typically required before the feature extraction and classification stages. Furthermore, only minimal face recognition issues are typically considered in most FRS, thus limiting their wide applicability in real-life scenarios. It is therefore envisaged that developing more effective pre-processing techniques, in addition to selecting the correct features for classification, will significantly improve the performance of FRS. The thesis investigates different research works on FRS, its techniques and its challenges in unconstrained environments. The thesis proposes a novel image enhancement technique as a pre-processing approach for FRS. The proposed enhancement technique improves the overall FRS model, resulting in increased recognition performance. Also, a selection of novel hybrid features is presented, extracted from the enhanced facial images within the dataset, to improve recognition performance. The thesis proposes a novel evaluation function as a component within the image enhancement technique to improve face recognition in unconstrained environments. A defined scale mechanism was designed within the evaluation function to evaluate the enhanced images, such that extreme values indicate images that are too dark or too bright. The proposed algorithm enables the system to automatically select the most appropriate enhanced face image without human intervention. Evaluation of the proposed algorithm was carried out using standard parameters, where it is demonstrated to outperform existing image enhancement techniques both quantitatively and qualitatively. The thesis confirms the effectiveness of the proposed image enhancement technique for face recognition in unconstrained environments using a convolutional neural network. Furthermore, the thesis presents a selection of hybrid features from the enhanced image that results in effective image classification. Different face datasets were selected, and each face image was enhanced using the proposed and existing image enhancement techniques prior to feature selection and classification. Experiments on the different face datasets showed increased and better performance using the proposed approach. The thesis shows that using an effective image enhancement technique as a pre-processing approach can improve the performance of FRS compared with using unenhanced face images. Also, extracting the right features from the enhanced face dataset has been shown to be an important factor in the improvement of FRS. The thesis made use of standard face datasets to confirm the effectiveness of the proposed method. On the LFW face dataset, an improved recognition rate was obtained when considering all the facial conditions within the face dataset.
    Thesis (PhD), University of Pretoria, 2018. CSIR-DST Inter-programme bursary. Electrical, Electronic and Computer Engineering. PhD. Unrestricted.
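
    The thesis's actual evaluation function is not reproduced here; the sketch below is an illustrative stand-in that scores candidate enhanced images by histogram entropy while penalizing mean brightness at either extreme (too dark or too bright), then automatically selects the highest-scoring candidate. The scoring formula, thresholds, weights and the random stand-in images are all assumptions.

    import numpy as np

    def enhancement_score(gray, low=0.25, high=0.75):
        """Illustrative score: histogram entropy, penalized when mean brightness is extreme."""
        g = gray.astype(np.float64) / 255.0
        hist, _ = np.histogram(g, bins=256, range=(0.0, 1.0))
        p = hist / (hist.sum() + 1e-12)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        mean = g.mean()
        penalty = max(low - mean, 0.0) + max(mean - high, 0.0)   # too dark or too bright
        return entropy - 10.0 * penalty

    def select_best(candidates):
        """Pick the enhanced face image with the highest score, without human intervention."""
        return max(candidates, key=enhancement_score)

    # Hypothetical usage with random stand-ins for differently enhanced versions of one face
    rng = np.random.default_rng(3)
    candidates = [rng.integers(0, 256, size=(112, 112), dtype=np.uint8) for _ in range(4)]
    best = select_best(candidates)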