    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions, based on the network-generated data, pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in recent years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.

    Online change detection techniques in time series: an overview

    Time-series change detection has been studied in several fields. From sensor data, engineering systems, medical diagnosis, and financial markets to user actions on a network, huge amounts of temporal data are generated. A clear separation between normal and abnormal behaviour of the system is needed in order to investigate causes or forecast change. Such changes may appear as irregularities, deviations, anomalies, outliers, novelties, or surprising patterns. The efficient detection of such patterns is challenging, especially when constraints must be taken into account, such as data velocity, data volume, the limited time available for reacting to events, and the details of the temporal sequence. This paper reviews the main techniques for time-series change point detection, focusing on online methods. Performance criteria, including complexity, time granularity, and robustness, are used to compare the techniques, followed by a discussion of current challenges and open issues.
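
    As a concrete instance of the online methods this overview covers, the sketch below implements a standard two-sided CUSUM detector for a shift in the mean of a univariate stream; the drift allowance k and threshold h are illustrative parameters, not values from the paper.

```python
import numpy as np

def cusum_online(stream, target_mean, k=0.5, h=5.0):
    """Two-sided CUSUM: alarm at the first index where either cumulative
    sum of deviations beyond the drift allowance k crosses threshold h."""
    g_pos, g_neg = 0.0, 0.0
    for t, x in enumerate(stream):
        g_pos = max(0.0, g_pos + (x - target_mean) - k)
        g_neg = max(0.0, g_neg - (x - target_mean) - k)
        if g_pos > h or g_neg > h:
            return t  # first alarm time
    return None  # no change detected

# Toy usage: the mean shifts from 0 to 2 at t = 100
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(cusum_online(data, target_mean=0.0))
```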

    An Adaptive Nonparametric Modeling Technique for Expanded Condition Monitoring of Processes

    New reactor designs and the license extensions of current reactors have created new condition monitoring challenges. A major challenge is the creation of a data-based model for a reactor that has never been built or operated and therefore has no historical data. This is the motivation behind the creation of a hybrid modeling technique based on first principle models that adapts to include operating reactor data as it becomes available. An Adaptive Non-Parametric Model (ANPM) was developed for adaptive monitoring of small to medium size reactors (SMRs) but is applicable to all designs. Ideally, an adaptive model should be able to adapt to new operational conditions while maintaining the ability to differentiate faults from nominal conditions. This has been achieved by focusing on two main abilities: first, adjusting the model from simulated conditions to actual operating conditions, and second, adapting to expanded operating conditions. In each case the system will not learn new conditions that represent faulted or degraded operations. The ANPM architecture is used to adapt the model's memory matrix from data generated by a First Principle Model (FPM) to data from actual system operation. This produces a more accurate model with the capability to adjust to system fluctuations. The newly developed adaptive modeling technique was tested in two pilot applications. The first was a heat exchanger model simulated at both low and high fidelity in SIMULINK. The ANPM improved monitoring performance over a first principle model, increasing model accuracy from an average MSE of 0.1451 to 0.0028 over the range of operation. The second pilot application was a flow loop built at the University of Tennessee and simulated in SIMULINK. An improvement in monitoring system performance was again observed, with model accuracy improving from an average MSE of 0.302 to an MSE of 0.013 over the adaptation range of operation. This research focused on the theory, development, and testing of the ANPM and the corresponding elements of the surveillance system.
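
    The abstract's memory-matrix formulation is in the auto-associative kernel regression family. The sketch below illustrates that general mechanism under stated assumptions (a Gaussian kernel, and residual-gated adaptation so that faulted conditions are not learned); it is not the dissertation's exact ANPM, and `bandwidth` and `residual_threshold` are illustrative parameters.

```python
import numpy as np

def anpm_predict(memory, query, bandwidth=1.0):
    """Nonparametric auto-associative prediction: weight each stored
    exemplar (row of `memory`) by a Gaussian kernel on its distance to
    the query, then return the weighted average as the corrected value."""
    d = np.linalg.norm(memory - query, axis=1)
    w = np.exp(-d**2 / (2 * bandwidth**2))
    w /= w.sum()
    return w @ memory

def adapt_memory(memory, new_obs, residual_threshold, bandwidth=1.0):
    """Add a new operating point to the memory matrix only if it is well
    explained by the current model, so potential faults are not learned."""
    residual = np.linalg.norm(new_obs - anpm_predict(memory, new_obs, bandwidth))
    if residual < residual_threshold:
        memory = np.vstack([memory, new_obs])
    return memory
```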

    An Integrated Fuzzy Inference Based Monitoring, Diagnostic, and Prognostic System

    To date, the majority of the research related to the development and application of monitoring, diagnostic, and prognostic systems has been exclusive in the sense that only one of the three areas is the focus of the work. While previous research advances each of the respective fields, the end result is a grab bag of techniques that address each problem independently. Also, the new field of prognostics is lacking in that few methods have been proposed that produce estimates of the remaining useful life (RUL) of a device or that can realistically be applied to real-world systems. This work addresses both problems by developing the nonparametric fuzzy inference system (NFIS), which is adapted for monitoring, diagnosis, and prognosis, and then proposing the path classification and estimation (PACE) model, which can be used to predict the RUL of a device whether or not it has a well-defined failure threshold. To test and evaluate the proposed methods, they were applied to detect, diagnose, and prognose faults and failures in the hydraulic steering system of a deep oil exploration drill. The monitoring system implementing an NFIS predictor and sequential probability ratio test (SPRT) detector produced detection rates comparable to a monitoring system implementing an autoassociative kernel regression (AAKR) predictor and SPRT detector, specifically 80% vs. 85% for the NFIS and AAKR monitors, respectively. The NFIS monitor also produced fewer false alarms. Next, the monitoring system outputs were used to generate symptom patterns for k-nearest neighbor (kNN) and NFIS classifiers trained to diagnose different fault classes. The NFIS diagnoser significantly outperformed the kNN diagnoser, with overall accuracies of 96% vs. 89%, respectively. Finally, the PACE model implementing the NFIS was used to predict the RUL for different failure modes. The errors of the RUL estimates produced by the PACE-NFIS prognosers ranged from 1.2 to 11.4 hours with 95% confidence intervals (CIs) from 0.67 to 32.02 hours, significantly better than the population-based prognoser estimates, with errors of approximately 45 hours and 95% CIs of approximately 162 hours.
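
    The SPRT detector used in both monitoring systems tests model residuals sequentially against a faulted-mean hypothesis. Below is a minimal sketch of Wald's SPRT for Gaussian residuals with illustrative means and error rates; the NFIS predictor itself is the dissertation's own construction and is not reproduced here.

```python
import math

def sprt(residuals, m0=0.0, m1=1.0, sigma=1.0, alpha=0.01, beta=0.10):
    """Wald's sequential probability ratio test on model residuals.
    H0: residual mean m0 (nominal); H1: residual mean m1 (faulted).
    Accumulates the log-likelihood ratio until a boundary is crossed."""
    upper = math.log((1 - beta) / alpha)   # decide 'fault' above this
    lower = math.log(beta / (1 - alpha))   # decide 'nominal' below this
    llr = 0.0
    for r in residuals:
        llr += (m1 - m0) * (r - (m0 + m1) / 2) / sigma**2
        if llr >= upper:
            return "fault"
        if llr <= lower:
            llr = 0.0  # accept H0, reset, and keep monitoring
    return "continue"  # no decision reached yet
```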

    Prognostic-based Life Extension Methodology with Application to Power Generation Systems

    Practicable life extension of engineering systems would be a remarkable application of prognostics. This research proposes a framework for prognostic-based life extension and investigates the use of prognostic data to mobilize the potential residual life. The obstacles to performing life extension include lack of knowledge, lack of tools, lack of data, and lack of time. This research primarily considers the use of acoustic emission (AE) technology for quick-response diagnostics. Specifically, an important feature of AE data was statistically modeled to provide a quick, robust, and intuitive diagnostic capability. The proposed model successfully detected the out-of-control situation when data from a faulty bearing were applied. This research also highlights the importance of self-healing materials. One main component of the proposed life extension framework is the trend analysis module, which analyzes the pattern of the time-ordered degradation measures. Trend analysis is helpful not only for early fault detection but also for tracking improvements in the degradation rate. This research considered trend analysis methods for prognostic parameters, degradation waveforms, and multivariate data. In this respect, graphical methods were found appropriate for trend detection in signal features. The Hilbert-Huang Transform was applied to analyze trends in waveforms. For multivariate data, it was found that PCA can indicate trends in the data if accompanied by proper data processing. In addition, two algorithms are introduced to address non-monotonic trends; both appear to have the potential to treat non-monotonicity in degradation data. Although considerable research has been devoted to developing prognostic algorithms, rather less attention has been paid to post-prognostic issues such as maintenance decision making. A multi-objective optimization model is therefore presented for a power generation unit. This model demonstrates the ability of prognostic models to balance power generation against life extension. The competing objective functions were defined as maximizing profit and maximizing service life, with the shaft speed and the duration of maintenance actions as decision variables. The results of the optimization model showed clearly that maximizing the service life requires a lower shaft speed and longer maintenance time.
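
    The trend analysis module above tracks whether a time-ordered degradation measure drifts monotonically. The thesis's own tools are graphical methods, the Hilbert-Huang Transform, and PCA; as a compact stand-in, the sketch below uses a Mann-Kendall-style statistic, a standard nonparametric monotonic-trend test that is not taken from the thesis.

```python
import math
from itertools import combinations

def mann_kendall(series):
    """Nonparametric test for a monotonic trend: S counts concordant
    minus discordant time-ordered pairs; z is its normal approximation
    (assuming no tied values)."""
    n = len(series)
    s = sum((b > a) - (b < a) for a, b in combinations(series, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - math.copysign(1, s)) / math.sqrt(var_s) if s != 0 else 0.0
    # Two-sided test at the 5% level: |z| > 1.96 indicates a trend
    return z, abs(z) > 1.96

# Toy usage: a slowly degrading measure with noise-free increments
print(mann_kendall([0.1, 0.12, 0.11, 0.15, 0.18, 0.22, 0.25, 0.31]))
```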

    Active Data Selection for Sensor Networks with Faults and Changepoints


    CPS Data Streams Analytics based on Machine Learning for Cloud and Fog Computing: A Survey

    Cloud and fog computing have emerged as a promising paradigm for the Internet of Things (IoT) and cyber-physical systems (CPS). One characteristic of CPS is the reciprocal feedback loops between physical processes and cyber elements (computation, software, and networking), which implies that data stream analytics is one of the core components of CPS: (i) it extracts insights and knowledge from the data streams generated by the various sensors and other monitoring components embedded in the physical systems; (ii) it supports informed decision making; (iii) it enables feedback from the physical processes to the cyber counterparts; and (iv) it ultimately facilitates the integration of cyber and physical systems. There have been many successful applications of data stream analytics, powered by machine learning techniques, to CPS. A survey of the particularities of applying machine learning techniques to the CPS domain is therefore needed. In particular, we explore how machine learning methods should be deployed and integrated in cloud and fog architectures to better fulfil the requirements arising in CPS domains, e.g., mission criticality and time criticality. To the best of our knowledge, this paper is the first to systematically study machine learning techniques for CPS data stream analytics from various perspectives, especially from a perspective that leads to discussion of, and guidance on, how CPS machine learning methods should be deployed in a cloud and fog architecture.

    Algorithms for sensor validation and multisensor fusion

    Existing techniques for sensor validation and sensor fusion are often based on analytical sensor models. Such models can be arbitrarily complex, and consequently Gaussian distributions are often assumed, generally with a detrimental effect on overall system performance. A holistic approach has therefore been adopted in order to develop two novel and complementary approaches to sensor validation and fusion based on empirical data. The first uses the Nadaraya-Watson kernel estimator to provide competitive sensor fusion. The new algorithm is shown to reliably detect and compensate for bias errors, spike errors, hardover faults, drift faults, and erratic operation affecting up to three of the five sensors in the array. The inherent smoothing action of the kernel estimator provides effective noise cancellation, and the fused result is more accurate than the single 'best' sensor. A Genetic Algorithm has been used to optimise the Nadaraya-Watson fuser design. The second approach uses analytical redundancy to provide an on-line sensor status output μH ∈ [0, 1], where μH = 1 indicates that the sensor output is valid and μH = 0 that the sensor has failed. This fuzzy measure is derived from change detection parameters based on spectral analysis of the sensor output signal. The validation scheme can reliably detect a wide range of sensor fault conditions. An appropriate context-dependent fusion operator can then be used to perform competitive, cooperative, or complementary sensor fusion, with a status output from the fuser providing a useful qualitative indication of the status of the sensors used to derive the fused result. The operation of both schemes is illustrated using data obtained from an array of thick film metal oxide pH sensor electrodes. An ideal pH electrode senses only the activity of hydrogen ions; however, the selectivity of the metal oxide device is worse than that of the conventional glass electrode. Sensor fusion can therefore reduce measurement uncertainty by combining readings from multiple pH sensors having complementary responses. The array can be conveniently fabricated by screen printing sensors using different metal oxides onto a single substrate.
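
    A minimal sketch of the Nadaraya-Watson idea underlying the first fusion approach: each sensor reading is kernel-weighted by its distance to a consensus value, so spikes and hardover faults are smoothly down-weighted. The consensus-by-median and the fixed Gaussian bandwidth below are illustrative placeholders; the thesis optimises the fuser design with a Genetic Algorithm, which is not shown here.

```python
import numpy as np

def nw_fuse(readings, bandwidth=0.5):
    """Nadaraya-Watson style competitive fusion of redundant sensors:
    readings far from the array consensus (median) receive small kernel
    weights, so faulty channels barely influence the fused value."""
    readings = np.asarray(readings, dtype=float)
    consensus = np.median(readings)
    w = np.exp(-((readings - consensus) ** 2) / (2 * bandwidth ** 2))
    return float(np.sum(w * readings) / np.sum(w))

# Toy usage: five pH readings, one hardover fault
print(nw_fuse([7.01, 6.98, 7.03, 13.9, 7.00]))  # fused value ≈ 7.0
```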

    Changepoint detection for data intensive settings

    Detecting a point in a data sequence where the behaviour alters abruptly, otherwise known as a changepoint, has been an active area of interest for decades. More recently, with the advent of the data intensive era, the need for automated and computationally efficient changepoint methods has grown. Here we introduce several new techniques that address many of the issues inherent in detecting changes in a streaming setting. In short, these new methods, which may be viewed as non-trivial extensions of existing classical procedures, are intended to be useful in as wide a set of situations as possible, while retaining important theoretical guarantees and ease of implementation. The first novel contribution concerns two methods for parallelising existing dynamic programming based approaches to changepoint detection in the univariate setting. We demonstrate that these methods can yield near quadratic computational gains while retaining important theoretical guarantees. Our next area of focus is the multivariate setting. We introduce two new methods for data intensive scenarios with a fixed, but possibly large, number of dimensions. The first is an offline method which detects one change at a time using a new test statistic. We demonstrate that this test statistic has competitive power across a variety of possible settings for a given changepoint, while allowing the method to remain versatile across a range of modelling assumptions. The second method for multivariate data is also suitable in the streaming setting and, in addition, is able to relax many standard modelling assumptions. We discuss the empirical properties of the procedure, especially insofar as they relate to a desired false alarm error rate.
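
    For context, the classical dynamic program that the parallelisation work builds on is Optimal Partitioning: F(t), the best penalised cost of segmenting the first t points, is minimised over the location of the last changepoint. The sketch below is a minimal single-threaded version with a Gaussian mean-change cost; the thesis's parallel extensions and new multivariate statistics are not reproduced.

```python
import numpy as np

def optimal_partitioning(x, penalty):
    """Exact DP for multiple changepoints: F[t] is the best penalised
    cost of segmenting x[:t]; cost(s, t) is the within-segment sum of
    squares for x[s:t], i.e. a Gaussian mean-change cost."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    csum = np.concatenate([[0.0], np.cumsum(x)])
    csum2 = np.concatenate([[0.0], np.cumsum(x**2)])

    def cost(s, t):  # sum of squared deviations of x[s:t] from its mean
        seg_sum = csum[t] - csum[s]
        return (csum2[t] - csum2[s]) - seg_sum**2 / (t - s)

    F = np.full(n + 1, np.inf)
    F[0] = -penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        cands = [F[s] + cost(s, t) + penalty for s in range(t)]
        last[t] = int(np.argmin(cands))
        F[t] = cands[last[t]]
    # Backtrack the estimated changepoint locations
    cps, t = [], n
    while t > 0:
        if last[t] > 0:
            cps.append(int(last[t]))
        t = last[t]
    return sorted(cps)

# Toy usage: mean changes at t = 50
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
print(optimal_partitioning(y, penalty=3 * np.log(len(y))))
```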