An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and make decisions pertaining to the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such an increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
A Data Analytics Framework for Smart Grids: Spatio-temporal Wind Power Analysis and Synchrophasor Data Mining
Under the framework of intelligent management of power grids by leveraging advanced information, communication and control technologies, a primary objective of this study is to develop novel data mining and data processing schemes for several critical applications that can enhance the reliability of power systems. Specifically, this study is broadly organized into the following two parts: I) spatio-temporal wind power analysis for wind generation forecast and integration, and II) data mining and information fusion of synchrophasor measurements toward secure power grids. Part I is centered around wind power generation forecast and integration. First, a spatio-temporal analysis approach for short-term wind farm generation forecasting is proposed. Specifically, using extensive measurement data from an actual wind farm, the probability distribution and the level crossing rate of wind farm generation are characterized using tools from graphical learning and time-series analysis. Built on these spatial and temporal characterizations, finite-state Markov chain models are developed, and a point forecast of wind farm generation is derived using the Markov chains. Then, multi-timescale scheduling and dispatch with stochastic wind generation and opportunistic demand response is investigated. Part II focuses on incorporating the emerging synchrophasor technology into the security assessment and the post-disturbance fault diagnosis of power systems. First, a data-mining framework is developed for on-line dynamic security assessment (DSA) by using adaptive ensemble decision tree learning of real-time synchrophasor measurements. Under this framework, novel on-line DSA schemes are devised, aiming to handle various factors (including variations of operating conditions, forced system topology change, and loss of critical synchrophasor measurements) that can have a significant impact on the performance of conventional data-mining-based on-line DSA schemes.
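The finite-state Markov chain point forecast described in Part I can be sketched as follows. This is a minimal illustration, not the dissertation's model: the state bins here are uniform, whereas the study derives its states from the measured distribution and level-crossing rate, and the spatial (graphical learning) component is omitted entirely.

```python
import numpy as np

def fit_markov_chain(series, n_states):
    """Quantize a generation time series into uniform bins (an illustrative
    choice) and estimate the state transition matrix from counts."""
    edges = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, edges) - 1, 0, n_states - 1)
    counts = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        counts[s, t] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Unvisited states fall back to a uniform next-state distribution
    trans = np.divide(counts, row_sums,
                      out=np.full_like(counts, 1.0 / n_states),
                      where=row_sums > 0)
    centers = (edges[:-1] + edges[1:]) / 2
    return trans, centers, states

def point_forecast(trans, centers, current_state):
    """One-step point forecast: expected generation under the next-state
    distribution of the chain."""
    return float(trans[current_state] @ centers)

# Usage on a synthetic generation trace
rng = np.random.default_rng(0)
power = np.abs(np.cumsum(rng.normal(0, 1, 500))) % 10
T, centers, states = fit_markov_chain(power, n_states=5)
forecast = point_forecast(T, centers, int(states[-1]))
```

Each row of the transition matrix is a probability distribution over the next generation state, so the point forecast is simply the probability-weighted average of the bin centers.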
Then, in the context of post-disturbance analysis, fault detection and localization of line outages is investigated using a dependency graph approach. It is shown that a dependency graph for voltage phase angles can be built according to the interconnection structure of the power system, and line outage events can be detected and localized through networked data fusion of the synchrophasor measurements collected from multiple locations of the power grid. Along a more practical avenue, a decentralized networked data fusion scheme is proposed for efficient fault detection and localization.
Dissertation/Thesis: Ph.D. Electrical Engineering 201
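A toy version of the dependency-graph idea: build a weighted graph over buses from synchrophasor phase-angle time series and flag the bus pair whose dependency weight drops most sharply after a disturbance. The correlation-based edge weights and the fixed threshold here are illustrative assumptions; the dissertation constructs its dependency graph from the system's interconnection structure and fuses measurements across locations.

```python
import numpy as np

def angle_dependency_graph(angles):
    """Weighted dependency graph over buses: absolute sample correlation
    between voltage phase-angle series. angles: (n_samples, n_buses)."""
    corr = np.corrcoef(angles.T)
    np.fill_diagonal(corr, 0.0)
    return np.abs(corr)

def detect_outage(pre_window, post_window, threshold=0.5):
    """Flag the bus pair whose dependency weight drops most across a
    disturbance; return the pair, or None if no drop exceeds the threshold."""
    drop = (angle_dependency_graph(pre_window)
            - angle_dependency_graph(post_window))
    i, j = np.unravel_index(np.argmax(drop), drop.shape)
    return (int(i), int(j)) if drop[i, j] > threshold else None
```

With real synchrophasor data the pre- and post-disturbance windows would come from a sliding buffer of time-aligned PMU measurements.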
Distribution System Disturbance Analysis and Outage Management Using Hybrid Data-Driven and Physics-Based Approaches
Securing cyber-power distribution systems (DS) against malicious events is critical as the integration of distributed energy resources (DERs) supports automation but increases vulnerabilities. Situational awareness utilizing power data (e.g., data from distribution phasor measurement units (D-PMUs)) and cyber data (e.g., network packet data) is the main focus of this dissertation, by means of which an opportunity for real-time monitoring and decision-making is provided. To further improve the DS's reliable and resilient operation, this Ph.D. work aims to develop an automated tool consisting of multiple modules to precisely investigate any type of data anomaly, followed by root cause finding. For this purpose, a data aggregation scheme is developed to synchronize the resolution and time stamps of multiple metering sources throughout the DS, using an enhancement of the conventional Kalman Filter named the Ensemble Extended Kalman Filter (EEKF). The EEKF is implemented as an automated module by exploiting real-time measurements as well as the system physics. Furthermore, this dissertation develops online cyber-physical event detection and classification and proposes a novel Outage Root Cause Analysis (ORCA) system. Different sections of the work have been tested on IEEE and OPAL-RT test systems as well as real field measurements from actual installed hardware.
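For flavor, below is the textbook ensemble Kalman analysis step that ensemble-style filters build on. This is not the dissertation's EEKF, which couples the ensemble idea with an extended (nonlinear) model and the system physics for multi-source synchronization; it only shows the generic update that fuses an ensemble of state samples with a noisy measurement.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_cov, rng):
    """Generic ensemble Kalman analysis step (stochastic/perturbed-obs form).
    ensemble: (n_members, n_state); obs: (n_obs,); H: (n_obs, n_state)."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)        # state anomalies
    P = X.T @ X / (n - 1)                       # sample covariance
    S = H @ P @ H.T + obs_cov                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    # Perturb the observation once per ensemble member
    perturbed = obs + rng.multivariate_normal(
        np.zeros(len(obs)), obs_cov, size=n)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T
```

After the update, the ensemble mean shifts toward the measurement in proportion to the gain, which balances the ensemble spread against the measurement noise covariance.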
Diagnosis of Fault Modes Masked by Control Loops with an Application to Autonomous Hovercraft Systems
This paper introduces a methodology for the design, testing and assessment of incipient failure detection techniques for failing components/systems of an autonomous vehicle that are masked or hidden by feedback control loops. It is recognized that the optimum operation of critical assets (aircraft, autonomous systems, etc.) may be compromised by feedback control loops, which mask severe fault modes while compensating for typical disturbances. Detrimental consequences of such occurrences include the inability to detect incipient failures expeditiously and accurately, loss of control, and inefficient operation of assets in the form of fuel overconsumption and adverse environmental impact. We pursue a systems engineering process to design, construct and test an autonomous hovercraft instrumented appropriately for improved autonomy. Hidden fault modes are detected with performance guarantees by invoking a Bayesian estimation approach called particle filtering. Simulation and experimental studies are employed to demonstrate the efficacy of the proposed methods.
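The predict/weight/resample cycle behind particle filtering can be sketched on a toy problem: estimating a slowly drifting hidden fault magnitude from noisy residual measurements. The random-walk fault model and Gaussian likelihood below are illustrative assumptions, not the paper's hovercraft dynamics.

```python
import numpy as np

def particle_filter(measurements, n_particles=500, process_std=0.05,
                    meas_std=0.2, seed=0):
    """Bootstrap particle filter tracking a hidden scalar fault magnitude."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # prior over fault size
    estimates = []
    for z in measurements:
        # Predict: random-walk model of slow fault growth
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight: Gaussian likelihood of the observed residual
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        w /= w.sum()
        # Resample (multinomial) and record the posterior mean
        particles = rng.choice(particles, size=n_particles, p=w)
        estimates.append(particles.mean())
    return np.array(estimates)
```

Because the fault is hidden by the control loop, in practice the measurements fed to such a filter would be residuals between the commanded and observed plant behavior rather than raw sensor outputs.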
ContextMix: A context-aware data augmentation method for industrial visual inspection systems
While deep neural networks have achieved remarkable performance, data
augmentation has emerged as a crucial strategy to mitigate overfitting and
enhance network performance. These techniques hold particular significance in
industrial manufacturing contexts. Recently, image mixing-based methods have
been introduced, exhibiting improved performance on public benchmark datasets.
However, their application to industrial tasks remains challenging. The
manufacturing environment generates massive amounts of unlabeled data on a
daily basis, with only a few instances of abnormal data occurrences. This leads
to severe data imbalance. Thus, creating well-balanced datasets is not
straightforward due to the high costs associated with labeling. Nonetheless,
this is a crucial step for enhancing productivity. For this reason, we
introduce ContextMix, a method tailored for industrial applications and
benchmark datasets. ContextMix generates novel data by resizing entire images
and integrating them into other images within the batch. This approach enables
our method to learn discriminative features based on varying sizes from resized
images and train informative secondary features for object recognition using
occluded images. With the minimal additional computation cost of image
resizing, ContextMix enhances performance compared to existing augmentation
techniques. We evaluate its effectiveness across classification, detection, and
segmentation tasks using various network architectures on public benchmark
datasets. Our proposed method demonstrates improved results across a range of
robustness tasks. Its efficacy in real industrial environments is particularly
noteworthy, as demonstrated using the passive component dataset.
Comment: Accepted to EAA
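A rough sketch of an image-mixing augmentation in the spirit of ContextMix: each image in a batch receives a resized copy of another batch image pasted at a random location, and labels are mixed in proportion to the pasted area. The label-mixing rule, patch scale, and placement here are assumptions for illustration; consult the paper for the actual method.

```python
import numpy as np

def contextmix_batch(images, labels, num_classes, scale=0.5, rng=None):
    """Illustrative ContextMix-style augmentation.
    images: (B, H, W, C) float array; labels: (B,) int array.
    Returns mixed images and soft (area-weighted) one-hot labels."""
    if rng is None:
        rng = np.random.default_rng()
    b, h, w, _ = images.shape
    ph, pw = int(h * scale), int(w * scale)
    area = (ph * pw) / (h * w)
    perm = rng.permutation(b)                 # donor image for each receiver
    mixed = images.copy()
    soft = np.zeros((b, num_classes))
    eye = np.eye(num_classes)
    for i, j in enumerate(perm):
        # Nearest-neighbour resize of the WHOLE donor image (no cropping),
        # so its full context is preserved inside the receiver image.
        ys = np.arange(ph) * h // ph
        xs = np.arange(pw) * w // pw
        patch = images[j][ys][:, xs]
        top = rng.integers(0, h - ph + 1)
        left = rng.integers(0, w - pw + 1)
        mixed[i, top:top + ph, left:left + pw] = patch
        soft[i] = (1 - area) * eye[labels[i]] + area * eye[labels[j]]
    return mixed, soft
```

Resizing the entire donor image, rather than cutting a patch out of it, is what distinguishes this family of methods from crop-and-paste mixing: the pasted region keeps object context at a smaller scale while still occluding part of the receiver image.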