Wind turbine condition monitoring strategy through multiway PCA and multivariate inference
This article presents a condition monitoring strategy for wind turbines based on a statistical, data-driven model built from supervisory control and data acquisition (SCADA) data. First, a baseline model is obtained from the healthy wind turbine by means of multiway principal component analysis (MPCA). Then, when the wind turbine is monitored, newly acquired data are projected into the baseline MPCA model space. The acquired SCADA data are treated as a random process given the random nature of the turbulent wind. The objective is to decide whether the multivariate distribution obtained from the wind turbine under analysis (healthy or not) is related to the baseline one. To this end, a test for the equality of population means is performed. The outcome of the test either rejects the hypothesis (the wind turbine is faulty) or finds no evidence that the two means differ, in which case the wind turbine can be considered healthy. The methodology is evaluated on a wind turbine fault detection benchmark that uses a 5 MW high-fidelity wind turbine model and a set of eight realistic fault scenarios. Notably, for a wide range of significance levels, α ∈ [1%, 13%], the percentage of correct decisions remains at 100%, making this a promising tool for real-time wind turbine condition monitoring.
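The decision rule above — project monitored SCADA data onto the baseline MPCA space, then test for equality of population means — can be sketched with a two-sample Hotelling's T² test on synthetic scores. This is a minimal illustration, not the paper's implementation; the data, dimensions, and function name are assumptions.

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, Y):
    """Two-sample Hotelling's T^2 test for equality of multivariate means."""
    n, m, p = len(X), len(Y), X.shape[1]
    d = X.mean(0) - Y.mean(0)
    # Pooled sample covariance of the two groups
    Sp = ((n - 1) * np.cov(X, rowvar=False)
          + (m - 1) * np.cov(Y, rowvar=False)) / (n + m - 2)
    t2 = (n * m) / (n + m) * d @ np.linalg.solve(Sp, d)
    f = t2 * (n + m - p - 1) / ((n + m - 2) * p)   # F-distributed under H0
    return t2, stats.f.sf(f, p, n + m - p - 1)

# Synthetic "baseline scores" vs. healthy and faulty projections
rng = np.random.default_rng(0)
baseline = rng.normal(size=(200, 3))
healthy = rng.normal(size=(200, 3))
faulty = rng.normal(loc=1.0, size=(200, 3))     # shifted mean = fault signature
_, p_healthy = hotelling_t2(baseline, healthy)
_, p_faulty = hotelling_t2(baseline, faulty)
```

A p-value below the chosen significance level α rejects equality of means and flags the turbine as faulty.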
Comparison of different classification algorithms for fault detection and fault isolation in complex systems
Due to the lack of sufficient results in the literature, feature extraction and fault classification for hydraulic systems remain somewhat challenging. This paper compares the performance of three classifiers (namely linear support vector machine (SVM), distance-weighted k-nearest neighbor (WKNN), and decision tree (DT)) using data from optimized and non-optimized sensor set solutions. The algorithms are trained with known data and then tested with unknown data for different scenarios characterizing faults of different degrees of severity. This investigation is based solely on a data-driven approach and relies on data sets taken from experiments on the fuel system. The system used throughout this study is a typical fuel delivery system consisting of standard components such as a filter, pump, valve, nozzle, pipes, and two tanks. Running representative tests on a fuel system is problematic because of the time, cost, and reproduction constraints involved in capturing any significant degradation: simulating significant degradation requires running the system over a considerable period, which cannot be reproduced quickly and is costly.
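As a rough illustration of this kind of comparison, the three classifiers can be trained and scored side by side with scikit-learn. The synthetic dataset below is a stand-in for the paper's fuel-system features; all dataset parameters and hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for extracted sensor features and fault-severity labels
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": LinearSVC(max_iter=10000),
    "WKNN": KNeighborsClassifier(n_neighbors=5, weights="distance"),
    "DT": DecisionTreeClassifier(random_state=0),
}
# Train on "known" data, evaluate accuracy on held-out "unknown" data
scores = {name: clf.fit(Xtr, ytr).score(Xte, yte) for name, clf in models.items()}
print(scores)
```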
Damage identification in structural health monitoring: a brief review from its implementation to the use of data-driven applications
The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors; the data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure. In the latter case, the overall performance depends on the accuracy of the model and the information used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they can analyze data acquired from sensors and provide a real-time solution for decision making; however, they require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health-monitoring applications. This review covers damage detection, localization, classification, extension, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health-monitoring system. The review also includes information on the types of sensors used and on the development of data-driven algorithms for damage identification.
FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
Wireless sensor networks (WSNs) consist of large numbers of sensor nodes densely and randomly distributed over a geographical region for monitoring, identifying, and analyzing physical events. The crucial challenge in wireless sensor networks is that the sensor nodes depend on limited, non-rechargeable battery power to exchange information wirelessly, which makes managing these nodes and monitoring them for abnormal changes very difficult. Such anomalies arise from faults, including hardware and software faults, as well as attacks by intruders, all of which affect the integrity of the data collected by the network. Hence, crucial measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods to compute the fault detection accuracy at different node densities under two scenarios in regions of interest, comparing MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class SVM, and the combined FCS-MBFLEACH method. It should be noted that, in studies to date, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
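A minimal sketch of the one-class SVM component mentioned above: the detector is trained only on healthy readings and flags deviations as faults. The data, dimensions, `nu`, and fault magnitude are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(500, 4))   # normal sensor-node readings
faulty = rng.normal(5.0, 1.0, size=(50, 4))     # drifted readings from faulty nodes

# nu bounds the fraction of training points treated as outliers
detector = OneClassSVM(nu=0.05, gamma="scale").fit(healthy)
fpr = (detector.predict(healthy) == -1).mean()              # -1 marks outliers
detection_rate = (detector.predict(faulty) == -1).mean()
print(f"FPR: {fpr:.2f}, detection rate: {detection_rate:.2f}")
```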
Data-driven Soft Sensors in the Process Industry
In the last two decades, Soft Sensors have established themselves as a valuable alternative to traditional means for the acquisition of critical process variables, process monitoring, and other tasks related to process control. This paper discusses characteristics of process industry data that are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, such as the chemical, bioprocess, and steel industries. The focus of this work is on data-driven Soft Sensors because of their growing popularity, demonstrated usefulness, and huge, though not yet completely realised, potential. The main contributions of this work are a comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques, and a discussion of some open issues in Soft Sensor development and maintenance together with their possible solutions.
PreSEIS: A Neural Network-Based Approach to Earthquake Early Warning for Finite Faults
The major challenge in the development of earthquake early warning (EEW) systems is achieving robust performance at the largest possible warning time. We have developed a new method for EEW, called PreSEIS (Pre-SEISmic), that is as quick as methods based on single-station observations and, at the same time, more robust than most other approaches. At regular timesteps after the triggering of the first EEW sensor, PreSEIS estimates the most likely source parameters of an earthquake using the available information on ground motions at different sensors in a seismic network. The approach is based on two-layer feed-forward neural networks that estimate the earthquake hypocenter location, its moment magnitude, and the expansion of the evolving seismic rupture. When applied to the Istanbul Earthquake Rapid Response and Early Warning System (IERREWS), PreSEIS estimates the moment magnitudes of 280 simulated finite-fault scenarios (4.5 ≤ M ≤ 7.5) with errors of less than ±0.8 units after 0.5 sec, ±0.5 units after 7.5 sec, and ±0.3 units after 15.0 sec. Over the same time intervals, the mean location error falls from 10 km to 6 km and then to less than 5 km, respectively. Our analyses show that the uncertainties of the estimated parameters (and thus of the warnings) decrease with time, revealing a trade-off between the reliability of the warning on the one hand and the remaining warning time on the other. Moreover, the ongoing update of predictions over time allows PreSEIS to handle complex ruptures in which the largest fault slips do not occur close to the point of rupture initiation. The estimated expansions of the seismic ruptures lead to a clear enhancement of alert maps, which visualize the level and distribution of likely ground shaking in the affected region seconds before the seismic waves arrive.
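The regression step — mapping ground-motion observations at network sensors to a source parameter with a small feed-forward network — might be sketched as follows on toy data. The amplitude model, network size, and layer width are assumptions for illustration, not PreSEIS itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
mag = rng.uniform(4.5, 7.5, size=400)                 # "true" magnitudes
# Toy observation model: each of 20 stations reports a noisy amplitude proxy
obs = mag[:, None] + rng.normal(0.0, 0.3, size=(400, 20))

# Single hidden layer (i.e., a two-layer feed-forward network)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(obs[:300], mag[:300])
mae = np.abs(net.predict(obs[300:]) - mag[300:]).mean()
print(f"mean |magnitude error|: {mae:.2f}")
```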
Characterization of Model-Based Detectors for CPS Sensor Faults/Attacks
A vector-valued model-based cumulative sum (CUSUM) procedure is proposed for
identifying faulty/falsified sensor measurements. First, given the system
dynamics, we derive tools for tuning the CUSUM procedure in the fault/attack
free case to fulfill a desired detection performance (in terms of false alarm
rate). We use the widely-used chi-squared fault/attack detection procedure as a
benchmark to compare the performance of the CUSUM. In particular, we
characterize the state degradation that a class of attacks can induce to the
system while enforcing that the detectors (CUSUM and chi-squared) do not raise
alarms. In doing so, we find the upper bound of state degradation that is
possible by an undetected attacker. We quantify the advantage of using a
dynamic detector (CUSUM), which leverages the history of the state, over a
static detector (chi-squared) which uses a single measurement at a time.
Simulations of a chemical reactor with heat exchanger are presented to
illustrate the performance of our tools.Comment: Submitted to IEEE Transactions on Control Systems Technolog
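The dynamic-vs-static contrast can be illustrated with a toy one-sided CUSUM on residual magnitudes next to a per-sample chi-squared-style test. The residual model, bias, and threshold values are assumptions, not the paper's tuning.

```python
import numpy as np

def cusum_alarms(residuals, bias, threshold):
    """One-sided CUSUM on |residual|: alarm when the drift-corrected
    cumulative sum exceeds the threshold, then reset."""
    s, alarms = 0.0, []
    for k, r in enumerate(residuals):
        s = max(0.0, s + abs(r) - bias)   # accumulate evidence over time
        if s > threshold:
            alarms.append(k)
            s = 0.0
    return alarms

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 500)            # attack-free sensor residuals
r[250:] += 1.5                           # persistent bias injected at k = 250
static_alarms = np.flatnonzero(r**2 > 9.0)           # chi-squared-style, per sample
dynamic_alarms = cusum_alarms(r, bias=1.0, threshold=10.0)
```

The CUSUM accumulates small, persistent deviations that a single-measurement test can miss, which is the advantage the paper quantifies.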
A Framework for Robust Assimilation of Potentially Malign Third-Party Data, and its Statistical Meaning
This paper presents a model-based method for fusing data from multiple sensors with a hypothesis-test-based component for rejecting potentially faulty or otherwise malign data. Our framework is based on an extension of the classic particle filter algorithm for real-time state estimation of uncertain systems with nonlinear dynamics and partial, noisy observations. This extension, grounded in classical statistical theory, applies statistical tests against the system's observation model. We discuss the application of the two major statistical testing frameworks, Fisherian significance testing and Neyman-Pearsonian hypothesis testing, to the Monte Carlo and sensor fusion settings. The Monte Carlo Neyman-Pearson test we develop is useful when one has a reliable model of faulty data, while the Fisherian one is applicable when one may not have a model of faults, as can occur when dealing with third-party data, such as GNSS data from transportation system users. These statistical tests can be combined with a particle filter to obtain a Monte Carlo state estimation scheme that is robust to faulty or outlier data. We present a synthetic freeway traffic state estimation problem in which the filters are able to reject simulated faulty GNSS measurements. The fault-model-free Fisherian filter, while underperforming the Neyman-Pearson one when the latter has an accurate fault model, outperforms it when the assumed fault model is incorrect. (Comment: IEEE Intelligent Transportation Systems Magazine, special issue on GNSS-based positioning.)
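A minimal sketch of the general idea — a bootstrap particle filter step that discards a measurement when a Fisher-style significance test against the predicted observation fails. This is a 1-D toy, not the paper's traffic filter; the dynamics, noise levels, and significance level are assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
SIGMA_OBS = 0.5     # assumed observation noise std

def pf_step(particles, z, alpha=0.01):
    """One bootstrap-filter step; the measurement is skipped when a
    two-sided significance test against the predicted observation fails."""
    particles = particles + rng.normal(0.0, 0.1, len(particles))   # propagate
    spread = math.hypot(particles.std(), SIGMA_OBS)
    # Gaussian two-sided p-value of z under the predicted observation density
    pval = math.erfc(abs(z - particles.mean()) / (spread * math.sqrt(2)))
    if pval < alpha:                 # likely faulty/outlier datum: reject it
        return particles, False
    w = np.exp(-0.5 * ((z - particles) / SIGMA_OBS) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), len(particles), p=w)          # resample
    return particles[idx], True

particles = rng.normal(0.0, 1.0, 1000)
particles, used = pf_step(particles, 0.2)            # plausible measurement
particles, used_outlier = pf_step(particles, 25.0)   # gross GNSS-like outlier
```

The plausible measurement updates the filter, while the gross outlier is rejected and the prediction is carried forward unweighted.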