
    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies. Game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine-tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine-tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network. Error-correcting codes help in correcting the erroneous information from these Byzantines and thereby counter their attack. The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks, where the effect of unskilled humans sharing beliefs with a central observer called the CEO is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls under the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit as the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems is established. This result can be summarized as follows: sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions is easier still (discrete). Besides theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network.
The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling for the design of large human-machine systems are discussed. Furthermore, an error-correcting-codes-based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of the proposed approach in comparison with the majority-voting-based approach are highlighted using simulated and real datasets. In the final part of the thesis, a human-machine inference framework is developed in which humans and machines interact to perform complex tasks faster and more efficiently. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines constantly interact with each other to perform even the simplest of tasks. While machines perform best in some tasks, humans still give better results in tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be explored further.
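    As a concrete illustration of the error-correcting-codes idea for fusing unreliable responses, the sketch below decodes a vector of binary answers by minimum Hamming distance against a code matrix whose rows represent classes. The code matrix, the 20% worker error rate, and the helper names are illustrative assumptions, not the thesis's actual design.

    ```python
    import numpy as np

    # Hypothetical example: classify an object into one of 4 classes using
    # 7 unreliable binary answers (one per worker). Rows are codewords; all
    # pairwise Hamming distances are 4, so isolated errors are tolerated.
    CODE = np.array([
        [0, 0, 0, 1, 1, 1, 1],   # class 0
        [0, 1, 1, 0, 0, 1, 1],   # class 1
        [1, 0, 1, 0, 1, 0, 1],   # class 2
        [1, 1, 0, 1, 0, 0, 1],   # class 3
    ])

    def decode(responses):
        """Return the class whose codeword is closest in Hamming distance."""
        distances = (CODE != responses).sum(axis=1)
        return int(np.argmin(distances))

    rng = np.random.default_rng(0)
    true_class = 2
    answers = CODE[true_class].copy()
    flips = rng.random(answers.size) < 0.2   # 20% of workers answer wrongly
    answers[flips] ^= 1
    print(decode(answers))                   # usually recovers class 2
    ```

    In contrast to plain majority voting over repeated identical questions, the coded design spreads information across distinct easy binary microtasks and lets the decoder absorb a bounded number of erroneous responses.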

    Optimized Data Rate Allocation for Dynamic Sensor Fusion over Resource Constrained Communication Networks

    This paper presents a new method to solve a dynamic sensor fusion problem. We consider a large number of remote sensors that measure a common Gauss-Markov process, together with encoders that transmit the measurements to a data fusion center over a resource-constrained communication network. The proposed approach heuristically minimizes a weighted sum of communication costs subject to a constraint on the state estimation error at the fusion center. The communication costs are quantified as the expected bitrates from the sensors to the fusion center. We show that the problem as formulated is a difference-of-convex program and apply the convex-concave procedure (CCP) to obtain a heuristic solution. For numerical studies, we consider a 1D heat transfer model and a model of 2D target tracking by a drone swarm. Through these simulations, we observe that the proposed approach tends to assign zero data rate to unnecessary sensors, indicating that it is sparsity-promoting and serves as an effective sensor-selection heuristic.
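    To make the convex-concave procedure concrete, here is a minimal sketch on a toy difference-of-convex problem, not the paper's rate-allocation formulation: minimize f(x) - g(x) with f(x) = (x - 2)^2 and g(x) = x^2, both convex. CCP repeatedly replaces g by its affine lower bound at the current iterate and solves the resulting convex surrogate; the cvxpy usage and the initial point are assumptions.

    ```python
    import cvxpy as cp

    # Toy difference-of-convex program (NOT the paper's model):
    # minimize (x - 2)^2 - x^2 over |x| <= 3. The true objective is
    # -4x + 4, so the minimizer sits at the boundary x = 3.
    x = cp.Variable()
    x_k = 0.0                                   # assumed initial point
    for _ in range(20):
        grad_g = 2 * x_k                        # gradient of g(x) = x^2 at x_k
        g_lin = x_k ** 2 + grad_g * (x - x_k)   # affine lower bound on g
        cp.Problem(cp.Minimize(cp.square(x - 2) - g_lin),
                   [cp.abs(x) <= 3]).solve()
        x_k = float(x.value)                    # re-linearize at the new point
    print(x_k)                                  # converges to x = 3
    ```

    Each surrogate is convex, so every iteration is a tractable solve; the same pattern scales to vector rate variables with estimation-error constraints.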

    Novel pattern recognition methods for classification and detection in remote sensing and power generation applications

    Novel pattern recognition methods for classification and detection in remote sensing and power generation applications

    Algorithms for sensor validation and multisensor fusion

    Existing techniques for sensor validation and sensor fusion are often based on analytical sensor models. Such models can be arbitrarily complex, and consequently Gaussian distributions are often assumed, generally with a detrimental effect on overall system performance. A holistic approach has therefore been adopted in order to develop two novel and complementary approaches to sensor validation and fusion based on empirical data. The first uses the Nadaraya-Watson kernel estimator to provide competitive sensor fusion. The new algorithm is shown to reliably detect and compensate for bias errors, spike errors, hardover faults, drift faults, and erratic operation affecting up to three of the five sensors in the array. The inherent smoothing action of the kernel estimator provides effective noise cancellation, and the fused result is more accurate than the single 'best' sensor. A Genetic Algorithm has been used to optimise the Nadaraya-Watson fuser design. The second approach uses analytical redundancy to provide an on-line sensor status output μ_H ∈ [0, 1], where μ_H = 1 indicates that the sensor output is valid and μ_H = 0 that the sensor has failed. This fuzzy measure is derived from change-detection parameters based on spectral analysis of the sensor output signal. The validation scheme can reliably detect a wide range of sensor fault conditions. An appropriate context-dependent fusion operator can then be used to perform competitive, cooperative, or complementary sensor fusion, with a status output from the fuser providing a useful qualitative indication of the status of the sensors used to derive the fused result. The operation of both schemes is illustrated using data obtained from an array of thick-film metal oxide pH sensor electrodes. An ideal pH electrode will sense only the activity of hydrogen ions; however, the selectivity of the metal oxide device is worse than that of the conventional glass electrode. The use of sensor fusion can therefore reduce measurement uncertainty by combining readings from multiple pH sensors having complementary responses. The array can be conveniently fabricated by screen-printing sensors using different metal oxides onto a single substrate.
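    As a sketch of the kernel-based fusion idea, the snippet below implements a basic Nadaraya-Watson estimator with a Gaussian kernel: the fused output is a kernel-weighted average of training targets around the query vector of sensor readings. The synthetic pH-like data, the bandwidth, and the function name are illustrative assumptions, not the thesis's tuned design.

    ```python
    import numpy as np

    def nw_fuse(X_train, y_train, x_query, bandwidth=0.5):
        """Kernel-weighted average of training targets near the query point."""
        d2 = np.sum((X_train - x_query) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2 * bandwidth ** 2))          # Gaussian kernel weights
        return np.dot(w, y_train) / np.sum(w)

    rng = np.random.default_rng(1)
    truth = rng.uniform(4, 10, size=200)                     # reference pH values
    readings = truth[:, None] + rng.normal(0, 0.3, (200, 5)) # 5 noisy sensors
    query = truth[0] + rng.normal(0, 0.3, 5)                 # new reading vector
    print(nw_fuse(readings, truth, query))                   # fused estimate
    ```

    The smoothing action of the kernel average is what suppresses individual sensor noise; the bandwidth (optimised by a Genetic Algorithm in the thesis) trades bias against noise rejection.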

    Fault Diagnosis of Rotating Machinery using Improved Entropy Measures

    Fault diagnosis of rotating machinery is of considerable significance for ensuring high reliability and safety in industrial machinery. The key to fault diagnosis lies in detecting potential incipient faults, recognizing fault patterns, and identifying degrees of failure in machinery. Data-driven fault diagnosis often requires extracting useful feature representations from measurements to support diagnostic decision-making. Entropy measures, as suitable non-linear complexity indicators, directly estimate dynamic changes in measurements that are challenging to quantify with conventional statistical indicators. Compared to single-scale entropy measures, multiple-scale entropy measures have been increasingly applied to time series complexity analysis by quantifying entropy values over a range of temporal scales. However, traditional multiple-scale entropy measures face a number of challenges in analyzing bearing signals for fault detection. Specifically, a large majority of multiple-scale entropy methods neglect high-frequency information in bearing vibration signal analysis. Moreover, the data length of the transformed multiple-scale signals is greatly reduced as the scale factor increases, which can introduce incoherence and bias in entropy values. Lastly, the non-linear and non-stationary behavior of vibration signals due to interference and noise may reduce the diagnostic performance of traditional entropy methods in bearing health identification, especially in complex industrial settings. This dissertation proposes a novel multiple-scale entropy measure, named Adaptive Multiscale Weighted Permutation Entropy (AMWPE), for extracting fault features associated with complexity change in bearing vibration analysis. A new scale-extraction mechanism, the adaptive fine-to-coarse (F2C) procedure, is presented to generate multiple-scale time series from the original signal. It has the advantages of extracting low- and high-frequency information from measurements and generating improved multiple-scale time series with a hierarchical structure. A numerical evaluation is carried out to study the performance of the AMWPE measure in analyzing the complexity change of synthetic signals. The results demonstrate that the AMWPE algorithm provides highly consistent and stable entropy estimates. It also exhibits high robustness to noise when analyzing noisy bearing signals, in comparison with traditional entropy methods. Additionally, a new bearing diagnosis method is put forth, in which the AMWPE method is applied for entropy analysis and a multi-class support vector machine classifier is used to identify bearing fault patterns. Three experimental case studies are carried out to investigate the effectiveness of the proposed method for bearing diagnosis. Comparative studies assess the diagnostic performance of the proposed entropy method against traditional entropy methods in terms of computational time of entropy estimation, feature representation, and diagnostic accuracy. Further, noisy bearing signals with different signal-to-noise ratios are analyzed using various entropy measures to study their robustness to noise in bearing diagnosis. Additionally, the developed adaptive F2C procedure can be extended to a variety of entropy algorithms based on the improved single-scale entropy method used in entropy estimation.
In combination with artificial intelligence techniques, the improved entropy algorithms are expected to be applicable to machine health monitoring and intelligent fault diagnosis in complex industrial machinery. They are also suitable for evaluating the complexity and irregularity of other non-stationary signals measured from non-linear systems, such as acoustic emission signals and physiological signals.
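    As a sketch of the weighted-permutation-entropy building block that a multiscale method such as AMWPE evaluates at each scale, the snippet below counts ordinal patterns of sliding windows and weights each occurrence by the window's variance; the embedding dimension, delay, and test signals are illustrative assumptions.

    ```python
    import numpy as np
    from math import factorial, log

    def weighted_permutation_entropy(x, m=4, tau=1):
        """Single-scale WPE: ordinal patterns weighted by window variance."""
        weights = {}
        n = len(x) - (m - 1) * tau
        for i in range(n):
            window = x[i:i + m * tau:tau]
            pattern = tuple(np.argsort(window))   # ordinal (rank) pattern
            weights[pattern] = weights.get(pattern, 0.0) + np.var(window)
        total = sum(weights.values())
        p = np.array([w / total for w in weights.values() if w > 0])
        return float(-np.sum(p * np.log(p)) / log(factorial(m)))  # in [0, 1]

    rng = np.random.default_rng(2)
    print(weighted_permutation_entropy(rng.normal(size=2000)))           # ~1: noise
    print(weighted_permutation_entropy(np.sin(0.05 * np.arange(2000))))  # lower: regular
    ```

    Variance weighting is what lets high-amplitude (often impact-related) windows dominate the pattern distribution, which plain permutation entropy ignores.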

    An Integrated Fuzzy Inference Based Monitoring, Diagnostic, and Prognostic System

    To date, the majority of the research related to the development and application of monitoring, diagnostic, and prognostic systems has been exclusive in the sense that only one of the three areas is the focus of the work. While previous research advances each of the respective fields, the end result is a variable grab bag of techniques that address each problem independently. Also, the new field of prognostics is lacking in the sense that few methods have been proposed that produce estimates of the remaining useful life (RUL) of a device or that can be realistically applied to real-world systems. This work addresses both problems by developing the nonparametric fuzzy inference system (NFIS), which is adapted for monitoring, diagnosis, and prognosis, and then proposing the path classification and estimation (PACE) model, which can be used to predict the RUL of a device that does or does not have a well-defined failure threshold. To test and evaluate the proposed methods, they were applied to detect, diagnose, and prognose faults and failures in the hydraulic steering system of a deep oil exploration drill. The monitoring system implementing an NFIS predictor and sequential probability ratio test (SPRT) detector produced detection rates comparable to a monitoring system implementing an autoassociative kernel regression (AAKR) predictor and SPRT detector, specifically 80% vs. 85% for the NFIS and AAKR monitors, respectively. It was also found that the NFIS monitor produced fewer false alarms. Next, the monitoring system outputs were used to generate symptom patterns for k-nearest neighbor (kNN) and NFIS classifiers that were trained to diagnose different fault classes. The NFIS diagnoser was shown to significantly outperform the kNN diagnoser, with overall accuracies of 96% vs. 89%, respectively. Finally, PACE implemented with the NFIS was used to predict the RUL for different failure modes. The errors of the RUL estimates produced by the PACE-NFIS prognosers ranged from 1.2-11.4 hours, with 95% confidence intervals (CIs) from 0.67-32.02 hours, which are significantly better than the population-based prognoser estimates with errors of ~45 hours and 95% CIs of ~162 hours.
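    To illustrate the SPRT detector used in the monitoring stage, here is a minimal sketch of a Gaussian mean-shift SPRT applied to predictor residuals; the assumed mean shift, noise level, error rates, and restart-on-healthy rule are illustrative choices, not the system's calibrated values.

    ```python
    import numpy as np

    # H0: residuals ~ N(0, s^2) (healthy) vs H1: residuals ~ N(m1, s^2) (faulted).
    def sprt(residuals, m1=1.0, s=1.0, alpha=0.01, beta=0.1):
        upper = np.log((1 - beta) / alpha)      # cross above: declare fault
        lower = np.log(beta / (1 - alpha))      # cross below: declare healthy
        llr = 0.0
        for t, r in enumerate(residuals):
            llr += (m1 / s**2) * (r - m1 / 2)   # Gaussian log-likelihood ratio step
            if llr >= upper:
                return "fault", t
            if llr <= lower:
                llr = 0.0                       # restart test after H0 decision
        return "healthy", len(residuals)

    rng = np.random.default_rng(3)
    clean = rng.normal(0, 1, 200)
    faulty = np.concatenate([clean, rng.normal(1.0, 1, 50)])  # shift after t = 200
    print(sprt(faulty))
    ```

    Unlike a fixed threshold on each residual, the sequential test accumulates evidence, which is why it can hold a specified false-alarm rate while still detecting small drifts.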

    Convergence of Intelligent Data Acquisition and Advanced Computing Systems

    This book is a collection of published articles from the Sensors Special Issue on "Convergence of Intelligent Data Acquisition and Advanced Computing Systems". It includes extended versions of the conference contributions from the 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS’2019), held in Metz, France, as well as external contributions.

    Level based sampling techniques for energy conservation in large scale wireless sensor networks

    As the size and node density of wireless sensor networks (WSNs) increase, the energy conservation problem becomes more critical and conventional methods become inadequate. This dissertation addresses two different problems in large-scale WSNs where all sensors are involved in monitoring, but the traditional practice of periodic transmissions of observations from all sensors would drain an excessive amount of energy. In the first problem, monitoring of the spatial distribution of a two-dimensional correlated signal using a large-scale WSN is considered. It is assumed that sensor observations are heavily affected by noise. We present an approach based on detecting contour lines of the signal distribution to estimate the spatial distribution of the signal without involving all sensors in the network. Energy-efficient algorithms are proposed for detecting and tracking the temporal variation of the contours. Optimal contour levels that minimize the estimation error and a practical approach for selecting contour levels are explored. The performance of the proposed algorithm is evaluated with different types of contour levels and detection parameters. In the second problem, a WSN is considered that performs health monitoring of equipment in a power substation. The monitoring applications require regular transmissions of sensor observations from all sensor nodes to the base station, which is very costly in terms of communication. To address this problem, an efficient sampling technique using level crossings (LCS) is proposed. This technique saves communication cost by suppressing transmissions of data samples that do not convey much information. The performance and cost of LCS for several different level-selection schemes are investigated. The number of required levels and the maximum sampling period for practical implementation of LCS are studied. Finally, in an experimental implementation of LCS with MICAz motes, the performance and cost of LCS for temperature sensing with uniformly spaced, logarithmically spaced, and combined uniform-logarithmic levels are compared with those of periodic sampling.
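    As a sketch of the level-crossing idea, the snippet below transmits a sample only when the signal moves into a different interval of a fixed level set, suppressing redundant periodic reports; the level spacing and the synthetic temperature trace are illustrative assumptions.

    ```python
    import numpy as np

    def level_crossing_samples(signal, levels):
        """Return (time, value) pairs transmitted when a level is crossed."""
        levels = np.sort(levels)
        sent = []
        last_bin = np.searchsorted(levels, signal[0])   # current level interval
        for t, v in enumerate(signal[1:], start=1):
            b = np.searchsorted(levels, v)
            if b != last_bin:                           # a level was crossed
                sent.append((t, v))
                last_bin = b
        return sent

    t = np.linspace(0, 10, 1000)
    rng = np.random.default_rng(4)
    temperature = 25 + 3 * np.sin(0.8 * t) + rng.normal(0, 0.05, t.size)
    tx = level_crossing_samples(temperature, levels=np.arange(22, 29, 1.0))
    print(len(tx), "of", t.size, "samples transmitted")
    ```

    Because slowly varying or idle periods trigger no crossings, the radio stays silent exactly when the data carry little information, which is where the energy saving comes from.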

    A Data-driven Fault Isolation and Identification Scheme for Multiple In-Phase Faults in Satellite Control Moment Gyros

    A satellite can only complete its mission successfully when all its subsystems, including the attitude control subsystem, are in healthy condition and work properly. A control moment gyroscope is a type of actuator used in the attitude control subsystems of satellites. Any fault in a control moment gyroscope can cause mission failure if it is not detected, isolated, and resolved in time. Fault isolation provides an opportunity to detect and isolate occurring faults and, if accompanied by proactive remedial actions, can avert failure and improve satellite reliability. It is also necessary to know the fault severity, both for better maintenance planning and for prioritizing corrective actions so that the more severe faults can be corrected first. In this work, an enhanced data-driven fault diagnosis scheme is introduced for high-accuracy isolation and identification of multiple in-phase faults in satellite control moment gyroscopes, a problem not previously addressed in the literature. The proposed method is based on an optimized support vector machine and an optimized support vector regressor. The results yield fault predictions with up to 95.6% accuracy for isolation and 94.9% accuracy for identification, on average. In addition, a sensitivity analysis with regard to noise, missing values, and missing sensors is performed; the results show that the proposed model is robust enough to be used in real applications.
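    As a conceptual sketch of the two-stage idea, the snippet below trains a support vector classifier to isolate which actuator is faulty and a support vector regressor to identify fault severity; the synthetic features, fault signature, and hyperparameters are illustrative assumptions, not the paper's optimized models.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC, SVR

    rng = np.random.default_rng(5)
    n = 600
    fault_id = rng.integers(0, 4, n)          # which of 4 gyros is faulted
    severity = rng.uniform(0.1, 1.0, n)       # fault magnitude
    X = rng.normal(0, 0.1, (n, 8))            # 8 synthetic telemetry features
    for i in range(n):
        # fault leaves a severity-scaled signature on the faulty gyro's features
        X[i, fault_id[i] * 2:fault_id[i] * 2 + 2] += severity[i]

    isolator = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
    identifier = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10))
    isolator.fit(X[:500], fault_id[:500])       # stage 1: which gyro
    identifier.fit(X[:500], severity[:500])     # stage 2: how severe
    print("isolation accuracy:", isolator.score(X[500:], fault_id[500:]))
    print("identification R^2:", identifier.score(X[500:], severity[500:]))
    ```

    Splitting isolation (classification) from identification (regression) mirrors the paper's structure: the discrete "which component" question and the continuous "how bad" question get separately optimized models.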