
    A Framework and Classification for Fault Detection Approaches in Wireless Sensor Networks with an Energy Efficiency Perspective

    Wireless Sensor Networks (WSNs) are increasingly regarded as a key enabling technology for realising the Internet of Things (IoT) vision. With the long-term goal of designing fault-tolerant IoT systems, this paper proposes a fault detection framework for WSNs from an energy-efficiency perspective, to facilitate the design of fault detection methods and the evaluation of their energy efficiency. Following the same design principle as the fault detection framework, the paper proposes a classification of fault detection approaches. The classification is applied to a number of fault detection approaches to compare several characteristics, namely energy efficiency, correlation model, evaluation method, and detection accuracy. The design guidelines given in this paper aim to provide insight into the better design of energy-efficient detection approaches in resource-constrained WSNs.

    Distributed Power-Line Outage Detection Based on Wide Area Measurement System

    In modern power grids, fast and reliable detection of power-line outages is an important functionality, which prevents cascading failures and facilitates accurate state estimation for monitoring the real-time conditions of the grid. However, most existing approaches for outage detection suffer from two drawbacks: (i) high computational complexity; and (ii) reliance on a centralized implementation. The high computational complexity limits practical outage detection to the case of single-line or double-line outages, while the centralized implementation raises security and privacy issues. Considering these drawbacks, the present paper proposes a distributed framework, which carries out in-network information processing and shares only boundary estimates with neighboring control areas. This novel framework relies on a convex-relaxed formulation of the line outage detection problem and leverages the alternating direction method of multipliers (ADMM) for its distributed solution. The proposed framework incurs low computational complexity, requiring only simple matrix-vector operations. We also extend this framework to exploit the sparsity of the measurement matrix and employ the LSQR algorithm to enable a warm start, which further accelerates the algorithm. Analysis and simulation tests validate the correctness and effectiveness of the proposed approaches.
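Although the abstract does not give the paper's exact formulation, convex-relaxed line-outage identification is commonly cast as a sparse recovery (lasso) problem and solved with ADMM, using exactly the kind of cheap matrix-vector iterations described above. The sketch below is a generic ADMM lasso solver on synthetic data; the matrix `A`, the outage pattern, and all parameter values are illustrative assumptions, not the paper's measurement model:

```python
import numpy as np

def soft_threshold(v, k):
    # Elementwise soft-thresholding: proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, y, lam=0.5, rho=1.0, n_iter=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 via ADMM."""
    m, n = A.shape
    # Factor the quadratic term once; iterations then cost only
    # triangular solves and matrix-vector products.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Aty = A.T @ y
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        rhs = Aty + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft_threshold(x + u, lam / rho)   # sparsity-inducing step
        u = u + x - z                          # dual (scaled) update
    return z

# Toy instance: 3 simultaneous "outages" in a 40-dim sparse signal,
# observed through 30 random linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = admm_lasso(A, y)
```

In a distributed deployment, each control area would run such updates on its local block and exchange only boundary variables, but that partitioning is beyond this sketch.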

    Energy-efficient information inference in wireless sensor networks based on graphical modeling

    This dissertation proposes a systematic approach, based on a probabilistic graphical model, to infer missing observations in wireless sensor networks (WSNs) for sustained environmental monitoring. This enables us to effectively address two critical challenges in WSNs: (1) energy-efficient data gathering despite planned communication disruptions resulting from energy-saving sleep cycles, and (2) tolerance of sensor-node failures in harsh environments. In our approach, we develop a pairwise Markov Random Field (MRF) to model the spatial correlations in a sensor network. The MRF model is first learned automatically from historical sensed data using Iterative Proportional Fitting (IPF). Once the MRF model is constructed, Loopy Belief Propagation (LBP) is employed to infer the missing data given incomplete network observations. The proposed approach is then improved in terms of energy efficiency and robustness in three respects: model building, inference, and parameter learning. The model and methods are empirically evaluated using multiple real-world sensor network data sets. The results demonstrate the merits of our proposed approaches.
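As a rough illustration of the inference step, the following minimal sum-product loopy belief propagation routine estimates the marginal of a missing (sleeping) sensor from its neighbours on a toy pairwise MRF. The network, states, and potentials are invented for the example and are not learned via IPF as in the dissertation:

```python
import numpy as np

def loopy_bp(n_nodes, edges, node_pot, edge_pot, n_iter=30):
    """Sum-product belief propagation on a pairwise MRF with K discrete states.

    node_pot: (n_nodes, K) unary potentials (evidence; uniform if missing).
    edge_pot: dict {(i, j): (K, K) array}, entry [a, b] = psi(x_i=a, x_j=b).
    Returns (n_nodes, K) approximate marginals (exact on trees).
    """
    K = node_pot.shape[1]
    nbrs = {v: [] for v in range(n_nodes)}
    msgs = {}
    for (i, j) in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
        msgs[(i, j)] = np.full(K, 1.0 / K)
        msgs[(j, i)] = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        new = {}
        for (i, j) in msgs:
            # Orient the pairwise table so axis 0 indexes the sender i.
            psi = edge_pot[(i, j)] if (i, j) in edge_pot else edge_pot[(j, i)].T
            # Product of i's evidence and all incoming messages except j's.
            prod = node_pot[i].copy()
            for k in nbrs[i]:
                if k != j:
                    prod = prod * msgs[(k, i)]
            m = psi.T @ prod          # marginalise out the sender's state
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = node_pot.copy()
    for v in range(n_nodes):
        for k in nbrs[v]:
            beliefs[v] = beliefs[v] * msgs[(k, v)]
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Three sensors in a chain; states 0/1 = "low"/"high" reading.
# Sensor 1 is asleep (uniform evidence); both neighbours read "high".
smooth = np.array([[0.8, 0.2],
                   [0.2, 0.8]])      # neighbouring sensors tend to agree
node_pot = np.array([[0.1, 0.9],
                     [0.5, 0.5],
                     [0.1, 0.9]])
beliefs = loopy_bp(3, [(0, 1), (1, 2)], node_pot,
                   {(0, 1): smooth, (1, 2): smooth})
```

Because the toy graph is a chain (a tree), LBP returns the exact marginal: the missing sensor is inferred to be in the "high" state with high probability.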

    In-situ Data Analytics In Cyber-Physical Systems

    Cyber-Physical System (CPS) is an engineered system in which sensing, networking, and computing are tightly coupled with the control of physical entities. To enable security, scalability, and resiliency, new data analytics methodologies are required for computing, monitoring, and optimization in CPS. This work investigates the data analytics challenges in CPS through two case studies: a smart grid and a seismic imaging system. For the smart grid, this work provides a complete solution for system management based on novel in-situ data analytics designs. We first propose methodologies for two important tasks of power system monitoring: grid topology change detection and power-line outage detection. To address the issue of low measurement redundancy in topology identification, particularly in the low-level distribution network, we develop a maximum a posteriori mechanism, which is capable of embedding prior information on the breakers' status to enhance identification accuracy. In power-line outage detection, existing approaches suffer from high computational complexity and from security issues raised by centralized implementation. Instead, this work presents a distributed data analytics framework, which carries out in-network processing and incurs low computational complexity, requiring only simple matrix-vector multiplications. To complete the system functionality, we also propose a new power grid restoration strategy involving data analytics for topology reconfiguration and resource planning after faults or changes. In the seismic imaging system, we develop several innovative in-situ seismic imaging schemes in which each sensor node computes the tomography based on its partial information and through gossip with local neighbors. The seismic data are originally generated in a distributed fashion. Different from the conventional approach of first collecting the data and then processing it, our proposed in-situ data computing methodology is much more efficient.
The underlying mechanisms avoid the bandwidth bottleneck, since all data are processed in a distributed manner and only limited decisional information is communicated. Furthermore, the proposed algorithms can deliver quicker insights than the state of the art in seismic imaging. Hence they are more promising solutions for real-time in-situ data analytics, which is in high demand in disaster-monitoring applications. Through extensive experiments, we demonstrate that the proposed data computing methods achieve near-optimal, high-quality seismic tomography, retain low communication cost, and provide real-time seismic data analytics.
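The gossip-with-local-neighbors idea above can be illustrated in miniature with randomized pairwise averaging, a standard gossip primitive in which nodes converge to the network-wide mean while exchanging values only along edges. The ring topology and measurements below are made up for the example and are far simpler than the tomography computation in the dissertation:

```python
import numpy as np

def gossip_average(values, edges, n_rounds=2000, seed=1):
    """Randomized pairwise gossip: each round, one edge is chosen uniformly
    and its two endpoints replace their values with the pair average.
    The global sum is preserved, so all nodes converge to the mean."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).copy()
    for _ in range(n_rounds):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

# Ring of 6 sensor nodes, each starting with a local measurement.
edges = [(i, (i + 1) % 6) for i in range(6)]
vals = [2.0, 4.0, 6.0, 8.0, 1.0, 3.0]
x = gossip_average(vals, edges)
print(x)  # all entries converge to the mean, 4.0
```

No node ever sees the full data set: only pairwise exchanges occur, which is the property that avoids the bandwidth bottleneck described above.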

    Computational Intelligence in Healthcare

    This book is a printed edition of the Special Issue Computational Intelligence in Healthcare that was published in Electronics.

    Computational Intelligence in Healthcare

    The volume of patient health data has been estimated to have reached 2,314 exabytes by 2020. Traditional data analysis techniques are unsuitable for extracting useful information from such a vast quantity of data. Thus, intelligent data analysis methods combining human expertise and computational models for accurate and in-depth data analysis are necessary. The technological revolution and medical advances made possible by combining vast quantities of available data, cloud computing services, and AI-based solutions can provide expert insight and analysis on a mass scale and at relatively low cost. Computational intelligence (CI) methods, such as fuzzy models, artificial neural networks, evolutionary algorithms, and probabilistic methods, have recently emerged as promising tools for the development and application of intelligent systems in healthcare practice. CI-based systems can learn from data and evolve with changes in their environment, taking into account the uncertainty that characterizes health data, including omics, clinical, sensor, and imaging data. The use of CI in healthcare can improve the processing of such data to develop intelligent solutions for prevention, diagnosis, treatment, and follow-up, as well as for the analysis of administrative processes. The present Special Issue on computational intelligence for healthcare is intended to show the potential and the practical impact of CI techniques in challenging healthcare applications.

    Brain Tumor Diagnosis Support System: A decision Fusion Framework

    An important factor in providing effective and efficient therapy for brain tumors is early and accurate detection, which can increase survival rates. Current image-based tumor detection and diagnosis techniques depend heavily on interpretation by neuro-specialists and/or radiologists, making the evaluation process time-consuming and prone to human error and subjectivity. Moreover, widespread use of MR spectroscopy requires specialized processing and assessment of the data, as well as clear and fast presentation of the results as images or maps for routine clinical interpretation of an exam. Automatic brain tumor detection and classification have the potential to offer greater efficiency and more accurate predictions. However, the accuracy of automatic detection and classification techniques tends to depend on the specific image modality and is well known to vary from technique to technique. For this reason, it is prudent to examine the variations in the performance of these methods in order to obtain consistently high levels of accuracy. The goal of the proposed framework is to design, implement, and evaluate classification software for discerning various brain tumor types on magnetic resonance imaging (MRI) using textural features. This thesis introduces a brain tumor detection support system that employs a variety of tumor classifiers. The system is designed as a decision fusion framework that enables these multiple classifiers to analyze medical images, such as those obtained from MRI. The fusion procedure is grounded in the Dempster-Shafer theory of evidence. Numerous experimental scenarios have been implemented to validate the efficiency of the proposed framework. Compared with alternative approaches, the results show that the methodology developed in this thesis achieves higher accuracy and higher computational efficiency.
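Dempster-Shafer evidence fusion, on which the framework is grounded, combines mass functions from independent sources with Dempster's rule of combination. The sketch below implements the standard rule over a hypothetical three-class tumor frame; the class names and mass values are invented for illustration and are not taken from the thesis:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset (focal element) -> mass.
    Masses of intersecting focal elements multiply; mass assigned to
    disjoint pairs is the conflict, renormalized away."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical tumor classifiers over the frame {glioma, meningioma,
# pituitary}; each also leaves some mass on the whole frame (ignorance).
G, M, P = "glioma", "meningioma", "pituitary"
theta = frozenset({G, M, P})
m1 = {frozenset({G}): 0.6, frozenset({M}): 0.1, theta: 0.3}
m2 = {frozenset({G}): 0.5, frozenset({P}): 0.2, theta: 0.3}
fused = dempster_combine(m1, m2)
```

Here both classifiers lean toward glioma, so the fused belief in {glioma} exceeds either individual mass, which is the reinforcement behavior a decision fusion framework exploits.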

    Logging Statements Analysis and Automation in Software Systems with Data Mining and Machine Learning Techniques

    Log files are widely used to record runtime information of software systems, such as the timestamp of an event, the name or ID of the component that generated the log, and parts of the state of a task execution. The rich information in logs enables system developers (and operators) to monitor the runtime behavior of their systems and to track down system problems in development and production settings. With the ever-increasing scale and complexity of modern computing systems, the volume of logs is growing rapidly. For example, eBay reported in 2018 that logs were generated on their servers at a rate of several petabytes per day [17]. The traditional way of log analysis, which relies largely on manual inspection (e.g., searching for error/warning keywords or grep), has therefore become an inefficient, labor-intensive, and error-prone task. The growth of logs has spurred the emergence of automated tools and approaches for log mining and analysis. In parallel, embedding logging statements in the source code is a manual and error-prone task, and developers may forget to add a logging statement to the software's source code. To address this logging challenge, many efforts have aimed to automate logging statements in the source code, and many tools have been proposed to perform large-scale log file analysis using machine learning and data mining techniques. However, the current logging process is still mostly manual, and thus proper placement and content of logging statements remain challenges. To overcome these challenges, methods that aim to automate log placement and content prediction, i.e., `where and what to log', are of high interest. In addition, approaches that can automatically mine and extract insight from large-scale logs are also well sought after.
Thus, in this research, we focus on predicting log statements, and for this purpose we perform an experimental study on open-source Java projects. We introduce a log-aware code-clone detection method to predict the location and description of logging statements. Additionally, we incorporate natural language processing (NLP) and deep learning methods to further improve the prediction of log statement descriptions. We also introduce deep-learning-based approaches for automated analysis of software logs. In particular, we analyze execution logs and extract natural language characteristics of logs to enable the application of natural language models to automated log file analysis. Then, we propose automated tools for analyzing log files and measuring the information gain from logs for different log analysis tasks, such as anomaly detection. We then continue our NLP-enabled approach by leveraging state-of-the-art language models, i.e., Transformers, to perform automated log parsing.
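A core preprocessing step behind the automated log parsing mentioned above is collapsing raw log lines onto templates by masking variable fields, so that lines emitted by the same logging statement map to one vocabulary item. The sketch below shows a minimal regex-based version of this idea; the masking rules, the `HEADER: message` layout, and the sample log lines are illustrative assumptions, not the thesis's Transformer-based parser:

```python
import re
from collections import Counter

# Masking rules applied in order: more specific patterns (IP addresses,
# hex literals) before the generic integer pattern.
MASKS = [
    (re.compile(r"\b\d+\.\d+\.\d+\.\d+\b"), "<IP>"),
    (re.compile(r"0x[0-9a-fA-F]+"), "<HEX>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def to_template(line):
    """Mask variable fields so lines from the same logging statement
    collapse onto a single template string."""
    msg = line.split(": ", 1)[-1]  # assume a "HEADER: message" layout
    for pat, tok in MASKS:
        msg = pat.sub(tok, msg)
    return msg

# Hypothetical execution-log lines, loosely styled after storage-system logs.
logs = [
    "INFO: Received block 3587508140051953248 of size 67108864 from 10.251.42.84",
    "INFO: Received block 5080254298708420403 of size 67108864 from 10.251.91.229",
    "WARN: Slow read, took 1402 ms",
]
templates = Counter(to_template(l) for l in logs)
```

The first two lines share one template, so `templates` counts it twice; real parsers such as the Transformer-based approach described above learn these boundaries rather than relying on hand-written regexes.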