
    Data-driven Soft Sensors in the Process Industry

    In the last two decades, Soft Sensors have established themselves as a valuable alternative to traditional means for the acquisition of critical process variables, process monitoring and other tasks related to process control. This paper discusses characteristics of process industry data which are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, such as the chemical industry, bioprocess industry and steel industry. The focus of this work is on data-driven Soft Sensors because of their growing popularity, already demonstrated usefulness and huge, though not yet completely realised, potential. The main contributions of this work are a comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques, and a discussion of some open issues in Soft Sensor development and maintenance together with their possible solutions.
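
    To make the idea concrete: a data-driven Soft Sensor is, at its core, a regression model that predicts a hard-to-measure quality variable from easily measured process variables. The Python sketch below fits such a model with ridge regression; the process data, variable layout and penalty value are invented for illustration and are not taken from the paper.

        import numpy as np

        # Synthetic stand-in for historical process data: X holds easy
        # measurements (temperatures, pressures, flows), y a lab-analysed
        # quality variable that is expensive to measure online.
        rng = np.random.default_rng(0)
        n, p = 500, 5
        X = rng.normal(size=(n, p))
        w_true = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
        y = X @ w_true + 0.1 * rng.normal(size=n)

        # Ridge regression: w = (X'X + lam*I)^(-1) X'y
        lam = 1e-2
        w = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

        # Online use of the soft sensor: predict y from new easy measurements.
        y_hat = X @ w
        print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))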

    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning-engine practice and of the integration strategies of Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions, and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project and, although it was written in close contact with the requirements of the project, it provides an overview broad enough to serve as a state of the art in integration strategies between CBR and MBR technologies.
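
    For readers unfamiliar with the retrieve step that Case-Based Reasoning is built on, the following Python sketch matches a new incident against a small case base by weighted similarity. The case base, feature encoding and weights are entirely invented for illustration and bear no relation to RIMSAT's actual knowledge base.

        import numpy as np

        # Hypothetical case base: past incidents encoded as feature vectors
        # [fire severity, flood severity, casualties], all values invented.
        cases = {
            "warehouse fire": np.array([0.9, 0.2, 0.1]),
            "river flood":    np.array([0.0, 0.9, 0.3]),
            "chemical spill": np.array([0.6, 0.1, 0.5]),
        }
        weights = np.array([1.0, 1.0, 2.0])  # casualties weighted double

        # CBR retrieve: return the stored case closest to the new incident.
        query = np.array([0.7, 0.1, 0.4])
        best = min(cases, key=lambda k: np.sum(weights * (cases[k] - query) ** 2))
        print("most similar past case:", best)  # -> chemical spill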

    A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures

    This deliverable is a survey of the IT techniques that are relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Even though the deliverable's authors have tried to avoid overly technical language and to explain every concept referred to, the deliverable may still seem rather technical to readers who are as yet little familiar with the techniques it describes.

    Analysis of A Nonsmooth Optimization Approach to Robust Estimation

    In this paper, we consider the problem of identifying a linear map from measurements which are subject to intermittent and arbitrarily large errors. This is a fundamental problem in many estimation-related applications such as fault detection, state estimation in lossy networks, hybrid system identification and robust estimation. The problem is hard because it exhibits some intrinsic combinatorial features. Therefore, obtaining an effective solution necessitates relaxations that are both solvable at a reasonable cost and effective in the sense that they can return the true parameter vector. The current paper discusses a nonsmooth convex optimization approach and provides a new analysis of its behavior. In particular, it is shown that under appropriate conditions on the data, an exact estimate can be recovered from data corrupted by a large (even infinite) number of gross errors.
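
    A common concrete instance of such a nonsmooth convex relaxation, though not necessarily the paper's exact formulation, is least absolute deviations: minimising the l1 cost of the residuals, which is robust to sparse gross errors. The Python sketch below solves it by iteratively reweighted least squares on synthetic data with 20% of the measurements grossly corrupted.

        import numpy as np

        # Synthetic identification problem: y = X w_true, with intermittent,
        # arbitrarily large errors on 20% of the measurements.
        rng = np.random.default_rng(1)
        n, p = 300, 4
        X = rng.normal(size=(n, p))
        w_true = rng.normal(size=p)
        y = X @ w_true
        outliers = rng.random(n) < 0.2
        y[outliers] += rng.normal(scale=50.0, size=outliers.sum())

        # Minimise sum_t |y_t - x_t' w| via iteratively reweighted least
        # squares: each pass solves a weighted least-squares problem with
        # weights ~ 1/|residual|, which de-emphasises the gross errors.
        w = np.zeros(p)
        for _ in range(50):
            r = y - X @ w
            wt = 1.0 / np.maximum(np.abs(r), 1e-8)
            Xw = X * wt[:, None]
            w = np.linalg.solve(Xw.T @ X, Xw.T @ y)

        print("parameter error:", np.linalg.norm(w - w_true))  # ~0: exact recovery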

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
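
    Since the review singles out importance sampling as the technique for accelerating Monte Carlo dependability simulation, a minimal Python sketch may help: the rare event and its exponential lifetime model below are invented purely to show how sampling from a biased distribution and reweighting by the likelihood ratio makes a probability of order 1e-7 estimable with a modest sample size.

        import numpy as np

        rng = np.random.default_rng(2)
        rate, t, n = 1.0, 15.0, 100_000  # P(lifetime > t) = exp(-15) ~ 3.1e-7

        # Naive Monte Carlo almost never observes the event at this sample size.
        naive = np.mean(rng.exponential(1 / rate, n) > t)

        # Importance sampling: draw from a heavier-tailed exponential g and
        # reweight each sample by the likelihood ratio f(x)/g(x).
        g_rate = 1.0 / t
        x = rng.exponential(1 / g_rate, n)
        lr = (rate * np.exp(-rate * x)) / (g_rate * np.exp(-g_rate * x))
        is_est = np.mean((x > t) * lr)

        print(f"exact={np.exp(-rate * t):.3e}  naive={naive:.3e}  IS={is_est:.3e}")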

    Performance of the LHCb vertex locator

    The Vertex Locator (VELO) is a silicon microstrip detector that surrounds the proton-proton interaction region in the LHCb experiment. The performance of the detector during the first years of its physics operation is reviewed. The system is operated in vacuum, uses a bi-phase CO2 cooling system, and the sensors are moved to 7 mm from the LHC beam for physics data taking. The performance and stability of these characteristic features of the detector are described, and details of the material budget are given. The calibration of the timing and the data processing algorithms that are implemented in FPGAs are described. The system performance is fully characterised. The sensors have a signal to noise ratio of approximately 20 and a best hit resolution of 4 μm is achieved at the optimal track angle. The typical detector occupancy for minimum bias events in standard operating conditions in 2011 is around 0.5%, and the detector has less than 1% of faulty strips. The proximity of the detector to the beam means that the inner regions of the n+-on-n sensors have undergone space-charge sign inversion due to radiation damage. The VELO performance parameters that drive the experiment's physics sensitivity are also given. The track finding efficiency of the VELO is typically above 98% and the modules have been aligned to a precision of 1 μm for translations in the plane transverse to the beam. A primary vertex resolution of 13 μm in the transverse plane and 71 μm along the beam axis is achieved for vertices with 25 tracks. An impact parameter resolution of less than 35 μm is achieved for particles with transverse momentum greater than 1 GeV/c.

    Increasing resilience of ATM networks using traffic monitoring and automated anomaly analysis

    Systematic network monitoring can be the cornerstone for the dependable operation of safety-critical distributed systems. In this paper, we present our vision for informed anomaly detection through network monitoring and resilience measurements to increase the operators' visibility of ATM communication networks. We raise the question of how to determine the optimal level of automation in this safety-critical context, and we present a novel passive network monitoring system that can reveal network utilisation trends and traffic patterns in diverse timescales. Using network measurements, we derive resilience metrics and visualisations to enhance the operators' knowledge of the network and traffic behaviour, and allow for network planning and provisioning based on informed what-if analysis.
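
    As a toy illustration of the kind of traffic-pattern check such a monitor can run, though not the system described in the paper, the Python sketch below flags intervals whose traffic volume deviates from a rolling baseline; the traffic series, window length and threshold are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        traffic = rng.poisson(100, 500).astype(float)  # packets per interval
        traffic[400] = 450.0                           # injected anomaly

        # Flag observations more than k rolling standard deviations away
        # from the mean of the previous `window` intervals.
        window, k = 50, 4.0
        for i in range(window, len(traffic)):
            base = traffic[i - window:i]
            mu, sigma = base.mean(), base.std() + 1e-9
            if abs(traffic[i] - mu) > k * sigma:
                print(f"interval {i}: {traffic[i]:.0f} vs baseline {mu:.1f}")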

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.
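
    One recurring idea in this literature is the dual-prediction scheme: sensor node and sink run the same lightweight predictor, and the node transmits only when reality diverges from the prediction, saving radio energy. The Python sketch below uses the simplest such predictor (the last transmitted value) on an invented temperature trace; the error bound is likewise made up.

        import numpy as np

        rng = np.random.default_rng(5)
        readings = 20 + np.cumsum(0.05 * rng.normal(size=200))  # slow drift
        eps = 0.2  # application-level error bound

        # Transmit only when the last transmitted value no longer predicts
        # the reading within eps; the sink holds that value in between.
        pred, sent = readings[0], 1
        for x in readings[1:]:
            if abs(x - pred) > eps:
                pred, sent = x, sent + 1
        print(f"transmitted {sent} of {readings.size} samples")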

    Dynamic Modelling of the Swash Plate of a Hydraulic Axial Piston Pump for Condition Monitoring Applications

    In recent years, Prognostics and Health Management (PHM) has become one of the most challenging topics in the engineering field. In particular, the model-based approach to diagnostics relies on the development of a mathematical model of the system representing its flawless status. Once the model has been developed and carefully calibrated on experimental data referring to the flawless pump condition, the comparison between the model output and the real system output leads to the residual analysis, which gives a diagnosis of the component's health. This paper presents the mathematical model of a hydraulic axial piston pump, developed in order to replicate the dynamic behavior of the swash plate for PHM applications. The model was developed on the basis of simplifying hypotheses, and a friction model between the swash plate and its bearings was introduced. A detailed experimental activity was carried out to calibrate and validate the model with step tests and sweep tests. The comparison between numerical and experimental results shows satisfactory agreement and highlights the model's capability to reproduce the swash plate dynamics. Future work will include tests with the pump in faulty conditions to evaluate the pump's health state through residual analysis of the swash plate position.
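
    The residual-analysis loop the abstract describes can be sketched in a few lines of Python: run the calibrated healthy model alongside the measured output, and flag a fault when the rolling residual energy exceeds a threshold tuned on healthy data. Both signals below are synthetic stand-ins for the swash plate angle, and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 2.0, 400)

        # Healthy-model step response vs. measured response; after t = 1 s
        # an offset fault is injected into the "measurement".
        model = np.deg2rad(10.0) * (1.0 - np.exp(-5.0 * t))
        measured = model + 0.002 * rng.normal(size=t.size)
        measured[200:] += 0.02

        # Residual analysis: rolling RMS of (measured - model) against a
        # threshold calibrated on flawless data.
        residual = measured - model
        rms = np.sqrt(np.convolve(residual**2, np.ones(20) / 20, mode="same"))
        fault = rms > 0.01
        print("fault detected from t =", t[fault][0] if fault.any() else None)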

    Validation Techniques for Sensor Data in Mobile Health Applications

    Mobile applications have become a must in every user's smart device, and many of these applications make use of the device's sensors to achieve their goals. Nevertheless, it remains fairly unknown to the user to what extent the data the applications use can be relied upon and, therefore, to what extent the output of a given application is trustworthy. To help developers and researchers, and to provide a common ground of data validation algorithms and techniques, this paper presents a review of the most commonly used data validation algorithms, along with their usage scenarios, and proposes a classification for these algorithms. This paper also discusses the process of achieving statistical significance and trust for the desired output.
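
    To give a flavour of the algorithm classes such a review covers, the Python sketch below applies three elementary validation checks (physical range, rate-of-change spike, stuck-at) to an invented heart-rate trace; the thresholds are illustrative, not taken from the paper.

        import numpy as np

        hr = np.array([72, 74, 73, 73, 73, 73, 190, 75, 0, 76], dtype=float)

        valid_range = (hr >= 30) & (hr <= 220)               # plausible bpm
        spike = np.abs(np.diff(hr, prepend=hr[0])) > 40      # implausible jump
        stuck = np.concatenate(([False], np.diff(hr) == 0))  # repeated value

        for i, v in enumerate(hr):
            flags = [name for name, bad in
                     [("out-of-range", not valid_range[i]),
                      ("spike", spike[i]),
                      ("stuck", stuck[i])] if bad]
            print(i, v, flags or "ok")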