101 research outputs found

    Big Data management: A Vibration Monitoring point of view

    Vibration Monitoring is a particular kind of Condition Monitoring meant to infer the state of health of a machine from accelerometric measurements. From a practical point of view, the aim is to extract from the acceleration data some valuable diagnostic information that can be used to detect the presence of possible damage (i.e., to produce knowledge about the state of health). When the monitoring is implemented online, in a continuous way, the raw accelerometric data sets can be very large and complex to deal with, as they usually involve multiple channels (i.e., multiple locations and directions) and high sample rates (i.e., on the order of ksps, 10³ samples per second), yet the final knowledge about the state of health can, in principle, be summarized by a single binary value (i.e., healthy, 0, vs. damaged, 1). This is commonly called Damage Detection. In this work, the big data management challenge is tackled from the point of view of statistical signal processing, so as to aggregate the multivariate data and condense them into a single measure of distance from a healthy reference condition (i.e., the Novelty). When confounding influences (such as the working condition or the environmental condition) can be disregarded, the novelty information corresponds directly to the health information, so that an alarm indicating the detection of damage can be triggered when the novelty exceeds a selected threshold. Many different ways of solving such a binary classification problem can be found in the literature. Starting from the simplest, some of the more effective ones are compared in the present analysis, in order to select a reliable procedure for big data management in vibration monitoring.
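    The abstract does not name a specific novelty metric, so the sketch below only illustrates the general scheme it describes: condense a multivariate feature vector into a single distance from a healthy reference and raise an alarm on a threshold. The Mahalanobis distance is used here as one common choice in vibration-based damage detection; the synthetic data, the 8-feature dimension, and the 99th-percentile threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fit_healthy_baseline(features):
    """Estimate the mean and inverse covariance of features extracted
    from acceleration records of the machine in its healthy state.
    `features` has shape (n_records, n_features), e.g. per-channel RMS."""
    mu = features.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(features, rowvar=False))
    return mu, cov_inv

def novelty_index(x, mu, cov_inv):
    """Mahalanobis distance of a new feature vector from the healthy
    reference: a single scalar condensing the multivariate record."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Illustrative usage with synthetic data (not from the paper).
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(500, 8))    # 8 channels/features
mu, cov_inv = fit_healthy_baseline(healthy)

# Limit novelty chosen from the empirical healthy distribution.
scores = np.array([novelty_index(x, mu, cov_inv) for x in healthy])
threshold = np.quantile(scores, 0.99)

new_record = rng.normal(0.5, 1.0, size=8)        # possibly damaged state
alarm = novelty_index(new_record, mu, cov_inv) > threshold  # healthy 0 vs damaged 1
```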

    The 2nd Conference of PhD Students in Computer Science


    Acta Cybernetica: Volume 15, Number 2.


    Interpolation-restart strategies for resilient eigensolvers

    The solution of large eigenproblems is involved in many scientific and engineering applications when, for instance, stability analysis is a concern. For large simulations in material physics or thermo-acoustics, the calculation can last for many hours on large parallel platforms. On future large-scale systems, the mean time between failures (MTBF) is expected to decrease, so that many faults could occur during the solution of large eigenproblems. Consequently, it becomes critical to design parallel eigensolvers that can survive faults. In that framework, we investigate the relevance of approaches relying on numerical techniques, which might be combined with more classical techniques for real large-scale parallel implementations. Because we focus on numerical remedies, we consider neither parallel implementations nor parallel experiments, but only numerical experiments. We assume that a separate mechanism ensures fault detection and that a system layer provides support for setting the environment (processes, etc.) back into a running state. Once the system is in a running state after a fault, our main objective is to provide robust resilient schemes so that the eigensolver may keep converging in the presence of the fault without restarting the calculation from scratch. For this purpose, we extend the interpolation-restart (IR) strategies initially introduced for the solution of linear systems in a previous work to the solution of eigenproblems in this paper. For a given numerical scheme, the IR strategies consist of extracting relevant spectral information from available data after a fault. After data extraction, a well-selected part of the missing data is regenerated through interpolation strategies to constitute a meaningful input to restart the numerical algorithm. One of the main features of this numerical remedy is that it does not require extra resources, i.e., computational units or computing time, when no fault occurs. In this paper, we revisit a few state-of-the-art methods for solving large sparse eigenvalue problems, namely the Arnoldi methods, subspace iteration methods, and the Jacobi-Davidson method, in the light of our IR strategies. For each considered eigensolver, we adapt the IR strategies to regenerate as much spectral information as possible. Through extensive numerical experiments, we study the robustness of the resulting resilient schemes with respect to the MTBF and to the amount of data loss, via qualitative and quantitative illustrations.

    1. Introduction. The computation of eigenpairs (eigenvalues and eigenvectors) of large sparse matrices is involved in many scientific and engineering applications, for instance when stability analysis is a concern. To name a few, it appears in structural dynamics, thermodynamics, thermo-acoustics, and quantum chemistry. With the permanent increase in the computational power of high performance computing (HPC) systems, obtained through larger and larger numbers of CPU cores or specialized processing units, HPC applications are increasingly prone to faults. To guarantee fault tolerance, two classes of strategies are required: one for fault detection and the other for fault correction. Faults such as computational node crashes are obvious to detect, while silent faults may be challenging to detect. To cope with silent faults, a duplication strategy is commonly used for fault detection [18, 39] by comparing the outputs, while triple modular redundancy (TMR) is used for fault detection and correction [34, 37]. However, the additional computational resources required by such replication strategies may represent a severe penalty. Instead of replicating computational resources, studies [7, 36] propose a time redundancy model for fault detection, which consists in repeating the computation twice on the same resource. The advantage of time redundancy models is their flexibility at the application level: software developers can select only a set of critical instructions to protect, and recomputing only some instructions instead of the whole application lowers the time redundancy overhead [25]. In some numerical simulations, data naturally satisfy well-defined mathematical properties, which can be efficiently exploited for fault detection through a periodical check of those properties during computation [10]. Checkpoint/restart is the most studied fault recovery strategy in the context of HPC systems. The common checkpoint/restart scheme consists in periodically saving data onto a reliable storage device such as a remote disk. When a fault occurs, a rollback is performed to the most recent consistent checkpoint. Depending on the implemented checkpoint strategy, all processes…
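    As a rough illustration of the interpolation idea described above, the sketch below regenerates the entries of an approximate eigenvector lost in a fault by enforcing the rows of the eigen-equation corresponding to the lost block, keeping the surviving entries fixed. The dense solve, the synthetic symmetric matrix, and all names are simplifying assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def interpolate_lost_entries(A, u, lam, lost):
    """Regenerate the lost entries of an approximate eigenvector `u`
    with approximate eigenvalue `lam` after a fault: enforce the rows
    of A u = lam u at the lost indices I, i.e. solve
    (A[I,I] - lam*Id) u_I = -A[I,G] u_G, where G is the set of
    surviving indices. Dense solve for illustration only."""
    n = A.shape[0]
    kept = np.setdiff1d(np.arange(n), lost)
    A_II = A[np.ix_(lost, lost)] - lam * np.eye(len(lost))
    rhs = -A[np.ix_(lost, kept)] @ u[kept]
    u = u.copy()
    u[lost] = np.linalg.solve(A_II, rhs)
    return u / np.linalg.norm(u)    # renormalize before restarting

# Illustrative usage on a synthetic symmetric matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50)); A = (A + A.T) / 2
eigvals, V = np.linalg.eigh(A)
u, lam = V[:, -1], eigvals[-1]     # converged extreme eigenpair

lost = np.arange(10, 20)           # entries lost in the fault
u_faulty = u.copy(); u_faulty[lost] = 0.0
u_restart = interpolate_lost_entries(A, u_faulty, lam, lost)
# u_restart recovers u (up to sign): a meaningful input to restart
# an Arnoldi, subspace iteration, or Jacobi-Davidson run.
```

    In a real parallel eigensolver the regenerated block would of course be computed from local, sparse data rather than a dense factorization; the point is only that the surviving entries plus the current spectral estimate suffice to rebuild a useful restart vector.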

    Reducing the Mast Vibration of Single-Mast Stacker Cranes by Gain-Scheduled Control

    In the frame structure of stacker cranes, harmful mast vibrations may appear due to inertial forces during the acceleration and braking phases of movement. This effect may reduce the stability and positioning accuracy of these machines. Unfortunately, their dynamic properties also vary with the magnitude and position of the lifted load. The purpose of the paper is to present a controller design method which can handle the effect of a varying lifted load magnitude and position in a dynamic model, and which at the same time provides good reference-signal tracking and mast-vibration reduction. A controller design case study is presented step by step, from dynamic modeling through to the validation of the resulting controller. The paper summarizes the dynamic modeling possibilities of single-mast stacker cranes. The varying dynamic behavior is handled via the polytopic linear parameter-varying (LPV) modeling approach. Based on this modeling technique, a gain-scheduled controller design method is proposed which is suitable for achieving these goals. Finally, controller validation is presented by means of time-domain simulations.
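    The abstract does not detail the synthesis, so the sketch below only illustrates the polytopic gain-scheduling mechanism it refers to: design one state-feedback gain per polytope vertex (here the extreme values of the lifted load mass) and blend them with the convex polytopic coordinate of the current load. The two-state mast model, the LQR vertex design, and all numbers are hypothetical stand-ins; guaranteeing stability and performance over the whole polytope requires the LPV analysis developed in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """State-feedback gain K = R^-1 B^T P from the continuous-time
    algebraic Riccati equation (standard LQR design)."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical 2-state model (mast-tip deflection and its rate) whose
# dynamics depend on the lifted load mass m -- a stand-in for the
# paper's full stacker-crane model.
def plant(m):
    k, c = 40.0, 0.8                       # illustrative stiffness/damping
    A = np.array([[0.0, 1.0],
                  [-k / m, -c / m]])
    B = np.array([[0.0], [1.0 / m]])
    return A, B

m_min, m_max = 100.0, 1000.0               # polytope vertices (load range)
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

# Vertex controllers: one design per polytope vertex.
K_min = lqr_gain(*plant(m_min), Q, R)
K_max = lqr_gain(*plant(m_max), Q, R)

def scheduled_gain(m):
    """Gain-scheduled controller: convex combination of the vertex
    gains, weighted by the polytopic coordinate of the current load."""
    alpha = (m_max - m) / (m_max - m_min)
    return alpha * K_min + (1.0 - alpha) * K_max

u = -scheduled_gain(550.0) @ np.array([0.02, 0.0])  # control at mid load
```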