
    Model-based fault detection and diagnosis: case studies for vibration monitoring

    A signal processing approach is presented for the detection and diagnosis of fatigue or failures in vibrating mechanical systems subject to natural or ambient excitation. Detection and diagnosis are performed while the system is in operation, so the excitation is usually not measured and may involve turbulent phenomena. This is a short report on a roughly 10-year project that involved more than two person-years of effort per year on average. The method is illustrated on two case studies: offshore structures and rotating machinery (turbo-alternators). This work was supported for 7 years through 4 contracts with IFREMER and for 4 years through 2 contracts with EDF.

    Improving adaptive bagging methods for evolving data streams

    We propose two improvements to bagging methods for evolving data streams. Two variants of bagging have recently been proposed: ADWIN Bagging and Adaptive-Size Hoeffding Tree (ASHT) Bagging. ASHT Bagging uses trees of different sizes, while ADWIN Bagging uses ADWIN as a change detector to decide when to discard underperforming ensemble members. We improve ADWIN Bagging by using Hoeffding Adaptive Trees, trees that can adaptively learn from data streams that change over time. To speed up ASHT Bagging's adaptation to change, we add an error change detector for each classifier. We test our improvements in an evaluation study on synthetic and real-world datasets comprising up to ten million examples.
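
    The abstract describes bagging ensembles whose members are reset when a change detector flags a rise in their error rate. Below is a minimal, self-contained sketch of that idea in Python; the MajorityClass learner, the two-window SimpleErrorChangeDetector, and all parameter values are placeholders of my own devising (the actual method uses Hoeffding (Adaptive) Trees and the ADWIN estimator).

        # Hedged sketch: online bagging with a per-member change detector.
        # NOT the authors' implementation: the detector is a crude two-window
        # error-rate test standing in for ADWIN, and the base learner is a
        # trivial majority-class model standing in for a Hoeffding tree.
        import random
        from collections import deque, Counter

        class MajorityClass:
            """Placeholder incremental learner (assumption, not a Hoeffding tree)."""
            def __init__(self):
                self.counts = Counter()
            def learn_one(self, x, y):
                self.counts[y] += 1
            def predict_one(self, x):
                return self.counts.most_common(1)[0][0] if self.counts else 0

        class SimpleErrorChangeDetector:
            """Crude stand-in for ADWIN: compares error rates of two recent windows."""
            def __init__(self, window=200, threshold=0.15):
                self.errors = deque(maxlen=2 * window)
                self.window, self.threshold = window, threshold
            def update(self, error):
                self.errors.append(error)
                if len(self.errors) < 2 * self.window:
                    return False
                old = list(self.errors)[: self.window]
                new = list(self.errors)[self.window :]
                return abs(sum(new) / self.window - sum(old) / self.window) > self.threshold

        class AdwinStyleBagging:
            def __init__(self, n_members=10, seed=1):
                self.rng = random.Random(seed)
                self.members = [MajorityClass() for _ in range(n_members)]
                self.detectors = [SimpleErrorChangeDetector() for _ in range(n_members)]
            def learn_one(self, x, y):
                for i, (m, d) in enumerate(zip(self.members, self.detectors)):
                    error = int(m.predict_one(x) != y)   # prequential error before training
                    # Online bagging: a Poisson(1) weight decides how many times to train.
                    for _ in range(self._poisson1()):
                        m.learn_one(x, y)
                    if d.update(error):
                        # Change detected: reset this member (one simple reading of
                        # "discard underperforming ensemble members").
                        self.members[i] = MajorityClass()
                        self.detectors[i] = SimpleErrorChangeDetector()
            def predict_one(self, x):
                votes = Counter(m.predict_one(x) for m in self.members)
                return votes.most_common(1)[0][0]
            def _poisson1(self):
                # Knuth-style Poisson(1) sampler
                k, p = 0, 1.0
                while True:
                    p *= self.rng.random()
                    if p < 0.36787944117144233:  # e^{-1}
                        return k
                    k += 1

    In practice the placeholders would be swapped for a real incremental learner and the actual ADWIN estimator, for instance from a stream-learning library.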

    A method of detecting radio transients

    Radio transients are sporadic signals, and their detection requires that the backends of radio telescopes be equipped with appropriate hardware and software. Observational programs to detect transients can be dedicated, or they can piggyback on observations made by other programs. It is the single-dish, single-transient (non-periodic) mode that is considered in this paper. Because neither the width of a transient nor its time of arrival is known, a sequential analysis in the form of a cumulative sum (CUSUM) algorithm is proposed here. Computer simulations and real observational data processing are included to demonstrate the performance of the CUSUM. The Hough transform is proposed here for non-coherent de-dispersion. Because detected transients could in fact be radio frequency interference (RFI), a procedure is proposed that can distinguish between celestial signals and man-made RFI, based on an analysis of the statistical properties of the signals.
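
    Since the abstract proposes a CUSUM-type sequential test for transients of unknown width and arrival time, here is a minimal one-sided CUSUM sketch on a standardized time series; the reference level k and the threshold h are illustrative values of my own, not choices taken from the paper.

        # Hedged sketch of a one-sided CUSUM detector for a positive mean shift.
        import numpy as np

        def cusum_detect(x, k=0.5, h=8.0):
            """Return the index where the CUSUM statistic first exceeds h, else None.
            x is assumed standardized (zero mean, unit variance) under noise only."""
            s = 0.0
            for i, xi in enumerate(x):
                s = max(0.0, s + xi - k)   # accumulate evidence of an upward shift
                if s > h:
                    return i
            return None

        rng = np.random.default_rng(0)
        series = rng.normal(0.0, 1.0, 1000)
        series[600:640] += 3.0             # a 40-sample "transient" of amplitude 3 sigma
        print(cusum_detect(series))        # detection index shortly after sample 600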

    The asymptotic local approach to change detection and model validation

    Automatic threshold determination for a local approach of change detection in long-term signal recordings

    CUSUM (cumulative sum) is a well-known method for detecting changes in a signal when the parameters of that signal are known. This paper presents an adaptation of CUSUM-based change detection algorithms to long-term signal recordings where the various hypotheses contained in the signal are unknown. The starting point of the work was the dynamic cumulative sum (DCS) algorithm, previously developed for long-term electromyography (EMG) recordings. DCS was improved in two ways. The first was a new procedure for estimating the distribution parameters so that the detectability property is respected. The second was the definition of two separate, automatically determined thresholds: a lower threshold that stops the estimation process and an upper threshold that is applied to the detection function. The automatic determination of the thresholds was based on the Kullback-Leibler distance, which quantifies the distance between the detected segments (events). Tests on simulated data demonstrated the efficiency of these improvements to the DCS algorithm.
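
    The thresholds above are derived from a Kullback-Leibler distance between detected segments. The sketch below shows one way such a distance can be computed, under the assumption (mine, for illustration only) that each segment is summarized by a Gaussian and that the divergence is symmetrized; the paper's exact segment model may differ.

        # Hedged sketch: symmetrized Kullback-Leibler distance between two segments,
        # each modeled here as a Gaussian (an illustrative assumption).
        import numpy as np

        def kl_gauss(mu0, var0, mu1, var1):
            """Closed-form KL( N(mu0, var0) || N(mu1, var1) )."""
            return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

        def segment_distance(seg_a, seg_b):
            """Symmetrized KL distance between two 1-D signal segments."""
            ma, va = np.mean(seg_a), np.var(seg_a)
            mb, vb = np.mean(seg_b), np.var(seg_b)
            return kl_gauss(ma, va, mb, vb) + kl_gauss(mb, vb, ma, va)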

    The Bregman chord divergence

    Distances are fundamental primitives whose choice significantly impacts the performance of algorithms in machine learning and signal processing. However, selecting the most appropriate distance for a given task is a difficult endeavor. Instead of testing, one by one, the entries of an ever-expanding dictionary of ad hoc distances, one would rather consider parametric classes of distances that are exhaustively characterized by axioms derived from first principles. Bregman divergences are such a class. However, fine-tuning a Bregman divergence is delicate, since it requires smoothly adjusting a functional generator. In this work, we propose an extension of Bregman divergences called the Bregman chord divergences. This new class of distances does not require gradient calculations, uses two scalar parameters that can be easily tailored in applications, and asymptotically generalizes Bregman divergences.
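
    As a rough illustration of the construction hinted at in the abstract, the sketch below compares the classical Bregman divergence with a gradient-free "chord" variant in which the tangent of the generator F at q is replaced by a secant through two points interpolated between p and q. The parameterization (alpha, beta) and the extrapolation used here are my reading of the idea, not necessarily the paper's exact definition.

        # Hedged sketch: classical Bregman divergence vs. a gradient-free chord variant.
        import numpy as np

        def bregman(F, gradF, p, q):
            """Classical Bregman divergence B_F(p:q) = F(p) - F(q) - <gradF(q), p - q>."""
            return F(p) - F(q) - np.dot(gradF(q), p - q)

        def bregman_chord(F, p, q, alpha=0.9, beta=0.99):
            """Chord variant: F(p) minus the value at p of the secant line through
            ((1-a)p + a q, F(.)) and ((1-b)p + b q, F(.)), with 0 < alpha < beta <= 1."""
            fa = F((1 - alpha) * p + alpha * q)
            fb = F((1 - beta) * p + beta * q)
            slope = (fb - fa) / (beta - alpha)   # secant slope along the segment p -> q
            return F(p) - (fa - alpha * slope)   # secant extrapolated back to p (gamma = 0)

        # Example with the squared-norm generator, whose Bregman divergence is ||p-q||^2 / 2.
        F = lambda x: 0.5 * np.dot(x, x)
        gradF = lambda x: x
        p, q = np.array([1.0, 2.0]), np.array([0.0, 0.5])
        print(bregman(F, gradF, p, q), bregman_chord(F, p, q, 0.999, 1.0))

    For alpha and beta close to 1 the secant approaches the tangent at q, so the chord value approaches the classical Bregman divergence, consistent with the "asymptotically generalizes" claim.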

    Fast Likelihood-Based Change Point Detection

    Change point detection plays a fundamental role in many real-world applications where the goal is to analyze and monitor the behaviour of a data stream. In this paper, we study change detection in binary streams. To this end, we use a likelihood ratio between two models as a measure for indicating change. The first model is a single Bernoulli variable, while the second model divides the stored data into two segments and models each segment with its own Bernoulli variable. Finding the optimal split can be done in O(n) time, where n is the number of entries since the last change point. This is too expensive for large n. To combat this, we propose an approximation scheme that yields a (1 - ε) approximation in O(ε⁻¹ log² n) time. The speed-up consists of several steps: first, we reduce the number of possible candidates by adopting a known result from segmentation problems; we then show that for fixed Bernoulli parameters we can find the optimal change point in logarithmic time; finally, we show how to construct a candidate list of size O(ε⁻¹ log n) for the model parameters. We demonstrate empirically the approximation quality and the running time of our algorithm, showing that we can gain a significant speed-up with a minimal average loss in optimality.
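
    The exact O(n) baseline mentioned in the abstract can be written down directly: scan every split of the buffered bits and compare the two-segment Bernoulli log-likelihood against the single-Bernoulli one. The sketch below implements only this exact scan; the (1 - ε) approximation scheme itself is not reproduced here.

        # Hedged sketch of the exact O(n) likelihood-ratio scan for a binary buffer.
        import math

        def bernoulli_loglik(ones, n):
            """Maximized Bernoulli log-likelihood of a segment with `ones` ones out of n."""
            if n == 0 or ones == 0 or ones == n:
                return 0.0
            p = ones / n
            return ones * math.log(p) + (n - ones) * math.log(1 - p)

        def best_split(bits):
            """Return (best log-likelihood ratio, split index) for a 0/1 sequence."""
            n, total = len(bits), sum(bits)
            single = bernoulli_loglik(total, n)
            best, best_i, ones_left = float("-inf"), None, 0
            for i in range(1, n):                      # split: bits[:i] vs bits[i:]
                ones_left += bits[i - 1]
                two = (bernoulli_loglik(ones_left, i)
                       + bernoulli_loglik(total - ones_left, n - i))
                if two - single > best:
                    best, best_i = two - single, i
            return best, best_i

        stream = [0] * 300 + [1] * 100                 # Bernoulli rate changes at index 300
        print(best_split(stream))                      # large ratio, split at index 300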

    Multiscale entropy-based analyses of soil transect data

    A deeper understanding of the spatial variability of soil properties and of the relationships between them is needed to scale up measured soil properties and to model soil processes. The objective of this study was to describe the spatial scaling properties of a set of soil physical properties measured on a common 1024-m transect across arable fields at Silsoe in Bedfordshire, east-central England. The properties studied were volumetric water content (θ), total porosity (π), pH, and N2O flux. We applied entropy as a means of quantifying the scaling behavior of each transect. Finally, we examined the spatial intrascaling behavior of the correlations between θ and the other soil variables. Relative entropies and increments in relative entropy calculated for θ, π, and pH showed maximum structure at the 128-m scale, while N2O flux presented a more complex scale dependency at large and small scales. The intrascale-dependent correlation between θ and π was negative at small scales up to 8 m. The remaining intrascale-dependent correlation functions, between θ and N2O flux and between θ and pH, were in agreement with previous studies. These techniques allow research on scale effects localized in scale and provide information that is complementary to the information about scale dependencies found across a range of scales.
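
    As a rough, assumption-laden illustration of entropy analysis across dyadic scales on a 1024-point transect, the sketch below aggregates a normalized measure into progressively coarser boxes and reports the Shannon entropy and its ratio to the maximum at each scale; the paper's precise definitions of relative entropy and its increments may well differ from this.

        # Hedged sketch: Shannon entropy of a transect measure at dyadic aggregation scales.
        import numpy as np

        def entropy_by_scale(values, base_spacing_m=1.0):
            x = np.asarray(values, dtype=float)
            x = x - x.min() + 1e-12          # make the measure non-negative (assumption)
            p = x / x.sum()
            n = len(p)                        # assumed to be a power of two, e.g. 1024
            out, boxes = [], n
            while boxes >= 2:
                q = p.reshape(boxes, n // boxes).sum(axis=1)   # aggregate into `boxes` bins
                h = -np.sum(q[q > 0] * np.log2(q[q > 0]))       # Shannon entropy (bits)
                out.append((base_spacing_m * n / boxes, h, h / np.log2(boxes)))
                boxes //= 2
            return out                        # (scale in metres, entropy, relative entropy)

        rng = np.random.default_rng(0)
        theta = 0.3 + 0.05 * np.sin(np.arange(1024) * 2 * np.pi / 128) \
                + 0.01 * rng.normal(size=1024)          # synthetic water-content transect
        for scale, h, h_rel in entropy_by_scale(theta):
            print(f"{scale:7.1f} m  H={h:6.3f}  H/Hmax={h_rel:5.3f}")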

    A comparison of linear approaches to filter out environmental effects in structural health monitoring

    This paper discusses the possibility of using the Mahalanobis squared distance to perform robust novelty detection in the presence of important environmental variability in a multivariate feature vector. By performing an eigenvalue decomposition of the covariance matrix used to compute that distance, it is shown that the Mahalanobis squared distance can be written as a sum of independent terms resulting from a transformation from the feature-vector space to a space of independent variables. In general, especially when the feature vector is large, there are dominant eigenvalues and eigenvectors of the covariance matrix, so that a set of principal components can be defined. Because the associated eigenvalues are high, the contribution of these components to the Mahalanobis squared distance is low, while the contribution of the remaining components is high due to their low eigenvalues. This analysis shows that the Mahalanobis distance naturally filters out the variability present in the training data. This property can be used to remove the effect of the environment in damage detection, in much the same way as two other established techniques: principal component analysis and factor analysis. The three techniques are compared here using real experimental data from a wooden bridge, for which the feature vector consists of eigenfrequencies and mode shapes collected under changing environmental conditions as well as under damage conditions simulated with an added mass. The results confirm the similarity between the three techniques and their ability to filter out environmental effects while keeping a high sensitivity to structural changes. The results also show that, even after filtering out the environmental effects, the normality assumption cannot be made for the residual feature vector. An alternative based on extreme value statistics is demonstrated here; it yields a much better threshold, avoiding false positives in the training data while allowing detection of all damaged cases.
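
    The decomposition argument above translates directly into a few lines of linear algebra: rotate a feature vector onto the eigenvectors of the training covariance and weight each squared projection by the inverse eigenvalue. The sketch below uses synthetic data with one dominant "environmental" direction to show why large-eigenvalue components contribute little to the distance; nothing here is taken from the bridge experiment.

        # Hedged sketch: Mahalanobis squared distance via eigendecomposition of the
        # training covariance, showing the per-component weighting by 1/eigenvalue.
        import numpy as np

        rng = np.random.default_rng(0)
        # Training features: strong "environmental" variation along one direction.
        env = rng.normal(0, 3.0, (500, 1)) * np.array([[1.0, 0.8, 0.6, 0.2]])
        train = env + rng.normal(0, 0.1, (500, 4))

        mean = train.mean(axis=0)
        cov = np.cov(train, rowvar=False)
        eigval, eigvec = np.linalg.eigh(cov)       # cov = eigvec @ diag(eigval) @ eigvec.T

        def mahalanobis_sq(x):
            z = eigvec.T @ (x - mean)              # rotate to independent components
            per_component = z ** 2 / eigval        # each term: (projection)^2 / eigenvalue
            return per_component.sum(), per_component

        # A shift along the environmental direction barely changes the distance,
        # while the same-sized shift along a low-variance direction changes it a lot.
        d_env, _ = mahalanobis_sq(mean + 2.0 * eigvec[:, -1])   # largest-eigenvalue direction
        d_dmg, _ = mahalanobis_sq(mean + 2.0 * eigvec[:, 0])    # smallest-eigenvalue direction
        print(f"shift along environment-like direction: {d_env:8.2f}")
        print(f"shift along low-variance direction:     {d_dmg:8.2f}")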