
    A CUSUM test with sliding reference for ground resonance monitoring

    Ground resonance is a potentially destructive oscillation that may develop in helicopter rotors when the aircraft is on or near the ground. This unstable phenomenon must therefore be detected before it develops so that the pilot can avoid it. To predict the zones of instability, previous work has generally relied on off-line modal analysis of a helicopter model. Unfortunately, this off-line analysis is not sufficiently reliable. The subspace-based cumulative sum (CUSUM) test, which is capable of on-line monitoring, is a good alternative: it avoids system identification at each flight point and provides more robust detection at reduced cost. In this paper, we describe an alternative test, with a moving reference, designed to suppress the false alarms and premature responses observed with fixed-reference tests. The numerical results reported herein are derived from simulation data.
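    The sliding-reference idea can be illustrated with a minimal sketch: a one-sided CUSUM whose reference mean is re-estimated over a moving window rather than fixed once. All parameter names here are hypothetical, and the paper's subspace-based residual computation is not reproduced; this only shows the moving-reference mechanism.

    ```python
    import numpy as np

    def cusum_sliding(x, ref_window=50, drift=0.5, threshold=5.0):
        """One-sided CUSUM test with a sliding reference: the reference
        mean is re-estimated over the last `ref_window` samples instead
        of being fixed at a single reference data set."""
        alarms = []
        s = 0.0
        for t in range(ref_window, len(x)):
            mu_ref = np.mean(x[t - ref_window:t])  # sliding reference mean
            # accumulate deviations above the reference, minus a drift term
            s = max(0.0, s + (x[t] - mu_ref) - drift)
            if s > threshold:
                alarms.append(t)
                s = 0.0  # reset the statistic after an alarm
        return alarms

    # A mean shift at sample 100 triggers an alarm shortly afterwards
    x = np.concatenate([np.zeros(100), 5.0 * np.ones(50)])
    print(cusum_sliding(x)[0])
    ```

    Because the reference window eventually slides past the change point, the statistic re-adapts to the new regime, which is what limits the premature responses seen with a fixed reference.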

    A quick search method for audio signals based on a piecewise linear representation of feature trajectories

    This paper presents a new method for a quick similarity-based search through long unlabeled audio streams to detect and locate audio clips provided by users. The method involves feature-dimension reduction based on a piecewise linear representation of a sequential feature trajectory extracted from a long audio stream. Two techniques enable us to obtain a piecewise linear representation: the dynamic segmentation of feature trajectories and the segment-based Karhunen-Loève (KL) transform. The proposed search method guarantees, in principle, the same search results as the search method without the proposed feature-dimension reduction method. Experimental results indicate significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1/12 that of previous methods and detected queries in approximately 0.3 seconds from a 200-hour audio database. (20 pages; to appear in IEEE Transactions on Audio, Speech and Language Processing)
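    The segmentation step can be sketched as a greedy dynamic segmentation: grow each segment of the feature trajectory while a least-squares line fit stays within a tolerance, then start a new segment at the breakpoint. This is an assumption-laden illustration of the general idea only; the paper's exact segmentation criterion and the segment-based KL transform are not reproduced, and `max_err` is a hypothetical parameter.

    ```python
    import numpy as np

    def piecewise_linear_segments(traj, max_err=0.5):
        """Greedily split a 1-D feature trajectory into line segments,
        returned as (start, end) index pairs with shared endpoints."""
        segments = []
        start, n = 0, len(traj)
        while start < n - 1:
            end = start + 1  # a two-point segment always fits exactly
            # extend the segment while the line fit stays within tolerance
            while end + 1 < n:
                t = np.arange(start, end + 2)
                a, b = np.polyfit(t, traj[start:end + 2], 1)
                resid = traj[start:end + 2] - (a * t + b)
                if np.max(np.abs(resid)) > max_err:
                    break
                end += 1
            segments.append((start, end))
            start = end
        return segments

    # A V-shaped trajectory splits into two segments at the kink
    traj = np.concatenate([np.arange(10.0), np.arange(9)[::-1].astype(float)])
    print(piecewise_linear_segments(traj))  # → [(0, 9), (9, 18)]
    ```

    Each segment can then be represented by its two endpoints (plus a per-segment transform), which is what yields the feature-dimension reduction exploited by the search.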