8 research outputs found

    A Novel Approach for Ridge Detection and Mode Retrieval of Multicomponent Signals Based on STFT

    Full text link
    Time-frequency analysis is often used to study non-stationary multicomponent signals, which can be viewed as the superimposition of modes associated with ridges in the time-frequency (TF) plane. To understand such signals, it is essential to identify their constituent modes. This is often done by performing ridge detection in the TF plane, followed by mode retrieval. Unfortunately, existing ridge detectors are often not robust enough to noise, which hampers mode retrieval. In this paper, we therefore develop a novel approach to ridge detection and mode retrieval based on the analysis of the short-time Fourier transform of multicomponent signals in the presence of noise, which proves to be much more robust than state-of-the-art methods based on the same time-frequency representation.
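
    As an illustration of the general idea (not the estimator proposed in the paper), the minimal sketch below detects a single ridge in an STFT magnitude by a greedy argmax that penalizes large frequency jumps between adjacent frames; the penalty weight jump_penalty and the chirp test signal are assumptions made for the example.

        import numpy as np
        from scipy.signal import stft

        def detect_ridge(x, fs, nperseg=256, jump_penalty=2.0):
            """Greedy ridge detection on an STFT magnitude with a continuity penalty."""
            f, t, Zxx = stft(x, fs=fs, nperseg=nperseg)
            S = np.abs(Zxx)                              # spectrogram magnitude
            bins = np.arange(S.shape[0])
            ridge = np.empty(S.shape[1], dtype=int)
            ridge[0] = np.argmax(S[:, 0])                # start at the strongest bin
            for n in range(1, S.shape[1]):
                score = S[:, n] - jump_penalty * np.abs(bins - ridge[n - 1])
                ridge[n] = np.argmax(score)              # favor energy and continuity
            return t, f[ridge]                           # ridge frequency per frame

        # Example: a noisy linear chirp
        fs = 1000.0
        tt = np.arange(0, 1, 1 / fs)
        x = np.cos(2 * np.pi * (50 * tt + 100 * tt ** 2)) + 0.5 * np.random.randn(tt.size)
        times, ridge_freqs = detect_ridge(x, fs)

    A real multicomponent signal would require repeating this for each mode and handling noise-induced spurious maxima, which is precisely where the robustness issue discussed above arises.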

    An adaptive synchroextracting transform for the analysis of noise contaminated multi-component non-stationary signals

    Full text link
    The Synchro-Extracting Transform (SET) technique can capture the changing dynamics of a non-stationary signal and can be applied to fault diagnosis of rotating machinery operating under varying speed and/or load conditions. However, the time-frequency representation (TFR) of a signal produced by SET can be affected by noise contained in the signal, which can largely reduce the accuracy of fault diagnosis. This paper addresses this drawback and presents a new extraction operator that improves the energy concentration of the TFR of a noise-contaminated multi-component signal by using an adaptive ridge curve identification process together with SET. The adaptive ridge curve extraction is deployed to extract the signal components of a multi-component signal via an iterative approach. The effectiveness of the algorithm is verified using one set of simulated noise-added signals and two sets of experimental bearing and gearbox defect signals. The results show that the proposed technique can accurately identify the fault components from noise-contaminated multi-component non-stationary machine defect signals.
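
    The iterative "extract and peel off" scheme described above can be sketched roughly as follows; this is a plain STFT masking illustration, not the adaptive SET operator itself, and the mask half-width and number of components are assumed parameters.

        import numpy as np
        from scipy.signal import stft, istft

        def iterative_extraction(x, fs, n_components=2, nperseg=256, halfwidth_bins=3):
            """Extract components one at a time by masking around a ridge and subtracting."""
            residual = np.asarray(x, dtype=float).copy()
            components = []
            for _ in range(n_components):
                _, _, Zxx = stft(residual, fs=fs, nperseg=nperseg)
                S = np.abs(Zxx)
                bins = np.arange(S.shape[0])
                # greedy continuity-penalized ridge, as in the earlier sketch
                ridge = [int(np.argmax(S[:, 0]))]
                for n in range(1, S.shape[1]):
                    ridge.append(int(np.argmax(S[:, n] - 2.0 * np.abs(bins - ridge[-1]))))
                # keep a narrow band around the ridge and invert the masked STFT
                mask = np.zeros_like(Zxx)
                for n, k in enumerate(ridge):
                    mask[max(0, k - halfwidth_bins):k + halfwidth_bins + 1, n] = 1.0
                _, comp = istft(Zxx * mask, fs=fs, nperseg=nperseg)
                comp = comp[:residual.size]
                if comp.size < residual.size:                # guard against length mismatch
                    comp = np.pad(comp, (0, residual.size - comp.size))
                components.append(comp)
                residual = residual - comp                   # peel off and repeat
            return components

    An adaptive method such as the one presented in the paper would additionally adjust the extraction band to the local signal behaviour instead of using a fixed half-width.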

    Agreement among human and annotated transcriptions of global songs

    Get PDF
    Cross-cultural musical analysis requires standardized symbolic representation of sounds such as score notation. However, transcription into notation is usually conducted manually by ear, which is time-consuming and subjective. Our aim is to evaluate the reliability of existing methods for transcribing songs from diverse societies. We had 3 experts independently transcribe a sample of 32 excerpts of traditional monophonic songs from around the world (half a cappella, half with instrumental accompaniment). 16 songs also had pre-existing transcriptions created by 3 different experts. We compared these human transcriptions against one another and against 10 automatic music transcription algorithms. We found that human transcriptions can be sufficiently reliable (~90% agreement, κ ~.7), but current automated methods are not (<60% agreement, κ <.4). No automated method clearly outperformed others, in contrast to our predictions. These results suggest that improving automated methods for cross-cultural music transcription is critical for diversifying music information retrieval (MIR).
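
    For readers unfamiliar with the agreement figures quoted above, the sketch below shows one plausible way to compute frame-level percent agreement and Cohen's kappa between two transcriptions once both are quantized to the same pitch/time grid; the alignment and grid choices used in the actual study are not reproduced here.

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        def transcription_agreement(pitches_a, pitches_b):
            """Raw and chance-corrected agreement between two frame-wise pitch sequences."""
            a, b = np.asarray(pitches_a), np.asarray(pitches_b)
            percent = float(np.mean(a == b))          # raw frame-level agreement
            kappa = cohen_kappa_score(a, b)           # chance-corrected agreement
            return percent, kappa

        # Two hypothetical frame-wise transcriptions (MIDI note numbers, 0 = silence)
        a = [60, 60, 62, 64, 64, 0, 67, 67]
        b = [60, 60, 62, 64, 65, 0, 67, 67]
        print(transcription_agreement(a, b))          # 0.875 raw agreement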

    Extraction of instantaneous frequencies for signals with intersecting and intermittent trajectories

    Get PDF
    A multicomponent signal usually presents multiple trajectories with time-varying frequencies and amplitudes in a time–frequency distribution (TFD). One can extract the ridges corresponding to true signal components and then reconstruct them to recover signal signatures. Most current practices for ridge extraction assume that each trajectory runs throughout the entire time axis without cross-terms. However, this hypothesis is inconsistent with many measured signals, and a growing range of applications requires further consideration of complicated intersecting and intermittent cases. This study addresses this issue and proposes a novel intersecting and intermittent trajectory tracking (IITT) approach. We first develop a data-driven method to effectively isolate peaks from noise in a TFD and generate a dependable peak spectrum. Then, we propose a dynamic optimization tracking function to decide upon the acceptance of the peaks corresponding to an individual component based on the purified spectrum. The IITT approach fully exploits the information from the raw signal without any prior knowledge while remaining robust to variations in the number of ridges, their births and deaths, and their continuation and discontinuation. Two simulated and three measured signals are utilized to assess the performance of the proposed IITT. The elements behind its success are revealed and discussed in detail at the end of the paper.
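
    The first step described above, isolating peaks from noise in each TFD frame, could be sketched as below; the median-plus-MAD threshold rule is an assumption used for illustration and is not the data-driven rule proposed in the paper.

        import numpy as np
        from scipy.signal import find_peaks

        def frame_peaks(tfd_column, k=5.0):
            """Noise-robust peak picking in one TFD frame via a median + k*MAD threshold."""
            med = np.median(tfd_column)
            mad = np.median(np.abs(tfd_column - med)) + 1e-12
            peaks, _ = find_peaks(tfd_column, height=med + k * mad)
            return peaks                               # candidate ridge bins in this frame

        def peak_spectrum(tfd, k=5.0):
            """Apply frame-wise peak picking to a (n_freq_bins, n_frames) magnitude TFD."""
            return [frame_peaks(tfd[:, n], k) for n in range(tfd.shape[1])]

    The tracking stage would then link these per-frame peaks into trajectories while allowing them to start, stop, and intersect, which is where the dynamic optimization tracking function comes in.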

    A time-frequency based method for the detection and tracking of multiple non-linearly modulated components with births and deaths

    No full text
    The estimation of the components which contain the characteristics of a signal attracts great attention in many real-world applications. In this paper, we address the problem of tracking multiple signal components over discrete time series. We propose an algorithm to first detect the components from a given time-frequency distribution and then to track them automatically. In the first place, the peaks corresponding to the signal components are detected using the statistical properties of the spectral estimator. Then, an original classifier is proposed to automatically track the detected peaks in order to build components over time. This classifier is based on a total divergence matrix computed from a peak-component divergence matrix that takes into account both amplitude and frequency information. The peak-component pairs are matched automatically from this divergence matrix, and a stochastic discrimination rule is proposed to decide upon the acceptance of the peak-component pairs. In this way, the algorithm can estimate the number, the amplitude and frequency modulation functions, and the births and deaths of the components without any limitation on the number of components. The performance of the proposed method, a post-processing of a time-frequency distribution, is validated on simulated signals under different parameter sets. The method is also applied to 4 real-world signals as a proof of its applicability. Index Terms: time-frequency domain, multicomponent, peak detection, component tracking, amplitude and frequency modulation, nonlinear, nonstationary, births and deaths.
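
    A toy sketch of the matching step is given below: detected peaks are assigned to existing components by a divergence that combines frequency and amplitude differences, and unmatched peaks start new components ("births"). The divergence weights and the acceptance threshold are illustrative assumptions, not the paper's stochastic discrimination rule.

        import numpy as np

        def match_peaks(components, peaks, w_freq=1.0, w_amp=0.5, threshold=5.0):
            """Assign (freq, amp) peaks of one frame to tracked components or create new ones.

            components: list of dicts holding each component's last 'freq' and 'amp'.
            """
            for pf, pa in peaks:
                if components:
                    div = np.array([w_freq * abs(pf - c['freq']) + w_amp * abs(pa - c['amp'])
                                    for c in components])
                    j = int(np.argmin(div))
                    if div[j] < threshold:                    # accept the peak-component pair
                        components[j]['freq'], components[j]['amp'] = pf, pa
                        continue
                components.append({'freq': pf, 'amp': pa})    # birth of a new component
            return components

    Deaths would be declared for components left unmatched over several consecutive frames; the full method also works from a total divergence matrix rather than this greedy per-peak assignment.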

    Globally, songs and instrumental melodies are slower, higher, and use more stable pitches than speech: a registered report

    Get PDF
    Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while songs and speech use similar (iv) pitch interval sizes and (v) timbral brightness. Exploratory analyses suggest that features vary along a “musi-linguistic” continuum when instrumental melodies and recited lyrics are included. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.
