1,145 research outputs found

    Feature extraction for speech and music discrimination

    Driven by the demands of information retrieval, video editing, and human-computer interfaces, this paper proposes a novel spectral feature for music and speech discrimination. The scheme simulates a biological model using the averaged cepstrum, reflecting the tendency of human perception to pick up areas of large cepstral change: cepstrum data far from the mean value is exponentially reduced in magnitude. Music/speech discrimination experiments compare the performance of the proposed feature with that of previously proposed features in classification. Dynamic time warping based classification verifies that the proposed feature yields the best music/speech classification quality on the test database.
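    A minimal sketch of the averaged-cepstrum idea described above, in which cepstral coefficients far from the frame mean are exponentially attenuated (the decay rate `alpha` and the function name are assumptions for illustration, not from the paper):

```python
import numpy as np

def averaged_cepstrum_feature(frame, alpha=0.5):
    """Real cepstrum with coefficients far from the mean exponentially
    attenuated (illustrative sketch; `alpha` is an assumed decay rate)."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12   # magnitude spectrum
    cepstrum = np.fft.irfft(np.log(spectrum))       # real cepstrum
    weights = np.exp(-alpha * np.abs(cepstrum - cepstrum.mean()))
    return cepstrum * weights                       # attenuated feature
```

    The exponential weight is 1 at the mean cepstral value and decays toward 0 for coefficients far from it, so only regions of large cepstral change survive with near-full magnitude.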

    Joint Multi-Pitch Detection Using Harmonic Envelope Estimation for Polyphonic Music Transcription

    In this paper, a method for automatic transcription of music signals based on joint multiple-F0 estimation is proposed. As a time-frequency representation, the constant-Q resonator time-frequency image is employed, while a novel noise suppression technique based on a pink noise assumption is applied in a preprocessing step. In the multiple-F0 estimation stage, the optimal tuning and inharmonicity parameters are computed and a salience function is proposed in order to select pitch candidates. For each pitch candidate combination, an overlapping partial treatment procedure is used, which is based on a novel spectral envelope estimation procedure for the log-frequency domain, in order to compute the harmonic envelope of candidate pitches. In order to select the optimal pitch combination for each time frame, a score function is proposed which combines spectral and temporal characteristics of the candidate pitches and also aims to suppress harmonic errors. For postprocessing, hidden Markov models (HMMs) and conditional random fields (CRFs) trained on MIDI data are employed in order to boost transcription accuracy. The system was trained on isolated piano sounds from the MAPS database and was tested on classical and jazz recordings from the RWC database, as well as on recordings from a Disklavier piano. A comparison with several state-of-the-art systems is provided using a variety of error metrics, and encouraging results are reported.
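    The salience function used for pitch-candidate selection can be illustrated with a much-simplified sketch that scores a candidate F0 by summing magnitude-spectrum values at its harmonic positions (the 1/h partial weighting and all names here are assumptions; the paper's actual salience function also incorporates tuning and inharmonicity parameters):

```python
import numpy as np

def pitch_salience(spectrum, fs, f0, n_harmonics=8):
    """Toy salience: sum of magnitude-spectrum values at the first few
    harmonics of a candidate F0, weighted by 1/h (an assumed weighting)."""
    n_fft = 2 * (len(spectrum) - 1)          # spectrum assumed from rfft
    salience = 0.0
    for h in range(1, n_harmonics + 1):
        bin_idx = int(round(h * f0 * n_fft / fs))
        if bin_idx >= len(spectrum):
            break
        salience += spectrum[bin_idx] / h    # weaker weight for high partials
    return salience
```

    Evaluating this score over a grid of candidate F0s and keeping the local maxima is the usual way such a function feeds the joint-combination stage.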

    A synthesis-based method for pitch extraction

    A synthesis-based method for pitch extraction of the speech signal is proposed. The method synthesizes a number of log power spectra for different values of fundamental frequency and compares them with the log power spectrum of the input speech segment. The average magnitude (AM) difference between the two spectra is used for comparison. The value of fundamental frequency that gives the minimum AM difference between the synthesized spectrum and the input spectrum is chosen as the estimated value of fundamental frequency. The voiced/unvoiced decision is made on the basis of the value of the AM difference at the minimum. For synthesizing the log power spectrum, the speech signal is assumed to be the output of an all-pole filter. The transfer function of the all-pole filter is estimated from the input speech segment by using the autocorrelation method of linear prediction. The synthesis-based method is tried out on real speech data, and the results are discussed.
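    A much-simplified sketch of the synthesize-and-compare loop: for each candidate F0 a synthetic log spectrum is generated, and the candidate minimizing the average magnitude (AM) difference to the input log spectrum is kept. Here the synthetic spectrum is a flat harmonic comb rather than the all-pole-filtered spectrum the method actually uses, and the search range is an assumption:

```python
import numpy as np

def am_difference_pitch(frame, fs, f0_min=60.0, f0_max=400.0, step=1.0):
    """Pick the candidate F0 whose synthetic log spectrum has the smallest
    average magnitude difference to the input log spectrum (sketch only:
    the template is a flat harmonic comb, not an all-pole spectrum)."""
    n_fft = len(frame)
    log_spec = np.log(np.abs(np.fft.rfft(frame)) + 1e-12)
    floor, peak = log_spec.min(), log_spec.max()
    best_f0, best_diff = None, np.inf
    for f0 in np.arange(f0_min, f0_max, step):
        synth = np.full_like(log_spec, floor)
        bins = np.round(np.arange(f0, fs / 2, f0) * n_fft / fs).astype(int)
        synth[bins] = peak                        # comb peaks at harmonics
        diff = np.mean(np.abs(log_spec - synth))  # AM difference
        if diff < best_diff:
            best_f0, best_diff = f0, diff
    return best_f0
```

    In the paper's formulation the comb would additionally be shaped by the LPC-estimated all-pole envelope, and the minimum AM difference itself would drive the voiced/unvoiced decision.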

    A Model for Pitch Estimation Using Wavelet Packet Transform Based CEPSTRUM Method

    A computationally efficient model for pitch estimation of mixed audio signals is presented. Pitch estimation plays a significant role in music audition tasks such as music information retrieval, automatic music transcription, and melody extraction. The proposed system consists of channel separation and periodicity detection. The input signal is created by mixing two sound signals. The model first removes the short-time correlations of the mixed signal, then divides it into a number of channels using the wavelet packet transform, computes the cepstrum of each channel, and sums the cepstrum functions. The summary cepstrum function is further processed to extract the pitch frequency of each of the two input signals separately. The model's performance is demonstrated to be comparable to that of recent multichannel models, and the system can be verified by simulating it in MATLAB.
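    The channel-separation-plus-summary-cepstrum pipeline might be sketched as follows, with rectangular FFT band masks standing in for the wavelet packet transform (the band count, quefrency search range, and Hann taper are all assumptions for illustration):

```python
import numpy as np

def summary_cepstrum_pitch(x, fs, n_bands=4, f0_min=80.0, f0_max=400.0):
    """Pitch from a summed multichannel cepstrum.  Sketch only: channels
    come from rectangular FFT band masks, not a wavelet packet transform."""
    n = len(x)
    x = np.asarray(x) * np.hanning(n)            # taper to localize harmonics
    spectrum = np.fft.rfft(x)
    edges = np.linspace(0, len(spectrum), n_bands + 1).astype(int)
    summary = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spectrum)
        band[lo:hi] = spectrum[lo:hi]            # one frequency channel
        channel = np.fft.irfft(band, n)          # band-limited time signal
        mag = np.abs(np.fft.rfft(channel)) + 1e-12
        summary += np.fft.irfft(np.log(mag), n)  # per-channel cepstrum
    q_lo = int(fs / f0_max)                      # quefrency search range
    q_hi = int(fs / f0_min)
    peak_q = q_lo + np.argmax(summary[q_lo:q_hi])
    return fs / peak_q
```

    Separating two concurrent pitches, as the model does, would require iterating this step, e.g. picking the strongest summary-cepstrum peak, cancelling its contribution, and searching again.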

    GLOTTAL EXCITATION EXTRACTION OF VOICED SPEECH - JOINTLY PARAMETRIC AND NONPARAMETRIC APPROACHES

    The goal of this dissertation is to develop methods to recover glottal flow pulses, which contain biometric information about the speaker. The excitation information estimated from an observed speech utterance is modeled as the source of an inverse problem. Windowed linear prediction analysis and inverse filtering are first used to deconvolve the speech signal and obtain a rough estimate of the glottal flow pulses. Linear prediction and its inverse filtering can largely eliminate the vocal-tract response, which is usually modeled as an infinite impulse response filter. Vocal-tract components that remain in the estimate after inverse filtering are then removed by maximum-phase and minimum-phase decomposition, implemented by applying the complex cepstrum to the initial estimate of the glottal pulses. The additive and residual errors from inverse filtering can be suppressed by higher-order statistics, the method used to calculate the cepstrum representations. Features provided directly by the glottal source's cepstrum representation, together with fitting parameters for the estimated pulses, form feature patterns that were applied to a minimum-distance classifier to realize a speaker identification system with a very limited number of subjects.
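    The first stage, autocorrelation-method linear prediction followed by inverse filtering to obtain a rough excitation estimate, can be sketched as below. The Levinson-Durbin recursion solves the normal equations; the later complex-cepstrum phase decomposition is not shown, and the prediction order is an assumption:

```python
import numpy as np

def lpc_residual(x, order=12):
    """LPC inverse filtering: estimate A(z) = 1 + a1 z^-1 + ... via the
    autocorrelation method (Levinson-Durbin), then filter x through A(z)
    to get a rough excitation (residual) estimate."""
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1:n + order]  # lags 0..order
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):                # Levinson-Durbin recursion
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                           # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]      # update previous coefficients
        a[i] = k
        err *= 1.0 - k * k                       # shrink prediction error
    residual = np.convolve(x, a)[:n]             # inverse filter by A(z)
    return a, residual
```

    The residual whitens the vocal-tract coloring, which is why its variance drops relative to the input; for voiced speech it retains the quasi-periodic glottal pulse structure that the later cepstral decomposition refines.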

    Extraction of Fundamental Frequency of Human Voice by Autocorrelation Technique

    Human beings are blessed with a wonderful tool, the voice, that allows us to communicate, talk, sing, and express emotions. Like a fingerprint, one's voice is unique and distinct from all others, and it can act as an identifier. The human voice has many components created through a myriad of muscle movements. It is produced through the vocal folds (vocal cords) as the primary sound source and is composed of a multitude of characteristics that make each voice different, namely pitch, tone, and rate [1]. Vocal fold structure and size vary from person to person, making each voice unique in nature. Genetics, gender, the size and shape of the rest of the person's body (especially the vocal tract), and the manner in which speech sounds are habitually formed and articulated are the governing factors of this uniqueness. Voice characteristics can be correlated with various measurable parameters such as intensity, pitch, short-time spectrum, formant frequencies and bandwidths, and spectral correlation. This paper discusses and analyzes methods for extracting the characteristics of the human voice, especially the pitch frequency.
    Keywords: Vocal folds, Phonation, Kymographic parameters, Pitch, Short time spectrum, Average Magnitude Difference Function
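    The autocorrelation technique of the title reduces, in its simplest form, to picking the lag with the strongest self-similarity inside the plausible pitch-period range. A bare sketch, with no voicing decision, windowing, or post-smoothing, and with an assumed search range:

```python
import numpy as np

def autocorr_pitch(frame, fs, f0_min=50.0, f0_max=500.0):
    """Fundamental frequency from the autocorrelation peak: the lag at
    which the frame best matches a shifted copy of itself is taken as
    the pitch period (illustrative sketch)."""
    frame = frame - frame.mean()                          # remove DC offset
    n = len(frame)
    ac = np.correlate(frame, frame, mode="full")[n - 1:]  # lags 0..n-1
    lag_min = int(fs / f0_max)                            # shortest period
    lag_max = int(fs / f0_min)                            # longest period
    peak = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / peak
```

    Restricting the search to lags between fs/f0_max and fs/f0_min excludes the trivial lag-zero maximum and most octave errors; real pitch trackers add a voicing threshold on the peak height, which connects to the Average Magnitude Difference Function the keywords mention.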