47 research outputs found

    High-resolution sinusoidal analysis for resolving harmonic collisions in music audio signal processing

    Get PDF
    Many music signals can largely be considered an additive combination of multiple sources, such as musical instruments or voice. If the musical sources are pitched instruments, the spectra they produce are predominantly harmonic and are thus well suited to an additive sinusoidal model. However, due to resolution limits inherent in time-frequency analyses, when the harmonics of multiple sources occupy equivalent time-frequency regions, their individual properties are additively combined in the time-frequency representation of the mixed signal. Any such time-frequency point in a mixture where multiple harmonics overlap produces a single observation from which the contributions of the individual harmonics cannot be trivially deduced. These overlaps are referred to as overlapping partials or harmonic collisions. If one wishes to infer information about individual sources in music mixtures, the information carried in regions where collided harmonics exist becomes unreliable due to interference from other sources. This interference has ramifications in a variety of music signal processing applications such as multiple fundamental frequency estimation, source separation, and instrumentation identification. This thesis addresses harmonic collisions in music signal processing applications. As a solution to the harmonic collision problem, a class of signal subspace-based high-resolution sinusoidal parameter estimators is explored. Specifically, the direct matrix pencil method, or equivalently, the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method, is used with the goal of producing estimates of the salient parameters of individual harmonics that occupy equivalent time-frequency regions. This estimation method is adapted here to be applicable to time-varying signals such as musical audio. While high-resolution methods have been previously explored in the context of music signal processing, previous work has not addressed whether such methods truly produce high-resolution sinusoidal parameter estimates in real-world music audio signals. This thesis therefore answers the question of whether high-resolution sinusoidal parameter estimators are really high-resolution for real music signals. This work directly explores the capabilities of this form of sinusoidal parameter estimation to resolve collided harmonics. These capabilities are also explored in the context of music signal processing applications: potential benefits of high-resolution sinusoidal analysis are examined in experiments involving multiple fundamental frequency estimation and audio source separation. This work shows that there are indeed benefits to high-resolution sinusoidal analysis in music signal processing applications, especially when compared to methods that produce sinusoidal parameter estimates based on more traditional time-frequency representations. The benefits of this form of sinusoidal analysis are most evident in multiple fundamental frequency estimation applications, where substantial performance gains are seen. High-resolution analysis in the context of computational auditory scene analysis-based source separation shows similar performance to existing comparable methods.
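    As an illustration of the signal-subspace idea described above, the following minimal Python sketch uses an ESPRIT-style estimator (SVD of a Hankel data matrix followed by the rotational-invariance step) to resolve two sinusoids spaced more closely than the DFT bin width. The frame length, sample rate, and model order are illustrative choices, not values taken from the thesis.

    import numpy as np

    def esprit_freqs(x, model_order, fs):
        """ESPRIT-style frequency and damping estimation from one frame.
        `model_order` counts complex exponentials, so each real sinusoid
        contributes two of them."""
        n = len(x)
        m = n // 2
        # Hankel data matrix whose columns are successive length-m snapshots
        H = np.array([x[i:i + m] for i in range(n - m + 1)]).T
        # Signal subspace from the leading left singular vectors
        U, _, _ = np.linalg.svd(H, full_matrices=False)
        Us = U[:, :model_order]
        # Rotational invariance between Us without its last and first rows
        Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
        z = np.linalg.eigvals(Phi)          # poles z = exp((-d + 1j*2*pi*f)/fs)
        freqs = np.angle(z) * fs / (2 * np.pi)
        dampings = -np.log(np.abs(z)) * fs
        return np.sort(freqs), dampings

    if __name__ == "__main__":
        fs = 8000
        t = np.arange(1024) / fs
        # Two "colliding" partials only 4 Hz apart, below the ~7.8 Hz DFT resolution
        x = np.cos(2 * np.pi * 440.0 * t) + 0.7 * np.cos(2 * np.pi * 444.0 * t + 0.3)
        freqs, _ = esprit_freqs(x, model_order=4, fs=fs)
        print(freqs[freqs > 0])             # expect values near 440 and 444 Hz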

    Computational Modelling and Analysis of Vibrato and Portamento in Expressive Music Performance

    Get PDF
    PhD thesis, 148 pp. Vibrato and portamento constitute two expressive devices involving continuous pitch modulation and are widely employed in string, voice, and wind instrument performance. Automatic extraction and analysis of such expressive features form some of the most important aspects of music performance research and represent an under-explored area in music information retrieval. This thesis aims to provide computational and scalable solutions for the automatic extraction and analysis of performed vibratos and portamenti. Applications of the technologies include music learning, musicological analysis, music information retrieval (summarisation, similarity assessment), and music expression synthesis. To automatically detect vibratos and estimate their parameters, we propose a novel method based on the Filter Diagonalisation Method (FDM). The FDM remains robust over short time frames, allowing frame sizes to be set at values small enough to accurately identify local vibrato characteristics and pinpoint vibrato boundaries. To determine vibrato presence, we test two alternative decision mechanisms, the Decision Tree and Bayes' Rule. The FDM systems are compared to state-of-the-art techniques and obtain the best results: the FDM's vibrato rate accuracies are above 92.5%, and the vibrato extent accuracies are about 85%. We use a Hidden Markov Model (HMM) with a Gaussian Mixture Model (GMM) to detect portamento existence. Upon extracting the portamenti, we propose a Logistic Model for describing portamento parameters. The Logistic Model has the lowest root mean squared error and the highest adjusted R-squared value compared to regression models employing polynomial and Gaussian functions and the Fourier series. The vibrato and portamento detection and analysis methods are implemented in AVA, an interactive tool for automated detection, analysis, and visualisation of vibrato and portamento. Using the system, we perform cross-cultural analyses of vibrato and portamento differences between erhu and violin performance styles, and between typical male and female roles in Beijing opera singing.
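    The Logistic Model mentioned above describes a portamento as a smooth pitch transition between two notes. The sketch below fits a generic four-parameter logistic curve to a synthetic portamento pitch track with scipy; the parameterisation (lower and upper pitch, slope, midpoint) is assumed here for illustration and is not necessarily the exact model defined in the thesis.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, lower, upper, slope, midpoint):
        """Logistic pitch transition from `lower` to `upper` Hz (illustrative)."""
        return lower + (upper - lower) / (1.0 + np.exp(-slope * (t - midpoint)))

    # Synthetic portamento: a glide from ~294 Hz (D4) to ~330 Hz (E4) over 0.3 s
    t = np.linspace(0.0, 0.3, 150)
    f0 = logistic(t, 294.0, 330.0, 40.0, 0.15) + np.random.normal(0, 0.5, t.size)

    # Fit the four logistic parameters to the observed pitch track
    p0 = [f0.min(), f0.max(), 10.0, t.mean()]        # rough initial guess
    params, _ = curve_fit(logistic, t, f0, p0=p0)
    rmse = np.sqrt(np.mean((f0 - logistic(t, *params)) ** 2))
    print("fitted (lower, upper, slope, midpoint):", np.round(params, 2))
    print("RMSE (Hz):", round(float(rmse), 3))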

    A very low latency pitch tracker for audio to midi conversion

    No full text
    An algorithm for estimating the fundamental frequency of a single-pitch audio signal is described, for application to audio-to-MIDI conversion. In order to minimize latency, the method is based on the ESPRIT algorithm together with a statistical model of partial frequencies. It is tested on real guitar recordings and compared to the YIN estimator. We show that, in this particular context, both methods exhibit similar accuracy, but the periodicity measure used for note segmentation is much more stable with the ESPRIT-based algorithm, which significantly reduces ghost notes. This method is also able to get very close to the theoretical minimum latency, i.e. the fundamental period of the lowest observable pitch. Furthermore, it appears that fast implementations can reach a reasonable complexity and could be compatible with real-time operation, although this is not tested in this study.
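    For reference, the YIN baseline mentioned above derives both its pitch estimate and its periodicity measure from a cumulative-mean-normalised difference function. The sketch below is a minimal single-frame version of that idea (no parabolic interpolation or the other refinements of the published estimator); the frame length, threshold, and test signal are illustrative.

    import numpy as np

    def yin_f0(frame, fs, fmin=60.0, fmax=1000.0, threshold=0.1):
        """Minimal YIN-style pitch estimate and periodicity score for one frame."""
        tau_min, tau_max = int(fs / fmax), int(fs / fmin)
        n = len(frame)
        # Difference function d(tau)
        d = np.array([np.sum((frame[:n - tau] - frame[tau:n]) ** 2)
                      for tau in range(tau_max + 1)])
        # Cumulative-mean-normalised difference d'(tau), with d'(0) defined as 1
        cmnd = np.ones_like(d)
        cumsum = np.cumsum(d[1:])
        cmnd[1:] = d[1:] * np.arange(1, tau_max + 1) / np.maximum(cumsum, 1e-12)
        # First lag below the absolute threshold
        for tau in range(tau_min, tau_max):
            if cmnd[tau] < threshold:
                return fs / tau, 1.0 - cmnd[tau]
        tau = tau_min + np.argmin(cmnd[tau_min:tau_max])
        return fs / tau, 1.0 - cmnd[tau]

    fs = 16000
    t = np.arange(int(0.025 * fs)) / fs          # one 25 ms frame
    frame = np.sin(2 * np.pi * 196.0 * t)        # open G string of a guitar
    print(yin_f0(frame, fs))                     # ~196 Hz and a periodicity score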

    Advances In Internal Model Principle Control Theory

    Get PDF
    In this thesis, two advanced implementations of the internal model principle (IMP) are presented. The first is the identification of exponentially damped sinusoidal (EDS) signals with unknown parameters, which are widely used to model audio signals. This application is developed in discrete time as a signal processing problem. An IMP-based adaptive algorithm is developed for estimating two EDS parameters, the damping factor and the frequency. The stability and convergence of this adaptive algorithm are analyzed based on a discrete-time two-time-scale averaging theory. Simulation results demonstrate the identification performance of the proposed algorithm and verify its stability. The second advanced implementation of IMP control theory is the rejection of disturbances consisting of both predictable and unpredictable components. An IMP controller is used for rejecting predictable disturbances, but the phase lag introduced by the IMP controller limits the rejection capability of the wideband disturbance controller, which is used for attenuating unpredictable disturbances such as white noise. A combined open- and closed-loop control strategy is therefore presented. In the closed-loop mode, both controllers are active. Once the tracking error is insignificant, the input to the IMP controller is disconnected while its output control action is maintained. In the open-loop mode, the wideband disturbance controller is made more aggressive for attenuating white noise. Depending on the level of the tracking error, the input to the IMP controller is connected intermittently, so the system switches between open- and closed-loop modes. A state feedback controller is designed as the wideband disturbance controller in this application. Two types of predictable disturbances are considered: constant and periodic. For a constant disturbance, an integral controller, the simplest IMP controller, is used. For a periodic disturbance with unknown frequencies, adaptive IMP controllers are used to estimate the frequencies before cancelling the disturbances. An extended multiple Lyapunov functions (MLF) theorem is developed for the stability analysis of this intermittent control strategy. Simulation results demonstrate the rejection performance of this switched control strategy in comparison with two other traditional controllers.
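    The simplest case mentioned above, rejecting a constant disturbance with an integral controller, already illustrates the internal model principle: the controller embeds a model of the disturbance (an integrator for a constant) and drives the steady-state error to zero. The sketch below simulates this for an illustrative first-order discrete-time plant; the plant and gain values are assumptions, not taken from the thesis.

    # Discrete-time illustration of the internal model principle: an integral
    # controller contains the model of a constant (step) disturbance, so the
    # disturbance is rejected with zero steady-state error.
    a, b = 0.9, 0.1          # plant: x[k+1] = a*x[k] + b*(u[k] + d)
    ki = 0.5                 # integral gain (the internal model is the integrator)
    d = 1.0                  # constant disturbance
    ref = 0.0                # regulate the output to zero
    x, integ = 0.0, 0.0

    for k in range(200):
        e = ref - x                  # tracking error
        integ += e                   # discrete integrator (internal model)
        u = ki * integ               # control action
        x = a * x + b * (u + d)      # plant update with disturbance

    print(round(x, 6))               # output driven back to ~0 despite d != 0
    print(round(ki * integ, 3))      # control settles near -d, cancelling it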

    Physically Informed Subtraction of a String's Resonances from Monophonic, Discretely Attacked Tones : a Phase Vocoder Approach

    Get PDF
    A method for the subtraction of a string's oscillations from monophonic, plucked- or hit-string tones is presented. The remainder of the subtraction is the response of the instrument's body to the excitation, and potentially other sources, such as faint vibrations of other strings, background noises or recording artifacts. In some respects, this method is similar to a stochastic-deterministic decomposition based on Sinusoidal Modeling Synthesis [MQ86, IS87]. However, our method targets string partials expressly, according to a physical model of the string's vibrations described in this thesis. Also, the method is built on a Phase Vocoder scheme. This approach has the essential advantage that the subtraction of the partials can take place "instantly", on a frame-by-frame basis, avoiding the need to track partials and therefore opening the possibility of a real-time implementation. The subtraction takes place in the frequency domain, and a method is presented whereby the computational cost of this process can be reduced by restricting a partial's frequency-domain data to its main lobe. In each frame of the Phase Vocoder, the string is encoded as a set of partials, each completely described by four constants: frequency, phase, magnitude and exponential decay. These parameters are obtained with a novel method, the Complex Exponential Phase Magnitude Evolution (CSPME), which is a generalisation of the CSPE [SG06] to signals with exponential envelopes and which surpasses the finite resolution of the Discrete Fourier Transform. The encoding obtained is an intuitive representation of the string, suitable for musical processing.
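    To make the frame-by-frame idea concrete, the sketch below synthesises one string partial from the four constants named above (frequency, phase, magnitude, exponential decay) and subtracts its spectrum from the spectrum of a windowed frame, leaving the body response. The partial's parameters are assumed known here; the thesis estimates them per frame with the CSPME and restricts the subtraction to each partial's main lobe.

    import numpy as np

    def partial_frame(freq, phase, mag, decay, n, fs):
        """One analysis frame of a string partial described by four constants:
        frequency (Hz), initial phase (rad), magnitude, exponential decay (1/s)."""
        t = np.arange(n) / fs
        return mag * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase)

    fs, n = 44100, 2048
    window = np.hanning(n)

    # Illustrative frame: one string partial plus a weak broadband "body" residual
    rng = np.random.default_rng(0)
    body = 0.01 * rng.standard_normal(n)
    string = partial_frame(440.0, 0.2, 1.0, 3.0, n, fs)
    frame = string + body

    # Subtract the known partial in the frequency domain, frame by frame
    X = np.fft.rfft(window * frame)
    P = np.fft.rfft(window * partial_frame(440.0, 0.2, 1.0, 3.0, n, fs))
    residual = np.fft.irfft(X - P, n)

    # The residual should be close to the windowed body signal alone
    print(np.allclose(residual, window * body, atol=1e-10))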

    Acoustic Speaker Localization with Strong Reverberation and Adaptive Feature Filtering with a Bayes RFS Framework

    Get PDF
    The thesis investigates the challenges of speaker localization in the presence of strong reverberation, multi-speaker tracking, and multi-feature multi-speaker state filtering, using sound recordings from microphones. Novel reverberation-robust speaker localization algorithms are derived from the signal and room acoustics models. A multi-speaker tracking filter and a multi-feature multi-speaker state filter are developed based upon the generalized labeled multi-Bernoulli random finite set framework. Experiments and comparative studies have verified and demonstrated the benefits of the proposed methods.

    Model-based Analysis and Processing of Speech and Audio Signals

    Get PDF