
    Defining Fundamental Frequency for Almost Harmonic Signals

    In this work, we consider the modeling of signals that are almost, but not quite, harmonic, i.e., composed of sinusoids whose frequencies are close to being integer multiples of a common frequency. Typically, in applications, such signals are treated as perfectly harmonic, allowing for the estimation of their fundamental frequency, despite the signals not actually being periodic. Herein, we provide three different definitions of a concept of fundamental frequency for such inharmonic signals and study the implications of the different choices for modeling and estimation. We show that one of the definitions corresponds to a misspecified modeling scenario and provides a theoretical benchmark for analyzing the behavior of estimators derived under a perfectly harmonic assumption. The second definition stems from optimal mass transport theory and yields a robust and easily interpretable concept of fundamental frequency based on the signals' spectral properties. The third definition interprets the inharmonic signal as an observation of a randomly perturbed harmonic signal. This allows for computing a hybrid information-theoretical bound on estimation performance, as well as for finding an estimator attaining the bound. The theoretical findings are illustrated using numerical examples. Comment: Accepted for publication in IEEE Transactions on Signal Processing.
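    To make the first definition concrete, the sketch below fits a perfectly harmonic model to an almost-harmonic signal via grid search over candidate fundamentals, i.e., the misspecified scenario the paper uses as a benchmark. The stretched-partial inharmonicity law and all parameter values are illustrative stand-ins, not the paper's setup.

```python
# Minimal sketch: estimating a "fundamental frequency" of an almost-harmonic
# signal under a (misspecified) perfectly harmonic model. All values below
# are illustrative assumptions.
import numpy as np

fs = 8000                       # sample rate (Hz), illustrative
t = np.arange(1024) / fs
f0_true, beta, K = 220.0, 1e-3, 5

# Almost-harmonic signal: partial k lies near, but not exactly at, k * f0_true
# (a stretched-partial law of the kind used for piano strings).
x = sum(np.cos(2 * np.pi * k * f0_true * np.sqrt(1 + beta * k**2) * t)
        for k in range(1, K + 1))

def harmonic_fit_cost(f0):
    # Least-squares residual of projecting x onto a perfectly harmonic basis,
    # i.e., the misspecified harmonic model evaluated at fundamental f0.
    B = np.column_stack(
        [np.cos(2 * np.pi * k * f0 * t) for k in range(1, K + 1)]
        + [np.sin(2 * np.pi * k * f0 * t) for k in range(1, K + 1)])
    coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)
    return np.sum((x - B @ coeffs) ** 2)

# Grid search for the best-fitting fundamental of the harmonic model; the
# estimate is biased away from f0_true because the model is misspecified.
grid = np.linspace(200.0, 240.0, 2001)
f0_hat = grid[np.argmin([harmonic_fit_cost(f) for f in grid])]
print(f"true f0 = {f0_true} Hz, harmonic-model estimate = {f0_hat:.3f} Hz")
```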

    Estimation of Fundamental Frequencies in Stereophonic Music Mixtures


    Interpolation and Extrapolation of Toeplitz Matrices via Optimal Mass Transport

    In this work, we propose a novel method for quantifying distances between Toeplitz structured covariance matrices. By exploiting the spectral representation of Toeplitz matrices, the proposed distance measure is defined based on an optimal mass transport problem in the spectral domain. This may then be interpreted in the covariance domain, suggesting a natural way of interpolating and extrapolating Toeplitz matrices, such that the positive semi-definiteness and the Toeplitz structure of these matrices are preserved. The proposed distance measure is also shown to be contractive with respect to both additive and multiplicative noise, and thereby allows for a quantification of the decreased distance between signals when these are corrupted by noise. Finally, we illustrate how this approach can be used for several applications in signal processing. In particular, we consider interpolation and extrapolation of Toeplitz matrices, as well as clustering problems and tracking of slowly varying stochastic processes.
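    The following is a minimal numerical sketch of the spectral-domain idea, not the paper's exact algorithm: map Toeplitz covariances to power spectra, solve a discrete optimal mass transport problem between the normalized spectra as a linear program, and rebuild an interpolated Toeplitz matrix from the displacement-interpolated spectrum. The grid, the quadratic cost, and the mass handling are illustrative choices.

```python
# Sketch of OT-based Toeplitz interpolation under illustrative assumptions.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import linprog

def spectrum(c, omega):
    # Covariance sequence -> power spectrum: Phi(w) = c0 + 2 sum_k c_k cos(kw),
    # clamped to be nonnegative for this sketch.
    k = np.arange(1, len(c))
    return np.maximum(c[0] + 2 * np.cos(np.outer(omega, k)) @ c[1:], 1e-12)

def to_covariance(phi, omega, n):
    # Inverse map: c_k ~ (1/pi) * sum_w Phi(w) cos(kw) dw (crude quadrature).
    dw = omega[1] - omega[0]
    return np.array([(phi * np.cos(k * omega)).sum() * dw / np.pi
                     for k in range(n)])

def ot_plan(p, q, omega):
    # Discrete OT with squared-frequency-distance cost, as a linear program.
    m = len(omega)
    C = (omega[:, None] - omega[None, :]) ** 2
    A_eq = np.vstack([np.kron(np.eye(m), np.ones(m)),   # row sums = p
                      np.kron(np.ones(m), np.eye(m))])  # column sums = q
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(m, m)

n, m = 8, 40
omega = np.linspace(0, np.pi, m)
c1 = 0.9 ** np.arange(n)                               # AR(1)-like covariance
c2 = np.cos(1.2 * np.arange(n)) * 0.8 ** np.arange(n)  # oscillating covariance
phi1, phi2 = spectrum(c1, omega), spectrum(c2, omega)
M = ot_plan(phi1 / phi1.sum(), phi2 / phi2.sum(), omega)

tau = 0.5                                   # interpolation parameter in [0, 1]
target = (1 - tau) * omega[:, None] + tau * omega[None, :]
phi_tau = np.zeros(m)
idx = np.abs(target[..., None] - omega).argmin(-1)     # snap to nearest bin
np.add.at(phi_tau, idx.ravel(), M.ravel())             # displacement interp.
phi_tau *= (1 - tau) * phi1.sum() + tau * phi2.sum()   # restore total mass

# Nonnegative spectrum -> the rebuilt Toeplitz matrix stays PSD.
T_tau = toeplitz(to_covariance(phi_tau, omega, n))
print("interpolated matrix is PSD:", np.linalg.eigvalsh(T_tau).min() > -1e-9)
```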

    Automatic transcription of music using deep learning techniques

    Music transcription is the problem of detecting the notes that are being played in a musical piece. This is a difficult task that only trained people are capable of doing, and because of this difficulty there has been high interest in automating it. However, automatic music transcription encompasses several fields of research, such as digital signal processing, machine learning, music theory and cognition, pitch perception, and psychoacoustics. All of this makes automatic music transcription a hard problem to solve. In this work, we present a novel approach to automatically transcribing piano pieces using deep learning techniques. We take advantage of deep learning techniques to build several classifiers, each one responsible for detecting only one musical note. In theory, this division of labor should enhance each classifier's ability to transcribe. Apart from that, we also apply two additional stages, pre-processing and post-processing, to improve the efficiency of our system. The pre-processing stage aims at improving the quality of the input data before the classification/transcription stage, while the post-processing stage aims at fixing errors introduced during classification. In the initial steps, preliminary experiments were performed to fine-tune our model in all three stages: pre-processing, classification, and post-processing. The experimental setup, using those optimized techniques and parameters, is presented, and a comparison is given with two other state-of-the-art works that use the same dataset and the same deep learning technique but a different approach: a single neural network is used to detect all the musical notes rather than one neural network per note. Our approach surpassed these works in frame-based metrics and reached close results in onset-based metrics, demonstrating the feasibility of our approach.
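    The one-classifier-per-note idea can be sketched as follows, here with small scikit-learn networks on random stand-in data; the actual system uses deep networks on pre-processed spectrogram frames, so the sizes, features, and labels below are purely illustrative.

```python
# Schematic sketch of one binary classifier per musical note, on stand-in data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_frames, n_bins, n_notes = 1000, 252, 88   # e.g., CQT-like frames, 88 keys

X = rng.random((n_frames, n_bins))          # stand-in spectrogram frames
Y = rng.random((n_frames, n_notes)) < 0.05  # stand-in per-frame note labels

# One small binary network per note: each classifier only decides whether
# its own note is active in a given frame.
classifiers = []
for note in range(n_notes):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=30)
    clf.fit(X, Y[:, note])
    classifiers.append(clf)

# Transcription of new frames: stack the 88 independent decisions into a
# piano-roll matrix (a post-processing stage would clean this up further).
X_new = rng.random((10, n_bins))
piano_roll = np.column_stack([clf.predict(X_new) for clf in classifiers])
print(piano_roll.shape)                     # (10, 88): frames x notes
```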

    The development of corpus-based computer assisted composition program and its application for instrumental music composition

    In the last 20 years, the environment for developing music software that uses a corpus of audio data has expanded significantly, driven by synthesis techniques for producing electronic sounds and by supportive tools for creative activities. Some software produces a sequence of sounds by synthesizing chunks of source audio retrieved from an audio database according to a rule. Since sources are matched according to descriptive features extracted by FFT analysis, the quality of the result is significantly influenced by the outcomes of the audio analysis, segmentation, and decomposition stages. Also, the synthesis process often requires a considerable amount of sample data, which can become an obstacle to establishing easy, inexpensive, and user-friendly applications on various kinds of devices. It is therefore crucial to consider how to treat the data and construct an efficient database for the synthesis. We aim to apply corpus-based synthesis techniques to develop a computer-assisted composition program, and to investigate the program's actual application to ensemble pieces. The goal of this research is to apply the program to instrumental music composition, refine its functions, and search for new avenues for innovative compositional methods.
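    The corpus matching step described above can be illustrated in a few lines: segment source audio into grains, describe each grain by features from an FFT analysis, and retrieve the best-matching grain for a target by nearest-neighbor search. The grain length, feature choice, and distance are stand-in choices, not those of the actual program.

```python
# Toy illustration of corpus-based grain matching via FFT features.
import numpy as np

rng = np.random.default_rng(1)
sr, grain = 44100, 2048
corpus_audio = rng.standard_normal(sr * 5)      # stand-in corpus recording

def features(g):
    # Descriptive features from an FFT analysis of one grain: a coarse
    # log-magnitude spectrum averaged into 32 bands.
    mag = np.abs(np.fft.rfft(g * np.hanning(len(g))))
    return np.log1p(mag[: (len(mag) // 32) * 32].reshape(32, -1).mean(axis=1))

# Segment the corpus into fixed-length grains and precompute their features.
grains = corpus_audio[: len(corpus_audio) // grain * grain].reshape(-1, grain)
corpus_feats = np.array([features(g) for g in grains])

def match(target_grain):
    # Retrieve the corpus grain whose features are closest to the target's.
    d = np.linalg.norm(corpus_feats - features(target_grain), axis=1)
    return grains[np.argmin(d)]

target = rng.standard_normal(grain)             # stand-in target grain
out = match(target)                             # grain to synthesize next
print("matched grain of", out.shape[0], "samples")
```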

    Impulse Response Interpolation via Optimal Transport

    Interpolation between multiple room impulse responses is often necessary for dynamic auralization of virtual acoustic environments, in which a listener can move with six degrees of freedom. The spatial room impulse response (SRIR) represents the combined effects of the surrounding room as sound propagates from a source to the listener, and it varies as the source or listener positions change. The early portion of the SRIR contains sparse reflections, considered to be distinct sound events, that tend to be impaired by interpolation methods based on simple linear combinations. With parametric processing of SRIRs, corresponding sound events can be mapped to one another, producing a more physically accurate spatiotemporal interpolation of the early portion of the SRIR. In this thesis, a novel method for parametric SRIR interpolation is proposed based on the principle of optimal transport. First, SRIRs are represented as point clouds of sound pressure in a 3D virtual source space. Mappings between two point clouds are obtained by defining a partial optimal transport problem, solvable with familiar linear programming techniques. The partial relaxation is implemented by permitting both point-to-point mappings and dummy mappings. The obtained optimal transport plan is used to compute the interpolated point cloud, which is converted back to an SRIR. The proposed method was tested against three baseline comparison methods on SRIRs generated by geometrical acoustic modeling, using an error metric based on the difference in energy between low-passed renderings of the omnidirectional room impulse responses. Statistical results indicate that the proposed method consistently outperforms the baseline interpolation methods, and qualitative examination of the mapping methods confirms that partial transport produces more physically accurate spatiotemporal mappings. For future work, it is suggested to consider different cost functions, to interpolate between measured SRIRs, and to render the responses to allow perceptual tests.
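    A simplified sketch of the partial optimal transport formulation: two point clouds of reflections (positions carrying pressure mass) are coupled by a linear program in which mass may also flow to a dummy point at a fixed cost, so unmatched reflections are permitted. Positions, masses, and the dummy cost below are illustrative, and the handling of unmatched mass is simplified to dropping it rather than fading it out.

```python
# Sketch: partial OT between two reflection point clouds via a dummy point.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
A = rng.uniform(-5, 5, (6, 3))        # reflection positions, SRIR 1 (meters)
B = rng.uniform(-5, 5, (7, 3))        # reflection positions, SRIR 2
wa = np.full(len(A), 1 / len(A))      # pressure "mass" per point (uniform here)
wb = np.full(len(B), 1 / len(B))

# Cost matrix augmented with a dummy row/column: mapping to the dummy leaves
# a reflection unmatched at a fixed cost; dummy-to-dummy transport is free.
dummy_cost = 9.0
C = np.linalg.norm(A[:, None] - B[None, :], axis=2) ** 2
C = np.block([[C, np.full((len(A), 1), dummy_cost)],
              [np.full((1, len(B)), dummy_cost), [[0.0]]]])

# Marginals: each real point carries its mass; the dummy on each side can
# absorb all of the other side's mass, which balances the problem.
a = np.append(wa, wb.sum())
b = np.append(wb, wa.sum())
na, nb = len(a), len(b)
A_eq = np.vstack([np.kron(np.eye(na), np.ones(nb)),
                  np.kron(np.ones(na), np.eye(nb))])
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
              bounds=(0, None), method="highs")
plan = res.x.reshape(na, nb)

# Interpolated cloud at fraction tau: real-to-real mass moves along straight
# lines; mass mapped to the dummy is simply dropped in this sketch.
tau = 0.5
pts, mass = [], []
for i in range(len(A)):
    for j in range(len(B)):
        if plan[i, j] > 1e-9:
            pts.append((1 - tau) * A[i] + tau * B[j])
            mass.append(plan[i, j])
print(len(pts), "interpolated reflections, total mass", round(sum(mass), 3))
```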

    Investigating the build-up of precedence effect using reflection masking

    The auditory processing level involved in the build-up of the precedence effect (PE) [Freyman et al., J. Acoust. Soc. Am. 90, 874–884 (1991)] has been investigated here by employing reflection masked threshold (RMT) techniques. Given that RMT techniques are generally assumed to address lower levels of auditory signal processing, such an approach represents a bottom-up approach to the build-up of precedence. Three conditioner configurations measuring a possible build-up of reflection suppression were compared to the baseline RMT for four reflection delays ranging from 2.5 to 15 ms. No build-up of reflection suppression was observed for any of the conditioner configurations. Build-up of template (a decrease in RMT for two of the conditioners), on the other hand, was found to be delay-dependent. For five of six listeners, with reflection delays of 2.5 and 15 ms, RMT decreased relative to the baseline; for 5- and 10-ms delays, no change in threshold was observed. It is concluded that the low-level auditory processing involved in RMT is not sufficient to realize a build-up of reflection suppression. This confirms suggestions that higher-level processing is involved in PE build-up. The observed enhancement of reflection detection (RMT) may contribute to active suppression at higher processing levels.

    Automatic musical instrument recognition for multimedia indexing

    Work presented in the scope of the Master's programme in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering. The subject of automatic indexing of multimedia has been the target of numerous discussions and studies. This interest is due to the exponential growth of multimedia content and the subsequent need for methods that automatically catalogue this data. To fulfil this need, several projects and areas of study have emerged. The most relevant of these are the MPEG-7 standard, which defines a standardized system for the representation and automatic extraction of information present in the content, and Music Information Retrieval (MIR), which gathers several paradigms and areas of study relating to music. The main approach to this indexing problem relies on analysing data to obtain and identify descriptors that help define what we intend to recognize (for instance, musical instruments, voice, facial expressions, and so on); these descriptors then provide information we can use to index the data. This dissertation will focus on audio indexing in music, specifically the recognition of musical instruments from recorded musical notes. Moreover, the developed system and techniques will also be tested on the recognition of ambient sounds (such as the sound of running water, cars driving by, and so on). Our approach will use non-negative matrix factorization to extract features from various types of sounds; these features will then be used to train a classification algorithm that will then be capable of identifying new sounds.
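    A condensed sketch of the described pipeline: non-negative matrix factorization (NMF) extracts features from magnitude spectrograms, and a classifier is trained on the resulting activations. The data here is random stand-in material and the classifier choice is illustrative; the dissertation trains on recorded musical notes and ambient sounds.

```python
# Sketch: NMF feature extraction followed by classifier training.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_sounds, n_bins, n_frames, n_classes = 120, 257, 20, 4

# Stand-in magnitude spectrograms (one per sound) and instrument labels.
spectrograms = rng.random((n_sounds, n_bins, n_frames))
labels = rng.integers(0, n_classes, n_sounds)

# Learn a shared non-negative dictionary from all training frames, then
# describe each sound by its mean activation vector over time.
V = spectrograms.transpose(0, 2, 1).reshape(-1, n_bins)   # frames x bins
nmf = NMF(n_components=16, init="nndsvda", max_iter=300)
H_frames = nmf.fit_transform(V)
feats = H_frames.reshape(n_sounds, n_frames, -1).mean(axis=1)

# Train a classifier on the NMF activations to identify the sound class.
clf = SVC().fit(feats, labels)
print("training accuracy:", clf.score(feats, labels))
```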