Template Adaptation for Improving Automatic Music Transcription
In this work, we propose a system for automatic music transcription which adapts dictionary templates so that they closely match the spectral shape of the instrument sources present in each recording. Current dictionary-based automatic transcription systems keep the input dictionary fixed, so the spectral shape of the dictionary components might not match the shape of the test instrument sources. By performing a conservative transcription pre-processing step, the spectral shape of detected notes can be extracted and used to adapt the template dictionary. We propose two variants for adaptive transcription, namely for single-instrument transcription and for multiple-instrument transcription. Experiments are carried out using the MAPS and Bach10 databases. Results in terms of multi-pitch detection and instrument assignment show a clear and consistent improvement when adapting the dictionary, compared with keeping it fixed.
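The two-pass idea above can be sketched in a few lines: run a first factorisation with the dictionary held fixed, treat the most active components as the detected notes, and then re-estimate only those templates on the test recording. This is a minimal illustration using plain multiplicative-update NMF, not the authors' actual system; the function names and the crude activity threshold are illustrative assumptions.

```python
import numpy as np

def nmf_activations(V, W, n_iter=50):
    """First pass: multiplicative-update NMF with the dictionary W held fixed."""
    H = np.random.rand(W.shape[1], V.shape[1])
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ (W @ H) + 1e-9)
    return H

def adapt_dictionary(V, W, active, n_iter=50):
    """Second pass: re-estimate templates for the pitches detected in the
    conservative first pass ('active'), keeping the other templates fixed."""
    W = W.copy()
    H = np.random.rand(W.shape[1], V.shape[1])
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ (W @ H) + 1e-9)
        update = W * ((V @ H.T) / ((W @ H) @ H.T + 1e-9))
        W[:, active] = update[:, active]          # only detected notes adapt
        W /= W.sum(axis=0, keepdims=True) + 1e-9  # keep templates normalised
    return W, H

# Toy spectrogram: 64 frequency bins x 40 frames, 8 pitch templates
rng = np.random.default_rng(0)
V = rng.random((64, 40))
W0 = rng.random((64, 8))
W0 /= W0.sum(axis=0, keepdims=True)

H0 = nmf_activations(V, W0)                          # conservative first pass
active = np.where(H0.mean(axis=1) > H0.mean())[0]    # crude note detection
W1, H1 = adapt_dictionary(V, W0, active)             # adapted dictionary
```

Only the adapted columns of `W1` differ from `W0`, which mirrors the paper's contrast between a fixed and an adapted dictionary.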
Polyphonic Sound Event Tracking Using Linear Dynamical Systems
In this paper, a system for polyphonic sound event detection and tracking is proposed, based on spectrogram factorisation techniques and state space models. The system extends probabilistic latent component analysis (PLCA) and is modelled around a 4-dimensional spectral template dictionary of frequency, sound event class, exemplar index, and sound state. In order to jointly track multiple overlapping sound events over time, the integration of linear dynamical systems (LDS) within the PLCA inference is proposed. The system assumes that the PLCA sound event activation is the (noisy) observation in an LDS, with the latent states corresponding to the true event activations. LDS training is achieved using fully observed data, making use of ground truth-informed event activations produced by the PLCA-based model. Several LDS variants are evaluated, using polyphonic datasets of office sounds generated from an acoustic scene simulator, as well as real and synthesized monophonic datasets for comparative purposes. Results show that the integration of LDS tracking within PLCA leads to an improvement of +8.5–10.5% in terms of frame-based F-measure as compared to the use of the PLCA model alone. In addition, the proposed system outperforms several state-of-the-art methods for the task of polyphonic sound event detection.
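The observation model described above (noisy PLCA activations observed from a latent true-activation state) is the standard Kalman filtering setup. The following is a minimal scalar sketch of that idea for a single event class, with assumed parameter values, not the multivariate LDS variants evaluated in the paper.

```python
import numpy as np

def kalman_smooth(y, a=0.9, q=0.01, r=0.1):
    """Scalar Kalman filter for a latent activation x_t with decaying
    dynamics x_t = a*x_{t-1} + w_t (w ~ N(0, q)) observed through
    y_t = x_t + v_t (v ~ N(0, r)). Returns the filtered estimates."""
    x, p = 0.0, 1.0
    out = np.empty_like(y)
    for t, yt in enumerate(y):
        # predict
        x, p = a * x, a * a * p + q
        # update with the noisy PLCA-style observation
        k = p / (p + r)
        x = x + k * (yt - x)
        p = (1 - k) * p
        out[t] = x
    return out

# One event class, active in the middle of the excerpt, observed noisily
t = np.linspace(0, 1, 200)
clean = ((t > 0.3) & (t < 0.7)).astype(float)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(200)
smoothed = kalman_smooth(noisy)
```

The filtered curve tracks the underlying on/off activation with substantially less noise than the raw observation, which is the effect the LDS integration exploits for tracking overlapping events.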
A Shift-Invariant Latent Variable Model for Automatic Music Transcription
In this work, a probabilistic model for multiple-instrument automatic music transcription is proposed. The model extends the shift-invariant probabilistic latent component analysis method, which is used for spectrogram factorization. The proposed extensions support the use of multiple spectral templates per pitch and per instrument source, as well as a time-varying pitch contribution for each source. Thus, this method can effectively be used for multiple-instrument automatic transcription. In addition, the shift-invariant aspect of the method can be exploited for detecting tuning changes and frequency modulations, as well as for visualizing pitch content. For note tracking and smoothing, pitch-wise hidden Markov models are used. For training, pitch templates from eight orchestral instruments were extracted, covering their complete note range. The transcription system was tested on multiple-instrument polyphonic recordings from the RWC database, a Disklavier data set, and the MIREX 2007 multi-F0 data set. Results demonstrate that the proposed method outperforms leading approaches from the transcription literature under several error metrics.
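The "shift-invariant" property above refers to modelling a log-frequency spectrogram as a fixed spectral template convolved with a per-frame impulse distribution over shifts, so that tuning changes and vibrato show up as movement of the impulse rather than requiring new templates. A minimal reconstruction sketch of that idea (not the full probabilistic model, and with illustrative names) is:

```python
import numpy as np

def shift_invariant_reconstruct(template, impulses):
    """Reconstruct a log-frequency spectrogram as the template shifted
    according to an impulse (tuning/vibrato) distribution:
    V[f, t] = sum_s template[f - (s - S//2)] * impulses[s, t].
    np.roll wraps at the edges, which is acceptable for this sketch."""
    F = len(template)
    S, T = impulses.shape
    V = np.zeros((F, T))
    for s in range(S):
        shifted = np.roll(template, s - S // 2)
        V += shifted[:, None] * impulses[s][None, :]
    return V

# A single spectral peak at log-frequency bin 10, shifted up by 2 bins
template = np.zeros(32)
template[10] = 1.0
impulses = np.zeros((5, 1))
impulses[4, 0] = 1.0           # shift index 4 => offset +2 (centre is S//2 = 2)
V = shift_invariant_reconstruct(template, impulses)
```

Tracking the argmax of the impulse distribution over time is what makes tuning drift and frequency modulation directly visible.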
An evaluation framework for event detection using a morphological model of acoustic scenes
This paper introduces a model of environmental acoustic scenes which adopts a morphological approach by abstracting the temporal structures of acoustic scenes. To demonstrate its potential, this model is employed to evaluate the performance of a large set of acoustic event detection systems. The model allows us to explicitly control key morphological aspects of the acoustic scene and isolate their impact on the performance of the system under evaluation. Thus, more information can be gained on the behavior of evaluated systems, providing guidance for further improvements. The proposed model is validated using submitted systems from the IEEE DCASE Challenge; results indicate that the proposed scheme is able to successfully build datasets useful for evaluating some aspects of the performance of event detection systems, more particularly their robustness to new listening conditions and increasing levels of background sound. (Research project partly funded by ANR-11-JS03-005-01.)
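The key property above, explicitly controlling morphological aspects of a generated scene, can be illustrated with a toy scene simulator: event clips are placed on a timeline at a controlled density, the background level is set by a target SNR, and the ground-truth annotations fall out for free. This is a hypothetical simplification for illustration, not the simulator used in the paper; all names and parameters are assumptions.

```python
import numpy as np

def simulate_scene(events, length, density, snr_db, rng):
    """Place short event 'clips' (1-D arrays with labels) at random onsets
    on a noise background whose level is set by the target event-to-
    background SNR. Returns the mixture and (onset, label) annotations."""
    scene = np.zeros(length)
    annotations = []
    max_len = max(len(clip) for clip, _ in events)
    n_events = max(1, int(density * length / max_len))  # controlled density
    for _ in range(n_events):
        clip, label = events[rng.integers(len(events))]
        onset = rng.integers(0, length - len(clip))
        scene[onset:onset + len(clip)] += clip
        annotations.append((onset, label))
    # scale the background noise to the requested SNR
    event_power = np.mean(scene ** 2) + 1e-12
    noise_power = event_power / (10 ** (snr_db / 10))
    scene += rng.standard_normal(length) * np.sqrt(noise_power)
    return scene, annotations

# Two illustrative event classes mixed at 0 dB event-to-background SNR
events = [(np.hanning(50), "print"), (0.5 * np.hanning(30), "phone")]
rng = np.random.default_rng(2)
scene, annotations = simulate_scene(events, 2000, density=0.1,
                                    snr_db=0, rng=rng)
```

Sweeping `density` and `snr_db` while re-running a detector is exactly the kind of controlled evaluation the morphological model enables.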
Multiple-instrument polyphonic music transcription using a temporally constrained shift-invariant model
A method for automatic transcription of polyphonic music is proposed in this work that models the temporal evolution of musical tones. The model extends the shift-invariant probabilistic latent component analysis method by supporting the use of spectral templates that correspond to sound states such as attack, sustain, and decay. The order of these templates is controlled using hidden Markov model-based temporal constraints. In addition, the model can exploit multiple templates per pitch and instrument source. The shift-invariant aspect of the model makes it suitable for music signals that exhibit frequency modulations or tuning changes. Pitch-wise hidden Markov models are also utilized in a postprocessing step for note tracking. For training, sound state templates were extracted for various orchestral instruments using isolated note samples. The proposed transcription system was tested on multiple-instrument recordings from various datasets. Experimental results show that the proposed model is superior to a non-temporally constrained model and also outperforms various state-of-the-art transcription systems on the same experiment.
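The pitch-wise HMM postprocessing mentioned above can be sketched as a two-state (note off/on) Viterbi decode over one pitch's frame-level activation curve: a high self-transition probability penalises spurious single-frame switches. This is a minimal illustration with assumed transition and emission choices, not the paper's exact formulation.

```python
import numpy as np

def viterbi_note_track(act, p_stay=0.95):
    """Two-state (0=off, 1=on) HMM Viterbi decoding over one pitch's
    activation curve in [0, 1]. Emissions treat the activation as a
    Bernoulli-style evidence for the 'on' state; a high self-transition
    probability p_stay smooths away single-frame blips and dropouts."""
    T = len(act)
    logA = np.log(np.array([[p_stay, 1 - p_stay],
                            [1 - p_stay, p_stay]]))
    e = np.clip(act, 1e-6, 1 - 1e-6)
    logB = np.stack([np.log(1 - e), np.log(e)])   # (2, T) emission scores
    delta = np.zeros((2, T))
    psi = np.zeros((2, T), dtype=int)
    delta[:, 0] = np.log(0.5) + logB[:, 0]        # uniform initial state
    for t in range(1, T):
        scores = delta[:, t - 1][:, None] + logA  # (from, to)
        psi[:, t] = scores.argmax(axis=0)
        delta[:, t] = scores.max(axis=0) + logB[:, t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[:, -1].argmax()
    for t in range(T - 2, -1, -1):                # backtrack
        path[t] = psi[path[t + 1], t + 1]
    return path

# One note with a spurious mid-note dropout and a spurious one-frame blip
act = np.array([0.1, 0.1, 0.9, 0.9, 0.2, 0.9, 0.9, 0.1, 0.8, 0.1])
path = viterbi_note_track(act)
```

The decoded path keeps the note continuous through the dropout at frame 4 and rejects the isolated blip at frame 8, which is exactly the smoothing behaviour note tracking needs.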
Left ventricular ejection time, not heart rate, is an independent correlate of aortic pulse wave velocity.
Salvi P, Palombo C, Salvi GM, Labat C, Parati G, Benetos A. Left ventricular ejection time, not heart rate, is an independent correlate of aortic pulse wave velocity. J Appl Physiol 115: 1610–1617, 2013. First published September 19, 2013; doi:10.1152/japplphysiol.00475.2013.
Several studies showed a positive association between heart rate and pulse wave velocity, a sensitive marker of arterial stiffness. However, no study involving a large population has specifically addressed the dependence of pulse wave velocity on different components of the cardiac cycle. The aim of this study was to explore, in subjects of different ages, the link between pulse wave velocity and heart period (the reciprocal of heart rate) and the temporal components of the cardiac cycle such as left ventricular ejection time and diastolic time. Carotid-femoral pulse wave velocity was assessed in 3,020 untreated subjects (1,107 men). Heart period, left ventricular ejection time, diastolic time, and early-systolic dP/dt were determined by carotid pulse wave analysis with high-fidelity applanation tonometry. An inverse association was found between pulse wave velocity and left ventricular ejection time at all ages (<25 years, r² = 0.043; 25–44 years, r² = 0.103; 45–64 years, r² = 0.079; 65–84 years, r² = 0.044; ≥85 years, r² = 0.022; P < 0.0001 for all). A significant (P < 0.0001) negative but always weaker correlation between pulse wave velocity and heart period was also found, with the exception of the youngest subjects (P = 0.20). A significant positive correlation was also found between pulse wave velocity and dP/dt (P < 0.0001). With multiple stepwise regression analysis, left ventricular ejection time and dP/dt remained the only determinants of pulse wave velocity at all ages, whereas the contribution of heart period was no longer significant. Our data demonstrate that pulse wave velocity is more closely related to left ventricular systolic function than to heart period. This may have methodological and pathophysiological implications.
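The multiple stepwise regression used above can be illustrated with a greedy forward-selection sketch: at each step, add the predictor that most increases the R² of a least-squares fit. The code and the synthetic data below are purely illustrative (variable names, coefficients, and sample sizes are assumptions, not the study's data).

```python
import numpy as np

def forward_stepwise(X, y, names, max_vars=2):
    """Greedy forward stepwise selection: repeatedly add the predictor
    that most increases R^2 of an ordinary-least-squares fit."""
    chosen, remaining = [], list(range(X.shape[1]))

    def r2(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return 1 - resid.var() / y.var()

    while remaining and len(chosen) < max_vars:
        best = max(remaining, key=lambda c: r2(chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
    return [names[c] for c in chosen], r2(chosen)

# Synthetic data mimicking the study design: PWV driven mainly by LVET
# and dP/dt, with heart period (HP) carrying no independent information.
rng = np.random.default_rng(0)
n = 500
lvet = rng.normal(300, 20, n)       # left ventricular ejection time, ms
hp = rng.normal(900, 100, n)        # heart period, ms (uncorrelated here)
dpdt = rng.normal(1.0, 0.2, n)      # early-systolic dP/dt, arbitrary units
pwv = 20 - 0.03 * lvet + 2.0 * dpdt + rng.normal(0, 0.5, n)

selected, r2_final = forward_stepwise(np.column_stack([lvet, hp, dpdt]),
                                      pwv, ["LVET", "HP", "dP/dt"])
```

On this synthetic data, LVET and dP/dt are selected while HP is not, mirroring the study's finding that heart period drops out once left ventricular systolic timing is accounted for.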
Comparison of subspace analysis-based and statistical model-based algorithms for musical instrument classification