Real-Time Detection of Musical Onsets with Linear Prediction and Sinusoidal Modelling
Real-time musical note onset detection plays a vital role in many audio
analysis processes, such as score following, beat detection and various sound
synthesis by analysis methods. This paper provides a review of some of the
most commonly used techniques for real-time onset detection. We suggest
ways to improve these techniques by incorporating linear prediction, and
we present a novel algorithm for real-time onset detection using sinusoidal
modelling. We provide comprehensive results for both the detection accuracy
and the computational performance of all of the described techniques,
evaluated using Modal, our new open source library for musical onset detection,
which comes with a free database of samples with hand-labelled note
onsets.
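As an illustration of the kind of energy-based detection function reviewed in this paper, the following is a minimal spectral-flux sketch in Python/NumPy. It is an illustration only, not Modal's implementation; the function names and default parameters are assumptions. The detection function sums the half-wave-rectified increase in spectral magnitude between consecutive frames, so it rises sharply when new energy appears:

```python
import numpy as np

def spectral_flux_odf(signal, frame_size=512, hop_size=256):
    """Spectral-flux onset detection function: per-frame sum of the
    half-wave-rectified increase in the magnitude spectrum."""
    window = np.hanning(frame_size)
    prev_mags = np.zeros(frame_size // 2 + 1)
    odf = []
    for start in range(0, len(signal) - frame_size + 1, hop_size):
        frame = signal[start:start + frame_size] * window
        mags = np.abs(np.fft.rfft(frame))
        # Only rising spectral energy counts towards the detection function.
        odf.append(np.sum(np.maximum(mags - prev_mags, 0.0)))
        prev_mags = mags
    return np.array(odf)

def pick_onsets(odf, threshold):
    """Candidate onsets are local maxima of the detection function
    that exceed a fixed threshold."""
    return [i for i in range(1, len(odf) - 1)
            if odf[i] > threshold and odf[i] >= odf[i - 1] and odf[i] > odf[i + 1]]
```

In practice the threshold is usually adaptive (e.g. a moving median of the detection function) rather than fixed; a fixed fraction of the maximum is used here only to keep the sketch short.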
Metamorph: Real-Time High-Level Sound Transformations Based On A Sinusoids Plus Noise Plus Transients Model
Spectral models provide ways to manipulate musical audio signals that can be both powerful and intuitive, but high-level controls are often required to make the potentially large parameter set manageable in real time. This paper introduces Metamorph, a new open source library for high-level sound transformation. We
describe the real-time sinusoids plus noise plus transients model that is used by Metamorph and explain the opportunities that it provides for sound manipulation.
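The first analysis step common to sinusoids-plus-noise models is spectral peak picking: local maxima of the magnitude spectrum become the sinusoidal partials, and whatever remains after their subtraction is handed to the noise model. A hedged sketch of that first step (an illustration of the general technique, not Metamorph's actual code; the function name and threshold are assumptions):

```python
import numpy as np

def spectral_peaks(frame, threshold_db=-40.0):
    """Pick spectral peaks: local magnitude maxima above a threshold
    relative to the strongest bin. Peaks become sinusoidal partials;
    the rest of the spectrum is left to the noise model."""
    window = np.hanning(len(frame))
    mags = np.abs(np.fft.rfft(frame * window))
    floor = (mags.max() + 1e-12) * 10 ** (threshold_db / 20.0)
    peaks = [k for k in range(1, len(mags) - 1)
             if mags[k] > mags[k - 1] and mags[k] >= mags[k + 1]
             and mags[k] > floor]
    return peaks, mags
```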
Sound Source Separation
This is the author's accepted pre-print of the article, first published as G. Evangelista, S. Marchand, M. D. Plumbley and E. Vincent, "Sound source separation", in U. Zölzer (ed.), DAFX: Digital Audio Effects, 2nd edition, Chapter 14, pp. 551-588, John Wiley & Sons, March 2011. ISBN 9781119991298. DOI: 10.1002/9781119991298.ch14
Towards the automated analysis of simple polyphonic music: a knowledge-based approach
Music understanding is a process closely related to the knowledge and experience
of the listener. The amount of knowledge required depends on the
complexity of the task at hand.
This dissertation is concerned with the problem of automatically decomposing
musical signals into a score-like representation. It proposes that, as
with humans, an automatic system requires knowledge about the signal and
its expected behaviour to correctly analyse music.
The proposed system uses the blackboard architecture to combine the
use of knowledge with data provided by the bottom-up processing of the
signal's information. Methods are proposed for the estimation of pitches,
onset times and durations of notes in simple polyphonic music.
A method for onset detection is presented. It provides an alternative to
conventional energy-based algorithms by using phase information. Statistical
analysis is used to create a detection function that evaluates the expected
behaviour of the signal regarding onsets.
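The idea behind a phase-based detection function can be sketched briefly. In a steady state the phase of each spectral bin advances linearly from frame to frame, so the second difference of the phase should be near zero; large deviations from that expectation suggest an onset. The following is a minimal phase-deviation sketch in Python/NumPy (an illustration of the general technique, without the statistical modelling the thesis describes; names and parameters are assumptions):

```python
import numpy as np

def princarg(phase):
    """Map a phase value to the principal range [-pi, pi)."""
    return np.mod(phase + np.pi, 2 * np.pi) - np.pi

def phase_deviation_odf(signal, frame_size=512, hop_size=256):
    """Phase-deviation onset detection function: mean absolute deviation
    of each bin's phase from a linear (constant-frequency) prediction."""
    window = np.hanning(frame_size)
    phases = []
    for start in range(0, len(signal) - frame_size + 1, hop_size):
        spectrum = np.fft.rfft(signal[start:start + frame_size] * window)
        phases.append(np.angle(spectrum))
    odf = []
    for n in range(2, len(phases)):
        # Predicted phase under constant frequency: 2*phi[n-1] - phi[n-2].
        deviation = princarg(phases[n] - 2 * phases[n - 1] + phases[n - 2])
        odf.append(np.mean(np.abs(deviation)))
    return np.array(odf)
```

Because it ignores energy entirely, a detection function like this can respond to soft onsets that energy-based methods miss, which is the motivation given in the abstract above.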
Two methods for multi-pitch estimation are introduced. The first concentrates
on the grouping of harmonic information in the frequency-domain.
Its performance and limitations emphasise the case for the use of high-level
knowledge.
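Harmonic grouping in the frequency domain can be illustrated with a simple harmonic-sum sketch: each candidate fundamental is scored by summing the spectral magnitude at its first few harmonics. This single-pitch illustration is an assumption-laden simplification, not the thesis method (which addresses the multi-pitch case):

```python
import numpy as np

def harmonic_sum_pitch(signal, sr, f_min=80.0, f_max=800.0, n_harmonics=5):
    """Score each candidate f0 by summing spectral magnitude at its
    harmonics; return the candidate with the highest score."""
    n = len(signal)
    mags = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    bin_hz = freqs[1]
    best_f0, best_score = 0.0, -1.0
    for k in range(int(f_min / bin_hz), int(f_max / bin_hz) + 1):
        # Sum magnitude at harmonics k, 2k, 3k, ... that fit in the spectrum.
        score = sum(mags[k * h] for h in range(1, n_harmonics + 1)
                    if k * h < len(mags))
        if score > best_score:
            best_f0, best_score = freqs[k], score
    return best_f0
```

The weakness the abstract alludes to is visible even here: when two notes share harmonics, their evidence overlaps and grouping alone cannot attribute it, which motivates the use of higher-level knowledge.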
This knowledge, in the form of the individual waveforms of a single
instrument, is used in the second proposed approach. The method is based
on a time-domain linear additive model and it presents an alternative to
common frequency-domain approaches.
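The core of a time-domain linear additive model can be sketched in a few lines: the mixture is modelled as a weighted sum of known single-note waveforms, and the weights are recovered by least squares. This is a sketch of the general idea under the assumption that the note waveforms are known and time-aligned, not the thesis algorithm itself:

```python
import numpy as np

def fit_note_mixture(mixture, templates):
    """Time-domain linear additive model: solve for the weights w that
    minimise || templates @ w - mixture ||^2.
    templates has shape (n_samples, n_notes), one known waveform per column."""
    weights, *_ = np.linalg.lstsq(templates, mixture, rcond=None)
    return weights
```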
Results are presented and discussed for all methods, showing that, if
reliably generated, the use of knowledge can significantly improve the quality
of the analysis.
Funding: Joint Information Systems Committee (JISC) in the UK; National Science Foundation (NSF) in the United States; Fundacion Gran Mariscal Ayacucho in Venezuela.
Real-time segmentation of the temporal evolution of musical sounds
Since the studies of Helmholtz, it has been known that the temporal evolution of musical sounds plays an important role
in our perception of timbre. The accurate temporal segmentation of musical sounds into regions with distinct characteristics
is therefore of interest to researchers in the field of timbre perception as well as to those working with different forms
of sound modelling and manipulation. Following recent work by Hajda (1996), Peeters (2004) and Caetano et al. (2010),
this paper presents a new method for the automatic segmentation of the temporal evolution of isolated musical sounds in real time. We define attack, sustain and release segments using cues from a combination of the amplitude envelope, the spectro-temporal evolution and a measurement of the stability of the sound that is derived from the onset detection function. We conclude with an evaluation of the method.
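The amplitude-envelope cue on its own can already give a crude attack/sustain/release segmentation, as the following sketch shows. The paper combines the amplitude envelope with spectro-temporal and stability cues; this hypothetical sketch keeps only the amplitude part, and the fraction-of-peak thresholds are assumptions:

```python
import numpy as np

def segment_asr(signal, frame_size=256, attack_frac=0.8, release_frac=0.3):
    """Segment an isolated note from its RMS amplitude envelope alone.
    Returns (attack_end, release_start) as frame indices."""
    n_frames = len(signal) // frame_size
    env = np.array([np.sqrt(np.mean(signal[i * frame_size:(i + 1) * frame_size] ** 2))
                    for i in range(n_frames)])
    peak = env.max()
    peak_idx = int(env.argmax())
    # Attack ends at the first frame whose envelope reaches attack_frac * peak.
    attack_end = int(np.argmax(env >= attack_frac * peak))
    # Release starts at the first post-peak frame that falls below
    # release_frac * peak; if the envelope never falls that far, use the end.
    below = np.where(env[peak_idx:] <= release_frac * peak)[0]
    release_start = peak_idx + int(below[0]) if len(below) else n_frames - 1
    return attack_end, release_start
```

Fixed thresholds like these are exactly what breaks down on sounds without a clean attack-sustain-release shape, which is why the paper supplements the envelope with spectral and stability measures.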
Separation of musical sources and structure from single-channel polyphonic recordings
EThOS - Electronic Theses Online Service, United Kingdom
- …