Incorporating prior information in nonnegative matrix factorization for audio source separation
In this work, we propose solutions to the problem of audio source separation from a single recording. The audio source signals can be speech, music, or any other audio signals. We assume that training data are available for the individual source signals present in the mixed signal. The training data are used to build a representative model for each source; in most cases, these models are sets of basis vectors in the magnitude or power spectral domain. The proposed algorithms decompose the spectrogram of the mixed signal using the trained basis models of all sources observed in the mixture. Nonnegative matrix factorization (NMF) is used to train the basis models for the source signals. NMF is then used to decompose the mixed-signal spectrogram as a weighted linear combination of the trained basis vectors for each observed source. After decomposing the mixed signal, spectral masks are built and used to reconstruct the source signals. In this thesis, we improve the performance of NMF for source separation by incorporating additional constraints and prior information related to the source signals into the NMF decomposition. The NMF decomposition weights are encouraged to satisfy prior information related to the nature of the source signals. The priors are modeled using Gaussian mixture models or hidden Markov models, and represent valid weight-combination sequences that the basis vectors can receive for a certain type of source signal. The prior models are incorporated into the NMF cost function using either log-likelihood or minimum mean squared error (MMSE) estimation. We also incorporate prior information as a post-processing step: a smoothness prior is imposed on the NMF solutions by post-smoothing, and MMSE-based post-enhancement is introduced to obtain better separation of the source signals.
In this thesis, we also improve the NMF training of the basis models. When not enough training data are available, we introduce two different adaptation methods that adapt the trained bases to better fit the sources in the mixed signal. We also improve the training procedures by learning more discriminative dictionaries for the source signals. In addition, to consider a larger context in the models, we concatenate neighboring spectra and train basis sets from them instead of from single frames, which makes it possible to directly model the relation between consecutive spectral frames. Experimental results show that the proposed approaches improve the performance of NMF in source separation applications.
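The pipeline this abstract describes (train per-source bases with NMF, decompose the mixture spectrogram on the concatenated bases with the bases held fixed, then build soft spectral masks) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the Euclidean multiplicative updates, function names, and component counts are illustrative, and a real system would operate on STFT magnitudes with far more components and the priors discussed above.

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Multiplicative-update NMF (Euclidean cost): V ~= W @ H, all nonnegative."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 1e-6
    H = rng.random((rank, V.shape[1])) + 1e-6
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def nmf_separate(V_mix, W1, W2, n_iter=200, seed=1):
    """Decompose the mixture spectrogram on fixed, concatenated bases,
    then reconstruct each source through a soft spectral mask."""
    W = np.hstack([W1, W2])
    rng = np.random.default_rng(seed)
    H = rng.random((W.shape[1], V_mix.shape[1])) + 1e-6
    for _ in range(n_iter):          # update weights only; bases stay fixed
        H *= (W.T @ V_mix) / (W.T @ W @ H + 1e-12)
    V1 = W1 @ H[:W1.shape[1]]        # per-source spectrogram estimates
    V2 = W2 @ H[W1.shape[1]:]
    mask1 = V1 / (V1 + V2 + 1e-12)   # Wiener-like soft mask
    return mask1 * V_mix, (1.0 - mask1) * V_mix
```

Because the two masks sum to one, the reconstructed source spectrograms always add back up to the mixture, which is the property the spectral-mask reconstruction relies on.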
Pitch-Informed Solo and Accompaniment Separation
This thesis addresses the development of a system for pitch-informed solo
and accompaniment separation capable of separating main instruments from
music accompaniment regardless of the musical genre of the track, or type
of music accompaniment. For the solo instrument, only pitched monophonic
instruments were considered in a single-channel scenario where no panning
or spatial location information is available.
In the proposed method, pitch information is used as an initial stage of a
sinusoidal modeling approach that attempts to estimate the spectral
information of the solo instrument from a given audio mixture. Instead of
estimating the solo instrument on a frame by frame basis, the proposed
method gathers information of tone objects to perform separation.
Tone-based processing allowed the inclusion of novel processing stages for
attack refinement, transient interference reduction, common amplitude
modulation (CAM) of tone objects, and for better estimation of non-harmonic
elements that can occur in musical instrument tones. The proposed solo and
accompaniment algorithm is an efficient method suitable for real-world
applications.
A study was conducted to better model magnitude, frequency, and phase of
isolated musical instrument tones. As a result of this study, temporal
envelope smoothness, inharmonicity of musical instruments, and phase
expectation were exploited in the proposed separation method. Additionally,
an algorithm for harmonic/percussive separation based on phase expectation
was proposed. The algorithm shows improved perceptual quality with respect
to state-of-the-art methods for harmonic/percussive separation.
The proposed solo and accompaniment method obtained perceptual quality
scores comparable to other state-of-the-art algorithms under the SiSEC 2011
and SiSEC 2013 campaigns, and outperformed the comparison algorithm on the
instrumental dataset described in this thesis.
As a use-case of solo and
accompaniment separation, a listening test procedure was conducted to
assess separation quality requirements in the context of music education.
Results from the listening test showed that solo and accompaniment tracks
should be optimized differently to suit quality requirements of music
education. The Songs2See application was presented as commercial music
learning software which includes the proposed solo and accompaniment
separation method.
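The harmonic/percussive decomposition mentioned above can be illustrated with the common median-filtering baseline (Fitzgerald, 2010), which exploits the same structural observation: harmonic partials are smooth along time, percussive onsets are smooth along frequency. Note this is a standard comparison method, not the phase-expectation algorithm proposed in the thesis.

```python
import numpy as np
from scipy.ndimage import median_filter

def hpss(S, kernel=17):
    """Median-filtering HPSS baseline: median-filter the magnitude
    spectrogram along time (enhances harmonics) and along frequency
    (enhances percussion), then split the mixture with soft masks."""
    S = np.abs(S)
    harm = median_filter(S, size=(1, kernel))   # smooth across time frames
    perc = median_filter(S, size=(kernel, 1))   # smooth across frequency bins
    mask_h = harm / (harm + perc + 1e-12)       # soft harmonic mask
    return mask_h * S, (1.0 - mask_h) * S
```

A horizontal ridge in the spectrogram (a sustained partial) survives the time-direction median filter and is routed to the harmonic output; a vertical ridge (a transient) survives the frequency-direction filter and is routed to the percussive output.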
Martian time-series unraveled: A multi-scale nested approach with factorial variational autoencoders
Unsupervised source separation involves unraveling an unknown set of source
signals recorded through a mixing operator, with limited prior knowledge about
the sources, and only access to a dataset of signal mixtures. This problem is
inherently ill-posed and is further challenged by the variety of time-scales
exhibited by sources in time series data. Existing methods typically rely on a
preselected window size that limits their capacity to handle multi-scale
sources. To address this issue, instead of operating in the time domain, we
propose an unsupervised multi-scale clustering and source separation framework
by leveraging wavelet scattering covariances that provide a low-dimensional
representation of stochastic processes, capable of distinguishing between
different non-Gaussian stochastic processes. Nested within this representation
space, we develop a factorial Gaussian-mixture variational autoencoder that is
trained to (1) probabilistically cluster sources at different time-scales and
(2) independently sample scattering covariance representations associated with
each cluster. Using samples from each cluster as prior information, we
formulate source separation as an optimization problem in the wavelet
scattering covariance representation space, resulting in separated sources in
the time domain. When applied to seismic data recorded during the NASA InSight
mission on Mars, our multi-scale nested approach proves to be a powerful tool
for discriminating between sources varying greatly in time-scale, e.g.,
minute-long transient one-sided pulses (known as "glitches") and structured
ambient noises resulting from atmospheric activities that typically last for
tens of minutes. These results provide an opportunity to conduct further
investigations into the isolated sources related to atmospheric-surface
interactions, thermal relaxations, and other complex phenomena.
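The core idea of the last step, separating sources by optimizing in a representation space under per-cluster priors, can be illustrated with a deliberately simplified stand-in: here the representation is just the magnitude spectrum rather than wavelet scattering covariances, and the "priors" are fixed target representations rather than samples from a factorial Gaussian-mixture VAE. All names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def phi(s):
    """Toy stand-in for the scattering-covariance representation:
    the magnitude spectrum of a real signal."""
    return np.abs(np.fft.rfft(s))

def rep_space_separate(x, r1, r2):
    """Estimate s1 (and s2 = x - s1) such that each estimated source's
    representation matches its prior r1 / r2, while the sources sum
    exactly to the mixture x."""
    def loss(s1):
        return (np.sum((phi(s1) - r1) ** 2)
                + np.sum((phi(x - s1) - r2) ** 2))
    res = minimize(loss, x / 2.0, method="L-BFGS-B")  # gradient-based descent
    s1 = res.x
    return s1, x - s1
```

Parameterizing only one source and defining the other as the residual enforces the mixing constraint by construction, so the optimization searches only over decompositions consistent with the observed mixture.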
L'analyse probabiliste en composantes latentes et ses adaptations aux signaux musicaux : application à la transcription automatique de musique et à la séparation de sources
Automatic music transcription consists in automatically estimating the notes in a recording through three attributes: onset time, duration, and pitch. To address this problem, there is a class of methods based on modeling a signal as a sum of basic elements carrying symbolic information. Among these analysis techniques one finds probabilistic latent component analysis (PLCA). The purpose of this thesis is to propose variants and improvements of PLCA so that it can better adapt to musical signals and thus better address the transcription problem. To this aim, a first approach is to put forward new signal models, in place of the model inherent to PLCA, expressive enough to adapt to musical notes exhibiting simultaneous variations of pitch and spectral envelope over time. A second aspect of this work is to provide tools that help the parameter-estimation algorithm converge towards meaningful solutions, through the incorporation of prior knowledge about the signals to be analyzed as well as a new dynamic model. All the devised algorithms are applied to the task of automatic transcription. They can also be directly used for source separation, which consists in separating several sources from a mixture, and two applications are put forward in this direction.
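The basic PLCA model underlying this work, P(f,t) = Σ_z P(z)·P(f|z)·P(t|z), can be fitted with expectation-maximization. The sketch below shows only the vanilla EM updates, without the priors, extended signal models, or dynamic model that the thesis contributes; variable names are illustrative.

```python
import numpy as np

def plca(V, n_z, n_iter=100, seed=0):
    """Vanilla PLCA via EM: model the normalized spectrogram V as
    P(f,t) = sum_z P(z) P(f|z) P(t|z)."""
    V = V / V.sum()                       # treat spectrogram as a distribution
    rng = np.random.default_rng(seed)
    Pz = np.full(n_z, 1.0 / n_z)
    Pf = rng.random((V.shape[0], n_z)); Pf /= Pf.sum(axis=0)
    Pt = rng.random((n_z, V.shape[1])); Pt /= Pt.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior P(z|f,t), shape (n_z, F, T)
        joint = Pz[:, None, None] * Pf.T[:, :, None] * Pt[:, None, :]
        post = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)
        # M-step: reweight the posterior by the observed distribution V
        q = post * V[None, :, :]
        Pz = q.sum(axis=(1, 2))
        Pf = (q.sum(axis=2) / (Pz[:, None] + 1e-12)).T
        Pt = q.sum(axis=1) / (Pz[:, None] + 1e-12)
        Pz /= Pz.sum()
    return Pz, Pf, Pt
```

The columns of Pf play the role of spectral templates and the rows of Pt their activations over time, which is why PLCA results translate directly into both transcription (which template is active when) and source separation (group templates by source and mask).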
Speech Recognition
Chapters in the first part of the book cover all the essential speech processing techniques for building robust, automatic speech recognition systems: the representation of speech signals and methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other applications that can use the information from automatic speech recognition: speaker identification and tracking, prosody modeling in emotion-detection systems, and other speech processing systems able to operate in real-world environments, such as mobile communication services and smart homes.
Deep Learning for Distant Speech Recognition
Deep learning is an emerging technology that is considered one of the most
promising directions for reaching higher levels of artificial intelligence.
Among the other achievements, building computers that understand speech
represents a crucial leap towards intelligent machines. Despite the great
efforts of the past decades, however, a natural and robust human-machine speech
interaction still appears to be out of reach, especially when users interact
with a distant microphone in noisy and reverberant environments. The latter
disturbances severely hamper the intelligibility of a speech signal, making
Distant Speech Recognition (DSR) one of the major open challenges in the field.
This thesis addresses the latter scenario and proposes some novel techniques,
architectures, and algorithms to improve the robustness of distant-talking
acoustic models. We first elaborate on methodologies for realistic data
contamination, with a particular emphasis on DNN training with simulated data.
We then investigate approaches for better exploiting speech contexts,
proposing some original methodologies for both feed-forward and recurrent
neural networks. Lastly, inspired by the idea that cooperation across different
DNNs could be the key for counteracting the harmful effects of noise and
reverberation, we propose a novel deep learning paradigm called network of deep
neural networks. The analysis of the original concepts was based on extensive
experimental validations conducted on both real and simulated data, considering
different corpora, microphone configurations, environments, noisy conditions,
and ASR tasks.
Comment: PhD Thesis Unitn, 201
Proceedings of the 7th Sound and Music Computing Conference
Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010