Unsupervised Phoneme and Word Discovery from Multiple Speakers using Double Articulation Analyzer and Neural Network with Parametric Bias
This paper describes a new unsupervised machine learning method for
simultaneous phoneme and word discovery from multiple speakers. Human infants
can acquire knowledge of phonemes and words through interactions with their
mothers as well as with others around them. From a computational
perspective, phoneme and word discovery from multiple speakers is a more
challenging problem than that from one speaker because the speech signals from
different speakers exhibit different acoustic features. This paper proposes an
unsupervised phoneme and word discovery method that simultaneously uses
nonparametric Bayesian double articulation analyzer (NPB-DAA) and deep sparse
autoencoder with parametric bias in hidden layer (DSAE-PBHL). We assume that an
infant can recognize and distinguish speakers based on certain other features,
e.g., visual face recognition. DSAE-PBHL is designed to subtract
speaker-dependent acoustic features and extract speaker-independent features.
An experiment demonstrated that DSAE-PBHL can subtract distributed
representations of acoustic signals, enabling extraction based on the types of
phonemes rather than on the speakers. Another experiment demonstrated that a
combination of NPB-DAA and DSAE-PBHL outperformed the available methods in
phoneme and word discovery tasks involving speech signals with Japanese vowel
sequences from multiple speakers.

Comment: 21 pages.
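The parametric-bias idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the dimensions, the one-hot speaker code, and the choice to route the bias into a dedicated block of hidden units are all illustrative assumptions. The point it shows is that when speaker identity is injected into a designated subset of hidden units, the remaining units can serve as speaker-independent features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions (not from the paper): a 12-dim acoustic frame,
# a hidden layer split into 8 "phoneme" units and 3 "speaker" (PB) units.
D_IN, D_PHONE, D_SPK = 12, 8, 3

W_enc = rng.normal(scale=0.1, size=(D_PHONE + D_SPK, D_IN))
b_enc = np.zeros(D_PHONE + D_SPK)

def encode(frame, speaker_onehot):
    """Encode one acoustic frame; the last D_SPK hidden units receive
    the parametric bias (here: a speaker one-hot code) added in."""
    h = sigmoid(W_enc @ frame + b_enc)
    h[D_PHONE:] += speaker_onehot  # parametric bias in the hidden layer
    return h

def speaker_independent(h):
    """Discard the speaker-coded units, keeping only the units that are
    free to represent phoneme-type information."""
    return h[:D_PHONE]

frame = rng.normal(size=D_IN)
h = encode(frame, np.array([1.0, 0.0, 0.0]))  # speaker 0 of 3
z = speaker_independent(h)
print(z.shape)  # (8,)
```

In a trained network, the weights would be learned by the sparse-autoencoder objective, and the speaker-independent part `z` would be passed on to the word/phoneme discovery stage (NPB-DAA in the paper); this sketch only fixes random weights to show the data flow.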