Speech Processing in Computer Vision Applications
Deep learning has recently proven to be a viable asset for determining features in the field of speech analysis. Deep learning methods such as Convolutional Neural Networks facilitate the extraction of specific feature information from waveforms, allowing networks to create more feature-dense representations of data. Our work addresses the problems of speaker identification and of re-creating a face from a speaker's voice using deep learning methods. In this work, we first review the fundamental background in speech processing and its related applications. Then we introduce novel deep learning-based methods for speech feature analysis. Finally, we present our deep learning approaches to speaker identification and speech-to-face synthesis. The presented method can convert a speaker's audio sample into an image of their predicted face. This framework is composed of several chained-together networks, each performing an essential step in the conversion process: audio embedding, encoding, and face generation networks, respectively. Our experiments show that certain features can map to the face, that DNNs can create a face from a speaker's voice, and that a GUI can be used in conjunction to display a speaker recognition network's data.
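The chained structure this abstract describes (audio embedding, then encoding, then face generation) can be sketched roughly as follows. Everything here is an illustrative stand-in: the function names, dimensions, and random untrained projections are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def audio_embedding(waveform, dim=128):
    """Hypothetical stand-in for the audio embedding network:
    projects a raw waveform to a fixed-size speaker embedding."""
    W = rng.standard_normal((dim, waveform.shape[0]))
    return np.tanh(W @ waveform)

def encoder(embedding, dim=64):
    """Maps the speaker embedding into the face generator's latent space."""
    W = rng.standard_normal((dim, embedding.shape[0]))
    return np.tanh(W @ embedding)

def face_generator(latent, size=32):
    """Decodes the latent code into a (size x size) grayscale face image."""
    W = rng.standard_normal((size * size, latent.shape[0]))
    return (W @ latent).reshape(size, size)

waveform = rng.standard_normal(16000)  # e.g. 1 s of audio at 16 kHz
face = face_generator(encoder(audio_embedding(waveform)))
print(face.shape)  # (32, 32)
```

In a trained system each stage would be a learned network; the point of the sketch is only the shape of the pipeline, where each stage's output is the next stage's input.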
Proposing a hybrid approach for emotion classification using audio and video data
Emotion recognition has been a research topic in the field of Human-Computer Interaction (HCI) in recent years. Computers have become an inseparable part of human life, and users need human-like interaction to communicate with them better. Many researchers have become interested in emotion recognition and classification using different sources; a hybrid approach of audio and text has recently been introduced. All such approaches aim to raise the accuracy and appropriateness of emotion classification. In this study, a hybrid approach of audio and video is applied to emotion recognition. The innovation of this approach is selecting the characteristics of audio and video and their features as a single joint specification for classification. In this research, the SVM method is used to classify the data in the SAVEE database. The experimental results show that the maximum classification accuracy for audio data alone is 91.63%, while the hybrid approach achieves 99.26%.
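The hybrid idea above, concatenating audio and video feature vectors into one joint specification before an SVM, can be sketched as below. The feature dimensions and synthetic data are assumptions for illustration; the real study would extract features from SAVEE clips rather than generate them randomly.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-ins for per-clip audio and video feature vectors.
n_clips, n_audio_feats, n_video_feats = 200, 20, 30
audio = rng.standard_normal((n_clips, n_audio_feats))
video = rng.standard_normal((n_clips, n_video_feats))
labels = rng.integers(0, 7, n_clips)  # SAVEE covers 7 emotion classes

# Early fusion: concatenate the two modalities so the SVM treats the
# joint audio+video characteristics as one specification per clip.
fused = np.hstack([audio, video])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
preds = clf.predict(X_test)
print(fused.shape)  # (200, 50)
```

On random features the classifier learns nothing useful, of course; the sketch only shows the fusion-then-classify structure that the reported 91.63% vs 99.26% comparison rests on.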
Lipreading with Long Short-Term Memory
Lipreading, i.e. speech recognition from visual-only recordings of a speaker's face, can be achieved with a processing pipeline based solely on neural networks, yielding significantly better accuracy than conventional methods. Feed-forward and recurrent neural network layers (namely Long Short-Term Memory; LSTM) are stacked to form a single structure which is trained by back-propagating error gradients through all the layers. The performance of such a stacked network was experimentally evaluated and compared to a standard Support Vector Machine classifier using conventional computer vision features (Eigenlips and Histograms of Oriented Gradients). The evaluation was performed on data from 19 speakers of the publicly available GRID corpus. With 51 different words to classify, we report a best word accuracy on held-out evaluation speakers of 79.6% using the end-to-end neural network-based solution (11.6% improvement over the best feature-based solution evaluated).
Comment: Accepted for publication at ICASSP 201
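The stacked feed-forward-plus-LSTM structure described above can be sketched with a minimal forward pass. The dimensions, untrained random weights, and single-layer configuration are assumptions for illustration only; the actual system is trained end-to-end by backpropagation through all layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input, forget, cell, and output gates computed
    from the current input x and the previous hidden state h."""
    z = W @ x + U @ h + b          # all four gate pre-activations at once
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)     # cell state update
    h = o * np.tanh(c)             # new hidden state
    return h, c

# Toy dimensions: per-frame visual features -> feed-forward layer -> LSTM.
n_frames, feat_dim, ff_dim, hidden, n_words = 25, 40, 32, 16, 51
W_ff = rng.standard_normal((ff_dim, feat_dim)) * 0.1   # feed-forward layer
W = rng.standard_normal((4 * hidden, ff_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
W_out = rng.standard_normal((n_words, hidden)) * 0.1   # 51-word classifier

frames = rng.standard_normal((n_frames, feat_dim))     # mouth-region features
h, c = np.zeros(hidden), np.zeros(hidden)
for x in frames:
    h, c = lstm_step(np.tanh(W_ff @ x), h, c, W, U, b)

logits = W_out @ h   # classify the whole sequence from the final state
print(logits.shape)  # (51,)
```

Classifying from the final hidden state is one common choice for whole-sequence word classification; the paper's exact readout may differ.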
Speaker emotion can affect ambiguity production
Does speaker emotion affect the degree of ambiguity in referring expressions? We used referential communication tasks preceded by mood induction to examine whether positive emotional valence may be linked to ambiguity of referring expressions. In Experiment 1, participants had to identify sequences of objects with homophonic labels (e.g., the animal bat, a baseball bat) for hypothetical addressees. This required modification of the homophones. Happy speakers were less likely to modify the second homophone to repair a temporary ambiguity (i.e., they were less likely to say … first cover the bat, then cover the baseball bat …). In Experiment 2, participants had to identify one of two identical objects in an object array, which required a modifying relative clause (the shark that's underneath the shoe). Happy speakers omitted the modifying relative clause twice as often as neutral speakers (e.g., by saying Put the shark underneath the sheep), thereby rendering the entire utterance ambiguous in the context of two sharks. The findings suggest that one consequence of positive mood is more ambiguity in speech. This effect is hypothesised to be due to a less effortful processing style that favours an egocentric bias, impairing perspective taking or the monitoring of how well utterances align with an addressee's perspective.
Syllable classification using static matrices and prosodic features
In this paper we explore the usefulness of prosodic features for syllable classification. In order to do this, we represent the syllable as a static analysis unit such that its acoustic-temporal dynamics can be merged into a set of features that the SVM classifier considers as a whole. In the first part of our experiment we used MFCCs as features for classification, obtaining a maximum accuracy of 86.66%. The second part of our study tests whether the prosodic information is complementary to the cepstral information for syllable classification. The results obtained show that combining the two types of information does improve the classification, but further analysis is necessary for a more successful combination of the two types of features.
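The experimental setup above, representing each syllable as a static unit (a fixed grid of MFCC frames flattened to one vector) and testing whether appending prosodic features helps the SVM, can be sketched as below. The dimensions, the choice of prosodic descriptors, and the synthetic data are assumptions for illustration, not the paper's actual features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-ins: each syllable as a static analysis unit --
# n_frames MFCC frames flattened into one fixed-length vector -- plus
# a small set of prosodic descriptors (e.g. duration, F0, energy).
n_syll, n_frames, n_mfcc, n_prosodic = 150, 10, 13, 4
mfcc = rng.standard_normal((n_syll, n_frames * n_mfcc))
prosody = rng.standard_normal((n_syll, n_prosodic))
labels = rng.integers(0, 5, n_syll)  # 5 hypothetical syllable classes

svm = SVC(kernel="rbf")

# Cepstral features alone vs. cepstral + prosodic features combined.
acc_mfcc = cross_val_score(svm, mfcc, labels, cv=5).mean()
combined = np.hstack([mfcc, prosody])
acc_both = cross_val_score(svm, combined, labels, cv=5).mean()
print(combined.shape)  # (150, 134)
```

With real syllable data, comparing `acc_mfcc` against `acc_both` is the test of complementarity the abstract describes; on random features both scores hover around chance.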