Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic in automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and, with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
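As a concrete illustration of the front-end techniques surveyed above, here is a minimal sketch of mask-based single-channel enhancement, the general pattern many DNN front-ends build on: a network predicts a time-frequency ratio mask that is multiplied with the noisy magnitude spectrogram. The mask values below are illustrative stand-ins for a network's output, not any specific system from the review.

```python
def apply_ratio_mask(noisy_mag, mask):
    """Multiply a noisy magnitude spectrogram by a time-frequency
    ratio mask (values clamped to [0, 1]). In a deployed system the
    mask would be predicted by a trained DNN from the noisy input."""
    return [
        [min(max(m, 0.0), 1.0) * x for m, x in zip(mask_row, mag_row)]
        for mask_row, mag_row in zip(mask, noisy_mag)
    ]

# Toy example: 2 frequency bins x 3 frames.
noisy = [[2.0, 2.0, 2.0],
         [4.0, 4.0, 4.0]]
mask = [[0.5, 0.5, 0.5],   # stand-in for DNN output
        [1.0, 1.0, 1.0]]
enhanced = apply_ratio_mask(noisy, mask)
```

The enhanced magnitudes would then be recombined with the noisy phase and inverted back to a waveform before being passed to the recognizer's back-end.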
Integrating Voice-Based Machine Learning Technology into Complex Home Environments
To demonstrate the value of machine-learning-based smart health technologies,
researchers have to deploy their solutions into complex real-world environments
with real participants. This gives rise to many, oftentimes unexpected,
challenges: technology created in a lab environment must still work when
deployed in real home environments. In other words, as in more mature
disciplines, we need solutions for what can be done at development time to
increase success at deployment time. To illustrate an approach and solutions,
we use the example of an ongoing project: a pipeline of voice-based
machine learning solutions that detects anger and verbal conflicts among the
participants. For anonymity, we call it the XYZ system. XYZ is a smart health
technology because, by notifying the participants of their anger, it encourages
them to better manage their emotions. This is important because being able to
recognize one's emotions is the first step toward better managing one's anger.
XYZ was deployed in 6 homes for 4 months each and monitored the emotion of the
caregiver of a dementia patient. In this paper we demonstrate some of the
necessary steps to be accomplished during the development stage to increase
deployment-time success, and show where continued work is still necessary.
Note that the complex environments arise both from the physical world and from
complex human behavior.
Attention-Inspired Artificial Neural Networks for Speech Processing: A Systematic Review
Artificial Neural Networks (ANNs) were inspired by the neural networks of the human brain and have been widely applied in speech processing. Application areas of ANNs include speech recognition, speech emotion recognition, language identification, speech enhancement, and speech separation, amongst others. Likewise, given that speech processing performed by humans involves complex cognitive processes known as auditory attention, a growing number of papers propose ANNs supported by deep learning algorithms in conjunction with some mechanism to achieve symmetry with the human attention process. However, while these ANN approaches include attention, there is no categorization of the attention mechanisms integrated into the deep learning algorithms and of their relation to human auditory attention. Therefore, we consider it necessary to review the different attention-inspired ANN approaches to show both academic and industry experts the available models for a wide variety of applications. Based on the PRISMA methodology, we present a systematic review of the literature published since 2000 in which deep learning algorithms are applied to diverse problems related to speech processing. In this paper 133 research works are selected and the following aspects are described: (i) the most relevant features, (ii) the ways in which attention has been implemented, (iii) their hypothetical relationship with human attention, and (iv) the evaluation metrics used. Additionally, the four publications most closely related to human attention were analyzed and their strengths and weaknesses determined.
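To make the attention mechanisms this review categorizes concrete, here is a minimal sketch of scaled dot-product attention, the building block used by most modern attention-based speech models: each value vector is weighted by how well its key matches the query, with softmax normalization. The toy vectors are illustrative, not taken from any reviewed system.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: score each key against the
    query, normalize the scores, and return the weighted sum of
    the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key far better, so the output
# leans heavily toward the first value vector.
out = attention(query=[1.0, 0.0],
                keys=[[10.0, 0.0], [0.0, 10.0]],
                values=[[1.0, 2.0], [3.0, 4.0]])
```

In speech applications the keys and values are typically encoder frame representations, letting the model attend selectively over time, loosely mirroring human auditory attention.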
Deep Spoken Keyword Spotting: An Overview
Spoken keyword spotting (KWS) deals with the identification of keywords in
audio streams and has become a fast-growing technology thanks to the paradigm
shift introduced by deep learning a few years ago. This has allowed the rapid
embedding of deep KWS in a myriad of small electronic devices with different
purposes like the activation of voice assistants. Prospects suggest a sustained
growth in terms of social use of this technology. Thus, it is not surprising
that deep KWS has become a hot research topic among speech scientists, who
constantly look for KWS performance improvement and computational complexity
reduction. This context motivates this paper, in which we conduct a literature
review into deep spoken KWS to assist practitioners and researchers who are
interested in this technology. Specifically, this overview is comprehensive,
covering a thorough analysis of deep KWS systems (including speech features,
acoustic modeling, and posterior handling), robustness methods, applications,
datasets, evaluation metrics, the performance of deep KWS systems, and
audio-visual KWS. The analysis performed in this paper allows us to identify a
number of directions for future research, including directions adopted from
automatic speech recognition research and directions that are unique to the
problem of spoken KWS.
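The posterior handling stage mentioned above can be sketched with a common deep KWS back-end pattern: smooth the per-frame keyword posteriors produced by the acoustic model, then declare a detection when the smoothed score crosses a threshold. The window size and threshold below are illustrative assumptions, not values from the survey.

```python
def smooth_posteriors(posteriors, window=3):
    """Moving-average smoothing of frame-level keyword posteriors,
    which suppresses spurious single-frame spikes."""
    smoothed = []
    for j in range(len(posteriors)):
        lo = max(0, j - window + 1)
        frame_scores = posteriors[lo:j + 1]
        smoothed.append(sum(frame_scores) / len(frame_scores))
    return smoothed

def keyword_detected(posteriors, window=3, threshold=0.8):
    """Fire a detection if any smoothed posterior reaches the threshold."""
    return any(s >= threshold
               for s in smooth_posteriors(posteriors, window))

# A sustained high-posterior region triggers a detection,
# while an isolated spike does not survive smoothing.
sustained = keyword_detected([0.1, 0.2, 0.9, 0.95, 0.9, 0.3])
spike = keyword_detected([0.1, 0.1, 0.9, 0.1, 0.1])
```

Tuning the window and threshold trades off false alarms against missed detections, which is exactly the operating-point choice the evaluation metrics discussed in the overview quantify.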
Robust audio-visual person verification using Web-camera video
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 61-62). By Daniel Schultz. This thesis examines the challenge of robust audio-visual person verification using data recorded in multiple environments with various lighting conditions, irregular visual backgrounds, and diverse background noise. Audio-visual person verification could prove very useful in both physical and logical access control security applications, but only if it can perform well in a variety of environments. This thesis first examines the factors that affect video-only person verification performance, including recording environment, amount of training data, and type of facial feature used. We then combine scores from audio and video verification systems to create a multi-modal verification system and compare its accuracy with that of either single-mode system.
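The score combination step this abstract describes can be sketched with simple weighted-sum fusion, one common way to combine per-modality match scores. The weight and threshold here are illustrative assumptions, not the thesis's actual values.

```python
def fuse_scores(audio_score, video_score, audio_weight=0.5):
    """Weighted-sum fusion of per-modality match scores (each in [0, 1])."""
    return audio_weight * audio_score + (1.0 - audio_weight) * video_score

def verify(audio_score, video_score, audio_weight=0.5, threshold=0.6):
    """Accept the identity claim if the fused score reaches the threshold."""
    return fuse_scores(audio_score, video_score, audio_weight) >= threshold

# A strong audio match can compensate for a weaker visual match,
# e.g. when the face is captured in a poorly lit room.
accepted = verify(audio_score=0.9, video_score=0.4)
```

The appeal of fusing modalities is exactly this compensation: lighting degrades the video score while background noise degrades the audio score, and the two failure modes rarely coincide.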