A Construction Kit for Efficient Low Power Neural Network Accelerator Designs
Implementing embedded neural network processing at the edge requires
efficient hardware acceleration that couples high computational performance
with low power consumption. Driven by the rapid evolution of network
architectures and their algorithmic features, accelerator designs are
constantly updated and improved. To evaluate and compare hardware design
choices, designers can refer to a myriad of accelerator implementations in the
literature. Surveys provide an overview of these works but are often limited to
system-level and benchmark-specific performance metrics, making it difficult to
quantitatively compare the individual effect of each utilized optimization
technique. This complicates the evaluation of optimizations for new accelerator
designs, slowing down research progress. This work provides a survey of
neural network accelerator optimization approaches that have been used in
recent works and reports their individual effects on edge processing
performance. It presents the list of optimizations and their quantitative
effects as a construction kit, allowing to assess the design choices for each
building block separately. Reported optimizations range from up to 10'000x
memory savings to 33x energy reductions, providing chip designers an overview
of design choices for implementing efficient low power neural network
accelerators
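As one concrete example of the kind of building block such a kit catalogues, post-training 8-bit quantization trades a bounded rounding error for a 4x reduction in weight memory. The following NumPy sketch is purely illustrative and is not taken from the survey:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization to int8.

    Returns the quantized weights and the scale needed to dequantize.
    """
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32 for the same tensor,
# and the worst-case rounding error is bounded by half a quantization step
print(w.nbytes // q.nbytes)  # 4
```

Per-channel scales and quantization-aware training are common refinements of this basic scheme.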
I hear you eat and speak: automatic recognition of eating condition and food type, use-cases, and impact on ASR performance
We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database, featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and both read and spontaneous speech; the database is made publicly available for research purposes. We begin by demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We then propose automatic classification using both brute-forced low-level acoustic features and higher-level features related to intelligibility, obtained from an automatic speech recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier employed in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i.e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, reaching up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food as well as not eating. Early fusion of the intelligibility-related features with the brute-forced acoustic feature set improves performance on read speech, reaching a 66.4% average recall on the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with up to a 56.2% coefficient of determination
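The leave-one-speaker-out protocol described above can be sketched as follows. A nearest-centroid classifier stands in for the paper's SVM so the example needs only NumPy, and the data are synthetic stand-ins:

```python
import numpy as np

def leave_one_speaker_out(features, labels, speakers):
    """Leave-one-speaker-out evaluation with a nearest-centroid classifier.

    The paper uses an SVM; a nearest-centroid classifier stands in here so
    the sketch is dependency-free. Returns the unweighted average recall
    (UAR) over classes, matching the "average recall" metric in the abstract.
    """
    preds = np.empty_like(labels)
    for spk in np.unique(speakers):
        test = speakers == spk          # hold out one speaker entirely
        train = ~test
        classes = np.unique(labels[train])
        centroids = np.stack([features[train][labels[train] == c].mean(axis=0)
                              for c in classes])
        d = ((features[test][:, None, :] - centroids[None]) ** 2).sum(axis=2)
        preds[test] = classes[d.argmin(axis=1)]
    recalls = [(preds[labels == c] == c).mean() for c in np.unique(labels)]
    return float(np.mean(recalls))

# Synthetic stand-in data: 30 speakers, 10 utterances each, binary labels
rng = np.random.default_rng(1)
speakers = np.repeat(np.arange(30), 10)
labels = rng.integers(0, 2, size=speakers.size)
features = rng.normal(size=(speakers.size, 8)) + 3.0 * labels[:, None]
uar = leave_one_speaker_out(features, labels, speakers)
```

Grouping folds by speaker rather than by utterance prevents the classifier from exploiting speaker identity, which is why the protocol is standard in paralinguistics.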
Development of Speech Command Control Based TinyML System for Post-Stroke Dysarthria Therapy Device
Post-stroke dysarthria (PSD) is a widespread outcome of a stroke. To help in the objective evaluation of dysarthria, the development of pathological voice recognition technology has received a lot of attention. Soft robotic therapy devices have been proposed as an alternative for rehabilitation and hand-grasp assistance to improve activities of daily living (ADL). Despite the significant progress in this field, most soft robotic therapy devices are complex and bulky, lack a pathological voice recognition model, require large computational power, and rely on a stationary controller. This study aims to develop a portable wireless multi-controller with simulated dysarthric vowel speech in Bahasa Indonesia and non-dysarthric micro speech recognition, using a tiny machine learning (TinyML) system for hardware efficiency. The speech interface uses an INMP441 microphone and a lightweight deep convolutional neural network (DCNN) embedded on an ESP32. Features are extracted with the Short-Time Fourier Transform (STFT) and fed into the CNN. This method has proven useful for micro speech recognition with low computational power in both speech scenarios, with accuracy above 90%. Real-time inference on the ESP32 with a hand prosthesis at three household noise levels (24 dB, 42 dB, and 62 dB) achieved 95%, 85%, and 50% accuracy, respectively. Wireless connection latency with both controllers is around 0.2-0.5 ms
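The STFT feature-extraction front end can be sketched in NumPy as follows; the frame length and hop size are illustrative, since the abstract does not state the exact parameters used on the ESP32:

```python
import numpy as np

def stft_features(signal, frame_len=256, hop=128):
    """Log-magnitude STFT features, as commonly fed to a small keyword CNN.

    Frame sizes here are illustrative defaults, not the paper's settings.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))  # (frames, frame_len//2 + 1)
    return np.log(spec + 1e-6)                  # log compression

# A 1 s, 16 kHz test tone stands in for a recorded speech command
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440.0 * t)
feat = stft_features(x)
print(feat.shape)  # (124, 129)
```

The resulting time-frequency image is what makes a 2-D CNN a natural fit for the classifier.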
Aerospace medicine and biology. A continuing bibliography with indexes, supplement 195
This bibliography lists 148 reports, articles, and other documents introduced into the NASA scientific and technical information system in June 1979
A Review of Deep Learning Techniques for Speech Processing
The field of speech processing has undergone a transformative shift with the
advent of deep learning. The use of multiple processing layers has enabled the
creation of models capable of extracting intricate features from speech data.
This development has paved the way for unparalleled advancements in automatic
speech recognition, text-to-speech synthesis, and emotion recognition,
propelling the performance of these tasks to unprecedented
heights. The power of deep learning techniques has opened up new avenues for
research and innovation in the field of speech processing, with far-reaching
implications for a range of industries and applications. This review paper
provides a comprehensive overview of the key deep learning models and their
applications in speech-processing tasks. We begin by tracing the evolution of
speech processing research, from early approaches, such as MFCC and HMM, to
more recent advances in deep learning architectures, such as CNNs, RNNs,
transformers, conformers, and diffusion models. We categorize the approaches
and compare their strengths and weaknesses for solving speech-processing tasks.
Furthermore, we extensively cover various speech-processing tasks, datasets,
and benchmarks used in the literature and describe how different deep-learning
networks have been utilized to tackle these tasks. Additionally, we discuss the
challenges and future directions of deep learning in speech processing,
including the need for more parameter-efficient, interpretable models and the
potential of deep learning for multimodal speech processing. By examining the
field's evolution, comparing and contrasting different approaches, and
highlighting future directions and challenges, we hope to inspire further
research in this exciting and rapidly advancing field
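The classical MFCC front end that the review traces can be sketched in a few lines; all parameter values below (26 filters, 512-point FFT, 16 kHz) are common illustrative defaults, not taken from the review:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    """Triangular mel filterbank, the core of the classical MFCC recipe."""
    pts = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising slope
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling slope
    return fb

def mfcc(power_spectrum, fb, n_ceps=13):
    """Log filterbank energies followed by a DCT-II, per the standard recipe."""
    log_e = np.log(fb @ power_spectrum + 1e-10)
    n = fb.shape[0]
    k = np.arange(n_ceps)[:, None]
    dct = np.cos(np.pi * k * (2 * np.arange(n)[None] + 1) / (2 * n))
    return dct @ log_e

fb = mel_filterbank()
spec = np.abs(np.fft.rfft(np.random.default_rng(2).normal(size=512))) ** 2
ceps = mfcc(spec, fb)
print(ceps.shape)  # (13,)
```

Deep models in the review largely learn such representations end to end, but hand-crafted MFCCs remain the baseline they displaced.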
In-Ear-Voice: Towards Milli-Watt Audio Enhancement With Bone-Conduction Microphones for In-Ear Sensing Platforms
The recent ubiquitous adoption of remote conferencing has been accompanied by
omnipresent frustration with distorted or otherwise unclear voice
communication. Audio enhancement can compensate for low-quality input signals
from, for example, small true wireless earbuds, by applying noise suppression
techniques. Such processing relies on voice activity detection (VAD) with low
latency and the added capability of discriminating the wearer's voice from
others - a task of significant computational complexity. The tight energy
budget of devices as small as modern earphones, however, requires any system
attempting to tackle this problem to do so with minimal power and processing
overhead, while not relying on speaker-specific voice samples and training due
to usability concerns.
This paper presents the design and implementation of a custom research
platform for low-power wireless earbuds based on novel, commercial, MEMS
bone-conduction microphones. Such microphones can record the wearer's speech
with much greater isolation, enabling personalized voice activity detection and
further audio enhancement applications. Furthermore, the paper accurately
evaluates a proposed low-power personalized speech detection algorithm based on
bone conduction data and a recurrent neural network running on the implemented
research platform. This algorithm is compared to an approach based on
traditional microphone input. The bone-conduction system detects speech
within 12.8 ms at an accuracy of 95%. Different SoC choices are contrasted,
with the final implementation based on the cutting-edge Ambiq Apollo 4 Blue
SoC achieving 2.64 mW average power consumption at 14 µJ per inference,
reaching 43 h of battery life on a miniature 32 mAh li-ion cell without duty
cycling
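As a rough illustration of how a per-frame recurrent detector keeps latency on the order of a single frame (12.8 ms in the paper), here is a minimal GRU-cell VAD in NumPy; the architecture, layer sizes, and untrained random weights are invented for this sketch and are not the paper's model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyGRUVAD:
    """Per-frame GRU voice-activity detector (illustrative, untrained).

    Emitting one speech probability per feature frame is what bounds the
    detection latency to roughly one frame of audio.
    """
    def __init__(self, n_in=32, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.Wz = rng.normal(0, 0.1, (n_hidden, n_in + n_hidden))  # update gate
        self.Wr = rng.normal(0, 0.1, (n_hidden, n_in + n_hidden))  # reset gate
        self.Wh = rng.normal(0, 0.1, (n_hidden, n_in + n_hidden))  # candidate
        self.wo = rng.normal(0, 0.1, n_hidden)                     # readout
        self.h = np.zeros(n_hidden)

    def step(self, x):
        xh = np.concatenate([x, self.h])
        z = sigmoid(self.Wz @ xh)
        r = sigmoid(self.Wr @ xh)
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * self.h]))
        self.h = (1 - z) * self.h + z * h_tilde
        return float(sigmoid(self.wo @ self.h))  # speech probability

vad = TinyGRUVAD()
frames = np.random.default_rng(1).normal(size=(10, 32))
probs = [vad.step(f) for f in frames]
```

Keeping the hidden state between calls, rather than re-running a window, is what makes such a detector cheap enough for a µJ-per-inference budget.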
Privacy-Protecting Techniques for Behavioral Data: A Survey
Our behavior (the way we talk, walk, or think) is unique and can be used as a biometric trait. It also correlates with sensitive attributes like emotions. Hence, techniques to protect individuals' privacy against unwanted inferences are required. To consolidate knowledge in this area, we systematically reviewed applicable anonymization techniques. We taxonomize and compare existing solutions regarding privacy goals, conceptual operation, advantages, and limitations. Our analysis shows that some behavioral traits (e.g., voice) have received much attention, while others (e.g., eye-gaze, brainwaves) are mostly neglected. We also find that the evaluation methodology of behavioral anonymization techniques can be further improved
Efficient Approaches for Voice Change and Voice Conversion Systems
In this thesis, the study and design of Voice Change and Voice Conversion systems are
presented. In particular, a voice change system manipulates a speaker’s voice so that it is
perceived as not being spoken by that speaker, while a voice conversion system modifies a
speaker’s voice so that it is perceived as being spoken by a target speaker.
This thesis comprises two main parts. The first part develops a low-latency,
low-complexity voice change system (including frequency/pitch-scale modification and
formant-scale modification algorithms) that could run on the smartphones of 2012, with
their very limited computational capability. Although some low-complexity voice change
algorithms have been proposed and studied, real-time implementations are very rare.
According to the experimental results, the proposed voice change system achieves the
same quality as the baseline approach while requiring far less computation and meeting
real-time requirements. Moreover, the proposed system has been implemented in the C
language and released as a commercial software application. The second part of this
thesis is to investigate a novel low-complexity voice conversion system (i.e. from a source
speaker A to a target speaker B) that improves the perceptual quality and identity without
introducing large processing latencies. The proposed scheme directly manipulates the
spectrum using an effective and physically motivated method, Continuous Frequency
Warping and Magnitude Scaling (CFWMS), to guarantee high perceptual naturalness and
quality. In addition, a trajectory limitation strategy is proposed to prevent frame-by-frame
discontinuities and further enhance the speech quality. The experimental results show that
the proposed method outperforms the conventional baseline solutions in both objective and
subjective tests
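The abstract gives no implementation details, but the core idea of warping a magnitude spectrum along the frequency axis can be sketched with a single interpolation. The warp function and parameters below are illustrative only; the actual CFWMS method also applies magnitude scaling:

```python
import numpy as np

def warp_spectrum(mag, warp):
    """Warp a magnitude spectrum along the frequency axis by interpolation.

    `warp` maps normalized output frequency to normalized input frequency;
    a linear warp such as lambda f: f / 1.1 stretches the spectrum so that
    formants shift upward. This is only an illustration of frequency
    warping, not the thesis's exact algorithm.
    """
    n = len(mag)
    f_out = np.linspace(0.0, 1.0, n)
    f_in = np.clip(warp(f_out), 0.0, 1.0)
    return np.interp(f_in, f_out, mag)

mag = np.abs(np.fft.rfft(np.hanning(512)))       # a toy magnitude spectrum
warped = warp_spectrum(mag, lambda f: f / 1.1)   # shift content up by ~1.1x
```

Because the warp acts on each analysis frame independently, some smoothing across frames (the thesis's trajectory limitation) is needed to avoid audible discontinuities.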
Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives
Over the past few years, adversarial training has become an extremely active
research topic and has been successfully applied to various Artificial
Intelligence (AI) domains. As a potentially crucial technique for the
development of the next generation of emotional AI systems, we herein provide a
comprehensive overview of the application of adversarial training to affective
computing and sentiment analysis. Various representative adversarial training
algorithms are explained and discussed accordingly, aimed at tackling diverse
challenges associated with emotional AI systems. Further, we highlight a range
of potential future research directions. We expect that this overview will help
facilitate the development of adversarial training for affective computing and
sentiment analysis in both the academic and industrial communities
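As a concrete, minimal instance of the kind of adversarial perturbation such training builds on, the fast gradient sign method (FGSM) applied to a toy logistic-regression sentiment model looks like this; the model and data are synthetic stand-ins, not drawn from the survey:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    """Binary cross-entropy of a logistic model on one example."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method.

    For logistic loss, the input gradient is (sigmoid(w.x + b) - y) * w,
    so the attack simply steps along its sign.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = rng.normal(size=16)   # a feature vector standing in for a text embedding
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.1)
print(loss(x_adv, y, w, b) > loss(x, y, w, b))  # True: the loss increases
```

Adversarial training then mixes such perturbed examples into the training set so the model learns to resist them.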