Speech enhancement using deep dilated CNN
In recent years, deep learning has achieved great success in speech enhancement. However, existing work has two major limitations. First, many deep-learning-based algorithms do not adopt the Bayesian framework, even though the prior distribution for speech in the Bayesian framework has been shown to be useful: it regularizes the output to lie in the speech space and thus improves performance. Second, the majority of existing methods operate in the frequency domain of the noisy speech, e.g., on the spectrogram and its variations. We propose a Bayesian speech enhancement framework, called BaWN (Bayesian WaveNet), which operates directly on raw audio samples. It adopts the recently proposed WaveNet, which has been shown to be effective in modeling conditional distributions of speech samples while generating natural speech. Experiments show that BaWN is able to recover clean and natural speech.
Multi-channel speech enhancement with ad-hoc sensors has been a challenging task. Speech-model-guided beamforming algorithms are able to recover natural-sounding speech, but the speech models tend to be oversimplified to keep inference tractable. On the other hand, deep-learning-based enhancement approaches are able to learn complicated speech distributions and perform efficient inference, but they are unable to deal with a variable number of input channels, and they tend to introduce distortions, particularly in the presence of unseen noise types and settings. We therefore propose an enhancement framework called DeepBeam, which combines these two complementary classes of algorithms. DeepBeam introduces a beamforming filter to produce natural-sounding speech, but the filter coefficients are determined with the help of a monaural speech enhancement neural network. Experiments on synthetic and real-world data show that DeepBeam is able to produce clean, dry, and natural-sounding speech, and is robust against unseen noise.
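As a rough, self-contained illustration of the idea of guiding a beamformer with a monaural enhancement network (not DeepBeam's actual formulation), the Python sketch below fits per-frequency beamforming weights by least squares against a reference signal that stands in for the network's output; all names and the toy data are our own assumptions.

```python
import numpy as np

def ls_beamformer_weights(X, d, eps=1e-6):
    """Least-squares beamforming weights for one frequency bin.

    X: (channels, frames) complex STFT of the noisy multi-channel input.
    d: (frames,) complex STFT of a reference signal, e.g. the output of a
       monaural enhancement network applied to one channel (an illustrative
       stand-in for the neural guidance described above).
    Returns w (channels,) such that w.conj() @ X approximates d.
    """
    T = X.shape[1]
    Rxx = X @ X.conj().T / T                 # spatial covariance of the mixture
    rxd = X @ d.conj() / T                   # cross-correlation with the reference
    w = np.linalg.solve(Rxx + eps * np.eye(X.shape[0]), rxd)
    return w

# Toy usage: 4 ad-hoc channels, 200 frames in one frequency bin.
rng = np.random.default_rng(0)
clean = rng.standard_normal(200) + 1j * rng.standard_normal(200)
steer = rng.standard_normal(4) + 1j * rng.standard_normal(4)
noise = 0.3 * (rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200)))
X = np.outer(steer, clean) + noise
d_hat = clean + 0.1 * noise[0]               # pretend this came from the monaural network
w = ls_beamformer_weights(X, d_hat)
enhanced = w.conj() @ X                      # beamformed estimate of the clean bin
```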
Physics-Driven Diffusion Models for Impact Sound Synthesis from Videos
Modeling sounds emitted from physical object interactions is critical for immersive perceptual experiences in real and virtual worlds. Traditional methods of impact sound synthesis use physics simulation to obtain a set of physics parameters that can represent and synthesize the sound. However, they require fine details of both the object geometries and the impact locations, which are rarely available in the real world, so they cannot be applied to synthesize impact sounds from common videos. On the other hand, existing video-driven deep-learning-based approaches can only capture a weak correspondence between visual content and impact sounds because they lack physics knowledge. In this work, we propose a physics-driven diffusion model that can synthesize high-fidelity impact sound for a silent video clip. In addition to the video content, we propose to use additional physics priors to guide the impact sound synthesis procedure. The physics priors include both physics parameters that are directly estimated from noisy real-world impact sound examples without a sophisticated setup and learned residual parameters that interpret the sound environment via neural networks. We further implement a novel diffusion model with specific training and inference strategies to combine physics priors and visual information for impact sound synthesis. Experimental results show that our model outperforms several existing systems in generating realistic impact sounds. More importantly, the physics-based representations are fully interpretable and transparent, enabling flexible sound editing.
Comment: CVPR 2023. Project page: https://sukun1045.github.io/video-physics-sound-diffusion
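To make the conditioning idea concrete, here is a hedged PyTorch toy: a DDPM-style noise predictor that takes a physics-parameter vector and a video embedding as conditions alongside the noisy audio latent. The module name, dimensions, and noise schedule are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PhysicsConditionedDenoiser(nn.Module):
    """Toy denoiser eps_theta(x_t, t, video, physics) for a DDPM-style model.

    The physics vector is assumed to hold modal parameters (frequencies,
    dampings, gains) estimated from noisy real-world impact recordings plus a
    learned residual term; the exact parameterization here is illustrative.
    """
    def __init__(self, audio_dim=128, video_dim=512, physics_dim=32, hidden=256):
        super().__init__()
        self.cond = nn.Sequential(
            nn.Linear(video_dim + physics_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden),
        )
        self.net = nn.Sequential(
            nn.Linear(audio_dim + hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, audio_dim),
        )

    def forward(self, x_t, t, video_emb, physics):
        # t is a (batch,) tensor of diffusion timesteps, normalized to [0, 1]
        c = self.cond(torch.cat([video_emb, physics, t[:, None]], dim=-1))
        return self.net(torch.cat([x_t, c], dim=-1))

# One training step of the standard noise-prediction objective (toy data).
model = PhysicsConditionedDenoiser()
x0 = torch.randn(8, 128)                      # clean audio latent
video_emb = torch.randn(8, 512)
physics = torch.randn(8, 32)
t = torch.rand(8)
noise = torch.randn_like(x0)
alpha_bar = torch.cos(t * torch.pi / 2) ** 2  # simple cosine noise schedule
x_t = alpha_bar.sqrt()[:, None] * x0 + (1 - alpha_bar).sqrt()[:, None] * noise
loss = ((model(x_t, t, video_emb, physics) - noise) ** 2).mean()
loss.backward()
```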
F0-consistent many-to-many non-parallel voice conversion via conditional autoencoder
Non-parallel many-to-many voice conversion remains an interesting but challenging speech processing task. Many style-transfer-inspired methods, such as generative adversarial networks (GANs) and variational autoencoders (VAEs), have been proposed. Recently, AutoVC, a conditional autoencoder (CAE) based method, achieved state-of-the-art results by disentangling speaker identity from speech content using information-constraining bottlenecks; it achieves zero-shot conversion by swapping in a different speaker's identity embedding to synthesize a new voice. However, we found that while speaker identity is disentangled from speech content, a significant amount of prosodic information, such as the source F0, leaks through the bottleneck, causing the target F0 to fluctuate unnaturally. Furthermore, AutoVC has no control over the converted F0 and is thus unsuitable for many applications. In this paper, we modify and improve autoencoder-based voice conversion to disentangle content, F0, and speaker identity at the same time. As a result, we can control the F0 contour, generate speech with an F0 consistent with the target speaker, and significantly improve quality and similarity. We support our improvement through quantitative and qualitative analysis.
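The following PyTorch sketch illustrates, in spirit only, how a narrow content bottleneck can be combined with a speaker embedding and an explicit F0 condition at the decoder so that F0 becomes controllable at conversion time; the class name, dimensions, and quantized F0 representation are our own assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class F0ConditionedCAE(nn.Module):
    """Minimal conditional autoencoder: the encoder sees only spectral frames,
    while the decoder is conditioned on a speaker embedding and an F0 contour.
    The narrow bottleneck is meant to squeeze out speaker/prosody information,
    loosely mirroring the AutoVC-style design described above."""
    def __init__(self, n_mels=80, bottleneck=8, spk_dim=64, f0_bins=32, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_mels, hidden), nn.ReLU(),
                                     nn.Linear(hidden, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck + spk_dim + f0_bins, hidden),
                                     nn.ReLU(), nn.Linear(hidden, n_mels))

    def forward(self, mel, spk_emb, f0_onehot):
        content = self.encoder(mel)                          # content-only code
        z = torch.cat([content, spk_emb, f0_onehot], dim=-1)
        return self.decoder(z)

# Conversion: encode the source mel, decode with the target speaker embedding
# and a chosen (controllable) F0 contour.
model = F0ConditionedCAE()
mel_src = torch.randn(1, 100, 80)            # (batch, frames, mels)
spk_tgt = torch.randn(1, 100, 64)            # target speaker embedding per frame
f0_ctrl = torch.zeros(1, 100, 32)
f0_ctrl[..., 10] = 1.0                       # quantized F0 set by the user
mel_converted = model(mel_src, spk_tgt, f0_ctrl)
```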
Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling
Uncertainty decomposition refers to the task of decomposing the total uncertainty of a model into data (aleatoric) uncertainty, which results from the inherent complexity or ambiguity of the data, and model (epistemic) uncertainty, which results from the model's lack of knowledge. Performing uncertainty decomposition for large language models (LLMs) is an important step toward improving their reliability, trustworthiness, and interpretability, but this research task is very challenging and remains unresolved. The existing canonical method, the Bayesian Neural Network (BNN), cannot be applied to LLMs because it requires training and ensembling multiple model variants, which is infeasible or prohibitively expensive for LLMs. In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling, which bypasses the need to train new models. Rather than ensembling models with different parameters, our approach generates a set of clarifications for the input, feeds them into the fixed LLM, and ensembles the corresponding predictions. We show that our framework shares a symmetric decomposition structure with BNN. Empirical evaluations demonstrate that the proposed framework provides accurate and reliable uncertainty quantification on various tasks. Code will be made publicly available at https://github.com/UCSB-NLP-Chang/llm_uncertainty.
Comment: 15 pages, 3 figures
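A minimal sketch of the entropy decomposition over such an ensemble is shown below: the entropy of the averaged prediction splits into the average per-member entropy plus a disagreement (mutual information) term, mirroring the BNN-style decomposition. The code only computes this generic split, without claiming the paper's exact mapping of the two terms onto data versus model uncertainty; all names are illustrative.

```python
import numpy as np

def decompose_uncertainty(probs):
    """Entropy decomposition over an ensemble of predictive distributions.

    probs: (n_members, n_classes) array; each row is the fixed LLM's predictive
    distribution given one clarification of the ambiguous input (one member of
    the input-clarification ensemble). Returns (total, expected, disagreement)
    with total = expected + disagreement.
    """
    probs = np.asarray(probs, dtype=float)
    mean_p = probs.mean(axis=0)
    entropy = lambda p: -np.sum(p * np.log(p + 1e-12), axis=-1)
    total = entropy(mean_p)            # H( E_c[ p(y | x, c) ] )
    expected = entropy(probs).mean()   # E_c[ H( p(y | x, c) ) ]
    disagreement = total - expected    # mutual information I(y; c)
    return total, expected, disagreement

# Toy example: three clarifications of an ambiguous yes/no question.
probs = np.array([[0.9, 0.1],   # clarification 1 -> confident "yes"
                  [0.1, 0.9],   # clarification 2 -> confident "no"
                  [0.5, 0.5]])  # clarification 3 -> genuinely unsure
print(decompose_uncertainty(probs))
```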
Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing
Self-supervised learning (SSL) of rich speech representations has achieved empirical success in low-resource Automatic Speech Recognition (ASR) and other speech processing tasks; it can mitigate the need for large amounts of transcribed speech and has thus driven a growing demand for on-device ASR and other speech processing. However, advanced speech SSL models have become increasingly large, which contradicts the limited on-device resources. This gap can be even more severe in multilingual/multitask scenarios that require simultaneously recognizing multiple languages or executing multiple speech processing tasks. Additionally, strongly overparameterized speech SSL models tend to suffer from overfitting when finetuned on low-resource speech corpora. This work aims to enhance the practical usage of speech SSL models towards a win-win in both enhanced efficiency and alleviated overfitting via our proposed S3-Router framework, which for the first time discovers that simply discarding no more than 10% of model weights by finetuning only the model connections of speech SSL models can achieve better accuracy than standard weight finetuning on downstream speech processing tasks. More importantly, S3-Router can serve as an all-in-one technique to enable (1) a new finetuning scheme, (2) an efficient multilingual/multitask solution, (3) a state-of-the-art ASR pruning technique, and (4) a new tool to quantitatively analyze the learned speech representations. We believe S3-Router provides a new perspective on the practical deployment of speech SSL models. Our code is available at: https://github.com/GATECH-EIC/S3-Router.
Comment: Accepted at NeurIPS 2022
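As a toy illustration of finetuning model connections rather than weights, the sketch below learns a binary mask over a frozen linear layer with a straight-through estimator, dropping at most 10% of its weights. It is our own simplified stand-in, not the released implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Frozen pretrained linear layer whose binary connection mask is the only
    trainable part (straight-through estimator on mask scores): a toy stand-in
    for finetuning connections of a speech SSL model instead of its weights."""
    def __init__(self, weight, bias, sparsity=0.1):
        super().__init__()
        self.register_buffer("weight", weight)       # frozen pretrained weights
        self.register_buffer("bias", bias)
        self.scores = nn.Parameter(torch.randn_like(weight) * 0.01)
        self.sparsity = sparsity                      # fraction of weights to drop (<= 10%)

    def forward(self, x):
        k = int(self.scores.numel() * self.sparsity)
        threshold = self.scores.flatten().kthvalue(max(k, 1)).values
        hard_mask = (self.scores > threshold).float()
        # Straight-through: hard mask in the forward pass, gradients flow to the scores.
        mask = hard_mask + self.scores - self.scores.detach()
        return nn.functional.linear(x, self.weight * mask, self.bias)

# Usage: wrap a pretrained layer and optimize only the mask scores downstream.
pretrained = nn.Linear(768, 768)
layer = MaskedLinear(pretrained.weight.data.clone(), pretrained.bias.data.clone())
opt = torch.optim.Adam([layer.scores], lr=1e-3)
out = layer(torch.randn(4, 768))
out.sum().backward()
opt.step()
```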