Enhancing speech signal quality in adverse acoustic environments is a
persistent challenge in speech processing. Existing deep-learning-based
enhancement methods often struggle to remove background noise and
reverberation effectively in real-world scenarios, degrading the listening
experience. To
address these challenges, we propose a novel approach that uses pre-trained
generative models to resynthesize clean, anechoic speech from degraded inputs.
Specifically, we leverage pre-trained vocoder or codec models to synthesize
high-quality speech while remaining robust in challenging acoustic scenarios.
Generative models can compensate for information lost in the degraded speech
signal, producing regenerated speech with improved fidelity and fewer
artifacts. By harnessing the capabilities of pre-trained models, we achieve
faithful reproduction of the original speech even in adverse conditions.
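As a conceptual sketch only, the following Python code illustrates one way such
a resynthesis pipeline can be organized: degraded audio is encoded by a frozen
pre-trained codec, a lightweight enhancer maps the degraded latent features
toward clean ones, and the codec decoder regenerates the waveform. The
Enhancer module and the codec's encode/decode interface are illustrative
assumptions, not the actual architecture or API used in this work.

import torch
import torch.nn as nn

class Enhancer(nn.Module):
    # Hypothetical front-end that maps latent features of degraded speech
    # toward their clean, anechoic counterparts.
    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)

@torch.no_grad()
def resynthesize(noisy_wav, codec, enhancer):
    # `codec` is assumed to expose encode()/decode(); the pre-trained decoder
    # stays frozen so the output inherits its synthesis fidelity.
    feats = codec.encode(noisy_wav)    # latent features of the degraded input
    clean_feats = enhancer(feats)      # suppress noise/reverb in latent space
    return codec.decode(clean_feats)   # regenerate clean, anechoic speech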
Experimental evaluations on both simulated datasets and realistic samples
demonstrate the effectiveness and robustness of our proposed methods.
In particular, by leveraging the codec, we achieve superior subjective scores on
both simulated and realistic recordings. The generated speech exhibits enhanced
audio quality with reduced background noise and reverberation. Our findings
highlight the potential of pre-trained generative techniques in speech
processing, particularly in scenarios where traditional methods falter. Demos
are available at https://whmrtm.github.io/SoundResynthesis.