Objective: Although numerous audio restoration methods have been proposed in the literature, most focus on an isolated restoration problem, such as denoising or dereverberation, and ignore other artifacts. Moreover, it is common practice to assume a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels. However, real-world audio is often corrupted by a blend of artifacts, such as reverberation, sensor noise, and background audio mixtures, with varying types, severities, and durations. In this study, we propose a novel approach for the blind restoration of real-world audio signals using Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics, enhancing the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Op-GANs with a generative neuron model are optimized for the blind restoration of any corrupted audio signal.
Results: The proposed approach has been evaluated extensively over the
benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets corrupted with
a random blend of artifacts each with a random severity to mimic real-world
audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved,
respectively, which are substantial when compared with the baseline methods.
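For reference, the SDR metric reported above can be sketched as follows; this is a minimal illustrative implementation (the function name `sdr_db` and the NumPy-based formulation are our assumptions, not the paper's evaluation code):

```python
import numpy as np

def sdr_db(reference, estimate):
    # Signal-to-distortion ratio in dB: the ratio of the reference
    # signal's energy to the energy of the residual (distortion).
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    residual = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(residual ** 2))
```

An SDR improvement is then simply the difference between the SDR of the restored signal and that of the corrupted input, both measured against the clean reference.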
Significance: This is a pioneering study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio, achieving an unprecedented level of performance over a wide SDR range and a variety of artifact types. Conclusion: 1D Op-GANs achieve robust and computationally efficient real-world audio restoration with significantly improved performance.
The source code and the generated real-world audio datasets are publicly shared with the research community in a dedicated GitHub repository.¹