Unified image restoration and enhancement: Degradation calibrated cycle reconstruction diffusion model

Abstract

Image restoration and enhancement are pivotal for numerous computer vision applications, yet unifying these tasks efficiently remains a significant challenge. Inspired by the iterative refinement capabilities of diffusion models, we propose CycleRDM, a novel framework designed to unify restoration and enhancement tasks while achieving high-quality mapping. Specifically, CycleRDM first learns the mapping relationships among the degraded domain, the rough normal domain, and the normal domain through a two-stage diffusion inference process. Subsequently, we transfer the final calibration process to the wavelet low-frequency domain using the discrete wavelet transform, performing fine-grained calibration from a frequency-domain perspective by leveraging task-specific frequency spaces. To improve restoration quality, we design a feature gain module for the decomposed wavelet high-frequency domain to eliminate redundant features. Additionally, we employ multimodal textual prompts and the Fourier transform to drive stable denoising and reduce randomness during inference. Extensive validation shows that CycleRDM generalizes effectively to a wide range of image restoration and enhancement tasks and, while requiring only a small number of training samples, is significantly superior on various benchmarks of reconstruction quality and perceptual quality. The source code will be available at https://github.com/hejh8/CycleRDM
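To make the wavelet-domain step concrete, the sketch below illustrates the general pattern the abstract describes: decompose an image with a discrete wavelet transform, calibrate the low-frequency (LL) subband, apply a gain to the high-frequency subbands to suppress redundant detail, and reconstruct. This is a minimal illustration using PyWavelets, not the authors' implementation; the function names `wavelet_calibrate`, `calibrate_ll`, and `gain_hf` are hypothetical placeholders, and the actual CycleRDM modules are in the linked repository.

```python
import numpy as np
import pywt


def wavelet_calibrate(image, calibrate_ll, gain_hf, wavelet="haar"):
    """Illustrative wavelet-domain calibration (not CycleRDM itself).

    Performs a single-level 2-D DWT, refines the low-frequency (LL)
    subband with `calibrate_ll` (standing in for the paper's
    fine-grained calibration), rescales the high-frequency subbands
    with `gain_hf` (standing in for the feature gain module), and
    reconstructs the image via the inverse DWT.
    """
    ll, (lh, hl, hh) = pywt.dwt2(image, wavelet)
    ll = calibrate_ll(ll)                                   # low-frequency calibration
    lh, hl, hh = (gain_hf(b) for b in (lh, hl, hh))         # damp redundant detail
    return pywt.idwt2((ll, (lh, hl, hh)), wavelet)


# Toy usage: identity calibration and mild high-frequency damping.
img = np.random.rand(128, 128).astype(np.float32)
out = wavelet_calibrate(img, calibrate_ll=lambda x: x,
                        gain_hf=lambda x: 0.8 * x)
assert out.shape == img.shape
```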
