Gaining Insights into Denoising by Inpainting
The filling-in effect of diffusion processes is a powerful tool for various
image analysis tasks such as inpainting-based compression and dense optic flow
computation. For noisy data, an interesting side effect occurs: The
interpolated data have higher confidence, since they average information from
many noisy sources. This observation forms the basis of our denoising by
inpainting (DbI) framework. It averages multiple inpainting results from
different noisy subsets. Our goal is to obtain fundamental insights into key
properties of DbI and its connections to existing methods. Like in
inpainting-based image compression, we choose homogeneous diffusion as a very
simple inpainting operator that performs well for highly optimized data. We
propose several strategies to choose the location of the selected pixels.
Moreover, to improve the global approximation quality further, we also allow
the function values of the noisy pixels to be changed. In contrast to traditional
denoising methods that adapt the operator to the data, our approach adapts the
data to the operator. Experimentally we show that replacing homogeneous
diffusion inpainting by biharmonic inpainting does not improve the
reconstruction quality. This again emphasizes the importance of data adaptivity
over operator adaptivity. On the foundational side, we establish deterministic
and probabilistic theories with convergence estimates. In the non-adaptive 1-D
case, we derive equivalence results between DbI on shifted regular grids and
classical homogeneous diffusion filtering via an explicit relation between the
density and the diffusion time.
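The DbI idea described above can be sketched in a few lines of NumPy: inpaint from several random noisy pixel subsets with explicit homogeneous diffusion (known pixels held fixed as Dirichlet data) and average the results. All parameter choices below (density, step size, iteration count) are illustrative, not the paper's:

```python
import numpy as np

def homogeneous_diffusion_inpaint_1d(values, mask, n_iter=2000, tau=0.45):
    """Fill unknown pixels (mask == False) by iterating an explicit
    homogeneous diffusion scheme; known pixels stay fixed (Dirichlet data).
    tau <= 0.5 keeps the explicit scheme stable in 1-D."""
    u = np.where(mask, values, values[mask].mean())
    for _ in range(n_iter):
        lap = np.zeros_like(u)
        lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
        u = u + tau * lap
        u[mask] = values[mask]          # re-impose the known data
    return u

def denoise_by_inpainting(noisy, density=0.3, n_subsets=16, rng=None):
    """Average the inpainting results from several random pixel subsets."""
    rng = np.random.default_rng(rng)
    n = len(noisy)
    acc = np.zeros(n)
    for _ in range(n_subsets):
        mask = rng.random(n) < density
        mask[0] = mask[-1] = True       # keep boundary pixels known
        acc += homogeneous_diffusion_inpaint_1d(noisy, mask)
    return acc / n_subsets
```

Because each inpainted pixel averages information from many noisy samples, the ensemble mean has lower error than the raw noisy signal, which is exactly the filling-in effect the abstract exploits.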
Methods for improved mapping of brain lesion connectivity
Advances in neuroimaging methods over the past two decades have enabled us to map the connectivity of the brain. In parallel, pathophysiological models of brain disease have shifted from an emphasis on understanding pathology in specific brain regions to characterizing disruptions to interconnected neural networks. Nevertheless, these recent methods for mapping brain connectivity are still under development. Every step of the mapping process, including segmentation, parcellation, registration, and tractography, is a potential source of additional error due to noise or artifacts that can affect the final analyses. Moreover, mapping the connectivity of a lesioned brain is even more susceptible to errors in these steps. In this body of work, I describe multiple new methods for improving the accuracy of mapping lesion connectivity by reducing errors at the tractography stage, which is the most error-prone stage. First, we develop an approach for directly normalizing streamlines into a template space that avoids performing tractography in the normalized template space, reducing the error of connectomes constructed in the template space with respect to the ground-truth native-space connectome. Second, we develop a rapid approach for performing shortest path tractography and constructing shortest-path probability-weighted connectomes, which increases connection specificity relative to local streamline tracking approaches. We then demonstrate how our shortest path tractography approach can be used to construct a disconnectome, a connectivity map of the proportion of connections lost due to intersecting a lesion. We then develop a fast, greedy graph-theoretic algorithm that extracts the maximally disconnected subgraph containing the brain regions with the greatest shared loss of connectivity.
Finally, we demonstrate how combining methods from diffusion-based image inpainting and optimal estimation can be used to restore or inpaint corrupted fiber diffusion models in lesioned white matter tissue, enabling tractography, the study of lesion connectivity, and the modeling of microstructural measures in the patient's native space.
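The greedy subgraph extraction mentioned above could look roughly like the following sketch: grow a node set while the mean pairwise disconnection inside the set does not decrease. The scoring rule and stopping criterion here are assumptions for illustration, not the dissertation's exact algorithm:

```python
import numpy as np

def greedy_disconnected_subgraph(D):
    """Greedy sketch of maximally-disconnected-subgraph extraction.
    D: symmetric (n, n) matrix with D[i, j] in [0, 1] giving the fraction
    of connections lost between regions i and j (zero diagonal)."""
    n = D.shape[0]
    i, j = np.unravel_index(np.argmax(D), D.shape)  # seed: worst pair
    nodes, score = [i, j], D[i, j]
    while len(nodes) < n:
        rest = [k for k in range(n) if k not in nodes]
        # evaluate the mean off-diagonal disconnection of each candidate set
        best_k, best_s = None, -np.inf
        for k in rest:
            cand = nodes + [k]
            m = len(cand)
            s = D[np.ix_(cand, cand)].sum() / (m * (m - 1))
            if s > best_s:
                best_k, best_s = k, s
        if best_s < score:  # adding any node would dilute the subgraph
            break
        nodes.append(best_k)
        score = best_s
    return sorted(nodes), score
```

A greedy pass like this runs in O(n^3) for an n-region parcellation, which is cheap at typical atlas sizes (tens to a few hundred regions).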
A Survey on Generative Diffusion Model
Deep learning shows excellent potential in generation tasks thanks to deep
latent representations. Generative models are classes of models that can
generate observations randomly with respect to certain implied parameters.
Recently, the diffusion model has become a rising class of generative models
owing to its powerful generation ability, and great achievements have been
reached. Beyond computer vision, speech generation, bioinformatics, and
natural language processing, more applications remain to be explored in this
field. However, the diffusion model has genuine drawbacks: a slow generation
process, restriction to single data types, low likelihood, and the inability
to perform dimension reduction. These have led to many enhanced works. This
survey summarizes the field of the diffusion model. We first state the main
problem with two landmark works -- DDPM and DSM -- and a unifying landmark
work -- Score SDE. Then, we present improved techniques for existing problems
in the diffusion-based model field, including model speed-up, data structure
diversification, likelihood optimization, and dimension reduction. Regarding
existing models, we also provide a benchmark of FID score, IS, and NLL
according to specific NFE. Moreover, applications of diffusion models are
introduced, including computer vision, sequence modeling, audio, and AI for
science. Finally, we summarize this field together with its limitations and
further directions. A summary of existing well-classified methods is in our
GitHub: https://github.com/chq1155/A-Survey-on-Generative-Diffusion-Model
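The DDPM formulation named above admits a closed-form forward (noising) process, which is the starting point of most of the techniques the survey covers. A minimal, generic NumPy sketch of the noise schedule and forward sampling (standard textbook form, not code from the survey; the schedule endpoints are common defaults):

```python
import numpy as np

def make_schedule(T=1000, beta_min=1e-4, beta_max=0.02):
    """Linear variance schedule beta_t and its cumulative product
    alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    betas = np.linspace(beta_min, beta_max, T)
    alpha_bar = np.cumprod(1.0 - betas)
    return betas, alpha_bar

def q_sample(x0, t, alpha_bar, rng):
    """Forward process in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    with eps ~ N(0, I). Returns (x_t, eps); a DDPM is trained to
    predict eps from x_t and t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```

Because alpha_bar decays to nearly zero at t = T, x_T is close to pure Gaussian noise; the slow generation process the survey discusses comes from reversing these T steps one at a time.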
Global Structure-Aware Diffusion Process for Low-Light Image Enhancement
This paper studies a diffusion-based framework to address the low-light image
enhancement problem. To harness the capabilities of diffusion models, we delve
into this intricate process and advocate for the regularization of its inherent
ODE-trajectory. To be specific, inspired by the recent research that low
curvature ODE-trajectory results in a stable and effective diffusion process,
we formulate a curvature regularization term anchored in the intrinsic
non-local structures of image data, i.e., global structure-aware
regularization, which gradually facilitates the preservation of complicated
details and the augmentation of contrast during the diffusion process. This
incorporation mitigates the adverse effects of noise and artifacts resulting
from the diffusion process, leading to a more precise and flexible enhancement.
To additionally promote learning in challenging regions, we introduce an
uncertainty-guided regularization technique, which wisely relaxes constraints
on the most extreme regions of the image. Experimental evaluations reveal that
the proposed diffusion-based framework, complemented by rank-informed
regularization, attains distinguished performance in low-light enhancement. The
outcomes indicate substantial advancements in image quality, noise suppression,
and contrast amplification in comparison with state-of-the-art methods. We
believe this innovative approach will stimulate further exploration and
advancement in low-light image processing, with potential implications for
other applications of diffusion models. The code is publicly available at
https://github.com/jinnh/GSAD
Comment: Accepted to NeurIPS 202
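One concrete reading of the "low curvature ODE-trajectory" idea above is to penalize the discrete second difference of states sampled along the trajectory. The following is an illustrative sketch of such a curvature measure, not the paper's global structure-aware regularization term:

```python
import numpy as np

def trajectory_curvature_penalty(xs):
    """Discrete curvature of an ODE trajectory sampled at uniform steps:
    mean squared second difference ||x_{t+1} - 2 x_t + x_{t-1}||^2.
    xs: array of shape (T, ...) holding T trajectory states."""
    xs = np.asarray(xs)
    second = xs[2:] - 2.0 * xs[1:-1] + xs[:-2]
    return float(np.mean(np.sum(second.reshape(len(second), -1) ** 2, axis=1)))
```

A straight-line trajectory has zero penalty under this measure, while any bending contributes quadratically; adding such a term to a training loss pushes the sampler toward the stable, low-curvature trajectories the abstract motivates.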