Gaining Insights into Denoising by Inpainting
The filling-in effect of diffusion processes is a powerful tool for various
image analysis tasks such as inpainting-based compression and dense optic flow
computation. For noisy data, an interesting side effect occurs: The
interpolated data have higher confidence, since they average information from
many noisy sources. This observation forms the basis of our denoising by
inpainting (DbI) framework. It averages multiple inpainting results from
different noisy subsets. Our goal is to obtain fundamental insights into key
properties of DbI and its connections to existing methods. Like in
inpainting-based image compression, we choose homogeneous diffusion as a very
simple inpainting operator that performs well for highly optimized data. We
propose several strategies to choose the location of the selected pixels.
Moreover, to improve the global approximation quality further, we also allow
changing the function values of the noisy pixels. In contrast to traditional
denoising methods that adapt the operator to the data, our approach adapts the
data to the operator. Experimentally we show that replacing homogeneous
diffusion inpainting by biharmonic inpainting does not improve the
reconstruction quality. This again emphasizes the importance of data adaptivity
over operator adaptivity. On the foundational side, we establish deterministic
and probabilistic theories with convergence estimates. In the non-adaptive 1-D
case, we derive equivalence results between DbI on shifted regular grids and
classical homogeneous diffusion filtering via an explicit relation between the
density and the diffusion time.
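The averaging idea behind DbI can be sketched in a few lines. The sketch below is a minimal, illustrative 1-D version, relying on the fact that homogeneous diffusion inpainting with Dirichlet data reduces to linear interpolation between the known pixels in 1-D; all function names, the pixel density, and the number of subsets are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def diffusion_inpaint_1d(values, mask):
    # In 1-D, homogeneous diffusion inpainting with Dirichlet boundary
    # data reduces to linear interpolation between the known pixels.
    idx = np.flatnonzero(mask)
    return np.interp(np.arange(len(values)), idx, values[idx])

def denoise_by_inpainting(noisy, density=0.3, n_subsets=50, rng=None):
    # Average many inpainting results, each computed from a different
    # random subset of the noisy pixels (the DbI averaging idea).
    rng = np.random.default_rng(rng)
    n = len(noisy)
    k = max(2, int(density * n))
    acc = np.zeros(n)
    for _ in range(n_subsets):
        mask = np.zeros(n, dtype=bool)
        mask[rng.choice(n, size=k, replace=False)] = True
        mask[0] = mask[-1] = True  # anchor the boundary pixels
        acc += diffusion_inpaint_1d(noisy, mask)
    return acc / n_subsets

# Noisy samples of a smooth test signal
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
noisy = np.sin(x) + 0.3 * rng.standard_normal(200)
denoised = denoise_by_inpainting(noisy, rng=1)
```

Because each interpolated pixel blends information from several noisy sources, and the blend changes from subset to subset, the average has a lower error than the raw noisy signal.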
Methods for improved mapping of brain lesion connectivity
Recent advances over the past two decades in neuroimaging methods have enabled us to map the connectivity of the brain. In parallel, pathophysiological models of brain disease have shifted from an emphasis on understanding pathology in specific brain regions to characterizing disruptions to interconnected neural networks. Nevertheless, these recent methods for mapping brain connectivity are still under development. Every step of the mapping process becomes a potential source of additional error due to noise or artifacts that could impact final analyses. Segmentation, parcellation, registration, and tractography are some of the steps where this occurs. Moreover, mapping the connectivity in a brain lesion is even more susceptible to errors in these steps. In this body of work, I describe multiple new methods for improving the accuracy of mapping lesion connectivity by reducing errors at the tractography stage, the most error-prone stage. First, we develop an approach for directly normalizing streamlines into a template space that avoids performing tractography in the normalized template space, reducing the error of connectomes constructed in the template space with respect to the ground-truth native-space connectome. Second, we develop a rapid approach for performing shortest-path tractography and constructing shortest-path probability-weighted connectomes, which increases connection specificity relative to local streamline tracking approaches. We then demonstrate how our shortest-path tractography approach can be used to construct a disconnectome, a connectivity map of the proportion of connections lost due to intersecting a lesion. We then develop a fast, greedy graph-theoretic algorithm that extracts the maximally disconnected subgraph containing the brain regions with the greatest shared loss of connectivity.
Finally, we demonstrate how combining methods from diffusion-based image inpainting and optimal estimation can restore or inpaint corrupted fiber diffusion models in lesioned white-matter tissue, enabling tractography, the study of lesion connectivity, and the modeling of microstructural measures in the patient's native space.
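The shortest-path and disconnectome ideas described above can be illustrated with a toy sketch: regions become graph nodes, edge weights stand in for local fiber-orientation evidence, and a lesion removes nodes, raising the cost of the best remaining path. The graph, weights, and lesion here are purely hypothetical, and Dijkstra's algorithm stands in for the thesis's rapid shortest-path method.

```python
import heapq

def shortest_path_cost(graph, src, dst):
    # Dijkstra's algorithm on a weighted adjacency dict. In shortest-path
    # tractography, edge weights would be derived from local diffusion
    # evidence, so the cheapest path is the most plausible connection.
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy region graph; a "lesion" removes node "C", severing the cheap route.
graph = {"A": {"B": 1.0, "C": 0.5}, "B": {"D": 1.0}, "C": {"D": 0.5}}
before = shortest_path_cost(graph, "A", "D")  # cheapest route runs via C
lesioned = {u: {v: w for v, w in nbrs.items() if v != "C"}
            for u, nbrs in graph.items() if u != "C"}
after = shortest_path_cost(lesioned, "A", "D")  # forced onto the B detour
```

Comparing path costs (or path existence) before and after the simulated lesion is the essence of a disconnectome: the increase quantifies the connectivity lost to the lesion.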
A Few Photons Among Many: Unmixing Signal and Noise for Photon-Efficient Active Imaging
Conventional LIDAR systems require hundreds or thousands of photon detections
to form accurate depth and reflectivity images. Recent photon-efficient
computational imaging methods are remarkably effective with only 1.0 to 3.0
detected photons per pixel, but they are not demonstrated at
signal-to-background ratio (SBR) below 1.0 because their imaging accuracies
degrade significantly in the presence of high background noise. We introduce a
new approach to depth and reflectivity estimation that focuses on unmixing
contributions from signal and noise sources. At each pixel in an image,
short-duration range gates are adaptively determined and applied to remove
detections likely to be due to noise. For pixels with too few detections to
perform this censoring accurately, we borrow data from neighboring pixels to
improve depth estimates, where the neighborhood formation is also adaptive to
scene content. Algorithm performance is demonstrated on experimental data at
varying levels of noise. Results show improved performance of both reflectivity
and depth estimates over state-of-the-art methods, especially at low
signal-to-background ratios. In particular, accurate imaging is demonstrated
with SBR as low as 0.04. This validation of a photon-efficient, noise-tolerant
method demonstrates the viability of rapid, long-range, and low-power LIDAR
imaging.
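The per-pixel censoring step described above can be sketched as a sliding range gate: keep the short time window that captures the most detections and discard the rest as likely background. This is a minimal illustration under assumed parameters (gate width, photon counts, noise model); it omits the adaptive gate sizing and the neighborhood borrowing of the actual method.

```python
import numpy as np

def censor_and_estimate(times, gate_width):
    # Slide a short range gate anchored at each detection and keep the
    # position that captures the most detections; photons falling outside
    # the chosen gate are censored as likely background noise.
    times = np.sort(times)
    best_start, best_count = times[0], -1
    for t in times:
        count = np.count_nonzero((times >= t) & (times < t + gate_width))
        if count > best_count:
            best_count, best_start = count, t
    kept = times[(times >= best_start) & (times < best_start + gate_width)]
    return kept.mean(), kept  # depth is proportional to mean arrival time

rng = np.random.default_rng(0)
signal = rng.normal(40.0, 0.5, size=4)   # a few signal photons near 40 ns
noise = rng.uniform(0.0, 100.0, size=8)  # uniform background detections
est, kept = censor_and_estimate(np.concatenate([signal, noise]),
                                gate_width=3.0)
```

Even with twice as many noise detections as signal photons (SBR of 0.5 in this toy setup), the densest window isolates the signal cluster, so the depth estimate stays close to the true arrival time.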
A Survey on Generative Diffusion Model
Deep learning shows excellent potential in generation tasks thanks to its deep
latent representations. Generative models are a class of models that can
randomly generate observations governed by certain implied parameters.
Recently, the diffusion model has become a rising class of generative models
owing to its powerful generation ability. Great achievements have already been
reached, and further applications beyond computer vision, speech generation,
bioinformatics, and natural language processing remain to be explored. However,
the diffusion model suffers from inherent drawbacks: a slow generation process,
restriction to single data types, low likelihood, and an inability to perform
dimension reduction. These drawbacks have motivated many improved works. This
survey summarizes the field of diffusion models. We first state the main
problem with two landmark works -- DDPM and DSM -- and a unifying landmark
work -- Score SDE. Then, we present improved techniques for existing problems
in the diffusion-model field, including model speed-up, data-structure
diversification, likelihood optimization, and dimension reduction. Regarding
existing models, we also provide a benchmark of FID score, IS, and NLL at
specific NFE values. Moreover, applications of diffusion models are
introduced, including computer vision, sequence modeling, audio, and AI for
science. Finally, we summarize the field together with its limitations and
further directions. A well-classified collection of existing methods is
available on our
GitHub: https://github.com/chq1155/A-Survey-on-Generative-Diffusion-Model