A graph-based mathematical morphology reader
This survey paper aims at providing a "literary" anthology of mathematical
morphology on graphs. It describes in the English language many ideas stemming
from a large number of different papers, hence providing a unified view of an
active and diverse field of research.
Deep Burst Denoising
Noise is an inherent issue of low-light image capture, one which is
exacerbated on mobile devices due to their narrow apertures and small sensors.
One strategy for mitigating noise in a low-light situation is to increase the
shutter time of the camera, thus allowing each photosite to integrate more
light and decrease noise variance. However, there are two downsides of long
exposures: (a) bright regions can exceed the sensor range, and (b) camera and
scene motion will result in blurred images. Another way of gathering more light
is to capture multiple short (thus noisy) frames in a "burst" and intelligently
integrate the content, thus avoiding the above downsides. In this paper, we use
the burst-capture strategy and implement the intelligent integration via a
recurrent fully convolutional deep neural net (CNN). We build our novel,
multiframe architecture to be a simple addition to any single frame denoising
model, and design it to handle an arbitrary number of noisy input frames. We show
that it achieves state of the art denoising results on our burst dataset,
improving on the best published multi-frame techniques, such as VBM4D and
FlexISP. Finally, we explore other applications of image enhancement by
integrating content from multiple frames and demonstrate that our DNN
architecture generalizes well to image super-resolution.
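The core premise of burst capture can be illustrated numerically: averaging N independent noisy frames reduces noise variance by roughly a factor of N. The sketch below is a naive averaging baseline, not the paper's recurrent CNN; the scene and noise level are hypothetical.

```python
import numpy as np

# Hypothetical static scene and a burst of 8 short, noisy exposures.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)
burst = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(8)]

single_mse = np.mean((burst[0] - clean) ** 2)  # error of one noisy frame
fused = np.mean(burst, axis=0)                 # naive integration: averaging
fused_mse = np.mean((fused - clean) ** 2)      # roughly 8x lower error
```

The paper's contribution is to replace the naive average with a learned, motion-aware recurrent integration; the variance-reduction intuition above is what any such integrator exploits.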
A Convex Model for Edge-Histogram Specification with Applications to Edge-preserving Smoothing
The goal of edge-histogram specification is to find an image whose edge image
has a histogram that matches a given edge-histogram as much as possible.
Mignotte has proposed a non-convex model for the problem [M. Mignotte. An
energy-based model for the image edge-histogram specification problem. IEEE
Transactions on Image Processing, 21(1):379--386, 2012]. In his work, edge
magnitudes of an input image are first modified by histogram specification to
match the given edge-histogram. Then, a non-convex model is minimized to find
an output image whose edge-histogram matches the modified edge-histogram. The
non-convexity of the model hinders the computations and the inclusion of useful
constraints such as the dynamic range constraint. In this paper, instead of
considering edge magnitudes, we directly consider the image gradients and
propose a convex model based on them. Furthermore, we include additional
constraints in our model based on different applications. The convexity of our
model allows us to compute the output image efficiently using either
Alternating Direction Method of Multipliers or Fast Iterative
Shrinkage-Thresholding Algorithm. We consider several applications in
edge-preserving smoothing including image abstraction, edge extraction, details
exaggeration, and documents scan-through removal. Numerical results are given
to illustrate that our method successfully produces decent results efficiently.
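To show the kind of solver the abstract names, here is a generic FISTA sketch on a toy l1-regularized least-squares problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1. It illustrates the proximal-gradient-with-momentum structure only, not the authors' edge-histogram model; A, b, and lam are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 10))
b = rng.normal(size=20)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(10)
y = x.copy()
t = 1.0
start_obj = objective(x)
for _ in range(200):
    grad = A.T @ (A @ y - b)                        # gradient of smooth term at y
    x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
    x, t = x_new, t_new
final_obj = objective(x)
```

The convexity claimed in the abstract is what guarantees such first-order schemes (FISTA or ADMM) converge to a global minimizer, which non-convex formulations like Mignotte's cannot promise.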
Rethinking the Pipeline of Demosaicing, Denoising and Super-Resolution
Incomplete color sampling, noise degradation, and limited resolution are the
three key problems that are unavoidable in modern camera systems. Demosaicing
(DM), denoising (DN), and super-resolution (SR) are core components in a
digital image processing pipeline to overcome the three problems above,
respectively. Although each of these problems has been studied actively, the
mixture problem of DM, DN, and SR, which is of higher practical value, lacks
enough attention. Such a mixture problem is usually solved by a sequential
solution (applying each method independently in a fixed order: DM → DN →
SR), or is simply tackled by an end-to-end network without enough
analysis into interactions among tasks, resulting in an undesired performance
drop in the final image quality. In this paper, we rethink the mixture problem
from a holistic perspective and propose a new image processing pipeline: DN →
SR → DM. Extensive experiments show that simply modifying the usual
sequential solution by leveraging our proposed pipeline could enhance the image
quality by a large margin. We further adopt the proposed pipeline into an
end-to-end network, and present Trinity Enhancement Network (TENet).
Quantitative and qualitative experiments demonstrate the superiority of our
TENet to the state-of-the-art. Besides, we notice the literature lacks a full
color sampled dataset. To this end, we contribute a new high-quality full color
sampled real-world dataset, namely PixelShift200. Our experiments show the
benefit of the proposed PixelShift200 dataset for raw image processing.
Comment: Code is available at: https://github.com/guochengqian/TENe
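The reordering argument reduces to how the three stages compose. The sketch below chains the proposed DN → SR → DM order as a simple function composition; the three stages are hypothetical stand-ins (a 3x3 box filter, nearest-neighbour 2x upsampling, and an identity placeholder), not TENet's learned modules.

```python
import numpy as np
from functools import reduce

def denoise(img):
    # Stand-in DN: 3x3 box filter via edge-padded shifted sums.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def super_resolve(img):
    # Stand-in SR: 2x nearest-neighbour upsampling.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def demosaic(img):
    # Stand-in DM: identity placeholder for color reconstruction.
    return img

raw = np.random.default_rng(2).random((16, 16))
pipeline = [denoise, super_resolve, demosaic]    # proposed order: DN -> SR -> DM
out = reduce(lambda x, f: f(x), pipeline, raw)   # 16x16 input -> 32x32 output
```

Swapping the list order reproduces the conventional DM → DN → SR sequence; the paper's point is that, with real stages, the interactions between them make this ordering choice matter for final quality.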