
    Robust Multi-Image HDR Reconstruction for the Modulo Camera

    Photographing scenes with high dynamic range (HDR) poses great challenges to consumer cameras with their limited sensor bit depth. To address this, Zhao et al. recently proposed a novel sensor concept - the modulo camera - which captures the least significant bits of the recorded scene instead of going into saturation. Similar to conventional pipelines, HDR images can be reconstructed from multiple exposures, but significantly fewer images are needed than with a typical saturating sensor. While the concept is appealing, we show that the original reconstruction approach assumes noise-free measurements and quickly breaks down otherwise. To address this, we propose a novel reconstruction algorithm that is robust to image noise and produces significantly fewer artifacts. We theoretically analyze correctness as well as limitations, and show that our approach significantly outperforms the baseline on real data. Comment: To appear at the 39th German Conference on Pattern Recognition (GCPR) 2017.
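    To make the sensor concept concrete, the sketch below simulates a modulo sensor and unwraps a long exposure using a shorter, non-wrapping exposure as a coarse guide. This is a minimal illustration of the wrap-around image formation, not the robust reconstruction algorithm proposed in the paper; the bit depth, exposure ratio, and function names are assumptions made for the example.

        import numpy as np

        BITS = 8
        WRAP = 2 ** BITS  # a modulo sensor rolls over every 2**BITS counts

        def modulo_capture(irradiance, exposure, wrap=WRAP):
            """Simulate a modulo sensor: keep only the least significant bits."""
            return np.mod(irradiance * exposure, wrap)

        def unwrap_long_exposure(y_long, y_short, ratio, wrap=WRAP):
            """Unwrap a long-exposure modulo image with a non-wrapping short exposure.

            ratio = t_long / t_short; y_short * ratio gives a coarse estimate of the
            true long-exposure signal, from which the number of rollovers follows.
            """
            estimate = y_short * ratio
            rollovers = np.round((estimate - y_long) / wrap)
            return y_long + rollovers * wrap

        # Toy example: a pixel that wraps twice during the long exposure.
        truth = 60.0                                  # irradiance per unit exposure time
        y_short = modulo_capture(truth, 1.0)          # 60, no wrap
        y_long = modulo_capture(truth, 10.0)          # 600 mod 256 = 88
        print(unwrap_long_exposure(y_long, y_short, ratio=10.0))  # 600.0

    With noisy measurements, the rounding step fails near rollover boundaries, which is exactly the failure mode of the original noise-free approach that the robust algorithm is designed to handle.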

    Improving Blind Spot Denoising for Microscopy

    Many microscopy applications are limited by the total amount of usable light and are consequently challenged by the resulting levels of noise in the acquired images. This problem is often addressed via (supervised) deep learning based denoising. Recently, by making assumptions about the noise statistics, self-supervised methods have emerged. Such methods are trained directly on the images that are to be denoised and do not require additional paired training data. While achieving remarkable results, self-supervised methods can produce high-frequency artifacts and achieve inferior results compared to supervised approaches. Here we present a novel way to improve the quality of self-supervised denoising. Considering that light microscopy images are usually diffraction-limited, we propose to include this knowledge in the denoising process. We assume the clean image to be the result of a convolution with a point spread function (PSF) and explicitly include this operation at the end of our neural network. As a consequence, we are able to eliminate high-frequency artifacts and achieve self-supervised results that are very close to the ones achieved with traditional supervised methods. Comment: 15 pages, 4 figures.
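    The key architectural idea, explicitly appending a fixed PSF convolution to the network output, can be sketched as follows. This is a hedged illustration assuming a Gaussian PSF and an arbitrary stand-in denoiser; the names PSFConstrainedDenoiser and gaussian_psf are invented for the example and do not come from the paper.

        import torch
        import torch.nn.functional as F

        def gaussian_psf(size=9, sigma=1.5):
            """Normalized 2D Gaussian kernel as a stand-in for the microscope PSF."""
            ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
            g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
            k = torch.outer(g, g)
            return (k / k.sum()).view(1, 1, size, size)

        class PSFConstrainedDenoiser(torch.nn.Module):
            """Wrap any denoiser so its output is convolved with a fixed PSF.

            The inner network predicts the underlying signal; convolving with the
            PSF yields the diffraction-limited image that is compared against the
            target, discouraging high-frequency artifacts in the signal estimate.
            """
            def __init__(self, denoiser, psf):
                super().__init__()
                self.denoiser = denoiser
                self.register_buffer("psf", psf)       # fixed, not trained

            def forward(self, x):
                signal = self.denoiser(x)              # estimate of the underlying signal
                pad = self.psf.shape[-1] // 2
                blurred = F.conv2d(signal, self.psf, padding=pad)
                return blurred, signal

        # Usage sketch with a trivial stand-in denoiser on a 64x64 patch:
        net = PSFConstrainedDenoiser(torch.nn.Conv2d(1, 1, 3, padding=1), gaussian_psf())
        blurred, signal = net(torch.randn(1, 1, 64, 64))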

    Deep Burst Denoising

    Noise is an inherent issue of low-light image capture, one which is exacerbated on mobile devices due to their narrow apertures and small sensors. One strategy for mitigating noise in a low-light situation is to increase the shutter time of the camera, thus allowing each photosite to integrate more light and decrease noise variance. However, there are two downsides of long exposures: (a) bright regions can exceed the sensor range, and (b) camera and scene motion will result in blurred images. Another way of gathering more light is to capture multiple short (thus noisy) frames in a "burst" and intelligently integrate the content, thus avoiding the above downsides. In this paper, we use the burst-capture strategy and implement the intelligent integration via a recurrent fully convolutional deep neural net (CNN). We build our novel, multiframe architecture to be a simple addition to any single-frame denoising model, and design it to handle an arbitrary number of noisy input frames. We show that it achieves state-of-the-art denoising results on our burst dataset, improving on the best published multi-frame techniques, such as VBM4D and FlexISP. Finally, we explore other applications of image enhancement by integrating content from multiple frames and demonstrate that our DNN architecture generalizes well to image super-resolution.
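    A toy version of the burst-integration idea, a single-frame denoising branch extended with a recurrent hidden state so that any number of burst frames can be folded in sequentially, might look as follows. The layer sizes and names are assumptions made for illustration, not the architecture from the paper.

        import torch
        import torch.nn as nn

        class RecurrentBurstDenoiser(nn.Module):
            """Toy recurrent fully convolutional burst denoiser (illustrative only)."""
            def __init__(self, channels=1, features=32):
                super().__init__()
                self.features = features
                self.single = nn.Sequential(           # per-frame feature extractor
                    nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
                )
                self.recurrent = nn.Conv2d(2 * features, features, 3, padding=1)
                self.head = nn.Conv2d(features, channels, 3, padding=1)

            def forward(self, burst):                  # burst: [B, T, C, H, W]
                b, t, c, h, w = burst.shape
                state = burst.new_zeros(b, self.features, h, w)
                outputs = []
                for i in range(t):                     # integrate frames one at a time
                    feat = self.single(burst[:, i])
                    state = torch.relu(self.recurrent(torch.cat([feat, state], dim=1)))
                    outputs.append(self.head(state))   # denoised estimate after frame i
                return torch.stack(outputs, dim=1)

        # Usage: an 8-frame noisy burst of 64x64 grayscale patches.
        model = RecurrentBurstDenoiser()
        denoised = model(torch.randn(2, 8, 1, 64, 64))  # [2, 8, 1, 64, 64]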

    Practical Poissonian-Gaussian Noise Modeling and Fitting for Single-Image Raw-Data


    Feed-forward neural network as nonlinear dynamics integrator for supercontinuum generation

    The nonlinear propagation of ultrashort pulses in optical fibers depends sensitively on the input pulse and fiber parameters. As a result, the optimization of propagation for specific applications generally requires time-consuming simulations based on the sequential integration of the generalized nonlinear Schrödinger equation (GNLSE). Here, we train a feed-forward neural network to learn the differential propagation dynamics of the GNLSE, allowing emulation of direct numerical integration of fiber propagation, and particularly the highly complex case of supercontinuum generation. Comparison with a recurrent neural network shows that the feed-forward approach yields faster training and computation, and reduced memory requirements. The approach is generic and can be extended to other physical systems.
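    The basic emulation strategy, learning a single propagation step and applying it repeatedly to traverse the fiber, can be sketched as below. The grid size, layer widths, and the choice to work on a flat spectral vector are assumptions for the example, not the configuration used in the paper.

        import torch
        import torch.nn as nn

        N_BINS = 256                                   # spectral grid size (assumed)

        # Feed-forward map from the field at step z to the field at z + dz.
        step_net = nn.Sequential(
            nn.Linear(N_BINS, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, N_BINS),
        )

        def propagate(field0, n_steps):
            """Emulate n_steps of GNLSE integration by iterating the learned map."""
            field = field0
            history = [field]
            with torch.no_grad():
                for _ in range(n_steps):
                    field = step_net(field)
                    history.append(field)
            return torch.stack(history)                # full evolution along the fiber

        evolution = propagate(torch.randn(1, N_BINS), n_steps=100)  # [101, 1, 256]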

    W2S: Microscopy Data with Joint Denoising and Super-Resolution for Widefield to SIM Mapping

    In fluorescence microscopy live-cell imaging, there is a critical trade-off between the signal-to-noise ratio and spatial resolution on one side, and the integrity of the biological sample on the other side. To obtain clean high-resolution (HR) images, one can either use microscopy techniques, such as structured-illumination microscopy (SIM), or apply denoising and super-resolution (SR) algorithms. However, the former option requires multiple shots that can damage the samples, and although efficient deep learning based algorithms exist for the latter option, no benchmark exists to evaluate these algorithms on the joint denoising and SR (JDSR) task. To study JDSR on microscopy data, we propose such a novel JDSR dataset, Widefield2SIM (W2S), acquired using conventional fluorescence widefield and SIM imaging. W2S includes 144,000 real fluorescence microscopy images, resulting in a total of 360 sets of images. A set comprises noisy low-resolution (LR) widefield images with different noise levels, a noise-free LR image, and a corresponding high-quality HR SIM image. W2S allows us to benchmark the combinations of 6 denoising methods and 6 SR methods. We show that state-of-the-art SR networks perform very poorly on noisy inputs. Our evaluation also reveals that applying the best denoiser in terms of reconstruction error followed by the best SR method does not necessarily yield the best final result. Both quantitative and qualitative results show that SR networks are sensitive to noise and that the sequential application of denoising and SR algorithms is sub-optimal. Lastly, we demonstrate that SR networks retrained end-to-end for JDSR outperform any combination of state-of-the-art deep denoising and SR networks. Comment: ECCVW 2020. Project page: https://github.com/ivrl/w2s
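    Schematically, the two strategies compared in the benchmark differ only in where the networks sit in the pipeline. The sketch below uses deliberately tiny stand-in models (not the benchmarked networks) just to make the distinction between sequential and joint processing explicit.

        import torch
        import torch.nn as nn

        class TinySR(nn.Module):
            """Minimal 2x super-resolution block (stand-in for a real SR network)."""
            def __init__(self, scale=2):
                super().__init__()
                self.body = nn.Conv2d(1, scale * scale, 3, padding=1)
                self.up = nn.PixelShuffle(scale)       # rearrange channels into a 2x upscale
            def forward(self, x):
                return self.up(self.body(x))

        denoiser = nn.Conv2d(1, 1, 3, padding=1)       # stand-in denoiser
        sr_net = TinySR()                              # trained on clean LR -> HR
        jdsr_net = TinySR()                            # same shape, trained end to end on noisy LR -> HR

        noisy_lr = torch.randn(1, 1, 128, 128)
        hr_sequential = sr_net(denoiser(noisy_lr))     # (a) denoise, then super-resolve
        hr_joint = jdsr_net(noisy_lr)                  # (b) joint denoising and SR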

    Noise modeling and variance stabilization of a computed radiography (CR) mammography system subject to fixed-pattern noise

    In this work we model the noise properties of a computed radiography (CR) mammography system by adding an extra degree of freedom to a well-established noise model, and derive a variance-stabilizing transform (VST) to convert the signal-dependent noise into approximately signal-independent noise. The proposed model relies on a quadratic variance function, which considers fixed-pattern (structural), quantum and electronic noise. It also accounts for the spatial dependency of the noise by assuming a space-variant quantum coefficient. The proposed noise model was compared against two alternative models commonly found in the literature. The first alternative model ignores the spatial variability of the quantum noise, and the second model assumes negligible structural noise. We also derive a VST to convert noisy observations contaminated by the proposed noise model into observations with approximately Gaussian noise and constant variance equal to one. Finally, we estimated a look-up table that can be used as an inverse transform in denoising applications. A phantom study was conducted to validate the noise model, the VST and the inverse VST. The results show that the space-variant signal-dependent quadratic noise model is appropriate to describe noise in this CR mammography system (errors < 2.0% in terms of signal-to-noise ratio). The two alternative noise models were outperformed by the proposed model (errors as high as 14.7% and 9.4%). The designed VST was able to stabilize the noise so that it has variance approximately equal to one (errors < 4.1%), while the two alternative models achieved errors as high as 26.9% and 18.0%, respectively. Finally, the proposed inverse transform was capable of returning the signal to the original signal range with virtually no bias.
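    As a numerical illustration of how such a transform is obtained, the sketch below tabulates the first-order VST f(y) = integral of 1/sqrt(var(t)) dt for a quadratic variance function and checks that the transformed noise has standard deviation close to one. The coefficient values are invented for the example, and the model is simplified to a single, spatially constant quantum coefficient rather than the space-variant one used in the paper.

        import numpy as np

        # Quadratic variance model: var(y) = a*y**2 + b*y + c, where a plays the
        # role of fixed-pattern (structural) noise, b of quantum noise, and c of
        # electronic noise.  The coefficients below are illustrative only.
        a, b, c = 1e-4, 2.0, 50.0

        def variance(y):
            return a * y ** 2 + b * y + c

        def build_vst(y_min=0.0, y_max=4000.0, n=4001):
            """Tabulate f(y) = integral of 1/sqrt(var(t)) dt (trapezoidal rule)."""
            grid = np.linspace(y_min, y_max, n)
            integrand = 1.0 / np.sqrt(variance(grid))
            f = np.concatenate(([0.0], np.cumsum(
                np.diff(grid) * 0.5 * (integrand[1:] + integrand[:-1]))))
            return grid, f        # the (grid, f) pair also serves as an inverse look-up table

        grid, f = build_vst()

        # Check stabilization at several signal levels.
        rng = np.random.default_rng(0)
        for level in (100.0, 1000.0, 3000.0):
            samples = level + rng.normal(0.0, np.sqrt(variance(level)), size=100_000)
            print(level, np.interp(samples, grid, f).std())  # close to 1 at each level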

    A multi-wavelength polarimetric study of the blazar CTA 102 during a Gamma-ray flare in 2012

    We perform a multi-wavelength polarimetric study of the quasar CTA 102 during an extraordinarily bright γ-ray outburst detected by the Fermi Large Area Telescope in September-October 2012, when the source reached a flux of F(>100 MeV) = 5.2 ± 0.4 × 10⁻⁶ photons cm⁻² s⁻¹. At the same time the source displayed an unprecedented optical and NIR outburst. We study the evolution of the parsec-scale jet with ultra-high angular resolution through a sequence of 80 total and polarized intensity Very Long Baseline Array images at 43 GHz, covering the observing period from June 2007 to June 2014. We find that the γ-ray outburst is coincident with flares at all the other frequencies and is related to the passage of a new superluminal knot through the radio core. The powerful γ-ray emission is associated with a change in direction of the jet, which became oriented more closely to our line of sight (θ ∼ 1.2°) during the ejection of the knot and the γ-ray outburst. During the flare, the optical polarized emission displays intra-day variability and a clear clockwise rotation of EVPAs, which we associate with the path followed by the knot as it moves along helical magnetic field lines, although a random walk of the EVPA caused by a turbulent magnetic field cannot be ruled out. We locate the γ-ray outburst a short distance downstream of the radio core, parsecs from the black hole. This suggests that synchrotron self-Compton scattering of near-infrared to ultraviolet photons is the probable mechanism for the γ-ray production. Comment: Accepted for publication in The Astrophysical Journal.

    End-to-end Interpretable Learning of Non-blind Image Deblurring

    Non-blind image deblurring is typically formulated as a linear least-squares problem regularized by natural priors on the corresponding sharp picture's gradients. Such problems can be solved, for example, using a half-quadratic splitting method with Richardson fixed-point iterations for its least-squares updates and a proximal operator for the auxiliary variable updates. We propose to precondition the Richardson solver using approximate inverse filters of the (known) blur and natural image prior kernels. Using convolutions instead of a generic linear preconditioner allows extremely efficient parameter sharing across the image, and leads to significant gains in accuracy and/or speed compared to classical FFT and conjugate-gradient methods. More importantly, the proposed architecture is easily adapted to learning both the preconditioner and the proximal operator using CNN embeddings. This yields a simple and efficient algorithm for non-blind image deblurring which is fully interpretable, can be learned end to end, and whose accuracy matches or exceeds the state of the art, quite significantly, in the non-uniform case. Comment: Accepted at ECCV 2020 (poster).
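    A stripped-down sketch of the solver being preconditioned is given below: Richardson iterations for the least-squares update (K^T K + lam I) x = K^T y + lam z, with a small hand-built approximate inverse filter used as a convolutional preconditioner. It uses circular convolutions, drops the prior-kernel terms, and is not the paper's learned end-to-end architecture; the function names and parameter values are assumptions for the example.

        import numpy as np

        def psf2otf(kernel, shape):
            """Zero-pad a small kernel, center it at the origin, and take its FFT."""
            padded = np.zeros(shape)
            kh, kw = kernel.shape
            padded[:kh, :kw] = kernel
            padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
            return np.fft.fft2(padded)

        def richardson_deblur(y, kernel, z, lam=0.05, precond_size=15, n_iter=20):
            K = psf2otf(kernel, y.shape)
            A = np.abs(K) ** 2 + lam                   # spectrum of K^T K + lam I
            rhs = np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(y))) + lam * z

            # Approximate inverse filter: crop the exact inverse to a small kernel,
            # trading a little accuracy for a cheap convolutional preconditioner.
            inv_full = np.fft.fftshift(np.real(np.fft.ifft2(1.0 / A)))
            c0, c1 = y.shape[0] // 2, y.shape[1] // 2
            half = precond_size // 2
            p = inv_full[c0 - half:c0 + half + 1, c1 - half:c1 + half + 1]
            P = psf2otf(p, y.shape)

            x = np.zeros_like(y)
            for _ in range(n_iter):                    # preconditioned Richardson updates
                residual = rhs - np.real(np.fft.ifft2(A * np.fft.fft2(x)))
                x = x + np.real(np.fft.ifft2(P * np.fft.fft2(residual)))
            return x

        # Usage sketch: z is the auxiliary variable from the proximal step of HQS;
        # with z = 0 this reduces to a ridge-regularized deconvolution.
        # x = richardson_deblur(blurry, blur_kernel, z=np.zeros_like(blurry))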