    End-to-end Interpretable Learning of Non-blind Image Deblurring

    Non-blind image deblurring is typically formulated as a linear least-squares problem regularized by natural priors on the corresponding sharp picture's gradients, which can be solved, for example, using a half-quadratic splitting method with Richardson fixed-point iterations for its least-squares updates and a proximal operator for the auxiliary variable updates. We propose to precondition the Richardson solver using approximate inverse filters of the (known) blur and natural image prior kernels. Using convolutions instead of a generic linear preconditioner allows extremely efficient parameter sharing across the image, and leads to significant gains in accuracy and/or speed compared to classical FFT and conjugate-gradient methods. More importantly, the proposed architecture is easily adapted to learning both the preconditioner and the proximal operator using CNN embeddings. This yields a simple and efficient algorithm for non-blind image deblurring which is fully interpretable, can be learned end to end, and whose accuracy matches or exceeds the state of the art, quite significantly, in the non-uniform case. (Comment: Accepted at ECCV 2020 as a poster.)
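    The core idea of a Richardson fixed-point solver with a convolutional (approximate inverse filter) preconditioner can be sketched in a toy 1-D setting. This is an illustration only, not the authors' implementation: a plain Tikhonov term stands in for the learned natural-image prior, and all names (`cconv`, `Pf`, `lam`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = rng.standard_normal(n)

# Known circular blur kernel, centered at index 0 with wrap-around taps.
k = np.zeros(n)
k[0], k[1], k[-1] = 0.5, 0.25, 0.25
Kf = np.fft.fft(k)

def cconv(x, spec):
    # Circular convolution with a filter given by its Fourier spectrum.
    return np.real(np.fft.ifft(np.fft.fft(x) * spec))

y = cconv(x_true, Kf)                  # blurry observation

lam = 0.3                              # Tikhonov weight standing in for the prior term
a = np.abs(Kf) ** 2 + lam              # Fourier symbol of A = K^T K + lam * I

# Convolutional preconditioner: truncate the exact inverse filter of A to a
# short (9-tap) support, i.e. an approximate inverse filter.
p_full = np.real(np.fft.ifft(1.0 / a))
p = np.zeros(n)
keep = list(range(5)) + list(range(n - 4, n))
p[keep] = p_full[keep]
Pf = np.fft.fft(p)

# Preconditioned Richardson iteration: x <- x + P (b - A x).
b = cconv(y, np.conj(Kf))              # right-hand side K^T y
x = np.zeros(n)
for _ in range(100):
    x = x + cconv(b - cconv(x, a), Pf)

# Closed-form regularized solution for comparison.
x_star = np.real(np.fft.ifft(np.conj(Kf) * np.fft.fft(y) / a))
```

    Because the preconditioner is itself a short convolution, its few taps are shared across the whole signal; the iteration converges to the regularized least-squares solution without ever forming a dense matrix.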

    Sloan Digital Sky Survey III Photometric Quasar Clustering: Probing the Initial Conditions of the Universe using the Largest Volume

    The Sloan Digital Sky Survey has surveyed 14,555 square degrees of the sky, and delivered over a trillion pixels of imaging data. We present the large-scale clustering of 1.6 million quasars between z = 0.5 and z = 2.5 that have been classified from this imaging, representing the highest density of quasars ever studied for clustering measurements. This data set spans ~11,000 square degrees and probes a volume of 80(Gpc/h)^3. In principle, such a large volume and medium density of tracers should facilitate high-precision cosmological constraints. We measure the angular clustering of photometrically classified quasars using an optimal quadratic estimator in four redshift slices with an accuracy of ~25% over a bin width of l ~ 10 - 15 on scales corresponding to matter-radiation equality and larger (l ~ 2 - 30). Observational systematics can strongly bias clustering measurements on large scales, which can mimic cosmologically relevant signals such as deviations from Gaussianity in the spectrum of primordial perturbations. We account for systematics by applying a method recently proposed by Agarwal et al. (2014) to the clustering of photometrically classified quasars. We carefully apply our methodology to mitigate known observational systematics and further remove angular bins that are contaminated by unknown systematics. Combining quasar data with the photometric luminous red galaxy (LRG) sample of Ross et al. (2011) and Ho et al. (2012), and marginalizing over all bias and shot noise-like parameters, we obtain a constraint on local primordial non-Gaussianity of fNL = -113 +/- 154 (1σ error). [Abridged] (Comment: 35 pages, 15 figures.)
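    The optimal quadratic estimator mentioned above can be illustrated with a toy single-amplitude version. This is a sketch of the general technique only, under strong simplifying assumptions (one-dimensional Gaussian data, diagonal signal and noise covariances, one band-power amplitude); none of it reflects the paper's actual pipeline, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
s = np.linspace(0.5, 2.0, n)       # diagonal signal template (band-power shape)
noise = np.ones(n)                 # diagonal noise covariance
A_true = 2.0                       # amplitude we want to recover

# Fiducial covariance used to weight the data (deliberately wrong: A = 1).
c0 = 1.0 * s + noise
q = s / c0**2                      # diagonal of C0^{-1} S C0^{-1}
fisher = np.sum(q * s)             # normalization tr(C0^{-1} S C0^{-1} S)
bias = np.sum(q * noise)           # noise bias tr(C0^{-1} N C0^{-1} S)

# The quadratic estimator (x^T Q x - bias) / Fisher is unbiased for the
# amplitude regardless of the fiducial value used in the weights.
estimates = []
for _ in range(500):
    x = rng.normal(scale=np.sqrt(A_true * s + noise))
    estimates.append((np.sum(q * x * x) - bias) / fisher)
a_hat = float(np.mean(estimates))
```

    Averaged over realizations, the estimate recovers the true amplitude even though the data were inverse-covariance weighted with the wrong fiducial amplitude, which is the property that makes quadratic estimators attractive for band-power measurements.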

    Multiframe visual-inertial blur estimation and removal for unmodified smartphones

    Pictures and videos taken with smartphone cameras often suffer from motion blur due to hand shake during the exposure time. Recovering a sharp frame from a blurry one is an ill-posed problem, but in smartphone applications additional cues can aid the solution. We propose a blur removal algorithm that exploits information from subsequent camera frames and the built-in inertial sensors of an unmodified smartphone. We extend the fast non-blind uniform blur removal algorithm of Krishnan and Fergus to non-uniform blur and to multiple input frames. We estimate piecewise uniform blur kernels from the gyroscope measurements of the smartphone and we adaptively steer our multiframe deconvolution framework towards the sharpest input patches. We show in qualitative experiments that our algorithm can remove synthetic and real blur from individual frames of a degraded image sequence within a few seconds.
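    The patch-wise "steer towards the sharpest frame" idea can be sketched with a simple gradient-energy sharpness score. This is a minimal illustration, not the paper's method: the score, patch grid, and all names (`grad_energy`, `sharpest_frame_map`) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_energy(patch):
    # Mean squared gradient magnitude as a simple sharpness score.
    gy, gx = np.gradient(patch)
    return float(np.mean(gx * gx + gy * gy))

def sharpest_frame_map(frames, psize):
    # For each psize x psize patch, return the index of the sharpest frame.
    t, h, w = frames.shape
    out = np.zeros((h // psize, w // psize), dtype=int)
    for i in range(h // psize):
        for j in range(w // psize):
            sl = (slice(i * psize, (i + 1) * psize),
                  slice(j * psize, (j + 1) * psize))
            out[i, j] = int(np.argmax([grad_energy(f[sl]) for f in frames]))
    return out

# Frame 0 is sharp (noise texture); frame 1 is the same frame heavily smoothed.
sharp = rng.standard_normal((32, 32))
blurry = sharp.copy()
for _ in range(5):   # crude smoothing via repeated neighbor averaging
    blurry = (blurry + np.roll(blurry, 1, 0) + np.roll(blurry, -1, 0)
              + np.roll(blurry, 1, 1) + np.roll(blurry, -1, 1)) / 5.0
frames = np.stack([sharp, blurry])
fmap = sharpest_frame_map(frames, psize=8)
```

    A multiframe deconvolution could then weight each frame's contribution per patch according to such a score, favoring the least blurred observation of every region.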

    Towards Deep Unsupervised SAR Despeckling with Blind-Spot Convolutional Neural Networks

    SAR despeckling is a problem of paramount importance in remote sensing, since it represents the first step of many scene analysis algorithms. Recently, deep learning techniques have outperformed classical model-based despeckling algorithms. However, such methods require clean ground truth images for training, thus resorting to synthetically speckled optical images since clean SAR images cannot be acquired. In this paper, inspired by recent works on blind-spot denoising networks, we propose a self-supervised Bayesian despeckling method. The proposed method is trained employing only noisy images and can therefore learn features of real SAR images rather than synthetic data. We show that the performance of the proposed network is very close to the supervised training approach on synthetic data and competitive on real data.
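    The masking step that makes blind-spot self-supervision possible can be sketched as follows, in the spirit of Noise2Void-style training rather than as this paper's architecture; the function name, neighborhood size, and the gamma-distributed speckle stand-in are all illustrative assumptions.

```python
import numpy as np

def blindspot_pair(img, n_mask, rng):
    # Build a training input for a blind-spot network: selected pixels are
    # replaced by a randomly chosen neighbor, so a network trained to
    # reproduce the original noisy values at those positions cannot learn
    # the identity mapping and must predict them from surrounding context.
    h, w = img.shape
    ys = rng.integers(0, h, n_mask)
    xs = rng.integers(0, w, n_mask)
    inp = img.copy()
    for y, x in zip(ys, xs):
        dy, dx = 0, 0
        while dy == 0 and dx == 0:     # never pick the pixel itself
            dy, dx = rng.integers(-2, 3, size=2)
        ny = int(np.clip(y + dy, 0, h - 1))
        nx = int(np.clip(x + dx, 0, w - 1))
        inp[y, x] = img[ny, nx]
    mask = np.zeros((h, w), dtype=bool)
    mask[ys, xs] = True
    return inp, mask

rng = np.random.default_rng(2)
noisy = rng.gamma(4.0, 0.25, size=(64, 64))   # crude stand-in for speckled intensity
inp, mask = blindspot_pair(noisy, n_mask=128, rng=rng)
```

    During training, the loss would be evaluated only at the masked positions, comparing the network's prediction against the original noisy values; no clean target is ever needed, which is what lets such methods train directly on real SAR images.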