
    Learned Perceptual Image Enhancement

    Learning a typical image enhancement pipeline involves minimizing a loss function between the enhanced and reference images. While L1 and L2 losses are perhaps the most widely used functions for this purpose, they do not necessarily lead to perceptually compelling results. In this paper, we show that adding a learned no-reference image quality metric to the loss can significantly improve enhancement operators. This metric is implemented using a CNN (convolutional neural network) trained on a large-scale dataset labelled with the aesthetic preferences of human raters. This loss allows us to conveniently perform back-propagation in our learning framework to simultaneously optimize for similarity to a given ground-truth reference and for perceptual quality. The perceptual loss is used only to train the parameters of image processing operators and imposes no extra complexity at inference time. Our experiments demonstrate that this loss can be effective for tuning a variety of operators such as local tone mapping and dehazing.
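    A rough sketch of how such a combined objective could look in PyTorch (an assumed framework; `PerceptualEnhancementLoss`, `quality_cnn`, and the weighting are illustrative names, not the paper's actual code):

```python
import torch
import torch.nn as nn

# Minimal sketch, not the authors' implementation: a training loss that augments an
# L1 reconstruction term with a learned no-reference quality score. `quality_cnn`
# is a hypothetical stand-in for a CNN pre-trained on human aesthetic ratings; it
# is frozen, so the perceptual term only shapes the enhancement operator's weights.
class PerceptualEnhancementLoss(nn.Module):
    def __init__(self, quality_cnn: nn.Module, quality_weight: float = 0.1):
        super().__init__()
        self.quality_cnn = quality_cnn.eval()
        for p in self.quality_cnn.parameters():
            p.requires_grad_(False)                   # keep the quality metric fixed
        self.quality_weight = quality_weight

    def forward(self, enhanced: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        recon = (enhanced - reference).abs().mean()           # L1 similarity to the ground truth
        quality = self.quality_cnn(enhanced).mean()           # higher = more pleasing to raters
        return recon - self.quality_weight * quality          # trade off fidelity and perceptual quality
```

    Because the quality network is frozen, gradients flow only into the enhancement operator, consistent with the claim that the perceptual loss adds no complexity at inference time.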

    Convergence of algorithms for reconstructing convex bodies and directional measures

    We investigate algorithms for reconstructing a convex body $K$ in $\mathbb{R}^n$ from noisy measurements of its support function or its brightness function in $k$ directions $u_1,\dots,u_k$. The key idea of these algorithms is to construct a convex polytope $P_k$ whose support function (or brightness function) best approximates the given measurements in the directions $u_1,\dots,u_k$ (in the least squares sense). The measurement errors are assumed to be stochastically independent and Gaussian. It is shown that this procedure is (strongly) consistent, meaning that, almost surely, $P_k$ tends to $K$ in the Hausdorff metric as $k\to\infty$. Here some mild assumptions on the sequence $(u_i)$ of directions are needed. Using results from the theory of empirical processes, estimates of rates of convergence are derived, which are first obtained in the $L_2$ metric and then transferred to the Hausdorff metric. Along the way, a new estimate is obtained for the metric entropy of the class of origin-symmetric zonoids contained in the unit ball. Similar results are obtained for the convergence of an algorithm that reconstructs an approximating measure to the directional measure of a stationary fiber process from noisy measurements of its rose of intersections in $k$ directions $u_1,\dots,u_k$. Here the Dudley and Prohorov metrics are used. The methods are linked to those employed for the support and brightness function algorithms via the fact that the rose of intersections is the support function of a projection body.
    Comment: Published at http://dx.doi.org/10.1214/009053606000000335 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
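    For intuition, here is a rough sketch of the least-squares fitting step for the support-function case, written in Python with cvxpy (an assumed tool, not the paper's implementation): candidate points $x_1,\dots,x_k$ are fitted so that $\langle x_i, u_i\rangle$ matches the noisy measurements $y_i$, while consistency constraints force the $x_i$ to be vertices of a convex polytope $P_k = \mathrm{conv}\{x_1,\dots,x_k\}$.

```python
import numpy as np
import cvxpy as cp

# Hypothetical sketch of the least-squares support-function fit described above.
# U: (k, n) array of unit direction vectors u_1,...,u_k; y: (k,) noisy support values.
def fit_support_polytope(U: np.ndarray, y: np.ndarray) -> np.ndarray:
    k, n = U.shape
    X = cp.Variable((k, n))                          # one candidate point x_i per direction
    h = cp.sum(cp.multiply(X, U), axis=1)            # h_i = <x_i, u_i>, the fitted support values
    # <x_i, u_j> <= h_j for all i, j makes h the support function of conv{x_1,...,x_k}.
    constraints = [U @ X[i] <= h for i in range(k)]
    problem = cp.Problem(cp.Minimize(cp.sum_squares(h - y)), constraints)
    problem.solve()
    return X.value                                   # vertices of the fitted polytope P_k
```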

    Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration

    Inversion by Direct Iteration (InDI) is a new formulation for supervised image restoration that avoids the so-called "regression to the mean" effect and produces more realistic and detailed images than existing regression-based methods. It does this by gradually improving image quality in small steps, similar to generative denoising diffusion models. Image restoration is an ill-posed problem in which multiple high-quality images are plausible reconstructions of a given low-quality input. The outcome of a single-step regression model is therefore typically an aggregate of all possible explanations, and so lacks detail and realism. The main advantage of InDI is that it does not try to predict the clean target image in a single step but instead gradually improves the image in small steps, resulting in better perceptual quality. While generative denoising diffusion models also work in small steps, our formulation is distinct in that it does not require knowledge of any analytic form of the degradation process. Instead, we directly learn an iterative restoration process from paired low-quality and high-quality examples. InDI can be applied to virtually any image degradation, given paired training data. In conditional denoising diffusion image restoration, the denoising network generates the restored image by repeatedly denoising an initial image of pure noise, conditioned on the degraded input. In contrast to conditional denoising formulations, InDI proceeds by directly and iteratively restoring the input low-quality image, producing high-quality results on a variety of image restoration tasks, including motion and out-of-focus deblurring, super-resolution, compression artifact removal, and denoising.
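    A rough sketch of the kind of small-step inference loop the abstract describes (a hypothetical `restore_net` and step schedule; one plausible reading of the iteration, not the paper's exact algorithm):

```python
import torch

# Illustrative sketch, not the authors' code: iteratively restore a degraded image by
# repeatedly blending the current iterate with the network's estimate of the clean image.
# `restore_net(x, t)` is assumed to be a trained model predicting the clean image from
# the iterate x at "time" t (t = 1 is the fully degraded input, t = 0 the clean target).
@torch.no_grad()
def indi_restore(restore_net, low_quality: torch.Tensor, num_steps: int = 20) -> torch.Tensor:
    x = low_quality.clone()                               # start from the observed low-quality image
    ts = torch.linspace(1.0, 0.0, num_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        pred_clean = restore_net(x, t)                    # current estimate of the clean image
        step = (t - t_next) / t                           # fraction of the remaining gap to close
        x = step * pred_clean + (1.0 - step) * x          # small move toward the prediction
    return x
```

    Unlike conditional diffusion, the loop never starts from pure noise; it refines the degraded observation itself, which is why no analytic degradation model is needed.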