45 research outputs found

    Some proximal methods for Poisson intensity CBCT and PET

    No full text
    Cone-Beam Computerized Tomography (CBCT) and Positron Emission Tomography (PET) are two complementary medical imaging modalities providing respectively anatomic and metabolic information on a patient. In the context of public health, one must address the problem of reducing the dose of the potentially harmful quantities involved in each exam protocol: X-rays for CBCT and radiotracer for PET. Two demonstrators based on a technological breakthrough (acquisition devices working in photon-counting mode) have been developed. It turns out that in this low-dose context, i.e. for low-intensity signals acquired by photon-counting devices, the noise should no longer be approximated by a Gaussian distribution, but follows a Poisson distribution. We investigate in this paper the two related tomographic reconstruction problems. We formulate the CBCT and the PET problems separately in two general frameworks that encompass the physics of the acquisition devices and the specific discretization of the object to reconstruct. We propose various fast numerical schemes based on proximal methods to compute the solution of each problem. In particular, we show that primal-dual approaches are well suited in the PET case when considering non-differentiable regularizations such as Total Variation. Experiments on numerical simulations and real data favor the proposed algorithms when compared with well-established methods.
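    As a toy illustration of the kind of scheme discussed above, the sketch below runs a proximal-gradient loop on a Poisson negative log-likelihood with an ℓ1 penalty and a nonnegativity constraint. The forward matrix A, the background term, the step-size choice and all parameters are illustrative assumptions, not the acquisition models or algorithms of the paper.

```python
# Minimal sketch: proximal-gradient reconstruction under a Poisson noise model.
# The system matrix A, background b and parameters are toys, not the paper's setup.
import numpy as np

def poisson_nll_grad(x, A, y, b=1e-6):
    """Gradient of the Poisson negative log-likelihood sum((Ax+b) - y*log(Ax+b))."""
    z = A @ x + b
    return A.T @ (1.0 - y / z)

def reconstruct(A, y, lam=0.01, step=None, n_iter=200):
    """Proximal-gradient loop: Poisson fidelity + l1 penalty + nonnegativity."""
    if step is None:
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # crude fixed step-size guess
    x = np.full(A.shape[1], y.mean() / A.shape[1])
    for _ in range(n_iter):
        x = x - step * poisson_nll_grad(x, A, y)
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft-threshold
        x = np.maximum(x, 0.0)                                    # intensities are nonnegative
    return x

rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(64, 32)))       # toy nonnegative system matrix
x_true = np.maximum(rng.normal(size=32), 0.0)
y = rng.poisson(A @ x_true + 1e-6)          # photon-counting data
x_hat = reconstruct(A, y)
```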

    Depth Estimation and Image Restoration by Deep Learning from Defocused Images

    Full text link
    Monocular depth estimation and image deblurring are two fundamental tasks in computer vision, given their crucial role in understanding 3D scenes. Performing either of them from a single image is an ill-posed problem. Recent advances in Deep Convolutional Neural Networks (DNNs) have revolutionized many tasks in computer vision, including depth estimation and image deblurring. When working with defocused images, depth estimation and the recovery of the All-in-Focus (AIF) image become related problems due to the physics of defocus. Despite this, most existing models treat them separately. There are, however, recent models that solve these problems simultaneously by concatenating two networks in sequence, first estimating the depth or defocus map and then reconstructing the focused image from it. We propose a DNN that solves depth estimation and image deblurring in parallel. Our Two-headed Depth Estimation and Deblurring Network (2HDED:NET) extends a conventional Depth from Defocus (DFD) network with a deblurring branch that shares the same encoder as the depth branch. The proposed method has been successfully tested on two benchmarks, one for indoor and the other for outdoor scenes: NYU-v2 and Make3D. Extensive experiments with 2HDED:NET on these benchmarks demonstrate performance superior or close to that of state-of-the-art models for depth estimation and image deblurring.
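    The sketch below (PyTorch) shows the two-headed idea in its simplest form: a shared encoder feeding one decoder for depth and one for the deblurred image. Layer sizes, the toy loss and all architectural details are placeholders, not the actual 2HDED:NET design.

```python
# Minimal sketch of a shared-encoder, two-decoder network; sizes are illustrative.
import torch
import torch.nn as nn

class TwoHeadedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        def decoder(out_ch):
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
            )
        self.depth_head = decoder(1)    # depth / defocus map branch
        self.deblur_head = decoder(3)   # all-in-focus image branch

    def forward(self, defocused):
        feats = self.encoder(defocused)
        return self.depth_head(feats), self.deblur_head(feats)

net = TwoHeadedNet()
x = torch.randn(2, 3, 128, 128)                 # batch of defocused images
depth, aif = net(x)                             # both tasks predicted in parallel
loss = nn.functional.l1_loss(depth, torch.zeros_like(depth)) \
     + nn.functional.l1_loss(aif, x)            # toy joint loss for illustration
```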

    A new proximal method for joint image restoration and edge detection with the Mumford-Shah model

    Get PDF
    In this paper, we propose an adaptation of the PAM algorithm to the minimization of a nonconvex functional designed for joint image denoising and contour detection. This new functional is based on the Ambrosio–Tortorelli approximation of the well-known Mumford–Shah functional. We motivate the proposed approximation, which offers flexibility in the choice of the possibly non-smooth penalization, and we derive closed-form expressions for the proximal steps involved in the algorithm. We focus our attention on two types of penalization: the ℓ1-norm and a proposed quadratic-ℓ1 function. Numerical experiments show that the proposed method is able to detect sharp contours and to reconstruct piecewise smooth approximations with low computational cost and convergence guarantees. We also compare the results with state-of-the-art relaxations of the Mumford–Shah functional and a recent discrete formulation of the Ambrosio–Tortorelli functional.
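    For intuition, the 1D sketch below alternates proximal-gradient steps on the image u and the edge variable e of an Ambrosio–Tortorelli-type energy (data term ||u - f||^2, coupling beta*(1-e)^2*|u'|^2, edge penalty eps*|e'|^2 + e^2/(4*eps)). It uses the classical smooth edge penalty rather than the paper's ℓ1 or quadratic-ℓ1 variants, and all parameters are illustrative.

```python
# Minimal 1D sketch of proximal alternating minimization on an AT-type energy.
import numpy as np

def grad(v):                      # forward differences, zero at the boundary
    g = np.zeros_like(v)
    g[:-1] = v[1:] - v[:-1]
    return g

def div(g):                       # negative adjoint of grad
    d = np.zeros_like(g)
    d[0] = g[0]
    d[1:] = g[1:] - g[:-1]
    return d

def pam_at(f, beta=10.0, eps=0.05, lam=0.5, n_iter=300, tau=1e-3):
    u, e = f.copy(), np.zeros_like(f)
    for _ in range(n_iter):
        # u-step: explicit on the coupling term, proximal on the data term
        gu = -2.0 * beta * div(((1.0 - e) ** 2) * grad(u))
        u = (u - tau * gu + 2.0 * tau * f) / (1.0 + 2.0 * tau)
        # e-step: explicit on coupling + |e'|^2, proximal on e^2/(4*eps)
        ge = -2.0 * beta * (1.0 - e) * grad(u) ** 2 - 2.0 * lam * eps * div(grad(e))
        e = (e - tau * ge) / (1.0 + tau * lam / (2.0 * eps))
        e = np.clip(e, 0.0, 1.0)
    return u, e

t = np.linspace(0, 1, 200)
f = (t > 0.5).astype(float) + 0.05 * np.random.default_rng(1).normal(size=t.size)
u, e = pam_at(f)                  # u: piecewise-smooth signal, e: close to 1 at the jump
```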

    Accelerated Sparse Recovery via Gradient Descent with Nonlinear Conjugate Gradient Momentum

    Full text link
    This paper applies an idea of adaptive momentum for the nonlinear conjugate gradient method to accelerate optimization problems in sparse recovery. Specifically, we consider two types of minimization problems: a (single) differentiable function and the sum of a non-smooth function and a differentiable function. In the first case, we adopt a fixed step size to avoid the traditional line search and establish the convergence analysis of the proposed algorithm for a quadratic problem. This acceleration is further incorporated with an operator-splitting technique to deal with the non-smooth function in the second case. We use the convex ℓ1 and the nonconvex ℓ1−ℓ2 functionals as two case studies to demonstrate the efficiency of the proposed approaches over traditional methods.
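    The sketch below illustrates the general idea on the ℓ1 case: a proximal-gradient loop for 0.5*||Ax - b||^2 + lam*||x||_1 with a fixed step size, where the extrapolation weight is computed from successive gradients in the spirit of nonlinear conjugate gradient (a Polak–Ribière-type formula is used here as a stand-in; the paper's exact momentum rule may differ).

```python
# Minimal sketch: proximal gradient with a CG-style adaptive momentum weight.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_cg_momentum(A, b, lam=0.1, n_iter=300):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # fixed step, no line search
    x = np.zeros(A.shape[1])
    y = x.copy()
    g_prev = None
    for _ in range(n_iter):
        g = A.T @ (A @ y - b)                       # gradient of the smooth part at y
        x_new = soft_threshold(y - step * g, step * lam)
        if g_prev is not None:
            beta = max(g @ (g - g_prev) / (g_prev @ g_prev + 1e-12), 0.0)  # PR+ weight
            beta = min(beta, 1.0)                   # keep the extrapolation bounded
        else:
            beta = 0.0
        y = x_new + beta * (x_new - x)              # momentum / extrapolation step
        x, g_prev = x_new, g
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=100)
x_hat = prox_grad_cg_momentum(A, b, lam=0.05)       # sparse estimate of x_true
```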

    Generating structured non-smooth priors and associated primal-dual methods

    Get PDF
    The purpose of the present chapter is to bind together and extend some recent developments regarding data-driven non-smooth regularization techniques in image processing by means of a bilevel minimization scheme. The scheme, considered in function space, takes advantage of a dualization framework and is designed to produce spatially varying regularization parameters adapted to the data for well-known regularizers, e.g. Total Variation and Total Generalized Variation, leading to automated (monolithic) image reconstruction workflows. An account of the theory of bilevel optimization and of the theoretical background of the dualization framework, together with a brief review of the aforementioned regularizers and their parameterization, makes this chapter self-contained. Aspects of the numerical implementation of the scheme are discussed and numerical examples are provided.
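    To make the lower-level problem concrete, the sketch below runs a Chambolle–Pock primal-dual loop for weighted-TV denoising, min_u 0.5*||u - f||^2 + sum(alpha*|grad u|), with a pixelwise regularization map alpha. The bilevel scheme that would learn alpha from data is not reproduced; alpha is simply given, and all parameters are illustrative.

```python
# Minimal sketch of primal-dual (Chambolle-Pock) weighted-TV denoising.
import numpy as np

def grad2d(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div2d(px, py):                                   # negative adjoint of grad2d
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:] = px[:, 1:] - px[:, :-1]
    dy[0, :] = py[0, :]; dy[1:, :] = py[1:, :] - py[:-1, :]
    return dx + dy

def weighted_tv_denoise(f, alpha, n_iter=200, tau=0.25, sigma=0.25):
    u = f.copy()
    u_bar = u.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad2d(u_bar)
        px += sigma * gx; py += sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / np.maximum(alpha, 1e-12))
        px /= scale; py /= scale                     # project dual onto {|p| <= alpha}
        u_new = (u + tau * div2d(px, py) + tau * f) / (1.0 + tau)
        u_bar = 2.0 * u_new - u                      # over-relaxation step
        u = u_new
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
f = clean + 0.1 * rng.normal(size=clean.shape)
alpha = np.full_like(f, 0.1)                         # constant here; spatially varying in general
u = weighted_tv_denoise(f, alpha)
```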