    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term that depends on some input data $f$ and measures the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth convex functionals such as the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently this field has revived the interest in techniques like operator splittings or augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications. Comment: 60 pages, 33 figures.
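
    As a minimal illustration of the splitting idea behind such methods (a generic sketch, not any specific algorithm from the survey), the following applies forward-backward splitting (proximal gradient) to the model problem min_u 0.5||Ku - f||^2 + alpha||u||_1; the operator K, the data f, and all parameters are placeholder assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(K, f, alpha, step, n_iter=200):
    """Iterate u <- prox_{step*alpha*||.||_1}(u - step * K^T (K u - f))."""
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)          # gradient of the smooth data fidelity term
        u = soft_threshold(u - step * grad, step * alpha)
    return u

# Hypothetical usage with a random forward operator; the step size must satisfy
# step <= 1 / ||K||^2 for convergence of the iteration.
K = np.random.randn(50, 100)
f = np.random.randn(50)
u_hat = forward_backward(K, f, alpha=0.1, step=1.0 / np.linalg.norm(K, 2) ** 2)
```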

    On the 3D electromagnetic quantitative inverse scattering problem: algorithms and regularization

    In this thesis, 3D quantitative microwave imaging algorithms are developed with emphasis on efficiency of the algorithms and quality of the reconstruction. First, a fast simulation tool has been implemented which makes use of a volume integral equation (VIE) to solve the forward scattering problem. The solution of the resulting linear system is done iteratively. To do this efficiently, two strategies are combined. First, the matrix-vector multiplications needed in every step of the iterative solution are accelerated using a combination of the Fast Fourier Transform (FFT) method and the Multilevel Fast Multipole Algorithm (MLFMA). It is shown that this hybrid MLFMA-FFT method is best suited for large, sparse scattering problems. Secondly, the number of iterations is reduced by using an extrapolation technique to determine suitable initial guesses, which are already close to the solution. This technique combines a marching-on-in-source-position scheme with a linear extrapolation over the permittivity in the form of a Born approximation. It is shown that this forward simulator indeed exhibits improved efficiency. The fast forward simulator is incorporated in an optimization technique which minimizes the discrepancy between measured data and simulated data by adjusting the permittivity profile. A Gauss-Newton optimization method with line search is employed in this dissertation to minimize a least squares data fit cost function with additional regularization. Two different regularization methods were developed in this research. The first regularization method penalizes strong fluctuations in the permittivity by imposing a smoothing constraint, which is a widely used approach in inverse scattering. However, in this thesis, this constraint is incorporated in a multiplicative way instead of in the usual additive way, i.e. its weight in the cost function is reduced as the data fit improves. The second regularization method is Value Picking regularization, a new method proposed in this dissertation. This regularization is designed to reconstruct piecewise homogeneous permittivity profiles. Such profiles are hard to reconstruct since sharp interfaces between different permittivity regions have to be preserved, while other strong fluctuations need to be suppressed. Instead of operating on the spatial distribution of the permittivity, as certain existing methods for edge preservation do, it imposes the restriction that only a few different permittivity values should appear in the reconstruction. These permittivity values do not have to be known in advance, and their number is updated in a stepwise relaxed VP (SRVP) regularization scheme. Both regularization techniques have been incorporated in the Gauss-Newton optimization framework and yield significantly improved reconstruction quality. The efficiency of the minimization algorithm can also be improved. In every step of the iterative optimization, a linear Gauss-Newton update system has to be solved. This typically is a large system and is therefore solved iteratively. However, these systems are ill-conditioned as a result of the ill-posedness of the inverse scattering problem. Fortunately, the aforementioned regularization techniques allow for the use of a subspace preconditioned LSQR method to solve these systems efficiently, as is shown in this thesis.
Finally, the incorporation of constraints on the permittivity through a modified line search path helps to keep the forward problem well-posed and thus keeps the number of forward iterations low. Another contribution of this thesis is the proposal of a new Consistency Inversion (CI) algorithm. It is based on the same principles as another well-known reconstruction algorithm, the Contrast Source Inversion (CSI) method, which considers the contrast currents (equivalent currents that generate a field identical to the scattered field) as fundamental unknowns together with the permittivity. In the CI method, however, the permittivity variables are eliminated from the optimization and are only reconstructed in a final step. This avoids alternating updates of permittivity and contrast currents, which may result in faster convergence. The CI method has also been supplemented with VP regularization, yielding the VPCI method. The quantitative electromagnetic imaging methods developed in this work have been validated on both synthetic and measured data, for both homogeneous and inhomogeneous objects, and yield a high reconstruction quality in all these cases. The successful, completely blind reconstruction of an unknown target from measured data, provided by the Institut Fresnel in Marseille, France, demonstrates at once the validity of the forward scattering code, the performance of the reconstruction algorithm, and the quality of the measurements. The reconstruction of a numerical MRI-based breast phantom is encouraging for the further development of biomedical microwave imaging and of microwave breast cancer screening in particular.
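
    As a schematic illustration of the optimization loop described above (not the thesis code), the following sketch shows a regularized Gauss-Newton update with a backtracking line search; forward(), jacobian(), the regularization operator L, and all parameters are hypothetical placeholders, and in practice the update system would be solved iteratively (e.g. with subspace preconditioned LSQR) rather than by a dense solve.

```python
import numpy as np

def gauss_newton(forward, jacobian, d, eps0, L, lam, n_iter=10):
    """Minimize 0.5*||forward(eps) - d||^2 + 0.5*lam*||L eps||^2 (schematic)."""
    eps = eps0.copy()
    cost = lambda e: (0.5 * np.linalg.norm(forward(e) - d) ** 2
                      + 0.5 * lam * np.linalg.norm(L @ e) ** 2)
    for _ in range(n_iter):
        r = forward(eps) - d                  # data residual
        J = jacobian(eps)                     # sensitivity (Jacobian) matrix
        # Regularized normal equations for the Gauss-Newton step
        A = J.conj().T @ J + lam * (L.T @ L)
        b = -(J.conj().T @ r + lam * (L.T @ (L @ eps)))
        step = np.linalg.solve(A, b)
        # Backtracking line search on the full cost
        t, c0 = 1.0, cost(eps)
        while cost(eps + t * step) > c0 and t > 1e-6:
            t *= 0.5
        eps = eps + t * step
    return eps
```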

    Applications of nonlinear diffusion in image processing and computer vision

    Nonlinear diffusion processes can be found in many recent methods for image processing and computer vision. In this article, four applications are surveyed: nonlinear diffusion filtering, variational image regularization, optic flow estimation, and geodesic active contours. For each of these techniques we explain the main ideas, discuss theoretical properties and present an appropriate numerical scheme. The numerical schemes are based on additive operator splittings (AOS). In contrast to traditional multiplicative splittings such as ADI, LOD or D'yakonov splittings, all axes are treated in the same manner, and additional possibilities for efficient realizations on parallel and distributed architectures appear. Geodesic active contours lead to equations that resemble mean curvature motion. For this application, a novel AOS scheme is presented that uses harmonic averaging and does not require reinitializations of the distance function in each iteration step.
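
    A compact sketch of one AOS step for isotropic nonlinear diffusion is given below; the diffusivity, boundary handling, and the dense 1D solves are simplified assumptions for illustration (an efficient implementation would use tridiagonal solvers, and the active-contour scheme in the paper uses harmonic rather than arithmetic averaging of the diffusivity).

```python
import numpy as np

def diffusivity(u, lam=0.1):
    """Perona-Malik-type diffusivity from the gradient magnitude (placeholder choice)."""
    gy, gx = np.gradient(u)
    return 1.0 / (1.0 + (gx ** 2 + gy ** 2) / lam ** 2)

def solve_1d(line, g_line, tau, m):
    """Solve (I - m*tau*A_l) x = line along one row/column (Neumann boundaries)."""
    n = len(line)
    A = np.zeros((n, n))
    for i in range(n - 1):
        w = 0.5 * (g_line[i] + g_line[i + 1])   # arithmetic averaging of g between neighbours
        A[i, i] -= w; A[i, i + 1] += w
        A[i + 1, i + 1] -= w; A[i + 1, i] += w
    return np.linalg.solve(np.eye(n) - m * tau * A, line)

def aos_step(u, tau):
    """One AOS update: u <- (1/2) * sum over axes of (I - 2*tau*A_l(u))^{-1} u."""
    g = diffusivity(u)
    ux = np.array([solve_1d(u[i, :], g[i, :], tau, 2) for i in range(u.shape[0])])
    uy = np.array([solve_1d(u[:, j], g[:, j], tau, 2) for j in range(u.shape[1])]).T
    return 0.5 * (ux + uy)

# Hypothetical usage on a small random image with a large time step.
u_next = aos_step(np.random.rand(64, 64), tau=5.0)
```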

    Efficient Reconstruction of Piecewise Constant Images Using Nonsmooth Nonconvex Minimization

    We consider the restoration of piecewise constant images where the number of regions and their values are not fixed in advance and where the constant values of neighboring regions differ markedly, from noisy data obtained at the output of a linear operator (e.g., a blurring kernel or a Radon transform). Thus we also address the generic problem of unsupervised segmentation in the context of linear inverse problems. The segmentation and the restoration tasks are solved jointly by minimizing an objective function (an energy) composed of a quadratic data-fidelity term and a nonsmooth nonconvex regularization term. The pertinence of such an energy is ensured by the analytical properties of its minimizers. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonsmooth nonconvex minimization. Indeed, the existing methods are unsatisfactory since they (implicitly or explicitly) involve a smooth approximation of the regularization term and often get stuck in shallow local minima. The goal of this paper is to design a method that efficiently handles the nonsmooth nonconvex minimization. More precisely, we propose a continuation method in which one tracks the minimizers along a sequence of approximate nonsmooth energies {Jε}, the first of which is strictly convex and the last of which is the original energy to minimize. Knowing the importance of the nonsmoothness of the regularization term for the segmentation task, each Jε is nonsmooth and is expressed as the sum of an l1 regularization term and a smooth nonconvex function. Furthermore, the local minimization of each Jε is reformulated as the minimization of a smooth function subject to a set of linear constraints. The latter problem is solved by a modified primal-dual interior point method, which guarantees a descent direction at each step. Experimental results are presented and show the effectiveness and the efficiency of the proposed method. Comparison with simulated annealing methods further shows the advantage of our method.
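
    The outer continuation strategy itself reduces to a short loop; the sketch below only shows this outer structure, with minimize_J standing in as a hypothetical handle to the inner solver (in the paper, a modified primal-dual interior point method applied to the constrained reformulation of each Jε).

```python
import numpy as np

def continuation(minimize_J, u_init, eps_schedule):
    """Track minimizers of J_eps along a decreasing schedule of eps values,
    warm-starting each stage with the minimizer of the previous one."""
    u = u_init
    for eps in eps_schedule:
        u = minimize_J(u, eps)   # inner nonsmooth minimization at this stage
    return u

# Hypothetical usage: 10 stages from a convex surrogate (eps = 1) down to the
# original energy (eps = 0).
# u_hat = continuation(minimize_J, u0, np.linspace(1.0, 0.0, 10))
```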

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
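
    As a generic illustration of the TT format discussed in the monograph (not code from it), the following sketch factors a dense tensor into a tensor train by sequential truncated SVDs; the fixed rank cap is an arbitrary assumption.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Return TT cores G_k of shape (r_{k-1}, n_k, r_k) approximating `tensor`."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor
    for n in dims[:-1]:
        mat = mat.reshape(r_prev * n, -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * Vt[:r, :]        # carry the remainder to the right
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

# Example: compress a random 8x8x8x8 tensor with TT ranks capped at 4.
cores = tt_svd(np.random.randn(8, 8, 8, 8), max_rank=4)
```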

    Numerical solution of saddle point problems


    TV-Stokes And Its Variants For Image Processing

    The total variation minimization with a Stokes constraint, also known as the TV-Stokes model, has been considered one of the most successful models in image processing, especially in image restoration and sparse-data-based 3D surface reconstruction. This thesis studies the TV-Stokes model and its existing variants, and proposes new and more effective variants of the model, together with algorithms for them, applied to some of the most interesting image processing problems. We first review some of the variational models that already exist, in particular the TV-Stokes model and its variants. Common techniques, like the augmented Lagrangian and the dual formulation, are also introduced. We then present our models as new variants of the TV-Stokes. The main focus of the work has been on the sparse reconstruction of 3D surfaces. A model (WTR) with a vector fidelity, namely a gradient vector fidelity, has been proposed and applied to both 3D cartoon design and height map reconstruction. The model employs second-order total variation minimization, where the curl-free condition is satisfied automatically. Because the model couples both the height and the gradient vector representing the surface in the same minimization, it reconstructs the surface correctly. A variant of this model is then introduced which includes a vector matching term. This matching term gives the model the capability to accurately represent the shape of a geometry in the reconstruction. Experiments show a significant improvement over state-of-the-art models, such as the TV model, higher-order TV models, and the anisotropic third-order regularization model, when applied to some general applications. In another work, the thesis generalizes the TV-Stokes model from two dimensions to an arbitrary number of dimensions, introducing a convenient form of the constraint so that it can be extended to higher dimensions. The thesis also explores the idea of feature accumulation through iterative regularization, introducing a Richardson-like iteration for the TV-Stokes. This is then followed by a more general, combined model based on the modified variant of the TV-Stokes. The resulting model is found to be equivalent to the well-known TGV model. The thesis introduces some interesting numerical strategies for the solution of the TV-Stokes model and its variants. Higher-order PDEs are turned into inhomogeneous modified Helmholtz equations through transformations. These equations are then solved using the preconditioned conjugate gradient method or the fast Fourier transform. The thesis proposes a simple but quite general approach to finding closed-form solutions to a general L1 minimization problem, and applies it to design algorithms for our models.
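
    A small sketch of the FFT route mentioned above: an inhomogeneous modified Helmholtz equation (alpha - Laplacian) u = f can be solved in one pass by diagonalizing the periodic discrete Laplacian with the 2D FFT. The periodic boundary conditions and the unit grid spacing are simplified assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def modified_helmholtz_fft(f, alpha):
    """Solve (alpha - Laplacian) u = f on a periodic grid via the 2D FFT."""
    ny, nx = f.shape
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    KX, KY = np.meshgrid(kx, ky)
    # Symbol of the 5-point discrete Laplacian: 2(cos kx - 1) + 2(cos ky - 1) <= 0
    lap_symbol = 2 * (np.cos(KX) - 1) + 2 * (np.cos(KY) - 1)
    u_hat = np.fft.fft2(f) / (alpha - lap_symbol)   # alpha > 0 keeps this well defined
    return np.real(np.fft.ifft2(u_hat))

# Example: apply the solver to a random right-hand side with alpha = 0.5.
u = modified_helmholtz_fft(np.random.randn(64, 64), alpha=0.5)
```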

    Compressed Optical Imaging

    We address the resolution of inverse problems where visual data must be recovered from incomplete information optically acquired in the spatial domain. The optical acquisition models that are involved share a common mathematical structure consisting of a linear operator followed by optional pointwise nonlinearities. The linear operator generally includes lowpass filtering effects and, in some cases, downsampling. Both tend to make the problems ill-posed. Our general resolution strategy is to rely on variational principles, which allows for a tight control on the objective or perceptual quality of the reconstructed data. The three related problems that we investigate and propose to solve are:
    1. The reconstruction of images from sparse samples. Following a non-ideal acquisition framework, the measurements take the form of spatial-domain samples whose locations are specified a priori. The reconstruction algorithm that we propose is linked to PDE flows with tensor-valued diffusivities. We demonstrate through several experiments that our approach preserves finer visual features than standard interpolation techniques do, especially at very low sampling rates.
    2. The reconstruction of images from binary measurements. The acquisition model that we consider relies on optical principles and fits in a compressed-sensing framework. We develop a reconstruction algorithm that allows us to recover grayscale images from the available binary data. It substantially improves upon the state of the art in terms of quality and computational performance. Our overall approach is physically relevant; moreover, it can handle large amounts of data efficiently.
    3. The reconstruction of phase and amplitude profiles from single digital holographic acquisitions. Unlike conventional approaches that are based on demodulation, our iterative reconstruction method is able to accurately recover the original object from a single downsampled intensity hologram, as shown in simulated and real measurement settings. It also consistently outperforms the state of the art in terms of signal-to-noise ratio and with respect to the size of the field of view.
    The common goal of the proposed reconstruction methods is to yield an accurate estimate of the original data from all available measurements. In accordance with the forward model, they are typically capable of handling samples that are sparse in the spatial domain and/or distorted due to pointwise nonlinear effects, as demonstrated in our experiments.
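
    As a toy illustration of item 1 above, the sketch below fills in missing pixels by simple isotropic diffusion inpainting while clamping the known samples; the thesis method uses tensor-valued (anisotropic) diffusivities, so this isotropic version is only a simplified stand-in with placeholder parameters.

```python
import numpy as np

def inpaint_from_samples(samples, mask, n_iter=500):
    """Reconstruct an image from sparse samples; `mask` is True at known pixels."""
    u = np.where(mask, samples, samples[mask].mean())    # rough constant initialization
    for _ in range(n_iter):
        # Explicit isotropic diffusion step (5-point Laplacian, edge-replicated borders)
        up = np.pad(u, 1, mode='edge')
        lap = up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u
        u = u + 0.2 * lap
        u[mask] = samples[mask]                          # re-impose the known samples
    return u
```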