
    Space adaptive and hierarchical Bayesian variational models for image restoration

    The main contribution of this thesis is the proposal of novel space-variant regularization or penalty terms motivated by a strong statistical rationale. In light of the connection between the classical variational framework and the Bayesian formulation, we will focus on the design of highly flexible priors characterized by a large number of unknown parameters. The latter will be automatically estimated by setting up a hierarchical modeling framework, i.e. by introducing informative or non-informative hyperpriors depending on the information available on the parameters. More specifically, in the first part of the thesis we will focus on the restoration of natural images, introducing highly parametrized distributions to model the local behavior of the gradients in the image. The resulting regularizers hold the potential to adapt to the local smoothness, directionality and sparsity in the data. The estimation of the unknown parameters will be addressed by means of non-informative hyperpriors, namely uniform distributions over the parameter domain, thus leading to the classical Maximum Likelihood approach. In the second part of the thesis, we will address the problem of designing suitable penalty terms for the recovery of sparse signals. The space-variance in the proposed penalties, corresponding to a family of informative hyperpriors, namely generalized gamma hyperpriors, will follow directly from the assumption that the components of the signal are independent. The study of the properties of the resulting energy functionals will thus lead to the introduction of two hybrid algorithms, aimed at combining the strong sparsity promotion characteristic of non-convex penalty terms with the desirable guarantees of convex optimization.
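    As a concrete illustration of the hierarchical approach sketched above, the following is a minimal sketch of sparse recovery with component-wise Gaussian priors and gamma hyperpriors, alternating a weighted-Tikhonov update of the signal with a closed-form update of the per-component variances. It is a generic iterative alternating scheme under illustrative parameter choices, not the hybrid algorithms proposed in the thesis.

```python
import numpy as np

def hierarchical_sparse_recovery(A, b, theta_star=1e-2, beta=1.6,
                                 sigma=1e-2, n_iter=50):
    """Alternating MAP estimation for x ~ N(0, diag(theta)) with
    gamma hyperpriors theta_i ~ Gamma(beta, theta_star).
    Illustrative sketch; all parameter values are placeholders."""
    m, n = A.shape
    theta = np.full(n, theta_star)   # per-component prior variances
    eta = beta - 1.5                 # exponent inherited from the hyperprior
    for _ in range(n_iter):
        # x-update: minimize ||A x - b||^2 / sigma^2 + sum_i x_i^2 / theta_i
        # via the change of variables x = sqrt(theta) * w
        D = np.sqrt(theta)
        w = np.linalg.lstsq(
            np.vstack([(A * D) / sigma, np.eye(n)]),
            np.concatenate([b / sigma, np.zeros(n)]),
            rcond=None)[0]
        x = D * w
        # theta-update: closed-form stationary point of the MAP objective
        theta = 0.5 * theta_star * (eta + np.sqrt(eta**2 + 2 * x**2 / theta_star))
    return x

# toy usage: recover a 3-sparse signal from 40 noisy linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = hierarchical_sparse_recovery(A, b)
```

    Small values of beta drive the variances, and hence the corresponding components, toward zero, which is how the hyperprior promotes sparsity.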

    TV-Stokes And Its Variants For Image Processing

    Total variation minimization with a Stokes constraint, also known as the TV-Stokes model, is considered one of the most successful models in image processing, especially in image restoration and sparse-data-based 3D surface reconstruction. This thesis studies the TV-Stokes model and its existing variants, and proposes new, more effective variants of the model, together with algorithms for them, applied to some of the most interesting image processing problems. We first review some of the variational models that already exist, in particular the TV-Stokes model and its variants. Common techniques such as the augmented Lagrangian and the dual formulation are also introduced. We then present our models as new variants of the TV-Stokes. The main focus of the work has been on the sparse reconstruction of 3D surfaces. A model (WTR) with a vector fidelity term, namely a gradient-vector fidelity, is proposed and applied to both 3D cartoon design and height-map reconstruction. The model employs second-order total variation minimization, where the curl-free condition is satisfied automatically. Because the model couples the height and the gradient vector representing the surface in the same minimization, it reconstructs the surface correctly. A variant of this model is then introduced which includes a vector matching term. This matching term gives the model the capability to represent the shape of a geometry accurately in the reconstruction. Experiments show a significant improvement over state-of-the-art models, such as the TV model, higher-order TV models, and the anisotropic third-order regularization model, when applied to some general applications. In another work, the thesis generalizes the TV-Stokes model from two dimensions to an arbitrary number of dimensions, introducing a convenient form for the constraint so that it extends to higher dimensions. The thesis also explores the idea of feature accumulation through iterative regularization, introducing a Richardson-like iteration for the TV-Stokes. This is then followed by a more general, combined model based on the modified variant of the TV-Stokes; the resulting model is found to be equivalent to the well-known TGV model. The thesis introduces some interesting numerical strategies for the solution of the TV-Stokes model and its variants. Higher-order PDEs are turned into inhomogeneous modified Helmholtz equations through transformations; these equations are then solved using the preconditioned conjugate gradient method or the fast Fourier transform. The thesis also proposes a simple but quite general approach to finding closed-form solutions of a general L1 minimization problem, and applies it to design algorithms for our models.
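    The abstract mentions transforming higher-order PDEs into inhomogeneous modified Helmholtz equations and solving them with the FFT. The following is a minimal sketch of such an FFT solver, assuming a periodic grid and the standard 5-point discrete Laplacian; the transformations from the TV-Stokes subproblems to this form are not reproduced here.

```python
import numpy as np

def solve_modified_helmholtz_fft(f, alpha, h=1.0):
    """Solve (alpha*I - Laplacian) u = f on a periodic ny-by-nx grid by
    diagonalizing the 5-point discrete Laplacian with the 2D FFT.
    alpha > 0 keeps the operator invertible."""
    ny, nx = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=h)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=h)
    KX, KY = np.meshgrid(kx, ky)
    # eigenvalues (symbol) of the 5-point Laplacian on a periodic grid
    lap_symbol = ((2 * np.cos(KX * h) - 2) + (2 * np.cos(KY * h) - 2)) / h**2
    u_hat = np.fft.fft2(f) / (alpha - lap_symbol)
    return np.real(np.fft.ifft2(u_hat))

# usage: since lap_symbol <= 0, the denominator is bounded below by alpha
f = np.random.default_rng(1).standard_normal((64, 64))
u = solve_modified_helmholtz_fft(f, alpha=10.0)
```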
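    Likewise, the closed-form solution of the simplest scalar L1 minimization problem is the classic soft-thresholding (shrinkage) operator; the thesis proposes a more general recipe, of which this is the textbook special case.

```python
import numpy as np

def soft_threshold(b, lam):
    """Closed-form minimizer of lam * |x| + 0.5 * (x - b)^2,
    applied element-wise to an array b."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)
```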

    Robust inversion and detection techniques for improved imaging performance

    Thesis (Ph.D.)--Boston University
    In this thesis we aim to improve the performance of information extraction from imaging systems through three thrusts. First, we develop improved image formation methods for physics-based, complex-valued sensing problems. We propose a regularized inversion method that incorporates prior information about the underlying field into the inversion framework for ultrasound imaging. We use experimental ultrasound data to compute inversion results with the proposed formulation and compare them with conventional inversion techniques to show the robustness of the proposed technique to loss of data. Second, we propose methods that combine inversion and detection in a unified framework to improve imaging performance. This framework is applicable to cases where the underlying field is label-based, such that each pixel of the underlying field can only assume values from a discrete, limited set. We consider this unified framework in the context of combinatorial optimization and propose graph-cut-based methods that directly produce label-based images, thereby eliminating the need for a separate detection step. Finally, we propose a robust method for object detection from microscopic nanoparticle images. In particular, we focus on a portable, low-cost interferometric imaging platform and propose robust detection algorithms using tools from computer vision. We model the electromagnetic image formation process and use this model to create an enhanced detection technique. The effectiveness of the proposed technique is demonstrated using manually labeled ground-truth data. In addition, we extend these tools to develop a detection-based autofocusing algorithm tailored for the high-numerical-aperture interferometric microscope.
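    As a minimal sketch of regularized inversion for a complex-valued, physics-based sensing model (not the thesis' exact formulation), the following computes a Tikhonov-regularized least-squares estimate; the forward matrix A would come from the ultrasound physics, which is not modeled here.

```python
import numpy as np

def tikhonov_inversion(A, y, lam):
    """Solve f = argmin ||A f - y||^2 + lam * ||f||^2 for complex-valued
    A and y via the normal equations; lam > 0 stabilizes the inversion
    when data are noisy or partially lost."""
    n = A.shape[1]
    AH = A.conj().T  # Hermitian transpose for complex-valued data
    return np.linalg.solve(AH @ A + lam * np.eye(n), AH @ y)
```

    Stronger priors on the field (sparsity, smoothness) replace the plain ||f||^2 term in more refined formulations such as the one proposed in the thesis.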

    Advanced data analysis for traction force microscopy and data-driven discovery of physical equations

    The plummeting cost of collecting and storing data and the increasingly available computational power of the last decade have led to the emergence of new data analysis approaches in various scientific fields. Frequently, the new statistical methodology is employed for analyzing data involving incomplete or unknown information. In this thesis, new statistical approaches are developed for improving the accuracy of traction force microscopy (TFM) and for data-driven discovery of physical equations. TFM is a versatile method for the reconstruction of a spatial image of the traction forces exerted by cells on elastic gel substrates. The traction force field is calculated from a linear mechanical model connecting the measured substrate displacements with the sought-for cell-generated stresses in real or Fourier space, which is an inverse and ill-posed problem. This inverse problem is commonly solved using regularization methods. Here, we systematically test the performance of new regularization methods and of Bayesian inference for quantifying parameter uncertainty in TFM. We compare two classical schemes, L1- and L2-regularization, with three previously untested schemes, namely Elastic Net regularization, Proximal Gradient Lasso, and Proximal Gradient Elastic Net. We find that Elastic Net regularization, which combines L1 and L2 regularization, outperforms all other methods with regard to accuracy of traction reconstruction. Next, we develop two methods, Bayesian L2 regularization and Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization. We further combine Bayesian L2 regularization with the computational speed of fast Fourier transform algorithms to develop a fully automated method for noise reduction and robust, standardized traction-force reconstruction that we call Bayesian Fourier transform traction cytometry (BFTTC). This method is made freely available as a software package with a graphical user interface for intuitive usage. Using synthetic and experimental data, we show that these Bayesian methods enable robust reconstruction of traction without requiring a difficult selection of regularization parameters for each data set. Next, we employ our methodology for the solution of inverse problems for automated, data-driven discovery of ordinary differential equations (ODEs), partial differential equations (PDEs), and stochastic differential equations (SDEs). To find the equations governing a measured time-dependent process, we construct dictionaries of non-linear candidate equations, which are evaluated using the measured data. With this approach, one can construct a likelihood function for the candidate equations; optimization yields a linear inverse problem which is to be solved under a sparsity constraint. We combine Bayesian compressive sensing using Laplace priors with automated thresholding to develop a new approach, namely automatic threshold sparse Bayesian learning (ATSBL). ATSBL is a robust method for identifying ODEs, PDEs, and SDEs involving Gaussian noise, also referred to as type I noise. We extensively test the method with synthetic datasets describing physical processes. For SDEs, we combine data-driven inference using ATSBL with a novel entropy-based heuristic for discarding data points with high uncertainty. Finally, we develop an automatic iterative sampling optimization technique akin to umbrella sampling, with which we demonstrate that data-driven inference of SDEs can be substantially improved through feedback during the inference process if the stochastic process under investigation can be manipulated either experimentally or in simulations.
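    For the regularization comparison described above, a minimal sketch of Elastic Net-regularized traction reconstruction can be written with scikit-learn, assuming a linear forward model u = G t linking flattened substrate displacements u to tractions t; constructing G from the elastic Green's function is outside the scope of this sketch, and the penalty weights are illustrative.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def elastic_net_tractions(G, u, alpha=1e-3, l1_ratio=0.5):
    """Estimate tractions t from displacements u = G @ t with a combined
    L1/L2 penalty; l1_ratio blends sparsity (L1) against smoothness (L2).
    alpha and l1_ratio would need tuning for a given data set."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                       fit_intercept=False, max_iter=5000)
    model.fit(G, u)
    return model.coef_
```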
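    For the dictionary-based equation discovery described above, a simple stand-in for the sparsity-constrained regression is sequentially thresholded least squares (STLSQ); the thesis' ATSBL is a Bayesian method with automated thresholding, which this sketch does not reproduce.

```python
import numpy as np

def stlsq(Theta, dX, lam=0.1, n_iter=10):
    """Sparse regression Theta @ xi ~ dX by alternating least squares
    with hard thresholding of coefficients smaller than lam.
    Theta: (m, p) dictionary of candidate terms evaluated on the data.
    dX: (m,) measured time derivatives."""
    xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < lam   # prune weak candidate terms
        xi[small] = 0.0
        keep = ~small
        if keep.any():             # refit on the surviving terms
            xi[keep] = np.linalg.lstsq(Theta[:, keep], dX, rcond=None)[0]
    return xi

# toy usage: recover dx/dt = -2 x + 0.5 x^3 from noisy samples
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 200)
dx = -2 * x + 0.5 * x**3 + 0.01 * rng.standard_normal(200)
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate library
print(stlsq(Theta, dx))  # approximately [0, -2, 0, 0.5]
```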