
    Robust inversion and detection techniques for improved imaging performance

    Thesis (Ph.D.)--Boston University. In this thesis we aim to improve the performance of information extraction from imaging systems through three thrusts. First, we develop improved image formation methods for physics-based, complex-valued sensing problems. We propose a regularized inversion method that incorporates prior information about the underlying field into the inversion framework for ultrasound imaging. We use experimental ultrasound data to compute inversion results with the proposed formulation and compare them with conventional inversion techniques to show the robustness of the proposed technique to loss of data. Second, we propose methods that combine inversion and detection in a unified framework to improve imaging performance. This framework is applicable where the underlying field is label-based, i.e., each pixel of the underlying field can only assume values from a discrete, limited set. We cast this unified framework as a combinatorial optimization problem and propose graph-cut-based methods that directly produce label-based images, thereby eliminating the need for a separate detection step. Finally, we propose a robust method of object detection from microscopic nanoparticle images. In particular, we focus on a portable, low-cost interferometric imaging platform and propose robust detection algorithms using tools from computer vision. We model the electromagnetic image formation process and use this model to create an enhanced detection technique. The effectiveness of the proposed technique is demonstrated using manually labeled ground-truth data. In addition, we extend these tools to develop a detection-based autofocusing algorithm tailored for the high-numerical-aperture interferometric microscope.
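The regularized inversion idea in the first thrust can be illustrated with a generic linear forward model. Everything below is an assumption for illustration only: the random operator A, the problem sizes, and a simple Tikhonov (l2) penalty standing in for the thesis's field prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model y = A x + noise, with m < n mimicking loss of
# measurement data (the thesis uses a physics-based ultrasound operator;
# the random A here is purely illustrative).
n, m = 50, 30
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[::5] = 1.0                   # a simple structured "underlying field"
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Regularized inversion: argmin_x ||A x - y||^2 + lam * ||x||^2,
# solved in closed form via the normal equations.
lam = 0.1
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(np.linalg.norm(A @ x_reg - y))   # small data residual despite m < n
```

Replacing the l2 term with a prior tailored to the field's structure changes only the penalty in the objective; the closed-form solve then generally gives way to an iterative scheme.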

    Knowledge-Guided Bayesian Support Vector Machine Methods For High-Dimensional Data

    The support vector machine (SVM) is a popular classification method for the analysis of high-dimensional data such as genomics data. Recently, new SVM methods have been developed to achieve variable selection through either frequentist regularization or Bayesian shrinkage. The Bayesian framework provides a probabilistic interpretation for SVM and allows direct uncertainty quantification. In this dissertation, we develop four knowledge-guided SVM methods for the analysis of high-dimensional data. In Chapter 1, I first review the theory of SVM and existing methods for incorporating prior knowledge, represented by graphs, into SVM. Second, I review the terminology of variable selection and the limitations of existing methods for SVM variable selection. Last, I introduce some Bayesian variable selection techniques as well as Markov chain Monte Carlo (MCMC) algorithms. In Chapter 2, we develop a new Bayesian SVM method that enables variable selection guided by structural information among predictors, e.g., biological pathways among genes. This method uses a spike-and-slab prior for feature selection combined with an Ising prior for incorporating structural information. The performance of the proposed method is evaluated in comparison with existing SVM methods in terms of prediction and feature selection in extensive simulations. Furthermore, the proposed method is illustrated in an analysis of genomic data from a cancer study, demonstrating its advantage in generating biologically meaningful results and identifying potentially important features. The model developed in Chapter 2 may suffer from the issue of phase transition (Li, 2010) when the number of variables becomes extremely large. In Chapter 3, we propose another Bayesian SVM method that assigns an adaptive structured shrinkage prior to the coefficients; the graph information is incorporated via the hyper-priors imposed on the precision matrix of the log-transformed shrinkage parameters.
This method is shown to outperform the method of Chapter 2 in both simulations and real data analysis. In Chapter 4, to relax the linearity assumption of Chapters 2 and 3, we develop a novel knowledge-guided Bayesian non-linear SVM. The proposed method uses a diagonal matrix, with ones representing selected features and zeros representing unselected features, combined with the Ising prior to perform feature selection. The performance of our method is evaluated and compared with several penalized linear SVM methods and the standard kernel SVM method in terms of prediction and feature selection in extensive simulation settings. Also, analyses of genomic data from a cancer study show that our method yields a more accurate prediction model for patient survival and reveals biologically more meaningful results than the existing methods. In Chapter 5, we extend the work of Chapter 4 and use a joint model to identify the relevant features and learn the structural information among them simultaneously. This model does not require that the structural information among the predictors be known, which is more powerful when the prior knowledge about pathways is limited or inaccurate. We demonstrate in simulations that our method outperforms the method developed in Chapter 4 when the prior knowledge is partially true or inaccurate, and illustrate our proposed model with an application to a glioblastoma data set. In Chapter 6, we propose future work, including extending our methods to more general types of outcomes such as categorical or continuous variables.
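The two priors named in the abstract, spike-and-slab for selection and Ising for graph structure, can be sketched as log-density evaluations. The toy graph, the hyperparameter values, and all variable names below are illustrative assumptions, not the dissertation's notation.

```python
import numpy as np

# gamma[j] = 1 means feature j is selected; edges encode a toy "pathway"
# graph linking related features (purely illustrative).
edges = [(0, 1), (1, 2), (2, 3)]
a, b = -1.0, 0.5            # assumed Ising sparsity / smoothness parameters

def ising_log_prior(gamma):
    """log p(gamma) up to a constant: sparsity term plus reward for
    neighbouring features agreeing on selection status."""
    pair = sum(1.0 if gamma[i] == gamma[j] else -1.0 for i, j in edges)
    return a * np.sum(gamma) + b * pair

def spike_slab_log_prior(beta, gamma, slab_sd=1.0, spike_sd=1e-3):
    """Each coefficient is drawn from a wide 'slab' normal if selected,
    and a narrow 'spike' near zero if not."""
    sd = np.where(gamma == 1, slab_sd, spike_sd)
    return np.sum(-0.5 * (beta / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi)))

gamma = np.array([1, 1, 0, 0])
beta = np.array([0.8, -0.5, 0.0, 0.0])
print(ising_log_prior(gamma) + spike_slab_log_prior(beta, gamma))
```

In an MCMC sampler, terms like these would be combined with the SVM pseudo-likelihood to form the posterior that drives the feature-selection updates.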

    Optimization for Image Segmentation

    Image segmentation, i.e., assigning each pixel a discrete label, is an essential task in computer vision with many applications. Major techniques for segmentation include Markov Random Fields (MRF), Kernel Clustering (KC), and the now-popular Convolutional Neural Networks (CNN). In this work, we focus on optimization for image segmentation. MRF, KC, and CNN techniques optimize MRF energies, KC criteria, or CNN losses respectively, and the corresponding optimization problems are very different. We are interested in the synergy and complementary benefits of MRF, KC, and CNN for interactive segmentation and semantic segmentation. Our first contribution is pseudo-bound optimization for binary MRF energies that are high-order or non-submodular. Second, we propose Kernel Cut, a novel formulation for segmentation that combines MRF regularization with Kernel Clustering; we show why to combine KC with MRF and how to optimize the joint objective. In the third part, we discuss how deep CNN segmentation can benefit from non-deep (i.e., shallow) methods like MRF and KC. In particular, we propose regularized losses for weakly supervised CNN segmentation, in which MRF energies or KC criteria can be integrated as part of the loss. Minimization of regularized losses is, in general, a principled approach to semi-supervised learning. Our regularized-loss method is very simple and admits different kinds of regularization losses for CNN segmentation. We also study the optimization of regularized losses beyond gradient descent. Our regularized-loss approach achieves state-of-the-art accuracy in semantic segmentation, approaching the quality of full supervision.
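A regularized loss of the kind described above can be sketched as partial cross-entropy over the few supervised pixels plus a Potts-like pairwise term over all predictions. The 1-D "image", the function names, and the weight lam are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def regularized_loss(logits, labels, mask, lam=0.5):
    """logits: (N, K) network outputs; labels: (N,) ints;
    mask: (N,) bool, True where a scribble/seed label exists."""
    p = softmax(logits)
    # Partial cross-entropy: only the supervised pixels contribute.
    ce = -np.mean(np.log(p[mask, labels[mask]] + 1e-12))
    # Potts-style regularizer on neighbouring pixels (a 1-D chain here):
    # expected disagreement = 1 - sum_k p_i[k] * p_{i+1}[k].
    potts = np.mean(1.0 - np.sum(p[:-1] * p[1:], axis=-1))
    return ce + lam * potts

logits = np.array([[4.0, 0.0], [3.0, 0.0], [0.0, 3.0], [0.0, 4.0]])
labels = np.array([0, 0, 1, 1])
mask = np.array([True, False, False, True])   # scribbles on the ends only
print(regularized_loss(logits, labels, mask))
```

Because the regularizer is a differentiable function of the softmax outputs, it can be minimized by ordinary backpropagation alongside the partial cross-entropy; spatially smooth predictions receive a lower loss than noisy ones.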

    Optimization of Markov Random Fields in Computer Vision

    A large variety of computer vision tasks can be formulated using Markov Random Fields (MRF). Except in certain special cases, optimizing an MRF is intractable, due to the large number of variables and the complex dependencies between them. In this thesis, we present new algorithms for inference in MRFs that are either more efficient (in terms of running time and/or memory usage) or more effective (in terms of solution quality) than the state-of-the-art methods. First, we introduce a memory-efficient max-flow algorithm for multi-label submodular MRFs. Such MRFs have been shown to be optimally solvable using max-flow based on an encoding of the labels proposed by Ishikawa, in which each variable X_i is represented by ℓ nodes (where ℓ is the number of labels) arranged in a column. However, this method in general requires 2ℓ² edges for each pair of neighbouring variables, which makes it inapplicable to realistic problems with many variables and labels due to the excessive memory requirement. By contrast, our max-flow algorithm stores only 2ℓ values per variable pair, requiring much less storage. Consequently, our algorithm makes it possible to optimally solve multi-label submodular problems involving large numbers of variables and labels on a standard computer. Next, we present a move-making style algorithm for multi-label MRFs with robust non-convex priors. In particular, our algorithm iteratively approximates the original MRF energy with an appropriately weighted surrogate energy that is easier to minimize, and it guarantees that the original energy decreases at each iteration. To this end, we consider the scenario where the weighted surrogate energy is multi-label submodular (i.e., it can be optimally minimized by max-flow), and show that our algorithm then lets us handle a large variety of non-convex priors.
Finally, we consider the fully connected Conditional Random Field (dense CRF) with Gaussian pairwise potentials, which has proven popular and effective for multi-class semantic segmentation. While the energy of a dense CRF can be minimized accurately using a Linear Programming (LP) relaxation, the state-of-the-art algorithm is too slow to be useful in practice. To alleviate this deficiency, we introduce an efficient LP minimization algorithm for dense CRFs. To this end, we develop a proximal minimization framework in which the dual of each proximal problem is optimized via block-coordinate descent. We show that each block of variables can be optimized in time linear in the number of pixels and labels. Consequently, our algorithm enables efficient and effective optimization of dense CRFs with Gaussian pairwise potentials. We evaluated all our algorithms on standard energy minimization datasets consisting of computer vision problems, such as stereo, inpainting, and semantic segmentation. The experiments at the end of each chapter provide compelling evidence that all our approaches are either more efficient or more effective than the existing baselines.
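The memory argument behind the 2ℓ²-edge versus 2ℓ-value comparison can be made concrete with back-of-the-envelope arithmetic; the image size, label count, and bytes-per-value below are illustrative assumptions.

```python
# Neighbour pairs in a 4-connected VGA image: roughly one horizontal and
# one vertical pair per pixel.
num_pairs = 2 * 640 * 480
ell = 256                   # labels, e.g. intensity or disparity levels
bytes_per_value = 4         # one stored capacity/value per edge

ishikawa_edges = 2 * ell**2 * num_pairs   # 2*l^2 edges per neighbour pair
efficient_values = 2 * ell * num_pairs    # 2*l values per neighbour pair

print(ishikawa_edges * bytes_per_value / 1e9)     # hundreds of GB
print(efficient_values * bytes_per_value / 1e9)   # about 1 GB
```

The ratio between the two is exactly ℓ, which is why the standard Ishikawa construction becomes infeasible on a standard computer once the label count grows, while a scheme storing only 2ℓ values per pair remains practical.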

    Signal Reconstruction from Incomplete Measurements with Application to Accelerating Magnetic Resonance Image Reconstruction Algorithms

    In this dissertation, the problem of reconstructing images from undersampled measurements is considered, which has a direct application in the formation of magnetic resonance images. The research proposes new regularization-based methods for image reconstruction, built on statistical Markov random field models and the theory of compressive sensing. From the proposed signal model, which follows the statistics of images, new regularization functions are defined and four methods for the reconstruction of magnetic resonance images are derived.

    Regularizers for Vector-Valued Data and Labeling Problems in Image Processing

    A review of recent developments in total variation-based regularizers is given, with emphasis on vector-valued data. These have proven useful for restoring or enhancing data with multiple channels, and find particular use in relaxation techniques for labeling problems on continuous domains. The possible regularizers and their properties are considered in a unified framework.
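One common member of the family of vector-valued regularizers surveyed above couples the channels through the Frobenius norm of the per-pixel Jacobian; the discretization below (forward differences on a toy RGB image) is a minimal illustrative sketch, not any specific paper's definition.

```python
import numpy as np

def vectorial_tv(u):
    """u: (H, W, C) multi-channel image. Returns sum over pixels of the
    Frobenius norm of the stacked finite-difference Jacobian, which
    couples all channels in a single per-pixel magnitude."""
    dx = np.diff(u, axis=1, prepend=u[:, :1])   # horizontal differences
    dy = np.diff(u, axis=0, prepend=u[:1, :])   # vertical differences
    return np.sum(np.sqrt(np.sum(dx**2 + dy**2, axis=-1)))

flat = np.ones((8, 8, 3))       # constant image: zero total variation
edge = flat.copy()
edge[:, 4:] = 0.0               # one sharp colour edge
print(vectorial_tv(flat), vectorial_tv(edge))
```

Because all channels enter one square root, an edge shared by every channel is penalized less than the same jumps occurring at different locations, which is precisely the channel-coupling behaviour that makes such regularizers attractive for multi-channel restoration.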