
    Bayesian orthogonal component analysis for sparse representation

    This paper addresses the problem of identifying a lower-dimensional space in which observed data can be sparsely represented. This under-complete dictionary learning task can be formulated as a blind separation problem of sparse sources linearly mixed with an unknown orthogonal mixing matrix, and it is addressed here in a Bayesian framework. First, the unknown sparse sources are modeled as Bernoulli-Gaussian processes: to promote sparsity, a weighted mixture of an atom at zero and a Gaussian distribution is proposed as the prior distribution for the unobserved sources. A non-informative prior distribution defined on an appropriate Stiefel manifold is elected for the mixing matrix. Bayesian inference on the unknown parameters is conducted using a Markov chain Monte Carlo (MCMC) method: a partially collapsed Gibbs sampler is designed to generate samples asymptotically distributed according to the joint posterior distribution of the unknown model parameters and hyperparameters. These samples are then used to approximate the joint maximum a posteriori estimator of the sources and mixing matrix. Simulations conducted on synthetic data are reported to illustrate the performance of the method for recovering sparse representations. An application to sparse coding on an under-complete dictionary is finally investigated.
    Comment: Revised version. Accepted to IEEE Trans. Signal Processing.
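    As a concrete illustration of the observation model described above, the sketch below generates Bernoulli-Gaussian (spike-and-slab) sources, mixes them with a column-orthonormal matrix drawn via a QR decomposition, and adds Gaussian noise. The dimensions, variable names, and noise level are illustrative assumptions rather than values from the paper, and the inference step (the partially collapsed Gibbs sampler) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: n_obs-dimensional observations, n_src sparse sources,
# n_samples time samples (illustrative values, not from the paper).
n_obs, n_src, n_samples = 16, 8, 200
sparsity = 0.15      # Bernoulli weight of the non-zero (Gaussian) component
sigma_src = 1.0      # standard deviation of the Gaussian "slab"
sigma_noise = 0.05   # observation noise standard deviation

# Bernoulli-Gaussian sources: each entry is exactly zero with probability
# 1 - sparsity, otherwise drawn from N(0, sigma_src^2).
support = rng.random((n_src, n_samples)) < sparsity
sources = support * rng.normal(0.0, sigma_src, (n_src, n_samples))

# Column-orthonormal mixing matrix obtained from the QR decomposition of a
# Gaussian matrix (a point on the corresponding Stiefel manifold).
mixing, _ = np.linalg.qr(rng.normal(size=(n_obs, n_src)))  # mixing.T @ mixing == I

# Linear observation model: Y = M S + noise.
observations = mixing @ sources + rng.normal(0.0, sigma_noise, (n_obs, n_samples))
print(observations.shape, np.count_nonzero(sources) / sources.size)
```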

    Model-based learning of local image features for unsupervised texture segmentation

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features, or the design of new ones, can be a tedious task, so it is desirable to adapt features automatically to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground-truth segmentations. In this work, we propose a framework to learn features for texture segmentation when no such training data are available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model: the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm that first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
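    The piecewise constant Mumford-Shah idea mentioned above can be sketched with a simple proxy cost: a convolutional feature image is scored by a truncated-quadratic penalty on its finite differences, which is small when the feature image is nearly piecewise constant with a small jump set. This is a hedged illustration of the principle, not the authors' exact objective or learning algorithm; the kernels, the threshold gamma, and the toy image are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def pc_mumford_shah_proxy(image, kernel, gamma=0.1):
    """Proxy for a piecewise-constant Mumford-Shah-style cost of a
    convolutional feature image (a sketch of the idea, not the paper's
    objective): the truncated quadratic penalises smooth variation but caps
    the cost of genuine jumps at gamma, so nearly piecewise constant feature
    images with few jumps score low."""
    feature = convolve2d(image, kernel, mode="same", boundary="symm")
    dx = np.diff(feature, axis=1) ** 2
    dy = np.diff(feature, axis=0) ** 2
    return np.minimum(dx, gamma).sum() + np.minimum(dy, gamma).sum()

# Toy comparison on a synthetic two-region image: a smoothing (box) kernel
# yields a more piecewise constant feature image than the identity kernel,
# so it scores lower.  A learning stage would search over kernel parameters.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0, 1, (64, 32)), rng.normal(3, 1, (64, 32))], axis=1)
box = np.full((5, 5), 1 / 25.0)
identity = np.zeros((5, 5)); identity[2, 2] = 1.0
print(pc_mumford_shah_proxy(img, box), pc_mumford_shah_proxy(img, identity))
```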

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras; they are therefore often referred to as hyperspectral cameras (HSCs). The higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation of the underlying material signatures and their proportions requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are then described, along with the mathematical problems they pose and potential solutions. Algorithm characteristics are illustrated experimentally.
    Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
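    For readers unfamiliar with the linear mixing model underlying most of the surveyed methods, the sketch below illustrates abundance estimation for a single pixel with a fully constrained least-squares-style solver: nonnegativity is enforced by NNLS, and the sum-to-one constraint is enforced softly through a heavily weighted augmented row. This is a generic, textbook-style illustration rather than any specific algorithm from the paper; the synthetic endmember signatures and the weight delta are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(pixel, endmembers, delta=1e3):
    """Abundance estimate for one pixel under the linear mixing model
    y = E a + n, with a >= 0 and sum(a) = 1 (a minimal FCLS-style sketch:
    the sum-to-one constraint is imposed softly via an augmented row)."""
    n_end = endmembers.shape[1]
    E_aug = np.vstack([endmembers, delta * np.ones(n_end)])
    y_aug = np.append(pixel, delta)
    abundances, _ = nnls(E_aug, y_aug)
    return abundances

# Toy usage with synthetic endmember signatures (illustrative, not real data).
rng = np.random.default_rng(0)
endmembers = rng.uniform(0.0, 1.0, (100, 3))   # 100 bands, 3 endmembers
true_a = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_a + rng.normal(0.0, 0.01, 100)
print(fcls_abundances(pixel, endmembers))       # roughly recovers true_a
```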

    Statistical Machine Learning for Breast Cancer Detection with Terahertz Imaging

    Breast conserving surgery (BCS) is a common breast cancer treatment option in which the cancerous tissue is excised while leaving most of the healthy breast tissue intact. The lack of in-situ margin evaluation unfortunately results in a re-excision rate of 20-30% for this type of procedure. This study aims to design statistical and machine learning segmentation algorithms for the detection of breast cancer in BCS using terahertz (THz) imaging. Given the material characterization properties of non-ionizing radiation in the THz range, we intend to employ the responses from the THz system to identify healthy and cancerous breast tissue in BCS samples. In particular, this dissertation covers four segmentation algorithms for the detection of breast cancer in THz imaging. We first explore the performance of one-dimensional (1D) Gaussian mixture and t-mixture models with Markov chain Monte Carlo (MCMC). Second, we propose a novel low-dimension ordered orthogonal projection (LOOP) algorithm for the dimension reduction of the THz information through a modified Gram-Schmidt process; once the key features within the THz waveform have been detected by LOOP, the segmentation algorithm employs a multivariate Gaussian mixture model with MCMC and expectation maximization (EM). Third, we explore the spatial information of each pixel within the THz image through a Markov random field (MRF) approach. Finally, we introduce a supervised multinomial probit regression algorithm with polynomial and kernel data representations. For evaluation purposes, this study makes use of fresh and formalin-fixed paraffin-embedded (FFPE) heterogeneous human and mouse tissue models for the quantitative assessment of segmentation performance in terms of receiver operating characteristic (ROC) curves. Overall, the experimental results demonstrate that the proposed approaches represent a promising technique for tissue segmentation within THz images of freshly excised breast cancer samples.
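    As a rough illustration of the mixture-model step, the sketch below segments synthetic per-pixel feature vectors into two classes with a Gaussian mixture model fitted by EM. It is a generic example, not the dissertation's LOOP/MCMC pipeline; the image size, feature dimension, and synthetic "THz responses" are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
h, w, n_feat = 40, 40, 4

# Synthetic per-pixel "THz response" features: the right half of the image is
# shifted to mimic a second tissue class (purely illustrative data).
features = rng.normal(0.0, 1.0, (h, w, n_feat))
features[:, w // 2:, :] += 2.0

# Fit a two-component Gaussian mixture by EM and assign each pixel to the most
# probable component (component indices are arbitrary; 0/1 may swap).
X = features.reshape(-1, n_feat)
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X).reshape(h, w)                 # hard per-pixel classes
posteriors = gmm.predict_proba(X).reshape(h, w, 2)    # soft scores, e.g. for ROC curves
print(labels[:, :3].mean(), labels[:, -3:].mean())    # left vs. right halves differ
```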