
    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
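
    To make the sparse coding step concrete, here is a minimal numpy sketch that represents a signal as a linear combination of a few dictionary atoms by solving the lasso problem with ISTA. The dictionary D is a random stand-in rather than one learned from data as in the monograph, and the penalty weight and iteration count are arbitrary demo choices.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Sparse coding: min_a 0.5*||x - D a||^2 + lam*||a||_1, solved by ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L      # gradient step on the smooth term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = 1.0    # signal uses only 5 atoms
x = D @ a_true + 0.01 * rng.standard_normal(64)
a_hat = ista_sparse_code(D, x)
print("atoms used in the estimate:", np.count_nonzero(np.abs(a_hat) > 1e-3))
```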

    Sketching for Large-Scale Learning of Mixture Models

    Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data. This sketch is a collection of generalized moments of the underlying probability distribution of the data. It can be computed in a single pass on the training set, and is easily computable on streams or distributed datasets. The proposed framework shares similarities with compressive sensing, which aims at drastically reducing the dimension of high-dimensional signals while preserving the ability to reconstruct them. To perform the estimation task, we derive an iterative algorithm analogous to sparse reconstruction algorithms in the context of linear inverse problems. We exemplify our framework with the compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics on the choice of the sketching procedure and theoretical guarantees of reconstruction. We experimentally show on synthetic data that the proposed algorithm yields results comparable to the classical Expectation-Maximization (EM) technique while requiring significantly less memory and fewer computations when the number of database elements is large. We further demonstrate the potential of the approach on real large-scale data (over 10^8 training samples) for the task of model-based speaker verification. Finally, we draw some connections between the proposed framework and approximate Hilbert space embedding of probability distributions using random features. We show that the proposed sketching operator can be seen as an innovative method to design translation-invariant kernels adapted to the analysis of GMMs. We also use this theoretical framework to derive information preservation guarantees, in the spirit of infinite-dimensional compressive sensing.
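
    A minimal sketch of the one-pass sketching idea described above, under simplifying assumptions: the sketch is taken to be the empirical characteristic function sampled at random frequencies (one kind of generalized moment), and the frequency scale sigma is a placeholder. The paper's recovery algorithm and its principled choice of sketching distribution are not shown.

```python
import numpy as np

def compute_sketch(x_stream, W):
    """One-pass sketch: empirical characteristic function at the frequencies in W."""
    z = np.zeros(W.shape[0], dtype=complex)
    n = 0
    for x in x_stream:                 # single pass; also works on streams
        z += np.exp(1j * (W @ x))      # generalized moment e^{i w^T x}
        n += 1
    return z / n

rng = np.random.default_rng(0)
d, m = 2, 100
sigma = 0.5                            # assumed frequency scale (placeholder)
W = rng.standard_normal((m, d)) / sigma
# toy 2-component GMM data, consumed row by row as a stream
X = np.vstack([rng.normal(-2.0, 1.0, (500, d)), rng.normal(2.0, 1.0, (500, d))])
z = compute_sketch(iter(X), W)
print("sketch size:", z.shape, "first entries:", z[:2])
```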

    Structured Sparsity: Discrete and Convex Approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, allowing increased interpretability of the results and leading to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures.
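
    As a small illustration of the convex side discussed in the chapter, the following numpy sketch implements block soft-thresholding, the proximal operator of the group-lasso penalty that serves as a standard convex proxy for (non-overlapping) group sparsity. The groups and penalty weight are invented for the demo.

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 for non-overlapping groups:
    each group is either shrunk as a block or zeroed out entirely."""
    out = np.zeros_like(v)
    for g in groups:                   # g is an index array for one group
        norm_g = np.linalg.norm(v[g])
        if norm_g > lam:
            out[g] = (1.0 - lam / norm_g) * v[g]   # shrink the whole group
        # otherwise the whole group stays zero: group-level sparsity
    return out

v = np.array([3.0, 0.1, -0.2, 4.0, 0.05, 0.0])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
print(prox_group_lasso(v, groups, lam=1.0))
```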

    Dimensionality reduction and sparse representations in computer vision

    The proliferation of camera-equipped devices, such as netbooks, smartphones and game stations, has led to a significant increase in the production of visual content. This visual information could be used for understanding the environment and offering a natural interface between users and their surroundings. However, the massive amounts of data and the high computational cost associated with them encumber the transfer of sophisticated vision algorithms to real life systems, especially ones that exhibit resource limitations such as restrictions in available memory, processing power and bandwidth. One approach for tackling these issues is to generate compact and descriptive representations of image data by exploiting inherent redundancies. We propose the investigation of dimensionality reduction and sparse representations in order to accomplish this task. In dimensionality reduction, the aim is to reduce the dimensions of the space where image data reside in order to allow resource-constrained systems to handle them and, ideally, provide a more insightful description. This goal is achieved by exploiting the inherent redundancies that many classes of images, such as faces under different illumination conditions and objects from different viewpoints, exhibit. We explore the description of natural images by low dimensional non-linear models called image manifolds and investigate the performance of computer vision tasks such as recognition and classification using these low dimensional models. In addition to dimensionality reduction, we study a novel approach of representing images as a sparse linear combination of dictionary examples. We investigate how sparse image representations can be used for a variety of tasks including low-level image modeling and higher-level semantic information extraction. Using tools from dimensionality reduction and sparse representation, we propose the application of these methods in three hierarchical image layers, namely low-level features, mid-level structures and high-level attributes. Low-level features are image descriptors that can be extracted directly from the raw image pixels and include pixel intensities, histograms, and gradients. In the first part of this work, we explore how various techniques in dimensionality reduction, ranging from traditional image compression to the recently proposed Random Projections method, affect the performance of computer vision algorithms such as face detection and face recognition. In addition, we discuss a method that is able to increase the spatial resolution of a single image, without using any training examples, according to the sparse representations framework. In the second part, we explore mid-level structures, including image manifolds and sparse models, which are produced by abstracting information from low-level features and offer compact modeling of high dimensional data. We propose novel techniques for generating more descriptive image representations and investigate their application in face recognition and object tracking. In the third part of this work, we propose the investigation of a novel framework for representing the semantic contents of images. This framework employs high-level semantic attributes that aim to bridge the gap between the visual information of an image and its textual description by utilizing low-level features and mid-level structures. This innovative paradigm offers revolutionary possibilities, including recognizing the category of an object from purely textual information without providing any explicit visual example.
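
    As one concrete instance of the techniques surveyed in the first part, here is a minimal sketch of the Random Projections method mentioned above: a scaled Gaussian matrix maps flattened images to a much lower dimension while approximately preserving pairwise distances. Image and target dimensions are arbitrary demo values.

```python
import numpy as np

def random_project(X, k, rng):
    """Johnson-Lindenstrauss-style random projection of the rows of X to k dims."""
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)   # scaled Gaussian projection
    return X @ R

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4096))    # e.g. ten 64x64 face images, flattened
Y = random_project(X, 128, rng)
# pairwise distances are approximately preserved after projection
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))
```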

    Improving the Practicality of Model-Based Reinforcement Learning: An Investigation into Scaling up Model-Based Methods in Online Settings

    This thesis is a response to the current scarcity of practical model-based control algorithms in the reinforcement learning (RL) framework. As yet there is no consensus on how best to integrate imperfect transition models into RL whilst mitigating policy improvement instabilities in online settings. Current state-of-the-art policy learning algorithms that surpass human performance often rely on model-free approaches that enjoy unmitigated sampling of transition data. Model-based RL (MBRL) instead attempts to distil experience into transition models that allow agents to plan new policies without needing to return to the environment and sample more data. The initial focus of this investigation is on kernel conditional mean embeddings (CMEs) (Song et al., 2009) deployed in an approximate policy iteration (API) algorithm (Grünewälder et al., 2012a). This existing MBRL algorithm boasts theoretically stable policy updates in continuous state and discrete action spaces. The Bellman operator's value function and (transition) conditional expectation are modelled and embedded respectively as functions in a reproducing kernel Hilbert space (RKHS). The resulting finite-induced approximate pseudo-MDP (Yao et al., 2014a) can be solved exactly in a dynamic programming algorithm with policy improvement suboptimality guarantees. However, model construction and policy planning scale cubically and quadratically respectively with the training set size, rendering the CME impractical for sample-abundant tasks in online settings. Three variants of CME API are investigated to strike a balance between stable policy updates and reduced computational complexity. The first variant models the value function and state-action representation explicitly in a parametric CME (PCME) algorithm with favourable computational complexity; however, a soft conservative policy update technique is developed to mitigate policy learning oscillations in the planning process. The second variant returns to the non-parametric embedding and contributes (along with external work) to the compressed CME (CCME): a sparse and computationally more favourable CME. The final variant is a fully end-to-end differentiable embedding trained with stochastic gradient updates. The value function remains modelled in an RKHS such that backpropagation is driven by a non-parametric RKHS loss function. The actively compressed CME (ACCME) satisfies the pseudo-MDP contraction constraint using a sparse softmax activation function. The size of the pseudo-MDP (i.e. the size of the embedding's last layer) is controlled by sparsifying the last layer weight matrix, extending the truncated gradient method (Langford et al., 2009) with group lasso updates in a novel 'use it or lose it' neuron pruning mechanism. Surprisingly, this technique does not require extensive fine-tuning between control tasks.
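
    A toy numpy sketch of the non-parametric CME regression underlying the approach described above, with placeholder choices (Gaussian kernel, arbitrary bandwidth and regulariser, random toy transitions): the conditional expectation of a value function under the transition model is estimated as a weighted combination of observed next-state values, with the weights obtained from a regularised kernel system. The O(n^3) solve is exactly the scaling bottleneck the thesis targets.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian kernel Gram matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_cme(SA, lam=1e-3, sigma=1.0):
    """Fit a conditional mean embedding on (state, action) pairs.
    The n x n solve costs O(n^3): the impracticality discussed above."""
    n = SA.shape[0]
    W = np.linalg.solve(rbf(SA, SA, sigma) + n * lam * np.eye(n), np.eye(n))
    return lambda sa: W @ rbf(SA, sa[None, :], sigma).ravel()

rng = np.random.default_rng(0)
SA = rng.standard_normal((50, 3))                        # 50 (state, action) pairs
S_next = SA[:, :2] + 0.1 * rng.standard_normal((50, 2))  # toy observed next states
V = (S_next ** 2).sum(axis=1)                            # toy value at next states
alpha = fit_cme(SA)(SA[0])                               # weights over the samples
print("E[V(s') | s, a] estimate:", alpha @ V)            # backup without resampling
```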

    Compressed Sensing for Open-ended Waveguide Non-Destructive Testing and Evaluation

    Ph.D. Thesis. Non-destructive testing and evaluation (NDT&E) systems using open-ended waveguides (OEW) face critical challenges. In the sensing stage, data acquisition by raster scan is time-consuming, which makes on-line detection difficult. The sensing stage also disregards the demands of the subsequent feature extraction process, leading to an excessive amount of data and processing overhead for feature extraction. In the feature extraction stage, efficient and robust defect region segmentation in the obtained image is challenging when the image background is complex. Compressed sensing (CS) demonstrates impressive data compression ability in various applications using sparse models. How to develop CS models for OEW NDT&E that jointly consider sensing and processing for fast data acquisition, data compression, and efficient and robust feature extraction remains an open challenge. This thesis develops integrated sensing-processing CS models to address the drawbacks of OEW NDT systems and carries out case studies in low-energy impact damage detection for carbon fibre reinforced plastic (CFRP) materials. The major contributions are: (1) For the challenge of fast data acquisition, an online CS model is developed to offer faster data acquisition and reduce the data amount without any hardware modification. The images obtained with OEW are usually smooth and can therefore be sparsely represented in a discrete cosine transform (DCT) basis. Based on this property, a customised 0/1 Bernoulli matrix for CS measurement is designed for downsampling. The full data is reconstructed with the orthogonal matching pursuit algorithm using the downsampled data, the DCT basis, and the customised 0/1 Bernoulli matrix. It is hard to determine the number of sampled pixels needed for sparse reconstruction when training data are lacking; to address this issue, an accumulated sampling and recovery process is developed in this CS model. The defect region can be extracted with the proposed histogram threshold edge detection (HTED) algorithm after each recovery, which forms an online process. A case study in impact damage detection on CFRP materials is carried out for validation. The results show that the data acquisition time is reduced by one order of magnitude while maintaining image quality and a defect region equivalent to those of the raster scan. (2) For the challenge of data compression that accounts for the later feature extraction, a feature-supervised CS data acquisition method is proposed and evaluated. It preserves the features of interest while reducing the data amount. The frequencies that reveal the feature occupy only a small part of the frequency band, so this method first finds this sparse frequency range to supervise the later sampling process. Subsequently, based on the joint sparsity of neighbouring frames and the extracted frequency band, an aligned spatial-spectrum sampling scheme is proposed. The scheme samples only the frequency range of interest for the required features, using a customised 0/1 Bernoulli measurement matrix. The spectral-spatial data of interest are reconstructed jointly, which is much faster than frame-by-frame methods. The proposed feature-supervised CS data acquisition is implemented and compared with raster scan and traditional CS reconstruction in impact damage detection on CFRP materials. The results show that the data amount is reduced greatly without compromising feature quality, and that the gain in reconstruction speed grows linearly with the number of measurements.
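
    To make contribution (1) concrete, here is a minimal numpy sketch of CS recovery of a DCT-sparse signal from subsampled measurements via orthogonal matching pursuit. The customised 0/1 Bernoulli measurement matrix is simplified here to random pixel selection, and the signal, sparsity level, and dimensions are toy values.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; rows are the basis vectors."""
    k = np.arange(n)[:, None]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (np.arange(n) + 0.5) * k / n)
    C[0] /= np.sqrt(2.0)
    return C

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solution of y ~ A s."""
    r, support = y.copy(), []
    s = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))  # most correlated atom
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ sol                      # re-fit, update residual
    s[support] = sol
    return s

rng = np.random.default_rng(0)
n, m = 128, 48
C = dct_matrix(n)
coeffs = np.zeros(n)
coeffs[:6] = rng.standard_normal(6)        # low-frequency DCT support: smooth signal
x = C.T @ coeffs                           # the "smooth" signal to be scanned
rows = rng.choice(n, m, replace=False)     # measure only m of the n positions
Phi = np.eye(n)[rows]                      # 0/1 subsampling measurement matrix
y = Phi @ x
s_hat = omp(Phi @ C.T, y, k=6)             # recover DCT coefficients from y
print("reconstruction error:", np.linalg.norm(C.T @ s_hat - x))
```
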
    (3) Building on the above CS-based data acquisition methods, CS models are developed to detect defects directly from CS data rather than from the reconstructed full spatial data. This approach is robust to textured backgrounds and more time-efficient than the HTED algorithm. First, exploiting the fact that the histogram is invariant to down-sampling with the customised 0/1 Bernoulli measurement matrix, a qualitative method is developed that gives only a binary judgement of whether a defect is present; it achieves a high probability of detection and high accuracy compared to other methods. Second, a new greedy sparse orthogonal matching pursuit (spOMP) algorithm for defect region segmentation is developed to extract the defect region quantitatively, because conventional sparse reconstruction algorithms cannot properly exploit the sparse character of the correlation between the measurement matrix and the CS data. The proposed algorithms are faster and more robust to interference than other algorithms. Funding: China Scholarship Council.
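
    A toy illustration of the histogram-invariance observation behind the qualitative method in contribution (3): the tail mass of a random pixel subsample tracks that of the full image, so a binary defect judgement can be made without reconstructing the full spatial data. The image, threshold, and defect model are invented for the demo, and the spOMP segmentation algorithm is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy OEW-style image: smooth low-amplitude background plus a small "defect" patch
img = rng.normal(0.0, 0.05, (64, 64))
img[20:28, 30:38] += 1.0
pixels = img.ravel()

# random down-sampling (standing in for the 0/1 Bernoulli measurement)
sample = rng.choice(pixels, size=pixels.size // 10, replace=False)

# the subsample's histogram tracks the full histogram, so a tail-mass test
# on the compressed data alone yields a binary defect judgement
threshold = 0.5
full_tail = np.mean(pixels > threshold)
sub_tail = np.mean(sample > threshold)
print(f"tail mass: full={full_tail:.4f}, subsampled={sub_tail:.4f}")
print("defect detected:", sub_tail > 0.005)
```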