
    Nonlinear regularization techniques for seismic tomography

The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional, linear ℓ2 penalties are compared to so-called sparsity-promoting ℓ1 and ℓ0 penalties, and a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an ℓ2 norm works almost as well as minimizing the total variation, but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer ℓ1 damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple ℓ2 minimization ('Tikhonov regularization'), which should be avoided. In some of our examples, the ℓ0 method produced notable artifacts. In addition, we show how nonlinear ℓ1 methods for finding sparse models can be competitive in speed with the widely used ℓ2 methods, certainly under noisy conditions, so that there is no need to shun ℓ1 penalizations. Comment: 23 pages, 7 figures. Typographical error corrected in accelerated algorithms (14) and (20).
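For orientation, the regularizers compared above can be written as generic penalized least-squares objectives. The notation below (data d, forward operator G, model m, sparsifying transform W, weight λ) is assumed for illustration and is not taken from the paper itself.

```latex
% Generic penalized objectives of the kind compared above (assumed notation).
\begin{align*}
&\text{Tikhonov } (\ell_2):   && \min_m \; \|Gm - d\|_2^2 + \lambda \|m\|_2^2 \\
&\text{sparsity } (\ell_1):   && \min_m \; \|Gm - d\|_2^2 + \lambda \|Wm\|_1 \\
&\text{sparsity } (\ell_0):   && \min_m \; \|Gm - d\|_2^2 + \lambda \|Wm\|_0 \\
&\text{total variation (TV)}: && \min_m \; \|Gm - d\|_2^2 + \lambda \|\nabla m\|_1
\end{align*}
% Here W would be a sparsifying transform such as a Daubechies-4 wavelet basis.
```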

    Optimization with Sparsity-Inducing Penalties

Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection, but numerous extensions have since emerged, such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present, from a general perspective, optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted ℓ2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments comparing various algorithms from a computational point of view.
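As a concrete instance of one family of methods covered by this survey, here is a minimal proximal-gradient (ISTA) sketch for the ℓ1-regularized least-squares problem. This is a generic textbook formulation rather than code from the paper; the function names, step size, and iteration count are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    """Proximal gradient (ISTA) sketch for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the quadratic term
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # gradient step on the smooth data term
        x = soft_threshold(x - grad / L, lam / L)  # proximal step on the l1 penalty
    return x
```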

    Finding electrophysiological sources of aging-related processes using penalized least squares with Modified Newton-Raphson algorithm

In this work, we evaluate the flexibility of a modified Newton-Raphson (MNR) algorithm for finding electrophysiological sources in both simulated and real data, and then apply it to different penalized models in order to compare the sources of the EEG theta rhythm in two groups of elderly subjects with different levels of declined physical performance. As a first goal, we propose the MNR algorithm for estimating general multiple penalized least squares (MPLS) models and show that it is capable of finding solutions that are simultaneously sparse and smooth. The algorithm allowed us to address known and novel models such as the Smooth Non-negative Garrote and the Non-negative Smooth LASSO. We test its ability to solve the EEG inverse problem with multiple penalties, using simulated data, in terms of localization error, blurring and visibility, as compared with traditional algorithms. As a second goal, we explore the electrophysiological sources of the theta activity extracted from resting-state EEG recorded in two groups of older adults belonging to a longitudinal study assessing the relationship between measures of physical performance (gait speed) decline and normal cognition. The groups contained subjects with good and bad physical performance in the two evaluations (6 years apart). In accordance with clinical studies, we found differences in EEG theta sources between the two groups: specifically, subjects with declined physical performance presented decreased temporal sources and increased prefrontal sources, which seem to reflect compensating mechanisms to ensure stable walking.
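For readers unfamiliar with multiple penalized least squares, a generic objective of the kind estimated by such an algorithm might look as follows. The notation (EEG data V, lead field K, source amplitudes J, spatial Laplacian L, weights λ1, λ2) is assumed for illustration; the penalties actually combined in the paper may differ.

```latex
% Illustrative multiple-penalty objective combining sparsity and smoothness (assumed notation).
\min_{J}\; \|V - KJ\|_2^2 \;+\; \lambda_1 \|J\|_1 \;+\; \lambda_2 \|L J\|_2^2,
\qquad \text{optionally with } J \ge 0 \text{ for non-negative variants.}
```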

    From Dark Matter to the Earth's Deep Interior: There and Back Again

This thesis is a two-way transfer of knowledge between cosmology and seismology, aiming to substantially advance imaging methods and uncertainty quantification in both fields. I develop a method using wavelets to simulate the uncertainty in a set of existing global seismic tomography images to assess the robustness of mantle plume-like structures. Several plumes are identified, including one that is rarely discussed in the seismological literature. I present a new classification of the most likely deep mantle plumes from my automated method, potentially resolving past discrepancies between deep mantle plumes inferred by visual analysis of tomography models and other geophysical data. Following on from this, I create new images of the uppermost mantle and their associated uncertainties using a sparsity-promoting wavelet prior and an advanced probabilistic inversion scheme. These new images exhibit the expected tectonic features such as plate boundaries and continental cratons. Importantly, the uncertainties obtained are physically reasonable and informative, in that they reflect the heterogeneous data distribution and also highlight artefacts due to an incomplete forward model. These inversions are a first step towards building a fully probabilistic upper-mantle model in a sparse wavelet basis. I then apply the same advanced probabilistic method to the problem of full-sky cosmological mass-mapping. However, this is severely limited by the computational complexity of high-resolution spherical harmonic transforms. In response to this, I use, for the first time in cosmology, a trans-dimensional algorithm to build galaxy cluster-scale mass-maps. This new approach performs better than the standard mass-mapping method, with the added benefit that uncertainties are naturally recovered. With more accurate mass-maps and uncertainties, this method will be a valuable tool for cosmological inference with the new high-resolution data expected from upcoming galaxy surveys, potentially providing new insights into the interactions of dark matter particles in colliding galaxy cluster systems.

    Adaptive sparse coding and dictionary selection

Grant no. D000246/1. Sparse coding is the approximation/representation of signals with the minimum number of coefficients using an overcomplete set of elementary functions. This kind of approximation/representation has found numerous applications in source separation, denoising, coding and compressed sensing. The adaptation of the sparse approximation framework to the coding problem of signals is investigated in this thesis. Open problems are the selection of appropriate models and their orders, coefficient quantization and the sparse approximation method. Some of these questions are addressed in this thesis and novel methods are developed. Because almost all recent communication and storage systems are digital, an easy method to compute quantized sparse approximations is introduced in the first part. The model selection problem is investigated next. The linear model can be adapted to better fit a given signal class. It can also be designed based on some a priori information about the model. Two novel dictionary selection methods are presented separately in the second part of the thesis. The proposed model adaptation algorithm, called Dictionary Learning with the Majorization Method (DLMM), is much more general than current methods. This generality allows it to be used with different constraints on the model. In particular, two important cases are considered in this thesis for the first time: Parsimonious Dictionary Learning (PDL) and Compressible Dictionary Learning (CDL). When the generative model order is not given, PDL not only adapts the dictionary to the given class of signals but also reduces the model order redundancies. When a fast dictionary is needed, the CDL framework helps us find a dictionary that is adapted to the given signal class without significantly increasing the computational cost. Sometimes a priori information about the linear generative model is given in the form of a parametric function. Parametric Dictionary Design (PDD) generates a suitable dictionary for sparse coding using the parametric function. Basically, PDD finds a parametric dictionary with minimal dictionary coherence, which has been shown to be suitable for sparse approximation and exact sparse recovery. Theoretical analyses are accompanied by experiments that validate them. This research was primarily aimed at audio applications, as audio can be shown to have sparse structures. Therefore, most of the experiments are done using audio signals.
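To make the dictionary-adaptation idea concrete, the sketch below shows the generic alternation between sparse coding and a least-squares dictionary update on which methods such as DLMM build. It is not the majorization-based algorithm from the thesis; all names, update rules, and parameters are illustrative.

```python
import numpy as np

def sparse_code(D, Y, lam, n_iter=50):
    """ISTA sparse-coding step: min_X 0.5 * ||Y - D X||_F^2 + lam * ||X||_1."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    L = np.linalg.norm(D, 2) ** 2
    for _ in range(n_iter):
        G = X - D.T @ (D @ X - Y) / L                           # gradient step on the data term
        X = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)   # soft-thresholding
    return X

def dictionary_learning(Y, n_atoms, lam, n_outer=20, seed=0):
    """Alternate sparse coding with a least-squares (MOD-style) dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
    for _ in range(n_outer):
        X = sparse_code(D, Y, lam)
        D = Y @ np.linalg.pinv(X)                 # least-squares dictionary update
        D /= np.linalg.norm(D, axis=0) + 1e-12    # renormalize atoms
    return D, X
```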

    NON-LINEAR AND SPARSE REPRESENTATIONS FOR MULTI-MODAL RECOGNITION

In the first part of this dissertation, we address the problem of representing 2D and 3D shapes. In particular, we introduce a novel implicit shape representation based on Support Vector Machine (SVM) theory. Each shape is represented by an analytic decision function obtained by training an SVM with a Radial Basis Function (RBF) kernel, so that the interior shape points are given higher values. This endows the support vector shape (SVS) representation with several advantages. First, the representation uses a sparse subset of feature points determined by the support vectors, which significantly improves the discriminative power against noise, fragmentation and other artifacts that often come with the data. Second, the use of the RBF kernel provides scale, rotation, and translation invariant features, and allows a shape to be represented accurately regardless of its complexity. Finally, the decision function can be used to select reliable feature points. These features are described using gradients computed from highly consistent decision functions instead of conventional edges. Our experiments on 2D and 3D shapes demonstrate promising results. The availability of inexpensive 3D sensors such as the Kinect necessitates the design of new representations for this type of data. We present a 3D feature descriptor that represents local topologies within a set of folded concentric rings by distances from local points to a projection plane. This feature, called the Concentric Ring Signature (CORS), possesses computational advantages similar to point signatures yet provides more accurate matches. CORS produces compact and discriminative descriptors, which makes it more robust to noise and occlusions. It is also well known to computer vision researchers that there is no universal representation that is optimal for all types of data or tasks. Sparsity has proved to be a good criterion for working with natural images. This motivates us to develop efficient sparse and non-linear learning techniques for automatically extracting useful information from visual data. Specifically, we present dictionary learning methods for sparse and redundant representations in a high-dimensional feature space. Using the kernel method, we describe how well-known dictionary learning approaches such as the method of optimal directions and K-SVD can be made non-linear. We analyse their kernel constructions and demonstrate their effectiveness through several experiments on classification problems. It is shown that non-linear dictionary learning approaches can provide significantly better discrimination compared to their linear counterparts and kernel PCA, especially when the data is corrupted by different types of degradations. Visual descriptors are often high dimensional, which results in high computational complexity for sparse learning algorithms. Motivated by this observation, we introduce a novel framework, called sparse embedding (SE), for simultaneous dimensionality reduction and dictionary learning. We formulate an optimization problem for learning a transformation from the original signal domain to a lower-dimensional one in a way that preserves the sparse structure of data. We propose an efficient optimization algorithm and present its non-linear extension based on kernel methods. One of the key features of our method is that it is computationally efficient, as the learning is done in the lower-dimensional space, and it discards the irrelevant part of the signal that derails the dictionary learning process.
Various experiments show that our method is able to capture the meaningful structure of data and can perform significantly better than many competitive algorithms on signal recovery and object classification tasks. In many practical applications, we are often confronted with the situation where the data used to train our models differ from the data presented during testing. In the final part of this dissertation, we present a novel framework for domain adaptation using a sparse and hierarchical network (DASH-N), which makes use of old data to improve the performance of a system operating on a new domain. Our network jointly learns a hierarchy of features together with transformations that rectify the mismatch between different domains. The building block of DASH-N is the latent sparse representation. It employs a dimensionality reduction step that prevents the data dimension from increasing too fast as we traverse deeper into the hierarchy. Experimental results show that our method consistently outperforms the current state of the art by a significant margin. Moreover, we find that a multi-layer DASH-N has an edge over the single-layer DASH-N.
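As a rough illustration of the sparse embedding (SE) idea, a joint objective for simultaneous dimensionality reduction and dictionary learning could take the following schematic form. The notation (data Y, projection P with orthonormal rows, dictionary D, sparse codes X, sparsity level T0) is assumed here and may differ from the dissertation's exact formulation.

```latex
% Schematic joint objective: fit in the reduced space while preserving signal energy (assumed notation).
\min_{P,\,D,\,X}\; \|PY - DX\|_F^2 \;+\; \lambda\,\|Y - P^{\top} P Y\|_F^2
\quad \text{s.t.}\quad P P^{\top} = I,\qquad \|x_i\|_0 \le T_0 \;\;\forall i.
```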

    Randomness as a computational strategy : on matrix and tensor decompositions

Matrix and tensor decompositions are fundamental tools for finding structure in data and for data processing. In particular, the efficient computation of low-rank matrix approximations is a ubiquitous problem in machine learning and elsewhere. However, massive data arrays pose a computational challenge for these techniques, placing significant constraints on both memory and processing power. Recently, the fascinating and powerful concept of randomness has been introduced as a strategy to ease the computational load of deterministic matrix and data algorithms. The basic idea of these algorithms is to employ a degree of randomness as part of the logic in order to derive, from a high-dimensional input matrix, a smaller matrix which captures the essential information of the original data. The smaller matrix is then used to efficiently compute a near-optimal low-rank approximation. Randomized algorithms have been shown to be robust, highly reliable, and computationally efficient, yet simple to implement. In particular, the development of the randomized singular value decomposition can be seen as a milestone in the era of ‘big data’. Building on the great success of this probabilistic strategy for computing low-rank matrix decompositions, this thesis introduces a set of new randomized algorithms. Specifically, we present a randomized algorithm to compute the dynamic mode decomposition, a modern dimension reduction technique designed to extract dynamic information from dynamical systems. We then advocate the randomized dynamic mode decomposition for background modeling of surveillance video feeds. Further, we show that randomized algorithms are embarrassingly parallel by design and that graphics processing units (GPUs) can be utilized to substantially accelerate the computations. Finally, the concept of randomized algorithms is generalized to tensors in order to compute the canonical CANDECOMP/PARAFAC (CP) decomposition.
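The randomized low-rank scheme described above can be sketched in a few lines: sample the range of the matrix with a random test matrix, optionally sharpen the sketch with power iterations, and take an exact SVD of the small projected matrix. This is the standard textbook variant, not the thesis code; parameter defaults are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_power_iter=2, seed=0):
    """Basic randomized SVD of A, returning a rank-`rank` approximation."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + n_oversample))  # random test matrix
    Y = A @ Omega                                          # sample the range of A
    for _ in range(n_power_iter):                          # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                                 # orthonormal basis for the sampled range
    B = Q.T @ A                                            # small projected matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)     # exact SVD of the small matrix
    return (Q @ U_b)[:, :rank], s[:rank], Vt[:rank]
```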

    Nonsmooth Convex Variational Approaches to Image Analysis

Variational models constitute a foundation for the formulation and understanding of models in many areas of image processing and analysis. In this work, we consider a generic variational framework for convex relaxations of multiclass labeling problems, formulated on continuous domains. We propose several relaxations for length-based regularizers, with varying expressiveness and computational cost. In contrast to graph-based, combinatorial approaches, we rely on a formulation grounded in geometric measure theory, which avoids artifacts caused by an early discretization, in theory as well as in practice. We investigate and compare numerical first-order approaches for solving the associated nonsmooth discretized problem, based on controlled smoothing and operator splitting. In order to obtain integral solutions, we propose a randomized rounding technique formulated in the spatially continuous setting, and prove that it yields solutions with an a priori optimality bound. Furthermore, we present a method for introducing more advanced prior shape knowledge into labeling problems, based on the sparse representation framework.
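Schematically, convex relaxations of multiclass labeling of the kind studied here minimize a linear data term over simplex-valued label indicator functions plus a length-based convex regularizer. The notation below (image domain Ω, local label costs s, regularizer J) is assumed for illustration and the specific relaxations proposed in the work vary.

```latex
% Schematic relaxed labeling functional over simplex-valued indicator functions (assumed notation).
\min_{u:\Omega \to \Delta_K}\; \int_{\Omega} \langle u(x),\, s(x)\rangle \, dx \;+\; J(u),
\qquad \Delta_K = \Bigl\{\, p \in \mathbb{R}^K_{\ge 0} : \sum_{k=1}^{K} p_k = 1 \,\Bigr\}.
```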

    Thresholded smoothed ℓ0 norm for accelerated sparse recovery

The smoothed ℓ0 norm (SL0) algorithm is a fast sparse recovery method, extendible to the complex domain, which makes it suitable for many practical real-time applications. In this letter, we propose an improved algorithm, termed 'Thresholded Smoothed ℓ0 Norm' (T-SL0), for accelerating the iterative process of SL0. T-SL0 introduces an iterative efficiency indicator and compares it with a preset threshold in real time to determine whether or not the current iteration should be executed. By identifying and bypassing low-efficiency iterations, our approach converges much faster than the original SL0 algorithm. Experimental results are presented to demonstrate that our approach can accelerate SL0 significantly without loss of accuracy. © 2015 IEEE
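For context, the sketch below shows an SL0-style iteration (Gaussian surrogate of the ℓ0 norm, gradient step, projection back onto the constraint set) with a thresholded skip of low-yield inner steps. The efficiency indicator used here is a hypothetical placeholder (relative change of the iterate) and does not reproduce the actual T-SL0 criterion; all parameter values are illustrative.

```python
import numpy as np

def t_sl0_sketch(A, b, sigma_min=1e-3, sigma_decrease=0.7, mu=2.0,
                 n_inner=3, eff_threshold=1e-4):
    """SL0-style sparse recovery for A x = b with a hypothetical efficiency check:
    inner iterations whose relative update falls below `eff_threshold` are skipped."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                                       # start from the minimum-l2-norm solution
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(n_inner):
            x_prev = x.copy()
            delta = x * np.exp(-x**2 / (2 * sigma**2))   # gradient of the Gaussian l0 surrogate
            x = x - mu * delta
            x = x - A_pinv @ (A @ x - b)                 # project back onto {x : A x = b}
            rel_change = np.linalg.norm(x - x_prev) / (np.linalg.norm(x_prev) + 1e-12)
            if rel_change < eff_threshold:               # hypothetical low-efficiency skip
                break
        sigma *= sigma_decrease
    return x
```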

    Handbook of Mathematical Geosciences

This Open Access handbook, published at the IAMG's 50th anniversary, presents a compilation of invited path-breaking research contributions by award-winning geoscientists who have been instrumental in shaping the IAMG. It contains 45 chapters that are categorized broadly into five parts: (i) theory, (ii) general applications, (iii) exploration and resource estimation, (iv) reviews, and (v) reminiscences, covering related topics such as mathematical geosciences, mathematical morphology, geostatistics, fractals and multifractals, spatial statistics, multipoint geostatistics, compositional data analysis, informatics, geocomputation, numerical methods, and chaos theory in the geosciences.