
    Task adapted reconstruction for inverse problems

    The paper considers the problem of performing a task defined on a model parameter that is only observed indirectly through noisy data in an ill-posed inverse problem. A key aspect is to formalize the steps of reconstruction and task as appropriate estimators (non-randomized decision rules) in statistical estimation problems. The implementation makes use of (deep) neural networks to provide a differentiable parametrization of the family of estimators for both steps. These networks are combined and jointly trained against suitable supervised training data in order to minimize a joint differentiable loss function, resulting in an end-to-end task-adapted reconstruction method. The suggested framework is generic, yet adaptable, with a plug-and-play structure for adjusting both the inverse problem and the task at hand. More precisely, the data model (forward operator and statistical model of the noise) associated with the inverse problem is exchangeable, e.g., by using a neural network architecture given by a learned iterative method. Furthermore, any task that is encodable as a trainable neural network can be used. The approach is demonstrated on joint tomographic image reconstruction and classification, and on joint tomographic image reconstruction and segmentation.
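    The joint-training mechanics can be illustrated with a toy, single-sample sketch in which the "reconstruction network" and "task network" are linear maps (all names, dimensions, and the quadratic losses are assumptions for illustration, not the paper's architecture):

```python
import numpy as np

# Hypothetical toy setup: a linear "reconstruction network" W mapping noisy
# data y = A x + noise back to x, and a linear "task network" c scoring the
# reconstruction. Both are trained jointly by gradient descent on the
# combined loss  L = ||W y - x||^2 + lam * (c . (W y) - t)^2,
# mimicking end-to-end task-adapted reconstruction.
rng = np.random.default_rng(0)
n, m = 8, 12                            # parameter and data dimensions
A = rng.normal(size=(m, n))             # forward operator
x = rng.normal(size=n)                  # true model parameter
t = 1.0                                 # task target (e.g., a class score)
y = A @ x + 0.01 * rng.normal(size=m)   # noisy indirect observation

W = np.zeros((n, m))                    # reconstruction operator (trainable)
c = np.zeros(n)                         # task operator (trainable)
lam = 0.5
lr = 0.5 / (y @ y)                      # step size scaled for stability

for _ in range(500):
    xr = W @ y                          # reconstruction step
    r = xr - x                          # reconstruction residual
    s = c @ xr - t                      # task residual
    gW = 2 * np.outer(r + lam * s * c, y)   # grad of joint loss w.r.t. W
    gc = 2 * lam * s * xr                   # grad of joint loss w.r.t. c
    W -= lr * gW
    c -= lr * gc

joint_loss = np.sum((W @ y - x) ** 2) + lam * (c @ (W @ y) - t) ** 2
```

    Because both estimators see the same joint loss, the reconstruction is tuned toward whatever the downstream task needs, which is the point of the end-to-end formulation.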

    Sketching for Large-Scale Learning of Mixture Models

    Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data. This sketch is a collection of generalized moments of the underlying probability distribution of the data. It can be computed in a single pass on the training set, and is easily computable on streams or distributed datasets. The proposed framework shares similarities with compressive sensing, which aims at drastically reducing the dimension of high-dimensional signals while preserving the ability to reconstruct them. To perform the estimation task, we derive an iterative algorithm analogous to sparse reconstruction algorithms in the context of linear inverse problems. We exemplify our framework with the compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics on the choice of the sketching procedure and theoretical guarantees of reconstruction. We experimentally show on synthetic data that the proposed algorithm yields results comparable to the classical Expectation-Maximization (EM) technique while requiring significantly less memory and fewer computations when the number of database elements is large. We further demonstrate the potential of the approach on real large-scale data (over 10^8 training samples) for the task of model-based speaker verification. Finally, we draw some connections between the proposed framework and approximate Hilbert space embedding of probability distributions using random features. We show that the proposed sketching operator can be seen as an innovative method to design translation-invariant kernels adapted to the analysis of GMMs. We also use this theoretical framework to derive information preservation guarantees, in the spirit of infinite-dimensional compressive sensing.
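    The single-pass, mergeable character of such a sketch can be illustrated with random Fourier features as the generalized moments (an illustrative choice of moments, not the paper's exact GMM-adapted operator):

```python
import numpy as np

# Illustrative sketch: a vector of empirical generalized moments
#   z_j = mean_i exp(i w_j . x_i)
# (random Fourier features). It is computed in one pass over the data and
# sketches of disjoint chunks merge exactly, which makes the computation
# suitable for streams and distributed datasets.
rng = np.random.default_rng(1)
d, k, N = 2, 16, 1000
Omega = rng.normal(size=(k, d))         # random frequency vectors w_j

def sketch(X):
    """Average complex exponential moments over the rows of X."""
    return np.exp(1j * X @ Omega.T).mean(axis=0)

X = rng.normal(size=(N, d))
z_full = sketch(X)

# Mergeability: sketches of disjoint chunks combine by a weighted average.
z_a, z_b = sketch(X[:400]), sketch(X[400:])
z_merged = (400 * z_a + 600 * z_b) / N
assert np.allclose(z_full, z_merged)
```

    Parameter estimation then works only with the k-dimensional sketch rather than the N data points, which is where the memory savings come from.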

    Task-Driven Dictionary Learning

    Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations. Comment: final draft post-refereeing.
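    The sparse-coding step that underlies these dictionary models can be sketched with ISTA on a fixed, randomly drawn dictionary (a hypothetical setup; the task-driven method would additionally backpropagate a supervised loss through this step to update the dictionary, which is not shown here):

```python
import numpy as np

# Minimal sparse-coding sketch: given a (here fixed, random) dictionary D
# with unit-norm atoms, solve  min_a 0.5 ||x - D a||^2 + lam ||a||_1
# by ISTA (gradient step on the quadratic term + soft thresholding).
rng = np.random.default_rng(2)
m, p = 20, 50
D = rng.normal(size=(m, p))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
a_true = np.zeros(p)
a_true[[3, 17, 41]] = [1.5, -2.0, 1.0]  # a few active atoms
x = D @ a_true                          # signal with a sparse representation

lam = 0.02
L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
a = np.zeros(p)
for _ in range(2000):
    g = D.T @ (D @ a - x)               # gradient of 0.5 ||x - D a||^2
    a = a - g / L
    a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
```

    The recovered code is sparse and reconstructs the signal; supervised (task-driven) learning replaces the reconstruction objective acting on `a` with a task loss and differentiates through this solver with respect to D.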

    Bayesian Reconstruction of Approximately Periodic Potentials at Finite Temperature

    The paper discusses the reconstruction of potentials for quantum systems at finite temperatures from observational data. A nonparametric approach is developed, based on the framework of Bayesian statistics, to solve such inverse problems. Besides the specific model of quantum statistics giving the probability of observational data, a Bayesian approach is essentially based on "a priori" information available for the potential. Different possibilities to implement "a priori" information are discussed in detail, including hyperparameters, hyperfields, and non-Gaussian auxiliary fields. Special emphasis is put on the reconstruction of potentials with approximate periodicity. The feasibility of the approach is demonstrated for a numerical model. Comment: 18 pages, 17 figures, LaTeX.
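    The role of the prior can be illustrated with a much simpler stand-in: a Gaussian-prior MAP estimate for a linear ill-posed problem (the paper's quantum-statistical likelihood is nonlinear; the forward operator, sizes, and smoothness prior below are all assumptions for illustration):

```python
import numpy as np

# Illustrative Gaussian-prior MAP estimate: observe y = A f + noise and
# maximize the log-posterior  -||y - A f||^2 / (2 s^2) - 0.5 f^T P f,
# where P is a smoothness prior precision built from second differences.
# The MAP estimate solves a linear system combining likelihood and prior.
rng = np.random.default_rng(3)
n, m, s = 50, 30, 0.05
grid = np.linspace(0, 1, n)
f_true = np.sin(2 * np.pi * grid)       # smooth, roughly periodic "potential"
A = rng.normal(size=(m, n)) / np.sqrt(n)   # underdetermined: ill-posed
y = A @ f_true + s * rng.normal(size=m)

D2 = np.diff(np.eye(n), n=2, axis=0)    # discrete second-derivative operator
P = 10.0 * D2.T @ D2                    # prior precision favoring smoothness
f_map = np.linalg.solve(A.T @ A / s**2 + P, A.T @ y / s**2)
```

    With only 30 measurements of a 50-dimensional unknown, the data alone cannot determine f; the prior fills in the nullspace, which is the essential Bayesian ingredient the paper elaborates (with hyperparameters and more structured periodic priors).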

    Uniform Penalty inversion of two-dimensional NMR Relaxation data

    The inversion of two-dimensional NMR data is an ill-posed problem related to the numerical computation of the inverse Laplace transform. In this paper we present the 2DUPEN algorithm that extends the Uniform Penalty (UPEN) algorithm [Borgia, Brown, Fantazzini, {\em Journal of Magnetic Resonance}, 1998] to two-dimensional data. The UPEN algorithm, defined for the inversion of one-dimensional NMR relaxation data, uses Tikhonov-like regularization and optional non-negativity constraints in order to implement locally adapted regularization. In this paper, we analyze the regularization properties of this approach. Moreover, we extend the one-dimensional UPEN algorithm to the two-dimensional case and present an efficient implementation based on the Newton Projection method. Without any a priori information on the noise norm, 2DUPEN automatically computes the locally adapted regularization parameters and the distribution of the unknown NMR parameters by using variable smoothing. Results of numerical experiments on simulated and real data are presented in order to illustrate the potential of the proposed method in reconstructing peaks and flat regions with the same accuracy.
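    A one-dimensional sketch of the kind of inversion involved (plain Tikhonov regularization with a global parameter, not UPEN's locally adapted penalty; kernel, grids, and noise level are hypothetical) shows why regularization and non-negativity are needed:

```python
import numpy as np

# Illustrative 1-D relaxation inversion: data s(t) = sum_j K(t, T_j) f(T_j)
# with Laplace-type kernel K = exp(-t/T) is inverted for a nonnegative
# distribution f by projected gradient descent on
#   ||K f - s||^2 + alpha ||L f||^2,  subject to f >= 0.
rng = np.random.default_rng(4)
t = np.linspace(0.01, 3.0, 100)         # acquisition times
T = np.logspace(-2, 1, 60)              # relaxation-time grid
K = np.exp(-t[:, None] / T[None, :])    # discretized Laplace kernel

f_true = np.exp(-0.5 * ((np.log10(T) + 0.3) / 0.15) ** 2)  # single peak
s = K @ f_true + 0.001 * rng.normal(size=t.size)

alpha = 1e-3
L2 = np.diff(np.eye(T.size), n=2, axis=0)   # curvature (smoothness) penalty
H = K.T @ K + alpha * L2.T @ L2             # regularized normal matrix
b = K.T @ s
step = 1.0 / np.linalg.norm(H, 2)           # stable gradient step size
f = np.zeros(T.size)
for _ in range(20000):
    f = np.maximum(f - step * (H @ f - b), 0.0)   # projected gradient
```

    A single global alpha must trade off sharp peaks against flat regions; UPEN's idea is to replace it with locally adapted regularization parameters so both are reconstructed with the same accuracy.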