31 research outputs found

    Low Complexity Regularization of Linear Inverse Problems

    Full text link
    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
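    As an illustration of the forward-backward proximal splitting scheme mentioned in the abstract, the following Python sketch applies it to the prototypical sparsity-regularized inverse problem min_x 0.5*||Ax - y||^2 + lam*||x||_1, where the proximal map of the ℓ¹ norm is soft-thresholding. The function names and the toy problem are purely illustrative and are not taken from the chapter.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by forward-backward splitting:
    a gradient (forward) step on the smooth data-fidelity term followed by a
    proximal (backward) step on the non-smooth l1 regularizer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the quadratic term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy usage: recover a sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = forward_backward(A, y, lam=0.1)
```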

    A comparison of methods for temporal analysis of aoristic crime

    Get PDF
    Objectives: To test the accuracy of various methods previously proposed (and one new method) for estimating offence times where the actual time of the event is not known. Methods: For 303 thefts of pedal cycles from railway stations, the actual offence time was determined from closed-circuit television and the resulting temporal distribution was compared against commonly used estimated distributions using circular statistics and analysis of residuals. Results: Aoristic analysis and allocation of a random time to each offence allow accurate estimation of peak offence times. Commonly used deterministic methods were found to be inaccurate and to produce misleading results. Conclusions: It is important that analysts use the most accurate methods of temporal distribution approximation to ensure that any resource decisions made on the basis of peak times are reliable.
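    For readers unfamiliar with aoristic analysis, the following Python sketch shows the generic idea of spreading each offence's unit weight evenly over the hours of its possible time window. The helper name and the example windows are hypothetical and are not drawn from the paper's data.

```python
import numpy as np

def aoristic_hourly_weights(start_hour, end_hour):
    """Spread one offence's unit weight evenly over the hours in which it could
    have occurred (aoristic analysis). Hours are 0-23; windows may wrap midnight."""
    hours = []
    h = int(start_hour)
    while True:
        hours.append(h % 24)
        if h % 24 == int(end_hour) % 24:
            break
        h += 1
    w = np.zeros(24)
    w[hours] = 1.0 / len(hours)
    return w

# Example: a cycle left at 08:00 and found missing at 18:00 contributes 1/11
# of an offence to each hour from 08:00 through 18:00 inclusive.
profile = np.zeros(24)
windows = [(8, 18), (17, 19), (22, 6)]   # illustrative (start, end) hours only
for s, e in windows:
    profile += aoristic_hourly_weights(s, e)
```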

    Additive and Generalized Additive Models

    Get PDF
    This paper attempts to summarize the state of the art in additive and generalized additive models (GAM). The emphasis is on approaches and numerical procedures that have emerged since the monograph of Hastie and Tibshirani (1990), although certain aspects of their work are also reconsidered. Apart from GAM, vector GAM (VGAM), alternating conditional expectations (ACE), and additivity and variance stabilization (AVAS) are discussed. Last but not least, software hints are given for all of these models.
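    A minimal Python sketch of the backfitting idea behind additive models (cycling over the predictors and smoothing the partial residuals against each one) is given below. The kernel smoother is a simple stand-in for the spline or loess smoothers used in practice, and all names are illustrative rather than taken from any of the surveyed software.

```python
import numpy as np

def smooth(x, r, bandwidth=0.2):
    """Very simple Nadaraya-Watson kernel smoother, used here as a stand-in for
    the scatterplot smoother (splines/loess) of a real additive-model fit."""
    d = (x[:, None] - x[None, :]) / bandwidth
    K = np.exp(-0.5 * d ** 2)
    return (K @ r) / K.sum(axis=1)

def backfit(X, y, n_iter=20, bandwidth=0.2):
    """Fit an additive model y ~ alpha + sum_j f_j(x_j) by backfitting:
    repeatedly smooth the partial residuals against each predictor in turn."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            partial = y - alpha - f.sum(axis=1) + f[:, j]   # residual excluding f_j
            f[:, j] = smooth(X[:, j], partial, bandwidth)
            f[:, j] -= f[:, j].mean()                        # center for identifiability
    return alpha, f
```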

    Additive and generalized additive models: a survey

    No full text
    SIGLE. Available from Bibliothek des Instituts fuer Weltwirtschaft, ZBW, Duesternbrook Weg 120, D-24105 Kiel, W 1190 (97) / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek. Germany. In German.

    Measurement and analysis of sarcomere length in rat cardiomyocytes in situ and in vitro

    No full text
    Sarcomere length (SL) is an important determinant and indicator of cardiac mechanical function; however, techniques for measuring SL in living, intact tissue are limited. Here, we present a technique that uses two-photon microscopy to directly image striations of living cells in cardioplegic conditions, both in situ (Langendorff-perfused rat hearts and ventricular tissue slices, stained with the fluorescent marker di-4-ANEPPS) and in vitro (acutely isolated rat ventricular myocytes). Software was developed to extract SL from two-photon fluorescence image sets while accounting for measurement errors associated with motion artifact in raster-scanned images and uncertainty of the cell angle relative to the imaging plane. Monte Carlo simulations were used to guide analysis of SL measurements by determining error bounds as a function of measurement path length. The mode of the distribution of SL measurements in resting Langendorff-perfused heart is 1.95 μm (n = 167 measurements from N = 11 hearts) after correction for tissue orientation, which was significantly greater than that in isolated cells (1.71 μm, n = 346, N = 9 isolations) or ventricular slice preparations (1.79 μm, n = 79, N = 3 hearts) under our experimental conditions. Furthermore, we find that edema in arrested Langendorff-perfused heart is associated with a mean SL increase; this occurs as a function of time ex vivo and correlates with tissue volume changes determined by magnetic resonance imaging. Our results highlight that the proposed method can be used to monitor SL in living cells and that different experimental models from the same species may display significantly different SL values under otherwise comparable conditions, which has implications for experiment design, as well as comparison and interpretation of data.
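    The Monte Carlo error analysis mentioned in the abstract can be illustrated, very roughly, by the following toy Python sketch: it simulates noisy per-sarcomere spacings along measurement paths of different lengths and reports the spread of the resulting SL estimates. This is not the authors' software, and all parameter values are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_SL = 1.95          # assumed "true" sarcomere length in micrometres (illustrative)
NOISE_SD = 0.10         # per-striation measurement noise in micrometres (illustrative)

def simulate_sl_estimate(n_sarcomeres):
    """Estimate SL as the mean spacing over a path spanning n_sarcomeres striations,
    each spacing perturbed by independent measurement noise."""
    spacings = TRUE_SL + NOISE_SD * rng.standard_normal(n_sarcomeres)
    return spacings.mean()

# Spread of the estimate (a 95% interval) as a function of measurement path length.
for n in (5, 10, 20, 40):
    estimates = np.array([simulate_sl_estimate(n) for _ in range(10_000)])
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"{n:2d} sarcomeres: 95% of estimates fall in [{lo:.3f}, {hi:.3f}] um")
```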