
    LASSO ISOtone for High Dimensional Additive Isotonic Regression

    Additive isotonic regression attempts to determine the relationship between a multi-dimensional observation variable and a response, under the constraint that the estimate is the additive sum of univariate component effects that are monotonically increasing. In this article, we present a new method for such regression called LASSO Isotone (LISO). LISO adapts ideas from sparse linear modelling to additive isotonic regression. Thus, it is viable in many situations with high dimensional predictor variables, where selection of significant versus insignificant variables is required. We suggest an algorithm involving a modification of the backfitting algorithm CPAV. We give a numerical convergence result, and finally examine some of its properties through simulations. We also suggest some possible extensions that improve performance, and allow calculation to be carried out when the direction of the monotonicity is unknown.
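    To make the backfitting idea concrete, here is a minimal sketch: the pool-adjacent-violators algorithm (PAVA) computes a single monotone least-squares fit, and a cyclic backfitting loop applies it to the partial residual of each predictor in turn. This is an illustrative outline only, not the authors' CPAV algorithm; in particular, the LASSO-style shrinkage of the fitted components is crudely approximated here by soft-thresholding, and the names `pava` and `backfit_isotonic` are ours.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: non-decreasing least-squares fit to y."""
    vals, wts, cnt = [], [], []
    for v in np.asarray(y, dtype=float):
        vals.append(v); wts.append(1.0); cnt.append(1)
        # merge adjacent blocks backwards while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            m = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            n = cnt[-2] + cnt[-1]
            vals[-2:] = [m]; wts[-2:] = [w]; cnt[-2:] = [n]
    return np.repeat(vals, cnt)

def backfit_isotonic(X, y, lam=0.1, n_iter=50):
    """Cyclic backfitting with a monotone fit per coordinate.

    Soft-thresholds each component toward zero as a crude stand-in
    for the LISO penalty (illustrative, not the paper's method).
    """
    n, p = X.shape
    f = np.zeros((n, p))            # fitted component values, sample order
    order = np.argsort(X, axis=0)   # sort index for each predictor
    for _ in range(n_iter):
        for j in range(p):
            r = y - f.sum(axis=1) + f[:, j]   # partial residual for x_j
            oj = order[:, j]
            fit = pava(r[oj])                 # monotone fit along x_j
            fit -= fit.mean()                 # centre for identifiability
            fit = np.sign(fit) * np.maximum(np.abs(fit) - lam, 0.0)
            f[oj, j] = fit
    return f
```

    In this sketch, components whose isotonic fit never exceeds the threshold `lam` are shrunk exactly to zero, which mimics how the LISO penalty performs variable selection among the additive components.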

    Dictionary Identification - Sparse Matrix-Factorisation via $\ell_1$-Minimisation

    This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via $\ell_1$-minimisation. The problem can also be seen as factorising a $d \times N$ matrix $Y = (y_1, \dots, y_N)$, $y_n \in \mathbb{R}^d$, of training signals into a $d \times K$ dictionary matrix $\Phi$ and a $K \times N$ coefficient matrix $X = (x_1, \dots, x_N)$, $x_n \in \mathbb{R}^K$, which is sparse. The exact question studied here is when a dictionary-coefficient pair $(\Phi, X)$ can be recovered as a local minimum of a (nonconvex) $\ell_1$-criterion with input $Y = \Phi X$. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples $N$ grows, up to a logarithmic factor, only linearly with the signal dimension, i.e. $N \approx C K \log K$, in contrast to previous approaches requiring combinatorially many samples.
    Comment: 32 pages (IEEE draft format), 8 figures, submitted to IEEE Trans. Inf. Theory
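    To illustrate the objective being analysed, the sketch below draws a Bernoulli-Gaussian coefficient matrix $X$, forms $Y = \Phi X$ for a random unit-norm dictionary, and evaluates the $\ell_1$ cost of a candidate dictionary in the basis case ($d = K$), where each signal has a unique coefficient vector $\Phi^{-1} y_n$. This is a toy evaluation of the criterion under our own naming (`l1_cost`, `bernoulli_gaussian`); it does not reproduce the paper's local-minimum analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_gaussian(K, N, p=0.2):
    """Sparse coefficients: each entry nonzero with prob. p, Gaussian value."""
    mask = rng.random((K, N)) < p
    return mask * rng.standard_normal((K, N))

def l1_cost(D, Y):
    """l1 criterion for a basis D: sum over signals of ||D^{-1} y_n||_1."""
    X = np.linalg.solve(D, Y)   # unique coefficients in the basis case
    return np.abs(X).sum()

# toy instance: square (basis) dictionary, N training signals
d = K = 8
N = 200
D_true = rng.standard_normal((d, K))
D_true /= np.linalg.norm(D_true, axis=0)       # unit-norm atoms
X_true = bernoulli_gaussian(K, N)
Y = D_true @ X_true

# the generating dictionary typically scores lower than a perturbed one,
# consistent with the local identifiability studied in the paper
D_pert = D_true + 0.1 * rng.standard_normal((d, K))
D_pert /= np.linalg.norm(D_pert, axis=0)
print(l1_cost(D_true, Y), l1_cost(D_pert, Y))
```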