
    Regularization-free estimation in trace regression with symmetric positive semidefinite matrices

    Over the past few years, trace regression models have received considerable attention in the context of matrix completion, quantum state tomography, and compressed sensing. Estimation of the underlying matrix by regularization-based approaches promoting low-rankedness, notably nuclear norm regularization, has enjoyed great popularity. In the present paper, we argue that such regularization may no longer be necessary if the underlying matrix is symmetric positive semidefinite (spd) and the design satisfies certain conditions. In this situation, simple least squares estimation subject to an spd constraint may perform as well as regularization-based approaches with a proper choice of the regularization parameter, which entails knowledge of the noise level and/or tuning. By contrast, constrained least squares estimation comes without any tuning parameter and may hence be preferred due to its simplicity.
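
    The constrained estimator described in the abstract has a simple computational form. The following is a minimal sketch in Python/NumPy (not code from the paper) of least squares over the positive semidefinite cone via projected gradient descent, where the projection just clips negative eigenvalues; the step-size rule and iteration count are illustrative assumptions.

        import numpy as np

        def project_psd(B):
            """Project a (symmetrized) matrix onto the PSD cone by clipping eigenvalues."""
            w, V = np.linalg.eigh((B + B.T) / 2)
            return (V * np.clip(w, 0, None)) @ V.T

        def spd_constrained_ls(X, y, n_iter=500, step=None):
            """Minimize 0.5 * sum_i (<X[i], B> - y[i])^2 subject to B PSD.

            X has shape (n, m, m): a stack of n measurement matrices.
            """
            n, m, _ = X.shape
            if step is None:
                # conservative step from a crude Lipschitz bound: sum_i ||X_i||_F^2
                step = 1.0 / np.sum(X ** 2)
            B = np.zeros((m, m))
            for _ in range(n_iter):
                residuals = np.einsum('ijk,jk->i', X, B) - y   # <X[i], B> - y[i]
                grad = np.einsum('i,ijk->jk', residuals, X)    # sum_i r_i * X[i]
                B = project_psd(B - step * grad)
            return B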

    Matrix factorization with Binary Components

    Motivated by an application in computational biology, we consider low-rank matrix factorization with {0,1}-constraints on one of the factors and optionally convex constraints on the second one. In addition to the non-convexity shared with other matrix factorization schemes, our problem is further complicated by a combinatorial constraint set of size 2^{m·r}, where m is the dimension of the data points and r the rank of the factorization. Despite apparent intractability, we provide - in the line of recent work on non-negative matrix factorization by Arora et al. (2012) - an algorithm that provably recovers the underlying factorization in the exact case with O(mr2^r + mnr + r^2n) operations for n data points. To obtain this result, we use theory around the Littlewood-Offord lemma from combinatorics. Comment: appeared in NIPS 2013.
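
    The 2^r factor in the complexity bound reflects that, in the exact case D = T A with binary T, every column of T lies in the r-dimensional column space of D and is determined by its binary values on a suitable set of r rows. The snippet below is a hedged sketch of that enumeration idea only, not a reproduction of the paper's algorithm; the greedy row selection and tolerance are illustrative assumptions.

        import itertools
        import numpy as np

        def binary_vectors_in_span(D, r, tol=1e-8):
            """Enumerate {0,1}-vectors lying in the rank-r column space of D (exact case)."""
            U, _, _ = np.linalg.svd(D, full_matrices=False)
            B = U[:, :r]                       # orthonormal basis of col(D)
            # greedily pick r rows of B forming an invertible r x r submatrix
            rows = []
            for i in range(B.shape[0]):
                if np.linalg.matrix_rank(B[rows + [i], :]) > len(rows):
                    rows.append(i)
                if len(rows) == r:
                    break
            Bsub = B[rows, :]
            candidates = []
            # a binary vector in col(D) is fixed by its (binary) values on these r rows,
            # so at most 2^r candidates are checked -- roughly O(m r 2^r) work in total
            for bits in itertools.product((0.0, 1.0), repeat=r):
                c = np.linalg.solve(Bsub, np.array(bits))   # coefficients forced by those rows
                v = B @ c                                    # extend to the full m-vector
                if np.all(np.minimum(np.abs(v), np.abs(v - 1.0)) < tol):
                    candidates.append(np.round(v).astype(int))
            return candidates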

    Feature selection guided by structural information

    In generalized linear regression problems with an abundant number of features, lasso-type regularization, which imposes an ℓ1-constraint on the regression coefficients, has become a widely established technique. Deficiencies of the lasso in certain scenarios, notably strongly correlated design, were unmasked when Zou and Hastie [J. Roy. Statist. Soc. Ser. B 67 (2005) 301-320] introduced the elastic net. In this paper we propose to extend the elastic net by admitting general nonnegative quadratic constraints as a second form of regularization. The generalized ridge-type constraint will typically make use of the known association structure of features, for example, by using temporal or spatial closeness. We study properties of the resulting "structured elastic net" regression estimation procedure, including basic asymptotics and the issue of model selection consistency. In this vein, we provide an analog to the so-called "irrepresentable condition" which holds for the lasso. Moreover, we outline algorithmic solutions for the structured elastic net within the generalized linear model family. The rationale and the performance of our approach are illustrated by means of simulated and real-world data, with a focus on signal regression. Comment: Published at http://dx.doi.org/10.1214/09-AOAS302 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
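
    To make the construction concrete, here is a minimal sketch (not the authors' implementation, which covers the full generalized linear model family): squared-error loss with an ℓ1 penalty plus a quadratic penalty b'Lb, where L is a path-graph Laplacian standing in for temporal closeness of neighbouring features; the penalty weights and the ISTA solver are illustrative choices.

        import numpy as np

        def chain_laplacian(p):
            """Laplacian of a path graph linking neighbouring features (e.g. time order)."""
            L = 2 * np.eye(p)
            L[0, 0] = L[-1, -1] = 1
            idx = np.arange(p - 1)
            L[idx, idx + 1] = L[idx + 1, idx] = -1
            return L

        def soft_threshold(z, t):
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def structured_elastic_net(X, y, L, lam1=0.1, lam2=0.1, n_iter=1000):
            """Minimize 0.5*||y - X b||^2 + lam1*||b||_1 + lam2 * b'Lb via ISTA."""
            p = X.shape[1]
            # step size from the Lipschitz constant of the smooth part
            step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 2 * lam2 * np.linalg.norm(L, 2))
            b = np.zeros(p)
            for _ in range(n_iter):
                grad = X.T @ (X @ b - y) + 2 * lam2 * (L @ b)
                b = soft_threshold(b - step * grad, step * lam1)
            return b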

    Flaws, approximations and uncertainties in the estimation of the exposed-to-risk

    This research analyses the theoretical basis of exposed-to-risk estimation. It defends the conventional actuarial approach against criticisms raised by Hoem (1984) and, in so doing, examines in detail the development of the actuarial profession's estimation techniques. Maximum likelihood estimates are shown to be closely related to the estimates of decremental probabilities derived using the conventional actuarial approach. The correct treatment of deaths when estimating the initial exposed-to-risk is considered and contrasted with what is often used in practice. The relationship between the initial and central exposed-to-risk is considered for a single decrement, for two decrements, and for select rates. The implications of alternative assumptions and approximations are considered. Some inaccuracies in tuition material of the Faculty and Institute of Actuaries and in articles written about exposed-to-risk are highlighted. Other problem areas, such as the bias of calculated rates and estimation under policy-year and calendar-year rate intervals, are also considered.
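
    For orientation, the standard textbook relationship between the two exposure measures is sketched below (invented figures for illustration, not a calculation from the thesis): a death contributes exposure to the end of the rate year on average, so the initial exposed-to-risk is approximated by the central exposed-to-risk plus half the deaths.

        import math

        deaths = 52                   # deaths observed between ages x and x+1
        central_exposure = 9_870.0    # central exposed-to-risk (person-years)

        # conventional actuarial adjustment: add roughly half a year of exposure per death
        initial_exposure = central_exposure + 0.5 * deaths

        mu_hat = deaths / central_exposure   # estimated force of mortality
        q_hat = deaths / initial_exposure    # estimated mortality rate

        # under a constant force of mortality over the year of age, q = 1 - exp(-mu)
        q_from_mu = 1 - math.exp(-mu_hat)

        print(f"mu_hat = {mu_hat:.5f}, q_hat = {q_hat:.5f}, 1 - exp(-mu_hat) = {q_from_mu:.5f}")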

    Topics in learning sparse and low-rank models of non-negative data

    Advances in information and measurement technology have led to a surge in the prevalence of high-dimensional data. Sparse and low-rank modeling can both be seen as techniques of dimensionality reduction, which is essential for obtaining compact and interpretable representations of such data. In this thesis, we investigate aspects of sparse and low-rank modeling in conjunction with non-negative data or non-negativity constraints. The first part is devoted to the problem of learning sparse non-negative representations, with a focus on how non-negativity can be taken advantage of. We work out a detailed analysis of non-negative least squares regression, showing that under certain conditions sparsity-promoting regularization, the approach advocated paradigmatically over the past years, is not required. Our results have implications for problems in signal processing such as compressed sensing and spike train deconvolution. In the second part, we consider the problem of factorizing a given matrix into two factors of low rank, of which one is binary. We devise a provably correct algorithm computing such a factorization whose running time is exponential only in the rank of the factorization, but linear in the dimensions of the input matrix. Our approach is extended to noisy settings and applied to an unmixing problem in DNA methylation array analysis. On the theoretical side, we relate the uniqueness of the factorization to Littlewood-Offord theory in combinatorics.
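
    As a small illustration of the first part's message (an assumed toy setup, not an experiment from the thesis), plain non-negative least squares with no ℓ1 penalty can recover a sparse non-negative signal under a favourable random design:

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        n, p, k = 50, 100, 5                     # samples, features, non-zeros
        A = rng.standard_normal((n, p))
        x_true = np.zeros(p)
        x_true[rng.choice(p, k, replace=False)] = rng.uniform(1, 3, k)
        y = A @ x_true + 0.01 * rng.standard_normal(n)

        x_hat, _ = nnls(A, y)                    # least squares subject to x >= 0, no l1 term
        print("estimated support:", np.nonzero(x_hat > 0.1)[0])
        print("true support:     ", np.sort(np.nonzero(x_true)[0]))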
