
    Scaled Sparse Linear Regression

    Scaled sparse linear regression jointly estimates the regression coefficients and noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating the noise level via the mean residual square and scaling the penalty in proportion to the estimated noise level. The iterative algorithm costs little beyond the computation of a path or grid of the sparse regression estimator for penalty levels above a proper threshold. For the scaled lasso, the algorithm is a gradient descent in a convex minimization of a penalized joint loss function for the regression coefficients and noise level. Under mild regularity conditions, we prove that the scaled lasso simultaneously yields an estimator for the noise level and an estimated coefficient vector satisfying certain oracle inequalities for prediction, the estimation of the noise level and the regression coefficients. These inequalities provide sufficient conditions for the consistency and asymptotic normality of the noise level estimator, including certain cases where the number of variables is of greater order than the sample size. Parallel results are provided for the least squares estimation after model selection by the scaled lasso. Numerical results demonstrate the superior performance of the proposed methods over an earlier proposal of joint convex minimization.
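
    The iterative scheme described above is short enough to sketch in code. Below is a minimal illustration, assuming scikit-learn's Lasso as the inner sparse-regression solver; the function name scaled_lasso, the base penalty lambda0 = sqrt(2 log p / n), the initial noise estimate, and the stopping rule are illustrative choices rather than the paper's exact specification.

```python
# Minimal sketch of the scaled-lasso iteration: alternate between a lasso
# fit with penalty proportional to the current noise estimate and a noise
# update from the mean residual square.
import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, lambda0=None, tol=1e-6, max_iter=50):
    n, p = X.shape
    if lambda0 is None:
        lambda0 = np.sqrt(2.0 * np.log(p) / n)   # illustrative base penalty
    sigma = np.std(y)                            # crude initial noise level
    beta = np.zeros(p)
    for _ in range(max_iter):
        # Scale the penalty in proportion to the estimated noise level.
        beta = Lasso(alpha=sigma * lambda0, fit_intercept=False).fit(X, y).coef_
        # Re-estimate the noise level via the mean residual square.
        sigma_new = np.sqrt(np.mean((y - X @ beta) ** 2))
        if abs(sigma_new - sigma) <= tol * max(sigma, 1e-12):
            sigma = sigma_new
            break
        sigma = sigma_new
    return beta, sigma
```

    As the abstract notes, the loop costs little beyond computing the lasso path, since each iteration simply re-solves the lasso at a new penalty level along that path.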

    Sparse Matrix Inversion with Scaled Lasso

    We propose a new method of learning a sparse nonnegative-definite target matrix. Our primary example of the target matrix is the inverse of a population covariance or correlation matrix. The algorithm first estimates each column of the target matrix by the scaled Lasso and then adjusts the matrix estimator to be symmetric. The penalty level of the scaled Lasso for each column is completely determined by data via convex minimization, without using cross-validation. We prove that this scaled Lasso method guarantees the fastest proven rate of convergence in the spectrum norm under conditions of weaker form than those in the existing analyses of other $\ell_1$ regularized algorithms, and has a faster guaranteed rate of convergence when the ratio of the $\ell_1$ and spectrum norms of the target inverse matrix diverges to infinity. A simulation study demonstrates the computational feasibility and superb performance of the proposed method. Our analysis also provides new performance bounds for the Lasso and scaled Lasso to guarantee higher concentration of the error at a smaller threshold level than previous analyses, and to allow the use of the union bound in column-by-column applications of the scaled Lasso without an adjustment of the penalty level. In addition, the least squares estimation after the scaled Lasso selection is considered and proven to guarantee performance bounds similar to those of the scaled Lasso.
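
    A minimal sketch of the column-by-column construction is given below, again using scikit-learn's Lasso inside a small scaled-lasso helper. The nodewise-regression identities used to fill each column and the simple averaging used to symmetrize the estimate are illustrative assumptions for this sketch, not necessarily the paper's exact adjustment.

```python
# Sketch: estimate each column of the precision matrix by the scaled lasso
# (regress variable j on the others), then adjust the result to be symmetric.
import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, lambda0, n_iter=30):
    # Alternate a lasso fit with a mean-residual-square noise update.
    sigma, beta = np.std(y), np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = Lasso(alpha=sigma * lambda0, fit_intercept=False).fit(X, y).coef_
        sigma = np.sqrt(np.mean((y - X @ beta) ** 2))
    return beta, sigma

def precision_by_scaled_lasso(X):
    n, p = X.shape
    lambda0 = np.sqrt(2.0 * np.log(p) / n)   # data-independent base penalty
    Omega = np.zeros((p, p))
    for j in range(p):
        idx = np.delete(np.arange(p), j)
        beta, sigma = scaled_lasso(X[:, idx], X[:, j], lambda0)
        Omega[j, j] = 1.0 / sigma ** 2       # nodewise-regression identities
        Omega[idx, j] = -beta / sigma ** 2
    return (Omega + Omega.T) / 2.0           # symmetrize (simple averaging)
```

    Note that, as the abstract emphasizes, the penalty level for each column comes out of the scaled-lasso minimization itself, so no cross-validation loop is needed.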

    Some congruences involving binomial coefficients

    Binomial coefficients and central trinomial coefficients play important roles in combinatorics. Let $p>3$ be a prime. We show that $$T_{p-1}\equiv\left(\frac p3\right)3^{p-1}\pmod{p^2},$$ where the central trinomial coefficient $T_n$ is the constant term in the expansion of $(1+x+x^{-1})^n$. We also prove three congruences modulo $p^3$ conjectured by Sun, one of which is $$\sum_{k=0}^{p-1}\binom{p-1}k\binom{2k}k\left((-1)^k-(-3)^{-k}\right)\equiv \left(\frac p3\right)(3^{p-1}-1)\pmod{p^3}.$$ In addition, we get some new combinatorial identities.
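
    Both displayed congruences are easy to check numerically for small primes, which makes the statements concrete. The short script below is a sanity check only, not part of the proof; the helper names and the choice of test primes are arbitrary.

```python
# Verify the two displayed congruences for a few small primes p > 3.
from math import comb

def trinomial(n):
    # Central trinomial coefficient: constant term of (1 + x + 1/x)^n.
    return sum(comb(n, 2 * k) * comb(2 * k, k) for k in range(n // 2 + 1))

def legendre_p_over_3(p):
    # Legendre symbol (p/3): 1 if p = 1 (mod 3), -1 if p = 2 (mod 3).
    return 1 if p % 3 == 1 else -1

for p in (5, 7, 11, 13):
    # T_{p-1} = (p/3) * 3^(p-1)  (mod p^2)
    ok1 = trinomial(p - 1) % p**2 == legendre_p_over_3(p) * pow(3, p - 1, p**2) % p**2
    # sum_k C(p-1,k) C(2k,k) ((-1)^k - (-3)^(-k)) = (p/3)(3^(p-1) - 1)  (mod p^3)
    m = p**3
    inv_neg3 = pow(-3, -1, m)                 # modular inverse of -3 mod p^3
    lhs = sum(comb(p - 1, k) * comb(2 * k, k) * ((-1)**k - pow(inv_neg3, k, m))
              for k in range(p)) % m
    ok2 = lhs == legendre_p_over_3(p) * (pow(3, p - 1, m) - 1) % m
    print(p, ok1, ok2)
```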