Scaled Sparse Linear Regression
Scaled sparse linear regression jointly estimates the regression coefficients
and noise level in a linear model. It chooses an equilibrium with a sparse
regression method by iteratively estimating the noise level via the mean
residual square and scaling the penalty in proportion to the estimated noise
level. The iterative algorithm costs little beyond the computation of a path or
grid of the sparse regression estimator for penalty levels above a proper
threshold. For the scaled lasso, the algorithm is a gradient descent in a
convex minimization of a penalized joint loss function for the regression
coefficients and noise level. Under mild regularity conditions, we prove that
the scaled lasso simultaneously yields an estimator for the noise level and an
estimated coefficient vector satisfying certain oracle inequalities for
prediction, the estimation of the noise level and the regression coefficients.
These inequalities provide sufficient conditions for the consistency and
asymptotic normality of the noise level estimator, including certain cases
where the number of variables is of greater order than the sample size.
Parallel results are provided for the least squares estimation after model
selection by the scaled lasso. Numerical results demonstrate the superior
performance of the proposed methods over an earlier proposal of joint convex
minimization.
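A minimal sketch of the iteration described above, assuming scikit-learn's Lasso as the sparse regression method; the function name scaled_lasso, the base penalty level lam0 (e.g. sqrt(2 log(p)/n)) and the stopping rule are illustrative choices, not the paper's prescription:

```python
# Illustrative scaled-lasso iteration: alternate between estimating the
# noise level from the mean residual square and refitting the lasso with
# a penalty scaled in proportion to that estimate.
import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, lam0, n_iter=50, tol=1e-6):
    sigma = np.std(y)                      # initial noise-level estimate
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # Scale the penalty in proportion to the current noise estimate.
        fit = Lasso(alpha=sigma * lam0, fit_intercept=False).fit(X, y)
        beta = fit.coef_
        # Re-estimate the noise level via the mean residual square.
        sigma_new = np.sqrt(np.mean((y - X @ beta) ** 2))
        if abs(sigma_new - sigma) < tol:
            sigma = sigma_new
            break
        sigma = sigma_new
    return beta, sigma

# Example on synthetic data with 5 active variables out of p = 200.
rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 3.0
y = X @ beta_true + rng.standard_normal(n)
beta_hat, sigma_hat = scaled_lasso(X, y, lam0=np.sqrt(2 * np.log(p) / n))
```

Because the penalty level tracks the noise estimate, no cross-validation over the penalty is needed; each pass only refits the lasso at a new penalty level, which is cheap once a solution path is available.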
Sparse Matrix Inversion with Scaled Lasso
We propose a new method of learning a sparse nonnegative-definite target
matrix. Our primary example of the target matrix is the inverse of a population
covariance or correlation matrix. The algorithm first estimates each column of
the target matrix by the scaled Lasso and then adjusts the matrix estimator to
be symmetric. The penalty level of the scaled Lasso for each column is
completely determined by data via convex minimization, without using
cross-validation.
We prove that this scaled Lasso method guarantees the fastest proven rate of
convergence in the spectrum norm under conditions of weaker form than those in
the existing analyses of other $\ell_1$ regularized algorithms, and has a faster
guaranteed rate of convergence when the ratio of the $\ell_1$ and spectrum
norms of the target inverse matrix diverges to infinity. A simulation study
demonstrates the computational feasibility and superb performance of the
proposed method.
Our analysis also provides new performance bounds for the Lasso and scaled
Lasso to guarantee higher concentration of the error at a smaller threshold
level than previous analyses, and to allow the use of the union bound in
column-by-column applications of the scaled Lasso without an adjustment of the
penalty level. In addition, the least squares estimation after the scaled Lasso
selection is considered and proven to guarantee performance bounds similar to
those of the scaled Lasso.
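A sketch of the column-by-column construction described above, reusing the scaled_lasso function from the previous sketch: each variable is regressed on the rest, the fitted coefficients and noise level are converted to a column of the precision-matrix estimate in the standard way, and symmetry is enforced afterwards. The simple averaging used for the symmetrization step is an assumption; the paper's adjustment may differ.

```python
# Illustrative precision-matrix estimation via column-wise scaled lasso.
# Standard conversion: Omega[j, j] = 1 / sigma_j^2 and, for k != j,
# Omega[k, j] = -beta_k / sigma_j^2, where (beta, sigma_j) comes from
# regressing column j on the remaining columns.
import numpy as np

def precision_scaled_lasso(X, lam0):
    n, p = X.shape
    Omega = np.zeros((p, p))
    for j in range(p):
        rest = [k for k in range(p) if k != j]
        # Penalty level for each column is set by the data through the
        # scaled-lasso iteration itself, without cross-validation.
        beta, sigma = scaled_lasso(X[:, rest], X[:, j], lam0)
        Omega[j, j] = 1.0 / sigma**2          # diagonal entry
        Omega[rest, j] = -beta / sigma**2     # off-diagonal entries
    return (Omega + Omega.T) / 2.0            # symmetrize
```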
Some congruences involving binomial coefficients
Binomial coefficients and central trinomial coefficients play important roles
in combinatorics. Let $p > 3$ be a prime. We show that
$T_{p-1} \equiv \left(\frac{p}{3}\right) 3^{p-1} \pmod{p^2}$, where the central
trinomial coefficient $T_n$ is the constant term in the expansion of
$(1 + x + x^{-1})^n$. We also prove three congruences modulo $p^3$
conjectured by Sun.
In addition, we get some new combinatorial
identities.
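A quick numerical check of the congruence displayed above for a few small primes, computing $T_n$ directly as the coefficient of $x^n$ in $(1 + x + x^2)^n$; the helper name central_trinomial and the prime range are just for illustration:

```python
# Verify T_{p-1} = (p/3) * 3^(p-1) (mod p^2) for small primes p > 3.
# (p/3) is the Legendre symbol: 1 if p = 1 (mod 3), -1 if p = 2 (mod 3).

def central_trinomial(n):
    # Coefficients of (1 + x + x^2)^n by repeated convolution with (1, 1, 1);
    # the central coefficient (of x^n) is T_n.
    coeffs = [1]
    for _ in range(n):
        new = [0] * (len(coeffs) + 2)
        for i, c in enumerate(coeffs):
            new[i] += c
            new[i + 1] += c
            new[i + 2] += c
        coeffs = new
    return coeffs[n]

for p in [5, 7, 11, 13, 17, 19]:
    legendre = 1 if p % 3 == 1 else -1
    lhs = central_trinomial(p - 1) % p**2
    rhs = legendre * pow(3, p - 1, p**2) % p**2
    assert lhs == rhs, p
print("congruence verified for p = 5, 7, 11, 13, 17, 19")
```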
