3 research outputs found
An upper bound on norms of noisy functions
Let $T_\epsilon$ be the noise operator acting on functions on the boolean
cube $\{0,1\}^n$. Let $f$ be a nonnegative function on $\{0,1\}^n$ and let $q \ge 1$. We upper bound the $q$-norm of $T_\epsilon f$ by the average
$q$-norm of conditional expectations of $f$, given sets of roughly
$m$ variables, where $m$ is an explicitly defined
function of $\epsilon$ and $q$.
We describe some applications to error-correcting codes and to matroids. In
particular, we derive an upper bound on the weight distribution of duals of
BEC-capacity-achieving binary linear codes. This improves the known bounds on
the linear-weight components of the weight distribution of constant-rate binary
Reed-Muller codes for almost all rates.
Comment: A new version with some improved bounds
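For context, the noise operator on the boolean cube admits a standard definition, sketched here in a common parametrization (the flip probability $\epsilon$ and the notation $N_\epsilon$ are illustrative conventions, not necessarily the paper's exact ones):

$$(T_\epsilon f)(x) \;=\; \mathbb{E}_{y \sim N_\epsilon(x)}\big[f(y)\big], \qquad x \in \{0,1\}^n,$$

where $N_\epsilon(x)$ is the distribution of the point obtained from $x$ by flipping each coordinate independently with probability $\epsilon$; the abstract's bound compares norms of $T_\epsilon f$ with norms of conditional expectations of $f$ given subsets of the variables.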
A Rank-Corrected Procedure for Matrix Completion with Fixed Basis Coefficients
For low-rank matrix completion problems, the widely used nuclear norm
technique may fall short in many circumstances, especially when certain basis
coefficients are fixed, for example in low-rank correlation matrix completion
in fields such as the financial market, and in low-rank density matrix
completion from quantum state tomography.
To seek a solution of high recovery quality beyond the reach of the nuclear
norm, we propose in this paper a rank-corrected procedure that uses a nuclear
semi-norm to generate a new estimator. For this new estimator, we establish a
non-asymptotic recovery error bound. More importantly, we quantify the
reduction in the recovery error bound achieved by the rank-corrected procedure:
compared with the bound for the nuclear norm penalized least squares
estimator, the reduction can be substantial (around 50%). We also provide
necessary and sufficient conditions for rank consistency in the sense of Bach
(2008). Interestingly, these conditions are closely related to the concept
of constraint nondegeneracy in matrix optimization. As a byproduct, our results
provide a theoretical foundation for the majorized penalty method of Gao and
Sun (2010) and Gao (2010) for structured low-rank matrix optimization problems.
Extensive numerical experiments demonstrate that our proposed rank-corrected
procedure can simultaneously achieve high recovery accuracy and capture the
low-rank structure.
Comment: 51 pages, 4 figures
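For reference, the nuclear norm penalized least squares estimator that the abstract uses as its baseline has the following standard generic form (the notation $\Omega$, $M$, and $\lambda$ is assumed here for illustration; the paper's setting additionally constrains certain basis coefficients to fixed values):

$$\hat{X} \;\in\; \operatorname*{arg\,min}_{X \in \mathbb{R}^{n_1 \times n_2}} \;\; \frac{1}{2} \sum_{(i,j) \in \Omega} \big(X_{ij} - M_{ij}\big)^2 \;+\; \lambda \, \|X\|_*,$$

where $\Omega$ indexes the observed entries, $M_{ij}$ are the (possibly noisy) observations, and $\|X\|_*$ denotes the nuclear norm, i.e. the sum of the singular values of $X$. The rank-corrected procedure described above replaces the nuclear norm penalty with a nuclear semi-norm.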
Error Bounds for Generalized Group Sparsity
In high-dimensional statistical inference, sparsity regularization has
shown advantages in consistency and convergence rates for coefficient
estimation. We consider a generalized version of the Sparse-Group Lasso, which
captures both element-wise sparsity and group-wise sparsity simultaneously. We
prove a single universal theorem that yields consistency and convergence-rate
results for different forms of double sparsity regularization.
The universality of the results lies in a generalization of various
convergence rates for single-regularization cases, such as the LASSO and the
group LASSO, and for double-regularization cases such as the sparse-group
LASSO. Our analysis
identifies a generalized norm of -norm, which provides a dual
formulation for our double sparsity regularization.
Comment: 23 pages, 2 figures. arXiv admin note: text overlap with
arXiv:2006.0617
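As a point of reference, the standard sparse-group lasso objective combining element-wise and group-wise sparsity can be written in generic notation as follows ($\lambda_1$, $\lambda_2$, and the group partition $\mathcal{G}$ are illustrative assumptions, not the paper's exact formulation):

$$\hat{\beta} \;\in\; \operatorname*{arg\,min}_{\beta} \;\; \frac{1}{2n} \|y - X\beta\|_2^2 \;+\; \lambda_1 \|\beta\|_1 \;+\; \lambda_2 \sum_{g \in \mathcal{G}} \|\beta_g\|_2,$$

where the $\ell_1$ term promotes element-wise sparsity and the group-wise $\ell_2$ terms promote sparsity at the level of entire coefficient groups $\beta_g$; the generalized double sparsity regularization studied in the paper covers objectives of this form as special cases.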