Sparse Solution of Underdetermined Linear Equations via Adaptively Iterative Thresholding
Finding the sparsest solution of an underdetermined system of linear equations
has attracted considerable attention in recent years. Among a large number of
algorithms, iterative thresholding algorithms are recognized as one of the most
efficient and important classes, mainly due to their low computational
complexity, especially for large-scale applications. The aim of this paper is
to provide guarantees on the global
convergence of a wide class of iterative thresholding algorithms. Since the
thresholds of the considered algorithms are set adaptively at each iteration,
we call them adaptively iterative thresholding (AIT) algorithms. As the main
result, we show that as long as the measurement matrix satisfies a certain
coherence property, AIT algorithms identify the correct support set within
finitely many iterations and thereafter converge to the original sparse
solution exponentially fast. We also demonstrate that AIT
algorithms are robust to the algorithmic parameters. In addition, it should be
pointed out that most of the existing iterative thresholding algorithms such as
hard, soft, half and smoothly clipped absolute deviation (SCAD) algorithms are
included in the class of AIT algorithms studied in this paper.
Comment: 33 pages, 1 figure
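The iteration described above (a gradient step followed by a thresholding step with an adaptively chosen threshold) can be sketched in a few lines. This is an illustrative example, not the paper's exact scheme: the function name, the step size, and the rule of taking the threshold from the (k+1)-th largest magnitude of the current iterate are all assumptions made for this sketch.

```python
import numpy as np

def ait_sketch(A, y, k, iterations=200):
    """Adaptively iterative (hard) thresholding, minimal sketch.

    Each iteration takes a gradient step on the residual and then
    hard-thresholds, with the threshold set adaptively from the
    current iterate: the magnitude of its (k+1)-th largest entry.
    """
    m, n = A.shape
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    for _ in range(iterations):
        z = x + step * A.T @ (y - A @ x)       # gradient step
        tau = np.sort(np.abs(z))[::-1][k]      # adaptive threshold
        x = np.where(np.abs(z) > tau, z, 0.0)  # hard thresholding
    return x
```

Once the iterates lock onto the correct support, the remaining error contracts geometrically, which matches the exponential convergence claimed in the abstract.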
Model selection of polynomial kernel regression
Polynomial kernel regression is one of the standard and state-of-the-art
learning strategies. However, as is well known, the choice of the degree of
the polynomial kernel and of the regularization parameter remains an open
problem in model selection. The first aim of this paper is to develop a
strategy to
select these parameters. On the one hand, based on a worst-case learning rate
analysis, we show that the regularization term in polynomial kernel regression
is not necessary: the regularization parameter can decrease arbitrarily fast
when the degree of the polynomial kernel is suitably tuned. On the other hand,
taking the implementation of the algorithm into account, the regularization
term is required. In summary, the only effect of the regularization term in
polynomial kernel regression is to circumvent the ill-conditioning
of the kernel matrix. Based on this, the second purpose of this paper is to
propose a new model selection strategy, and then design an efficient learning
algorithm. Both theoretical and experimental analyses show that the new
strategy outperforms the previous one. Theoretically, we prove that the new
learning strategy is almost optimal if the regression function is smooth.
Experimentally, it is shown that the new strategy can significantly reduce the
computational burden without loss of generalization capability.
Comment: 29 pages, 4 figures
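The abstract's observation — that the regularization term only guards against an ill-conditioned kernel matrix — can be illustrated with a minimal kernel ridge regression sketch using the polynomial kernel. The function name, the kernel parameterization (1 + x·x')^d, and the tiny default `lam` are assumptions for this example, not the paper's algorithm.

```python
import numpy as np

def poly_kernel_regression(X, y, degree=3, lam=1e-8):
    """Polynomial kernel regression via kernel ridge regression.

    `lam` is kept tiny: it serves only as a numerical safeguard
    against an ill-conditioned kernel matrix, echoing the paper's
    point that regularization is not statistically necessary.
    """
    K = (1.0 + X @ X.T) ** degree  # polynomial kernel matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)

    def predict(Xt):
        # evaluate the kernel expansion at new inputs
        return ((1.0 + Xt @ X.T) ** degree) @ alpha

    return predict
```

For instance, fitting samples of f(x) = x² with degree 2 recovers the target essentially exactly even though `lam` is near zero, since the kernel matrix is only rank-deficient, not statistically underdetermined.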