
    Sparse Solution of Underdetermined Linear Equations via Adaptively Iterative Thresholding

    Finding the sparsest solution of an underdetermined system of linear equations y = Ax has attracted considerable attention in recent years. Among a large number of algorithms, iterative thresholding algorithms are recognized as one of the most efficient and important classes, mainly due to their low computational complexity, especially in large-scale applications. The aim of this paper is to provide guarantees on the global convergence of a wide class of iterative thresholding algorithms. Since the thresholds of the considered algorithms are set adaptively at each iteration, we call them adaptively iterative thresholding (AIT) algorithms. As the main result, we show that as long as A satisfies a certain coherence property, AIT algorithms find the correct support set within finitely many iterations, and then converge to the original sparse solution exponentially fast once the correct support set has been identified. We also demonstrate that AIT algorithms are robust to the choice of algorithmic parameters. In addition, most existing iterative thresholding algorithms, such as the hard, soft, half, and smoothly clipped absolute deviation (SCAD) thresholding algorithms, are included in the class of AIT algorithms studied in this paper. Comment: 33 pages, 1 figure

    Model selection of polynomial kernel regression

    Polynomial kernel regression is one of the standard and state-of-the-art learning strategies. However, as is well known, the choices of the degree of the polynomial kernel and of the regularization parameter remain open problems in model selection. The first aim of this paper is to develop a strategy for selecting these parameters. On the one hand, based on a worst-case learning rate analysis, we show that the regularization term in polynomial kernel regression is not necessary: the regularization parameter can decrease arbitrarily fast when the degree of the polynomial kernel is suitably tuned. On the other hand, taking the implementation of the algorithm into account, the regularization term is required. In summary, the only effect of the regularization term in polynomial kernel regression is to circumvent the ill-conditioning of the kernel matrix. Based on this, the second purpose of this paper is to propose a new model selection strategy and then design an efficient learning algorithm. Both theoretical and experimental analyses show that the new strategy outperforms the previous one. Theoretically, we prove that the new learning strategy is almost optimal if the regression function is smooth. Experimentally, the new strategy significantly reduces the computational burden without loss of generalization capability. Comment: 29 pages, 4 figures
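    The abstract's point, that the regularization term serves only to stabilize the kernel-matrix solve, can be sketched as follows. The function name, the tiny default `lam`, and the toy data are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def poly_kernel_regress(X, y, degree, lam=1e-8):
    """Polynomial kernel regression sketch. The ridge term lam is kept only
    to make the linear solve numerically stable (kernel-matrix conditioning),
    not for generalization, so it is taken to be tiny."""
    K = (1.0 + X @ X.T) ** degree                    # polynomial kernel matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return lambda Xq: ((1.0 + Xq @ X.T) ** degree) @ alpha

# Noiseless cubic target: a degree-3 polynomial kernel fits it (near-)exactly,
# even though lam is vanishingly small.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(40, 1))
y = 2.0 * X[:, 0] ** 3 - X[:, 0]
f = poly_kernel_regress(X, y, degree=3)
pred = f(np.array([[0.5]]))[0]                       # true value is -0.25
```

    Without the `lam * np.eye(...)` term the solve can fail outright, since the degree-3 kernel matrix on 40 one-dimensional points has rank at most 4; this is exactly the "ill-conditioning" role the abstract assigns to regularization.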

    Improved Ensemble Empirical Mode Decomposition and its Applications to Gearbox Fault Signal Processing

    Abstract: Ensemble empirical mode decomposition (EEMD) is a noise-assisted method and a significant improvement on empirical mode decomposition (EMD). However, the EEMD method lacks a guide for choosing the appropriate amplitude of the added noise, and its computational efficiency is fairly low. To alleviate these problems, the improved complementary EEMD (ICEEMD) method was proposed. The ICEEMD method was then used to analyze realistic gearbox fault signals. The results indicate that the ICEEMD method has advantages over the EEMD method in alleviating mode mixing and mode splitting as well as in reducing the time cost, and that it also outperforms the CEEMD method in alleviating mode mixing and splitting. The paper also indicates that the ICEEMD method seems to be an effective and efficient method for processing gearbox fault signals.
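    The complementary-noise idea underlying CEEMD/ICEEMD can be sketched with a stand-in decomposition. A moving average replaces real EMD sifting here, which is an assumption purely for illustration: because the stand-in is linear, the paired +eps/-eps trials cancel exactly, whereas with real (nonlinear) EMD the cancellation is only approximate, which is why noise amplitude and ensemble size still matter in practice.

```python
import numpy as np

def moving_average(x, w):
    """Crude linear low-pass stand-in for an EMD decomposition step; real
    EMD extracts IMFs by nonlinear sifting, which is omitted here."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def complementary_ensemble(x, n_pairs=50, noise_amp=0.2, w=9, seed=0):
    """Sketch of the complementary-noise idea behind CEEMD/ICEEMD: each
    trial decomposes x + eps and x - eps with the same noise realization
    eps, and the pair is averaged so the added white noise cancels rather
    than leaking into the output. All names are illustrative."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x)
    for _ in range(n_pairs):
        eps = noise_amp * rng.standard_normal(x.shape)
        acc += moving_average(x + eps, w) + moving_average(x - eps, w)
    return acc / (2 * n_pairs)

t = np.linspace(0.0, 1.0, 400)
x = np.sin(2 * np.pi * 5 * t)
out = complementary_ensemble(x)
```

    Plain EEMD averages trials with independent noise, so a residual of order noise_amp / sqrt(n_trials) survives; pairing each noise realization with its negative is what removes that residual at the same computational budget.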