
    A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

    Stochastic approximation techniques play an important role in solving many problems encountered in machine learning or adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too intensive, so they must be estimated online from the observed signals. For batch optimization of an objective function that is the sum of a data fidelity term and a penalization (e.g. a sparsity-promoting function), Majorize-Minimize (MM) methods have recently attracted much interest since they are fast, highly flexible, and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case where the data fidelity term corresponds to a least squares criterion and the cost function is replaced by a sequence of stochastic approximations of it. In this context, we propose an online version of an MM subspace algorithm and study its convergence using suitable probabilistic tools. Simulation results illustrate the good practical performance of the proposed algorithm associated with a memory gradient subspace, when applied to both non-adaptive and adaptive filter identification problems.
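
    As a rough illustration of the kind of iteration the abstract describes, the sketch below combines running estimates of the second-order statistics with a quadratic majorant minimized over a memory-gradient subspace. The smooth penalty sqrt(w^2 + delta^2), the function name, and all parameter values are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def online_mm_memory_gradient(samples, n, lam=0.05, delta=1e-2):
    """Hypothetical sketch of an online MM memory-gradient subspace
    iteration for penalized least squares.  `samples` yields (x, y)
    pairs; the sparsity-promoting penalty is sqrt(w^2 + delta^2),
    majorized by a quadratic at the current iterate."""
    R = np.zeros((n, n))          # running estimate of E[x x^T]
    r = np.zeros(n)               # running estimate of E[x y]
    w = np.zeros(n)               # current filter estimate
    d_prev = np.zeros(n)          # memory direction from the last step
    for t, (x, y) in enumerate(samples, start=1):
        # stochastic approximation of the second-order statistics
        R += (np.outer(x, x) - R) / t
        r += (x * y - r) / t
        # gradient of the penalized LS cost at the current iterate
        omega = 1.0 / np.sqrt(w**2 + delta**2)   # majorant curvature of the penalty
        g = R @ w - r + lam * w * omega
        # memory-gradient subspace spanned by -g and the previous step
        D = np.column_stack([-g, d_prev])
        # majorant Hessian A = R + lam * diag(omega), applied to D
        AD = R @ D + lam * (omega[:, None] * D)
        # minimize the quadratic majorant on span(D): u* = -(D^T A D)^+ D^T g
        u = -np.linalg.pinv(D.T @ AD) @ (D.T @ g)
        step = D @ u
        w += step
        d_prev = step
    return w
```

    Restricting each MM step to the two-dimensional span of the negative gradient and the previous direction reduces the per-iteration subspace solve to a 2x2 system, which is what makes this kind of subspace variant attractive in an online setting.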

    Study of L0-norm constraint normalized subband adaptive filtering algorithm

    Limited by a fixed step size and sparsity penalty factor, conventional sparsity-aware normalized subband adaptive filtering (NSAF) algorithms suffer from a trade-off between high filtering accuracy and fast convergence. To deal with this problem, this paper proposes variable step-size L0-norm constraint NSAF algorithms (VSS-L0-NSAFs) for sparse system identification. We first analyze the mean-square-deviation (MSD) behavior of the L0-NSAF algorithm based on a novel recursive form, and derive the corresponding expressions for the cases where the background noise variance is available and unavailable; the correlation degree of the system input is indicated by a scaling parameter r. Based on these derivations, we develop an effective variable step-size scheme by minimizing upper bounds on the MSD under reasonable assumptions and a lemma. To further improve performance, an effective reset strategy is incorporated into the presented algorithms to handle non-stationary situations. Finally, numerical simulations corroborate that the proposed algorithms achieve better estimation accuracy and tracking capability than existing related algorithms in sparse system identification and adaptive echo cancellation scenarios.
    Comment: 15 pages, 15 figures
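
    For intuition, here is a minimal sketch of an L0-norm constraint NSAF update: a two-band Haar analysis filter bank splits the signals into subbands, each subband contributes a normalized error term, and a zero attractor derived from a first-order approximation of the L0 norm promotes sparsity. The filter bank, the fixed step size mu (standing in for the paper's MSD-derived variable step size, which is not reproduced here), and all names are assumptions for illustration.

```python
import numpy as np

def l0_nsaf(x, d, M, mu=0.5, kappa=1e-4, beta=10.0, eps=1e-6):
    """Hypothetical L0-NSAF sketch: identify an M-tap sparse filter
    from input x and desired signal d with a 2-band Haar filter bank."""
    h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Haar analysis filters
    N = h.shape[0]                       # number of subbands = decimation factor
    w = np.zeros(M)
    # subband-decomposed input and desired signals (decimated inside the loop)
    xs = np.array([np.convolve(x, hi)[:len(x)] for hi in h])
    ds = np.array([np.convolve(d, hi)[:len(d)] for hi in h])
    for k in range(M - 1, len(x), N):    # one update per decimated instant
        grad = np.zeros(M)
        for i in range(N):
            u = xs[i, k - M + 1:k + 1][::-1]   # subband regressor, most recent first
            e = ds[i, k] - u @ w               # decimated subband error
            grad += u * e / (u @ u + eps)      # per-subband normalization
        # zero attractor: gradient of sum(1 - exp(-beta*|w|)), first-order Taylor
        za = beta * np.sign(w) * np.clip(1.0 - beta * np.abs(w), 0.0, None)
        w += mu * grad - kappa * za
    return w
```

    The zero attractor only acts on coefficients with magnitude below 1/beta, pulling near-zero taps toward exactly zero while leaving large taps essentially untouched; this is the standard device by which L0-penalized adaptive filters accelerate convergence on sparse systems.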

    Analyzing sparse dictionaries for online learning with kernels

    Many signal processing and machine learning methods share essentially the same linear-in-the-parameters model, with as many parameters as available samples, as in kernel-based machines. Sparse approximation is essential in many disciplines, and new challenges are emerging in online learning with kernels. To this end, several sparsity measures have been proposed in the literature to quantify sparse dictionaries and to construct relevant ones, the most prominent being the distance, approximation, coherence, and Babel measures. In this paper, we analyze sparse dictionaries based on these measures. By conducting an eigenvalue analysis, we show that these sparsity measures share many properties, including guaranteeing linear independence of the atoms and inducing a well-posed optimization problem. Furthermore, we prove that there exists a quasi-isometry between the parameter (i.e., dual) space and the dictionary's induced feature space.
    Comment: 10 pages
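
    As a concrete example of one such measure, the sketch below builds a dictionary under a coherence criterion: a candidate atom is admitted only if its largest kernel correlation with the existing atoms stays below a threshold mu0. The Gaussian kernel, the threshold, and all names are illustrative assumptions; a Babel-style criterion would replace the maximum with a sum over the current atoms.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel between two sample vectors."""
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2.0 * sigma ** 2))

def coherence_sparsification(stream, mu0=0.5, sigma=1.0):
    """Hypothetical online dictionary construction: keep a sample only
    if it is sufficiently incoherent with the atoms already kept."""
    dictionary = []
    for x in stream:
        # coherence of the candidate with the current dictionary
        if not dictionary or max(gaussian_kernel(x, dj, sigma)
                                 for dj in dictionary) <= mu0:
            dictionary.append(x)
    return dictionary
```

    Keeping the coherence below mu0 < 1 bounds the dictionary's Gram-matrix eigenvalues away from zero, which is the kind of linear-independence and well-posedness property the eigenvalue analysis in the abstract refers to.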