
    Adaptive filters for sparse system identification

    Sparse system identification has attracted much attention in the field of adaptive filtering, and this work studies adaptive filters for sparse system identification. Firstly, a new family of proportionate normalized least mean square (PNLMS) adaptive algorithms is proposed that improves the performance of identifying block-sparse systems. The main proposed algorithm, called block-sparse PNLMS (BS-PNLMS), is based on the optimization of a mixed ℓ2,1 norm of the adaptive filter's coefficients. A block-sparse improved PNLMS (BS-IPNLMS) is also derived for both sparse and dispersive impulse responses. The proposed block-sparse proportionate idea is further extended to both the proportionate affine projection algorithm (PAPA) and the proportionate affine projection sign algorithm (PAPSA). Secondly, a generalized scheme for a family of proportionate algorithms is presented based on convex optimization, and a novel low-complexity reweighted PAPA is derived from this scheme that can achieve both better performance and lower complexity than previous algorithms. The sparseness of the channel is taken into account to improve performance for dispersive system identification, and the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce the computational complexity. Finally, two variable step-size zero-point attracting projection (VSS-ZAP) algorithms for sparse system identification are proposed. The proposed VSS-ZAPs are based on approximations of the difference between the sparseness measure of the current filter coefficients and that of the real channel, which can achieve lower steady-state misalignment and also track changes in the sparse system. --Abstract, page iv
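    To make the block-sparse proportionate idea concrete, the sketch below implements one simplified BS-PNLMS-style update in Python: the per-coefficient gains are derived from the ℓ2 norms of coefficient blocks (the ℓ2,1 grouping mentioned above) rather than from individual coefficient magnitudes as in standard PNLMS. The function name, block length, and parameter defaults are illustrative assumptions, not the dissertation's exact formulation.

        import numpy as np

        def bs_pnlms_step(w, x, d, block_len=4, mu=0.5, rho=0.01, delta=1e-6, eps=1e-8):
            """One illustrative block-sparse proportionate NLMS update.

            Gains are computed per block of coefficients, so every tap in an
            active block is accelerated together. Parameter names and default
            values here are assumptions for illustration only.
            """
            L = len(w)
            assert L % block_len == 0, "filter length must be a multiple of block_len"

            # l2 norm of each coefficient block (the "l2,1" grouping)
            blocks = w.reshape(-1, block_len)
            block_norms = np.linalg.norm(blocks, axis=1)

            # proportionate gains: each block's gain follows its norm, floored by a
            # fraction rho of the largest block norm so inactive blocks keep adapting
            floor = rho * max(block_norms.max(), eps)
            g_blocks = np.maximum(block_norms, floor)
            g = np.repeat(g_blocks, block_len)
            g /= g.sum()                      # normalize gains to sum to one

            # standard proportionate NLMS recursion with the block-wise gain vector
            e = d - np.dot(w, x)              # a priori error
            denom = np.dot(x, g * x) + delta  # regularized normalization term
            return w + mu * e * (g * x) / denom, e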

    Doubly Robust Smoothing of Dynamical Processes via Outlier Sparsity Constraints

    Coping with outliers contaminating dynamical processes is of major importance in various applications because mismatches from nominal models are not uncommon in practice. In this context, the present paper develops novel fixed-lag and fixed-interval smoothing algorithms that are robust to outliers simultaneously present in the measurements and in the state dynamics. Outliers are handled through auxiliary unknown variables that are jointly estimated along with the state based on a least-squares criterion regularized with the ℓ1-norm of the outliers in order to effect sparsity control. The resultant iterative estimators rely on coordinate descent and the alternating direction method of multipliers, are expressed in closed form per iteration, and are provably convergent. Additional attractive features of the novel doubly robust smoother include: i) ability to handle both types of outliers; ii) universality to unknown nominal noise and outlier distributions; iii) flexibility to encompass maximum a posteriori optimal estimators with reliable performance under nominal conditions; and iv) improved performance relative to competing alternatives at comparable complexity, as corroborated via simulated tests. Comment: Submitted to IEEE Trans. on Signal Processing
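    A minimal sketch of the outlier-sparsity idea, under simplifying assumptions (a scalar random-walk state, a quadratic smoothness penalty, and measurement outliers only), is given below: the state and the sparse outlier vector are estimated alternately, with the state obtained from a linear solve and the outliers from soft-thresholding, mirroring the closed-form coordinate-descent steps described above. Parameter names and values are illustrative, not the paper's.

        import numpy as np

        def soft_threshold(r, tau):
            """Elementwise soft-thresholding, the closed-form l1 proximal step."""
            return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

        def robust_smooth(y, lam=10.0, mu=1.0, n_iter=50):
            """Illustrative outlier-sparse smoother for a scalar random-walk state.

            Alternates two closed-form steps in the spirit of the coordinate-descent
            estimator described in the abstract:
              1) smooth the outlier-corrected data by solving a linear system,
              2) re-estimate measurement outliers by soft-thresholding the residuals.
            The scalar model and the parameter choices are simplifying assumptions.
            """
            T = len(y)
            # D is the first-difference operator; the x-update solves
            # (I + lam * D^T D) x = y - o
            D = np.diff(np.eye(T), axis=0)
            A = np.eye(T) + lam * D.T @ D

            x = y.copy()
            o = np.zeros(T)
            for _ in range(n_iter):
                x = np.linalg.solve(A, y - o)          # closed-form state update
                o = soft_threshold(y - x, mu / 2.0)    # sparse outlier update
            return x, o

        # usage: a noisy ramp with two gross measurement outliers
        t = np.arange(100)
        y = 0.05 * t + 0.1 * np.random.randn(100)
        y[[20, 70]] += 5.0
        x_hat, o_hat = robust_smooth(y)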

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th, to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Profile Likelihood Biclustering

    Biclustering, the process of simultaneously clustering the rows and columns of a data matrix, is a popular and effective tool for finding structure in a high-dimensional dataset. Many biclustering procedures appear to work well in practice, but most do not have associated consistency guarantees. To address this shortcoming, we propose a new biclustering procedure based on profile likelihood. The procedure applies to a broad range of data modalities, including binary, count, and continuous observations. We prove that the procedure recovers the true row and column classes as the dimensions of the data matrix tend to infinity, even if the functional form of the data distribution is misspecified. The procedure requires a combinatorial search, which can be expensive in practice. Rather than performing this search directly, we propose a new heuristic optimization procedure based on the Kernighan-Lin heuristic, which has nice computational properties and performs well in simulations. We demonstrate our procedure with applications to congressional voting records and microarray analysis. Comment: 40 pages, 11 figures; R package in development at https://github.com/patperry/biclustp
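    The sketch below illustrates the flavor of such a local-search heuristic under a Gaussian working model: rows and columns are greedily relabeled to reduce the within-block sum of squared errors, a profile-likelihood surrogate in which each block is summarized by its mean. This is a simplification for illustration, not the paper's Kernighan-Lin procedure, and the class counts K and L are assumed known.

        import numpy as np

        def block_sse(X, rows, cols, K, L):
            """Within-block sum of squared errors: a Gaussian profile-likelihood
            surrogate in which each (row class, column class) block is fit by its mean."""
            sse = 0.0
            for k in range(K):
                for l in range(L):
                    block = X[np.ix_(rows == k, cols == l)]
                    if block.size:
                        sse += np.sum((block - block.mean()) ** 2)
            return sse

        def greedy_bicluster(X, K=2, L=2, n_sweeps=20, seed=0):
            """Illustrative alternating label-swap heuristic for biclustering.

            Greedily moves one row (or column) at a time to the class that lowers
            the block SSE, sweeping until no single move helps. This captures the
            flavor of a Kernighan-Lin-style local search but is an assumption-laden
            simplification of the paper's procedure.
            """
            rng = np.random.default_rng(seed)
            n, p = X.shape
            rows = rng.integers(K, size=n)
            cols = rng.integers(L, size=p)
            best = block_sse(X, rows, cols, K, L)
            for _ in range(n_sweeps):
                improved = False
                for i in range(n):                      # try relabeling each row
                    for k in range(K):
                        old = rows[i]
                        if k == old:
                            continue
                        rows[i] = k
                        cand = block_sse(X, rows, cols, K, L)
                        if cand < best:
                            best, improved = cand, True
                        else:
                            rows[i] = old
                for j in range(p):                      # try relabeling each column
                    for l in range(L):
                        old = cols[j]
                        if l == old:
                            continue
                        cols[j] = l
                        cand = block_sse(X, rows, cols, K, L)
                        if cand < best:
                            best, improved = cand, True
                        else:
                            cols[j] = old
                if not improved:
                    break
            return rows, cols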