
    High-Performance FPGA Implementation of Equivariant Adaptive Separation via Independence Algorithm for Independent Component Analysis

    Independent Component Analysis (ICA) is a dimensionality reduction technique that can boost the efficiency of machine learning models that deal with probability density functions, e.g. Bayesian neural networks. Algorithms that implement adaptive ICA converge more slowly than their nonadaptive counterparts, but they can track changes in the underlying distributions of the input features. This intrinsically slow convergence, combined with existing hardware implementations that operate at very low clock frequencies, necessitates fundamental improvements in both algorithm and hardware design. This paper presents an algorithm that allows efficient hardware implementation of ICA. Compared to previous work, our FPGA implementation of adaptive ICA improves clock frequency by at least one order of magnitude and throughput by at least two orders of magnitude. Our proposed algorithm is not limited to ICA and can be used in various machine learning problems that use stochastic gradient descent optimization.
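
    The hardware-friendly variant itself is not detailed in the abstract, but the underlying EASI stochastic-gradient update is standard; a minimal NumPy sketch of that baseline rule (the learning rate and tanh nonlinearity are illustrative assumptions, not the paper's choices) is:

```python
import numpy as np

def easi_step(B, x, lr=1e-3, g=np.tanh):
    """One stochastic-gradient EASI update on the separating matrix B.

    B  -- current (n, n) separating matrix
    x  -- one (n,) mixed observation
    g  -- elementwise nonlinearity (tanh is an assumed choice)
    """
    y = B @ x                                   # current source estimate
    I = np.eye(len(y))
    # Relative-gradient term; it vanishes when the outputs are
    # independent with unit variance, so B stops moving at a solution.
    G = (np.outer(y, y) - I) + (np.outer(g(y), y) - np.outer(y, g(y)))
    return B - lr * G @ B

# Toy usage: adaptively unmix a 2-source instantaneous mixture.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))                     # unknown mixing matrix
B = np.eye(2)
for _ in range(20000):
    s = rng.laplace(size=2)                     # super-Gaussian sources
    B = easi_step(B, A @ s)
```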

    Stochastic trapping in a solvable model of on-line independent component analysis

    Previous analytical studies of on-line Independent Component Analysis (ICA) learning rules have focused on asymptotic stability and efficiency. In practice, the transient stages of learning are often more significant in determining the success of an algorithm. This is demonstrated here with an analysis of a Hebbian ICA algorithm that can find a small number of non-Gaussian components given data composed of a linear mixture of independent source signals. An idealised data model is considered in which the sources comprise a number of non-Gaussian and Gaussian sources, and a solution to the dynamics is obtained in the limit where the number of Gaussian sources is infinite. Previous stability results are confirmed by expanding around optimal fixed points, where a closed-form solution to the learning dynamics is obtained. However, stochastic effects are shown to stabilise otherwise unstable sub-optimal fixed points. Conditions required to destabilise one such fixed point are obtained for the case of a single non-Gaussian component, indicating that the initial learning rate η required to successfully escape is very low (η = O(N^-2), where N is the data dimension), resulting in very slow learning that typically requires O(N^3) iterations. Simulations confirm that this picture holds for a finite system. Comment: 17 pages, 3 figures. To appear in Neural Computation.
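
    The scaling result is concrete enough to illustrate. Below is a generic one-unit Hebbian ICA update of the kind analysed, with the learning rate set at the O(N^-2) scale the paper identifies; the cubic nonlinearity and explicit renormalisation are assumptions for the sketch, not details taken from the paper:

```python
import numpy as np

def hebbian_ica_step(w, x, eta, phi=lambda u: u ** 3):
    """One-unit Hebbian ICA update followed by renormalisation.
    phi is the update nonlinearity (cubic here, an assumed choice)."""
    w = w + eta * phi(w @ x) * x            # Hebbian correlation step
    return w / np.linalg.norm(w)            # keep |w| = 1

N = 1000                                    # data dimension
eta = 0.5 / N ** 2                          # the O(N^-2) rate from the analysis
# Escaping a sub-optimal fixed point at this rate then takes
# on the order of N**3 iterations, as the paper's dynamics predict.
```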

    Angular CMA: A modified Constant Modulus Algorithm providing steering angle updates

    Conventional blind beamforming algorithms have no direct notion of the physical Direction of Arrival (DoA) angle of an impinging signal. These blind adaptive algorithms operate by adjusting the complex steering vector as signal conditions and directions change. This paper presents Angular CMA, a blind beamforming method that calculates steering angle updates (instead of weight vector updates) to keep track of the desired signal. Angular CMA and its steering angle updates are particularly useful in the context of mixed-signal hierarchical arrays as a means of finding and distributing steering parameters. Simulations of Angular CMA show promising convergence behaviour, with lower complexity than alternative methods (e.g., MUSIC).
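
    The central idea, descending the constant-modulus cost J = (|y|^2 - 1)^2 directly in the steering angle rather than in the weight vector, can be sketched as follows. A half-wavelength ULA and a plain stochastic-gradient step are assumed here; the paper's exact update rule and its hierarchical-array machinery are not reproduced:

```python
import numpy as np

def steering(theta, M):
    """Half-wavelength ULA steering vector (an assumed array geometry)."""
    n = np.arange(M)
    return np.exp(1j * np.pi * n * np.sin(theta))

def angular_cma_step(theta, x, mu=1e-3):
    """One gradient step on the CM cost J = (|y|^2 - 1)^2 taken directly
    in the steering angle theta instead of in the complex weights."""
    M = len(x)
    a = steering(theta, M)
    da = 1j * np.pi * np.arange(M) * np.cos(theta) * a    # d a / d theta
    y = a.conj() @ x / M                                  # beamformer output
    dy = da.conj() @ x / M                                # d y / d theta
    grad = 2.0 * (np.abs(y) ** 2 - 1.0) * 2.0 * np.real(np.conj(y) * dy)
    return theta - mu * grad
```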

    A class of constant modulus algorithms for uniform linear arrays with a conjugate symmetric constraint

    A class of constant modulus algorithms (CMAs) subject to a conjugate symmetric constraint is proposed for blind beamforming based on the uniform linear array structure. The constraint is derived from the beamformer with an optimum output signal-to-interference-plus-noise ratio (SINR). The effect of the additional constraint is equivalent to adding a second step to the original adaptive algorithms. The proposed approach is general and can be applied both to the traditional CMA and to its many variants, such as the linearly constrained CMA (LCCMA) and the least squares CMA (LSCMA). With this constraint, the modified CMAs always generate a weight vector of the desired form at each update, and the number of adaptive variables is effectively reduced by half, leading to much improved overall performance.
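
    For a uniform linear array the constraint amounts to requiring w = Jw*, where J is the exchange (reversal) matrix, and the "second step" is simply a projection onto that set after the usual CMA update. A minimal sketch (the step size and unit modulus target are illustrative choices):

```python
import numpy as np

def project_conjugate_symmetric(w):
    """Second step appended to the CMA update: project the weights onto
    the conjugate symmetric set w = J w* (J reverses element order)."""
    return 0.5 * (w + np.conj(w[::-1]))

def cma_cs_step(w, x, mu=1e-3):
    """Standard CMA stochastic-gradient step on J = (|y|^2 - 1)^2,
    followed by the conjugate symmetric projection."""
    y = w.conj() @ x                           # beamformer output
    e = (np.abs(y) ** 2 - 1.0) * y             # constant-modulus error term
    w = w - mu * np.conj(e) * x                # gradient update
    return project_conjugate_symmetric(w)
```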

    Adaptive Langevin Sampler for Separation of t-Distribution Modelled Astrophysical Maps

    We propose to model the image differentials of astrophysical source maps by Student's t-distribution and to use them as priors in a Bayesian source separation method. We introduce an efficient Markov Chain Monte Carlo (MCMC) sampling scheme to unmix the astrophysical sources and describe the derivation in detail. In this scheme, we use the Langevin stochastic equation for transitions, which enables parallel drawing of random samples from the posterior and reduces the computation time significantly (by two orders of magnitude). In addition, the Student's t-distribution parameters are updated throughout the iterations. The results on astrophysical source separation are assessed with two performance criteria defined in the pixel and frequency domains. Comment: 12 pages, 6 figures.
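
    A generic unadjusted Langevin transition of the kind described, paired with the gradient of a Student's t log-prior on image differentials, can be sketched as below; the degrees of freedom and scale are placeholder values, since the paper re-estimates these parameters during sampling:

```python
import numpy as np

def t_logprior_grad(u, nu=3.0, sigma=1.0):
    """Gradient of the log Student-t density, applied elementwise to
    image differentials (nu, sigma are placeholders; the paper updates
    them throughout the iterations)."""
    return -(nu + 1.0) * u / (nu * sigma ** 2 + u ** 2)

def langevin_step(s, grad_log_post, eps=1e-2, rng=None):
    """One unadjusted Langevin transition: every pixel moves along the
    posterior gradient plus Gaussian noise simultaneously, which is what
    allows samples to be drawn in parallel. eps is an assumed step size."""
    if rng is None:
        rng = np.random.default_rng()
    return s + 0.5 * eps ** 2 * grad_log_post(s) + eps * rng.standard_normal(s.shape)
```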