A computationally efficient implementation of quadratic time-frequency distributions
Time-frequency distributions (TFDs) are computationally intensive methods. A very common class of TFDs, namely quadratic TFDs, is obtained by time-frequency (TF) smoothing of the Wigner-Ville distribution (WVD). In this paper a computationally efficient implementation of this class of TFDs is presented. In order to avoid the artifacts caused by circular convolution, linear convolution is applied in both the time and frequency directions. Four different kernel types are identified, and a separate optimised implementation is presented for each kernel type. The computational complexity is presented for each of the kernel types.
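The artifact the abstract refers to can be seen in a toy smoothing example (this is an illustration of circular-vs-linear convolution in general, not the paper's TFD implementation): FFT-based circular convolution wraps energy around the array boundary, while zero-padded linear convolution does not.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical illustration (not the paper's implementation): smooth a
# signal whose energy sits at the END of the array with a short kernel.
x = np.zeros(64)
x[-8:] = 1.0                      # energy concentrated at the end
kernel = np.hanning(16)
kernel /= kernel.sum()            # unit-gain smoothing kernel

# Circular convolution: multiply same-length FFTs, then invert.
# The smoothed tail wraps around to the START of the array.
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel, 64)))

# Linear convolution: zero-padded internally, so nothing wraps around.
lin = fftconvolve(x, kernel)[:64]

# circ[0] picks up wrapped-around energy; lin[0] stays (numerically) zero.
print(circ[0], lin[0])
```

Applying such linear convolution along both the time and frequency axes is what removes the wrap-around artifacts the abstract mentions.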
High-Performance FPGA Implementation of Equivariant Adaptive Separation via Independence Algorithm for Independent Component Analysis
Independent Component Analysis (ICA) is a dimensionality reduction technique
that can boost efficiency of machine learning models that deal with probability
density functions, e.g. Bayesian neural networks. Algorithms that implement
adaptive ICA converge more slowly than their nonadaptive counterparts; however, they
are capable of tracking changes in underlying distributions of input features.
This intrinsically slow convergence of adaptive methods, combined with existing
hardware implementations that operate at very low clock frequencies, necessitates
fundamental improvements in both algorithm and hardware design. This paper
presents an algorithm that allows efficient hardware implementation of ICA.
Compared to previous work, our FPGA implementation of adaptive ICA improves
clock frequency by at least one order of magnitude and throughput by at least
two orders of magnitude. Our proposed algorithm is not limited to ICA and can
be used in various machine learning problems that use stochastic gradient
descent optimization.
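The adaptive algorithm named in the title, Equivariant Adaptive Separation via Independence (EASI), updates a separating matrix one sample at a time. The following is a minimal NumPy sketch of an EASI-style serial update (after Cardoso and Laheld); the sources, mixing matrix, nonlinearity, and learning rate are illustrative choices, not the paper's FPGA design.

```python
import numpy as np

# Illustrative setup (not from the paper): two sub-Gaussian sources
# mixed by an unknown matrix A; EASI adapts W so that y = W x recovers
# the sources up to scaling and permutation.
rng = np.random.default_rng(1)
n = 5000
s = np.vstack([np.sign(np.sin(2 * np.pi * 0.013 * np.arange(n))),  # square wave
               rng.uniform(-1.0, 1.0, n)])                          # uniform noise
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # unknown mixing matrix
x = A @ s                                  # observed mixtures

W = np.eye(2)                              # separating matrix, updated per sample
mu = 0.002                                 # small fixed learning rate
for t in range(n):
    y = W @ x[:, t]
    g = y ** 3                             # odd nonlinearity for sub-Gaussian sources
    # Relative-gradient EASI step: whitening term plus skew-symmetric
    # higher-order term, applied multiplicatively to W.
    grad = (np.outer(y, y) - np.eye(2)) + np.outer(g, y) - np.outer(y, g)
    W -= mu * grad @ W

# After adaptation, W @ A should move toward a scaled permutation matrix.
print(np.round(W @ A, 2))
```

Because each step touches only one sample, this serial structure is what makes the method attractive for streaming hardware, at the cost of the slower convergence the abstract describes.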
Approximating Probability Densities by Iterated Laplace Approximations
The Laplace approximation is an old, but frequently used method to
approximate integrals for Bayesian calculations. In this paper we develop an
extension of the Laplace approximation, by applying it iteratively to the
residual, i.e., the difference between the current approximation and the true
function. The final approximation is thus a linear combination of multivariate
normal densities, where the coefficients are chosen to achieve a good fit to
the target distribution. We illustrate on real and artificial examples that the
proposed procedure is a computationally efficient alternative to current
approaches for approximation of multivariate probability densities. The
R-package iterLap implementing the methods described in this article is
available from the CRAN servers.
Comment: to appear in Journal of Computational and Graphical Statistics,
http://pubs.amstat.org/loi/jcg
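The iteration the abstract describes can be sketched in one dimension: place a normal component at the mode of the current residual via a Laplace step, then refit all mixture weights jointly. This toy sketch differs in detail from the iterLap R package; the target density, grid, and number of iterations are illustrative choices.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

def target(x):
    # Illustrative bimodal target density (not from the paper).
    return 0.6 * norm.pdf(x, -2.0, 0.7) + 0.4 * norm.pdf(x, 2.5, 1.0)

grid = np.linspace(-8.0, 8.0, 801)
h = grid[1] - grid[0]
f = target(grid)

comps = []                          # (mean, sd) of each normal component
approx = np.zeros_like(f)
for _ in range(4):                  # number of Laplace iterations
    resid = f - approx
    i = np.clip(np.argmax(resid), 1, len(grid) - 2)   # mode of the residual
    mu = grid[i]
    # Laplace step: sd from the curvature of log residual at the mode.
    r = np.maximum(resid, 1e-12)
    d2 = (np.log(r[i + 1]) - 2 * np.log(r[i]) + np.log(r[i - 1])) / h ** 2
    sd = 1.0 / np.sqrt(max(-d2, 1e-6))
    comps.append((mu, sd))
    # Refit ALL mixture weights jointly by nonnegative least squares,
    # so the approximation is a linear combination of normal densities.
    B = np.column_stack([norm.pdf(grid, m, s) for m, s in comps])
    weights, _ = nnls(B, f)
    approx = B @ weights

max_err = np.max(np.abs(f - approx))
print(f"components: {len(comps)}, max abs error: {max_err:.4f}")
```

Each pass spends one cheap Laplace approximation on whatever the current mixture misses, which is why a handful of components can capture a multimodal target that a single Laplace approximation cannot.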