4,813 research outputs found

    Non-parametric linear time-invariant system identification by discrete wavelet transforms

    No full text
    We describe the use of the discrete wavelet transform (DWT) for non-parametric linear time-invariant system identification. Identification is achieved by using a test excitation to the system under test (SUT) that also acts as the analyzing function for the DWT of the SUT's output, so as to recover the impulse response. The method can use as its excitation any signal that gives an orthogonal inner product in the DWT at some step size greater than 1. We favor wavelet scaling coefficients as excitations, with a step size of 2. However, the system impulse or frequency response can then only be estimated at half the available number of points of the sampled output sequence, introducing a multirate problem that means we have to 'oversample' the SUT output. The method has several advantages over existing techniques, e.g., it uses a simple, easy-to-generate excitation, and avoids the singularity problems and the (unbounded) accumulation of round-off errors that can occur with standard techniques. In extensive simulations, identification of a variety of finite and infinite impulse response systems is shown to be considerably better than with conventional system identification methods.
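    A minimal sketch, assuming PyWavelets is available and picking the db4 wavelet arbitrarily, of the property the abstract requires of the excitation: its inner products with its own shifts at the DWT step size (2 here) must be orthonormal. The full identification scheme and the multirate 'oversampling' of the SUT output are not reproduced here.

        # Check that the scaling (low-pass) filter's shifts by 2 are orthonormal,
        # i.e. sum_n g[n] g[n - 2k] = delta[k], the condition the excitation needs.
        import numpy as np
        import pywt

        g = np.array(pywt.Wavelet("db4").dec_lo)   # db4 chosen only as an example

        full = np.correlate(g, g, mode="full")     # autocorrelation, lags -(L-1)..L-1
        centre = len(g) - 1                        # index of lag 0
        even_lags = full[centre::2]                # lags 0, 2, 4, ...
        print(np.round(even_lags, 10))             # approximately [1. 0. 0. 0.]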

    Pipelined genetic propagation

    Get PDF
    © 2015 IEEE. Genetic Algorithms (GAs) are a class of numerical and combinatorial optimisers which are especially useful for solving complex non-linear and non-convex problems. However, the required execution time often limits their application to small-scale or latency-insensitive problems, so techniques to increase the computational efficiency of GAs are needed. FPGA-based acceleration has significant potential for speeding up genetic algorithms, but existing FPGA GAs are limited by the generational approaches inherited from software GAs. Many parts of the generational approach do not map well to hardware, such as the large shared population memory and intrinsic loop-carried dependency. To address this problem, this paper proposes a new hardware-oriented approach to GAs, called Pipelined Genetic Propagation (PGP), which is intrinsically distributed and pipelined. PGP represents a GA solver as a graph of loosely coupled genetic operators, which allows the solution to be scaled to the available resources, and also to dynamically change topology at run-time to explore different solution strategies. Experiments show that pipelined genetic propagation is effective in solving seven different applications. Our PGP design is 5 times faster than a recent FPGA-based GA system, and 90 times faster than a CPU-based GA system.
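    A software sketch of the dataflow idea described above, not the paper's FPGA design: genetic operators are loosely coupled stages connected by small queues, and candidate solutions stream through the graph instead of being synchronised into generations. The one-max objective, the three-stage ring topology and all parameters are illustrative assumptions.

        import random
        from collections import deque

        GENOME_LEN = 32

        def fitness(ind):
            return sum(ind)                       # toy "one-max" objective

        def mutate(ind, rate=1.0 / GENOME_LEN):
            return [b ^ (random.random() < rate) for b in ind]

        def crossover(a, b):
            cut = random.randrange(1, GENOME_LEN)
            return a[:cut] + b[cut:]

        def random_ind():
            return [random.randint(0, 1) for _ in range(GENOME_LEN)]

        # Each stage holds a tiny local buffer instead of one shared population
        # memory; stages pass individuals to the next stage's queue in a ring.
        queues = {"select": deque(random_ind() for _ in range(16)),
                  "cross":  deque(),
                  "mutate": deque()}

        best = 0
        for _ in range(20000):
            if len(queues["select"]) >= 2:        # binary tournament selection
                a, b = queues["select"].popleft(), queues["select"].popleft()
                queues["cross"].append(a if fitness(a) >= fitness(b) else b)
            if len(queues["cross"]) >= 2:         # crossover of two winners
                a, b = queues["cross"].popleft(), queues["cross"].popleft()
                for child in (crossover(a, b), crossover(b, a), a, b):
                    queues["mutate"].append(child)
            if queues["mutate"]:                  # mutate and feed back round the ring
                child = mutate(queues["mutate"].popleft())
                best = max(best, fitness(child))
                queues["select"].append(child)

        print("best fitness:", best, "/", GENOME_LEN)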

    Optimising Sparse Matrix Vector multiplication for large scale FEM problems on FPGA

    Get PDF
    Sparse Matrix Vector multiplication (SpMV) is an important kernel in many scientific applications. In this work we propose an architecture and an automated customisation method that detects block-diagonal sparse matrices and optimises the architecture for them. We evaluate the proposed approach in the context of the spectral/hp Finite Element Method, using the local matrix assembly approach. This problem leads to a large sparse system of linear equations with a block-diagonal matrix, which is typically solved using an iterative method such as the Preconditioned Conjugate Gradient. The efficiency of the proposed architecture combined with the effectiveness of the proposed customisation method reduces BRAM resource utilisation by as much as 10 times, while achieving identical throughput to existing state-of-the-art designs and requiring minimal development effort from the end user. In the context of the Finite Element Method, our approach enables the solution of larger problems than previously possible, extending the applicability of FPGAs to more interesting HPC problems.
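    A software illustration, under assumed block sizes and counts, of the block-diagonal structure that the customisation method targets: with local matrix assembly the global matrix is a sequence of small dense elemental blocks, so SpMV decomposes into independent dense block matrix-vector products and no global sparsity indexing has to be stored. This is a sketch of the data layout, not the FPGA architecture.

        import numpy as np
        import scipy.sparse as sp

        n_blocks, block = 1024, 8                        # e.g. 8x8 elemental blocks (assumed)
        blocks = np.random.rand(n_blocks, block, block)  # only the dense blocks are stored
        x = np.random.rand(n_blocks * block)

        # Block-diagonal SpMV: y_i = A_i @ x_i for each block, trivially parallel
        y = np.einsum("bij,bj->bi", blocks, x.reshape(n_blocks, block)).ravel()

        # Reference: assemble the full sparse matrix explicitly and compare
        A = sp.block_diag(list(blocks), format="csr")
        assert np.allclose(y, A @ x)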

    An efficient sparse conjugate gradient solver using a Beneš permutation network

    Get PDF
    © 2014 Technical University of Munich (TUM). The conjugate gradient (CG) method is one of the most widely used iterative methods for solving systems of linear equations. However, parallelizing CG for large sparse systems is difficult due to the inherent irregularity in the memory access pattern. We propose a novel processor architecture for the sparse conjugate gradient method. The architecture consists of multiple processing elements and memory banks, and is able to compute efficiently both sparse matrix-vector multiplication and other dense vector operations. A Beneš permutation network with an optimised control scheme is introduced to reduce memory bank conflicts without expensive logic. We describe a heuristic for offline scheduling, the effect of which is captured in a parametric model for estimating the performance of designs generated from our approach.
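    A rough software model, not the proposed hardware, of the bank-conflict problem that the Beneš network and the offline scheduling heuristic address: when several processing elements fetch vector entries in parallel during SpMV, column indices that map to the same memory bank must be serialised. The cyclic bank mapping, lane count and matrix density below are illustrative assumptions.

        import numpy as np
        import scipy.sparse as sp

        P = 8                                            # parallel lanes / memory banks (assumed)
        A = sp.random(1024, 1024, density=0.01, format="csr", random_state=0)

        cols = A.indices                                 # column-index stream driving x[] fetches
        extra_cycles = 0
        for i in range(0, len(cols) - P + 1, P):
            banks = cols[i:i + P] % P                    # simple cyclic bank mapping
            worst = np.bincount(banks, minlength=P).max()
            extra_cycles += worst - 1                    # a conflict-free schedule needs 1 cycle

        print(f"{extra_cycles} extra cycles over {len(cols) // P} parallel fetches "
              f"without reordering")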

    Power-Adaptive Computing System Design for Solar-Energy-Powered Embedded Systems

    Get PDF

    Multiplierless Algorithm for Multivariate Gaussian Random Number Generation in FPGAs

    Get PDF