
    A field programmable gate array based modular motion control platform

    The expectations placed on motion control systems rise continually. As systems grow more complex, conventional motion control systems cannot meet all specifications with optimized results, which necessitates fundamental changes to the system infrastructure. Field programmable gate array (FPGA) technology enables reconfiguration of the digital hardware, eliminating the need for infrastructural changes to accommodate minor hardware modifications even after the system is deployed. An FPGA-based hardware system reduces the size of the hardware and hence its cost. FPGAs also offer better power ratings and a more reliable system with improved performance. As a trade-off, development is considerably more difficult than for software-based systems, which lengthens the research and development time of the overall system. In this paper a level of abstraction is introduced to reduce the advanced hardware description language (HDL) knowledge required to implement motion control systems entirely on an FPGA. The intellectual property library consists of synthesizable hardware modules implemented specifically for motion control purposes. Other parts of a motion control system, such as the user interface and trajectory generation, are implemented as software functions in order to preserve the modularity of the system. There are also several external hardware designs for interfacing with and driving various types of actuators.
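
    As an illustration of the software-side trajectory generation mentioned in the abstract, the Python sketch below computes position setpoints along a trapezoidal velocity profile. It is a minimal toy model: the function name, parameters, and the register-write step are assumptions for illustration, not the paper's actual API (the paper's motion control modules are synthesizable HDL IP cores).

        # Minimal sketch of software-side trajectory generation for a modular
        # motion control platform. Hypothetical, illustrative API only.

        def trapezoidal_profile(distance, v_max, a_max, dt=0.001):
            """Yield per-sample position setpoints for a trapezoidal move."""
            t_acc = v_max / a_max                # time to reach cruise velocity
            d_acc = 0.5 * a_max * t_acc ** 2     # distance covered accelerating
            if 2 * d_acc > distance:             # triangular profile: v_max unreachable
                t_acc = (distance / a_max) ** 0.5
                v_max = a_max * t_acc
                d_acc = 0.5 * distance
            t_cruise = (distance - 2 * d_acc) / v_max
            t_total = 2 * t_acc + t_cruise
            t = 0.0
            while t <= t_total:
                if t < t_acc:                    # acceleration phase
                    pos = 0.5 * a_max * t ** 2
                elif t < t_acc + t_cruise:       # constant-velocity phase
                    pos = d_acc + v_max * (t - t_acc)
                else:                            # deceleration phase
                    t_dec = t_total - t
                    pos = distance - 0.5 * a_max * t_dec ** 2
                yield pos
                t += dt

        # Each setpoint would then be streamed to an FPGA position-control
        # IP core, e.g. through a memory-mapped register interface.
        for setpoint in trapezoidal_profile(distance=0.5, v_max=0.2, a_max=1.0):
            pass  # write setpoint to the hardware register here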

    Predictive control using an FPGA with application to aircraft control

    Alternative and more efficient computational methods can extend the applicability of MPC to systems with tight real-time requirements. This paper presents a "system-on-a-chip" MPC system, implemented on a field programmable gate array (FPGA), consisting of a sparse structure-exploiting primal-dual interior point (PDIP) QP solver for MPC reference tracking and a fast gradient QP solver for steady-state target calculation. A parallel reduced-precision iterative solver is used to accelerate the solution of the set of linear equations forming the computational bottleneck of the PDIP algorithm. A numerical study of the effect of reducing the number of iterations highlights the effectiveness of the approach. The system is demonstrated with an FPGA-in-the-loop testbench controlling a nonlinear simulation of a large airliner. This study considers many more manipulated inputs than any previous FPGA-based MPC implementation, yet the implementation comfortably fits into a mid-range FPGA, and the controller compares well in terms of solution quality and latency to state-of-the-art QP solvers running on a standard PC.
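
    The computational bottleneck mentioned above is the linear system solved at every PDIP iteration. The idea of pairing a cheap reduced-precision solve with a higher-precision correction loop can be sketched in Python/NumPy as mixed-precision iterative refinement; this is a generic illustration of the numerical principle, not the paper's parallel FPGA solver.

        import numpy as np

        def mixed_precision_refine(A, b, iters=3):
            """Solve Ax = b with a low-precision solve plus iterative
            refinement in higher precision (generic illustration only)."""
            A32 = A.astype(np.float32)                  # "cheap" low-precision copy
            x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
            for _ in range(iters):
                r = b - A @ x                           # residual in full precision
                d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
                x += d                                  # correct the cheap solution
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 50)) + 50 * np.eye(50)   # well-conditioned
        b = rng.standard_normal(50)
        x = mixed_precision_refine(A, b)
        print(np.linalg.norm(A @ x - b))                # residual after refinement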

    A biophysically accurate floating point somatic neuroprocessor


    Fast HUB Floating-point Adder for FPGA

    Several previous publications have shown the area and delay reductions obtained when implementing real-number computation using HUB formats, for both floating-point and fixed-point. In this paper we present a HUB floating-point adder for FPGA that greatly improves on the speed of previously proposed HUB designs for these devices. Our architecture is based on the double-path technique, which reduces execution time since the two paths work in parallel. We also address the implementation of unbiased rounding in the proposed adder. Experimental results are presented demonstrating the advantages of the new HUB adder for FPGA. (Grants TIN2016-80920-R, JA2012 P12-TIC-1692, JA2012 P12-TIC-147.)
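
    For context, HUB (Half-Unit-Biased) formats carry an implicit least significant bit fixed to 1, which biases every representable value by half a unit in the last place; the practical payoff is that round-to-nearest reduces to plain truncation, with no carry propagation in the rounding step. The toy Python sketch below illustrates this property on a fixed-point grid; it is an illustration of the format, not of the paper's double-path adder architecture.

        import math

        FRAC_BITS = 4                    # fractional bits explicitly stored
        ULP = 2.0 ** -FRAC_BITS

        def hub_value(stored_int):
            """A HUB number appends an implicit LSB = 1 below the stored
            bits, so its value is biased by half a ULP."""
            return stored_int * ULP + ULP / 2

        def round_to_hub(x):
            """Round-to-nearest onto the HUB grid is plain truncation of
            the stored bits: no rounding logic, no carry propagation."""
            return hub_value(math.floor(x / ULP))

        for x in [0.20, 0.33, 0.71]:
            h = round_to_hub(x)
            assert abs(h - x) <= ULP / 2   # nearest HUB point within half a ULP
            print(f"{x:.4f} -> HUB {h:.5f}")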

    Training deep neural networks with low precision multiplications

    Multipliers are the most space- and power-hungry arithmetic operators in digital implementations of deep neural networks. We train a set of state-of-the-art neural networks (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those formats, we assess the impact of the precision of the multiplications on the final error after training. We find that very low precision is sufficient not just for running trained networks but also for training them. For example, it is possible to train Maxout networks with 10-bit multiplications. (Comment: 10 pages, 5 figures; accepted as a workshop contribution at ICLR 2015.)
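
    As an illustration of the dynamic fixed point format the abstract refers to, the sketch below quantizes a tensor to a given bit width with one shared scaling exponent chosen from the tensor's current range. The exact update scheme in the paper may differ; the function here is purely illustrative.

        import numpy as np

        def quantize_dynamic_fixed_point(x, total_bits=10):
            """Quantize a tensor to `total_bits` with one shared exponent.

            Dynamic fixed point = fixed point whose binary-point position
            is chosen per tensor (here from the current max magnitude) and
            can be updated as the distribution drifts during training."""
            max_abs = np.max(np.abs(x)) + 1e-12
            # Choose the exponent so max_abs fits in total_bits - 1 magnitude bits.
            exp = int(np.ceil(np.log2(max_abs))) - (total_bits - 1)
            scale = 2.0 ** exp
            q = np.clip(np.round(x / scale),
                        -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1)
            return q * scale

        w = np.random.randn(4, 4).astype(np.float32)
        w_q = quantize_dynamic_fixed_point(w, total_bits=10)
        print(np.max(np.abs(w - w_q)))   # error is at most about half the scale step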