560 research outputs found

    Low power digital signal processing

    Get PDF

    Ant colony optimization on runtime reconfigurable architectures

    Get PDF

    Energy-Efficiency in Optical Networks

    Get PDF

    On FPGA implementations for bioinformatics, neural prosthetics and reinforcement learning problems.

    Get PDF
    Mak Sui Tung Terrence. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 132-142). Abstracts in English and Chinese. Contents: Chapter 1, Introduction (bioinformatics; neural prosthetics; learning in uncertainty; field programmable gate arrays; scope of the thesis); Chapter 2, A Hybrid GA-DP Approach for Searching Equivalence Sets (equivalence set criterion; genetic algorithm and dynamic programming; FPGA implementation of GA-DP); Chapter 3, An FPGA-based Architecture for Maximum-Likelihood Phylogeny Evaluation (maximum-likelihood model; hardware mapping of the pruning algorithm; system architecture; resource and delay evaluation); Chapter 4, Field Programmable Gate Array Implementation of Neuronal Ion Channel Dynamics (analog VLSI Hebbian synapse model; digital model of NMDA and AMPA receptors; synapse modification; simulation results); Chapter 5, Continuous-Time and Discrete-Time Inference Networks for Distributed Dynamic Programming (Markov decision processes; value iteration; binary relation inference networks; convergence considerations; numerical simulations on random walks and stochastic shortest path problems); Chapter 6, On Distributed Q-Learning Network (distributed Q-learning network architecture; experimental results; FPGA implementation with distributed-registering and serial BRAM storing approaches); Chapter 7, Summary; Bibliography; Appendix A, Simplified Floating-Point Arithmetic; Appendix B, Logarithm, Exponential and Division Implementation; Appendix C, Analog VLSI Implementation.

    Design of a High-Speed Architecture for Stabilization of Video Captured Under Non-Uniform Lighting Conditions

    Get PDF
    Video captured under shaky conditions exhibits unwanted vibrations. This dissertation presents a robust algorithm that stabilizes such video by compensating for the vibrations introduced by the physical motion of the camera, together with a high-performance hardware architecture on Field Programmable Gate Array (FPGA) technology that implements the stabilization system. Stabilization of video sequences captured under non-uniform lighting conditions begins with a nonlinear enhancement process. This improves the visibility of scenes captured by physical sensing devices with limited dynamic range, a limitation that causes saturated regions of the image to shadow out the rest of the scene; it is therefore desirable to recover a more uniform scene that largely eliminates these shadows. Stabilization requires the estimation of global motion parameters: once reliable background motion is obtained, the video can be spatially transformed to the reference sequence, eliminating the unintended motion of the camera. A reflectance-illuminance model for video enhancement is used in this work to improve the visibility and quality of the scene, and a fast color space conversion keeps the computational complexity to a minimum. The basic video stabilization model is formulated and configured for hardware implementation. The model involves evaluating reliable features for tracking, motion estimation, and an affine transformation that maps the display coordinates of the stabilized sequence. Multiplications, divisions and exponentiations are replaced by simple arithmetic and logic operations using improved log-domain computations in the hardware modules. On Xilinx's Virtex II 2V8000-5 FPGA platform, the prototype system consumes 59% of the logic slices, 30% of the flip-flops, 34% of the lookup tables, 35% of the embedded RAMs and two ZBT frame buffers. The system is capable of rendering 180.9 million pixels per second (Mpps) and consumes approximately 30.6 watts at 1.5 volts; with a 1024×1024 frame, this throughput is equivalent to 172 frames per second (fps). Future work will optimize the performance-resource trade-off to meet the specific needs of target applications, and will extend the model to the extraction and tracking of moving objects, since the model inherently encapsulates the attributes of spatial distortion and motion prediction needed to reduce complexity. With these parameters to narrow down the processing range, it is possible to achieve a minimum of 20 fps on desktop computers with Intel Core 2 Duo or Quad Core CPUs and 2 GB of DDR2 memory without dedicated hardware.
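
    The log-domain substitution mentioned in the abstract can be illustrated with a short sketch. This is a minimal illustration assuming a Mitchell-style piecewise-linear log2/antilog2 approximation; the dissertation's actual hardware modules, word lengths and accuracy are not reproduced here.

```python
# Minimal sketch of log-domain arithmetic: multiplication and division become
# addition and subtraction of (approximate) logarithms. The piecewise-linear
# Mitchell-style log2/antilog2 below is an illustrative assumption, not the
# dissertation's hardware implementation.
import math

def approx_log2(x: float) -> float:
    """Mitchell's approximation: log2(2**k * (1+f)) ~= k + f, f in [0, 1)."""
    k = math.floor(math.log2(x))     # exponent (leading-one position in hardware)
    f = x / (2.0 ** k) - 1.0         # mantissa fraction
    return k + f

def approx_antilog2(y: float) -> float:
    """Inverse approximation: 2**(k + f) ~= 2**k * (1 + f)."""
    k = math.floor(y)
    f = y - k
    return (2.0 ** k) * (1.0 + f)

def log_domain_multiply(a: float, b: float) -> float:
    return approx_antilog2(approx_log2(a) + approx_log2(b))   # multiply -> add

def log_domain_divide(a: float, b: float) -> float:
    return approx_antilog2(approx_log2(a) - approx_log2(b))   # divide -> subtract

print(log_domain_multiply(13.0, 7.0))   # ~88.0 vs exact 91 (a few percent error)
print(log_domain_divide(100.0, 8.0))    # 12.5, exact in this case
```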

    ARITHMETIC LOGIC UNIT ARCHITECTURES WITH DYNAMICALLY DEFINED PRECISION

    Get PDF
    Modern central processing units (CPUs) employ arithmetic logic units (ALUs) that support statically defined precisions, often adhering to industry standards. Although CPU manufacturers highly optimize their ALUs, industry-standard precisions embody accuracy and performance compromises made for general-purpose deployment, so optimizing ALU precision holds great potential for improving speed and energy efficiency. Previous research on multiple-precision ALUs focused on predefined, static precisions; little prior work addressed ALU architectures with customized, dynamically defined precision. This dissertation presents approaches for developing dynamic-precision ALU architectures for both fixed-point and floating-point arithmetic that enable better performance, energy efficiency, and numeric accuracy. The new architectures support dynamically defined precision, including vectorization, and avoid the performance and energy loss incurred by applying unnecessarily high precision to computations, as often happens with statically defined standard precisions. The architectures support different precisions through configurable sub-blocks; this dissertation includes demonstration implementations of floating-point adder, multiplier, and fused multiply-add (FMA) circuits with 4-bit sub-blocks. For these circuits, the dynamic-precision ALU is nearly as fast as traditional ALU designs, although it is nearly twice as large.
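
    A minimal behavioural sketch of the sub-block idea follows, assuming a simple adder whose 4-bit sub-blocks either propagate or cut their carries so the same datapath serves one wide addition or several narrow ones; the dissertation's actual adder, multiplier and FMA circuits are not reproduced here.

```python
# Behavioural sketch (not the dissertation's circuits) of an adder built from
# 4-bit sub-blocks whose carry links can be opened or closed at run time: with
# the links closed it acts as one wide adder, with them open it performs several
# narrower additions in parallel (sub-word vectorization).
SUB = 4  # sub-block width in bits

def subblock_adder(a: int, b: int, lanes: int, link_carries: bool) -> list[int]:
    """Add `lanes` packed unsigned SUB-bit fields of a and b (LSB field first).

    link_carries=True  -> carries propagate between sub-blocks (one wide add)
    link_carries=False -> carries are cut at sub-block boundaries (vector adds)
    """
    mask = (1 << SUB) - 1
    carry = 0
    out = []
    for i in range(lanes):
        x = (a >> (SUB * i)) & mask
        y = (b >> (SUB * i)) & mask
        s = x + y + carry
        out.append(s & mask)
        carry = (s >> SUB) if link_carries else 0   # open/close the carry link
    return out

# 16-bit add as four linked 4-bit sub-blocks: 0x00FF + 0x0001 = 0x0100
wide = subblock_adder(0x00FF, 0x0001, lanes=4, link_carries=True)
assert sum(d << (4 * i) for i, d in enumerate(wide)) == 0x0100
# Same datapath reconfigured as four independent 4-bit lanes (carries cut)
print(subblock_adder(0x00FF, 0x0001, lanes=4, link_carries=False))  # [0, 15, 0, 0]
```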

    Finite-Alphabet MMSE Equalization for All-Digital Massive MU-MIMO mmWave Communication

    Full text link
    We propose finite-alphabet equalization, a new paradigm that restricts the entries of the spatial equalization matrix to low-resolution numbers, enabling high-throughput, low-power, and low-cost hardware equalizers. To minimize the performance loss of this paradigm, we introduce FAME, short for finite-alphabet minimum mean-square error (MMSE) equalization, which significantly outperforms a naive quantization of the linear MMSE matrix. We develop efficient algorithms to approximately solve the NP-hard FAME problem and show that near-optimal performance can be achieved with equalization coefficients quantized to only 1-3 bits for massive multi-user multiple-input multiple-output (MU-MIMO) millimeter-wave (mmWave) systems. We provide very-large-scale integration (VLSI) results demonstrating reductions in equalization power and area by factors of at least 3.9 and 5.8, respectively. (Appeared in the IEEE Journal on Selected Areas in Communications.)
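
    For orientation, the sketch below shows only the baseline the paper improves on: the unquantized linear MMSE equalization matrix and its naive uniform quantization to a few bits per entry. The channel dimensions, noise model and quantizer are illustrative assumptions; the FAME algorithms themselves are not reproduced here.

```python
# Baseline sketch only: linear MMSE equalizer W for y = H x + n, and a naive
# uniform quantization of its entries (the reference point FAME outperforms).
import numpy as np

def lmmse_matrix(H: np.ndarray, n0: float, es: float = 1.0) -> np.ndarray:
    """W such that x_hat = W @ y minimizes E||x - W y||^2 (unit-energy symbols)."""
    U = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + (n0 / es) * np.eye(U), H.conj().T)

def naive_quantize(W: np.ndarray, bits: int) -> np.ndarray:
    """Quantize real and imaginary parts of W uniformly to `bits` bits per entry."""
    levels = 2 ** (bits - 1)
    scale = np.max(np.abs(np.concatenate([W.real.ravel(), W.imag.ravel()])))
    q = lambda M: np.clip(np.round(M / scale * (levels - 1)), -levels, levels - 1)
    return (q(W.real) + 1j * q(W.imag)) * scale / (levels - 1)

rng = np.random.default_rng(0)
B, U = 64, 8                      # basestation antennas, single-antenna users (assumed)
H = (rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))) / np.sqrt(2)
W = lmmse_matrix(H, n0=0.1)
W3 = naive_quantize(W, bits=3)    # 3-bit entries: cheap hardware, larger residual error
print(np.linalg.norm(W3 @ H - np.eye(U)) / np.linalg.norm(W @ H - np.eye(U)))
```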

    Generic low power reconfigurable distributed arithmetic processor

    Get PDF
    Higher performance, lower cost, ever-smaller integrated circuit components, and higher packaging density are ongoing goals of the microelectronics and computer industries. As these goals are pursued, however, power consumption and flexibility increasingly become bottlenecks that must be addressed in Very Large-Scale Integration (VLSI) design. Modern systems demand more energy to support the computational capability required by evolving standards, not only in audio and video broadcasting but also in communication, such as wireless connectivity and network protocols. Extensive flexibility and low power consumption pull in opposite directions, yet combining them in one system is the ultimate goal of designers. This dissertation presents a generic, domain-specific, low-power reconfigurable processor for distributed arithmetic algorithms. The processor achieves high efficiency in terms of area, power and delay, approaching the performance of an ASIC design while retaining the flexibility of programmable platforms. The architecture supports not only the typical distributed arithmetic algorithms found in most still-picture compression and video conferencing standards, but also other distributed arithmetic algorithms used in digital signal processing, telecommunication protocols and automatic control. Within this processor, a simple reconfigurable low-power control unit is implemented with good area, power and timing performance. Its generic character makes it applicable to any small or medium-size finite state machine used as a control unit to implement complex system behaviour, as found in almost all engineering disciplines. Furthermore, to map target applications efficiently onto the proposed architecture, a new algorithm is introduced to search for the best set of common shared terms, keeping the area and power consumption of the implementation low. A software implementation of this algorithm is presented; it can be used not only for the architecture proposed in this dissertation but also for any implementation based on adder-based distributed arithmetic. In addition, several low-power design techniques are applied in the architecture, including an unsymmetrical design style covering unsymmetrical interconnection arrangement, unsymmetrical PTB selection and unsymmetrical mapping of basic computing units; these techniques yield substantial power savings and can be extended to other low-power designs and architectures. Compared with traditional methods, the processor presented in this dissertation can implement complex, high-performance distributed arithmetic algorithms for communication and image processing applications at low area and power cost.
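
    As background, the sketch below illustrates the LUT-based distributed-arithmetic evaluation of an inner product that such processors accelerate: one shift-and-accumulate per bit plane, using a table of precomputed partial sums of the coefficients. The word length and coefficients are illustrative assumptions, not taken from the dissertation.

```python
# Distributed arithmetic (DA) sketch: y = sum_k c_k * x_k over B-bit
# two's-complement inputs, evaluated bit-serially with a coefficient LUT.

def da_inner_product(coeffs, xs, bits=8):
    """DA inner product of integer inputs in [-2**(bits-1), 2**(bits-1))."""
    K = len(coeffs)
    # LUT indexed by one bit from each input; entry = sum of the selected coefficients.
    lut = [sum(c for k, c in enumerate(coeffs) if (addr >> k) & 1)
           for addr in range(1 << K)]
    patterns = [x & ((1 << bits) - 1) for x in xs]   # two's-complement bit patterns
    acc = 0
    for j in range(bits):
        addr = sum(((p >> j) & 1) << k for k, p in enumerate(patterns))
        term = lut[addr]
        # The sign-bit plane carries weight -2**(bits-1) in two's complement.
        acc += (-term if j == bits - 1 else term) << j
    return acc

coeffs = [3, -5, 7, 2]        # illustrative coefficients
xs = [10, -3, 4, -7]          # illustrative inputs
assert da_inner_product(coeffs, xs) == sum(c * x for c, x in zip(coeffs, xs))
print(da_inner_product(coeffs, xs))   # 59
```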

    Reconfigurable Architectures and Systems for IoT Applications

    Get PDF
    The Internet of Things (IoT) has become a popular topic in industry in recent years; it describes an ecosystem of internet-connected devices, or things, that enrich everyday life by improving our productivity and efficiency. The primary components of the IoT ecosystem are hardware, software and services. While the software and services of an IoT system focus on collecting and processing data to make decisions, the underlying hardware is responsible for sensing the information, preprocessing it and transmitting it to the servers. Since the IoT ecosystem is still in its infancy, there is a great need for rapid prototyping platforms that accelerate the hardware design process. Depending on the target IoT application, however, different sensors are required to capture signals such as heart rate, temperature, pressure and acceleration, so reconfigurable platforms that can prototype different sensor-interfacing circuits are needed. This thesis focuses on two important hardware aspects of an IoT system: (a) an FPAA-based reconfigurable sensing front-end and (b) an FPGA-based reconfigurable processing system. To enable reconfiguration for any sensor type, the Programmable ANalog Device Array (PANDA), a transistor-level analog reconfigurable platform, is proposed, and the CAD tools required to implement front-end circuits on the platform are developed. To demonstrate the capability of the platform in silicon, a small-scale array of 24×25 PANDA cells is fabricated in 65nm technology. Several analog building blocks, including amplifiers, bias circuits and filters, are prototyped on the platform, demonstrating its effectiveness for rapidly prototyping IoT sensor interfaces. IoT systems typically use machine learning algorithms running on servers to process the data and make decisions. Recently, embedded processors have been used to preprocess the data at the energy-constrained sensor node or at the IoT gateway, saving considerable transmission energy and bandwidth. Because conventional CPU-based systems are not energy-efficient for these machine learning workloads, an FPGA-based hardware accelerator is proposed, together with an optimization methodology that maximizes the throughput of any convolutional neural network (CNN) based machine learning algorithm on a resource-constrained FPGA. Doctoral dissertation, Electrical Engineering, 201
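
    As a rough illustration of the kind of throughput reasoning such an optimization methodology involves, the toy roofline-style estimate below bounds one convolutional layer by either parallel-MAC capacity or off-chip memory bandwidth. All device figures and the cost model are assumptions for illustration, not the thesis's actual methodology.

```python
# Toy roofline-style bound for one convolutional layer on an FPGA accelerator:
# the layer is limited either by the parallel MAC capacity or by DRAM bandwidth.
# Every number below (MACs/cycle, clock, bandwidth, layer shape) is illustrative.

def conv_layer_bound(n_out, n_in, k, h, w, bytes_per_word=2,
                     macs_per_cycle=512, clock_hz=200e6, dram_bytes_per_s=4e9):
    ops = 2 * n_out * n_in * k * k * h * w               # each MAC counted as 2 ops
    traffic = bytes_per_word * (n_in * h * w             # input feature map (ideal tiling)
                                + n_out * h * w           # output feature map
                                + n_out * n_in * k * k)   # weights
    t_compute = ops / (2 * macs_per_cycle * clock_hz)     # seconds if compute-bound
    t_memory = traffic / dram_bytes_per_s                 # seconds if bandwidth-bound
    t = max(t_compute, t_memory)
    return t, ops / t / 1e9                               # (latency, effective GOP/s)

t, gops = conv_layer_bound(n_out=128, n_in=64, k=3, h=56, w=56)
print(f"layer time ~{t * 1e3:.2f} ms, ~{gops:.1f} GOP/s")   # compute-bound in this example
```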