
    Implementation of a real time Hough transform using FPGA technology

    This thesis is concerned with the modelling, design and implementation of efficient architectures for performing the Hough Transform (HT) on mega-pixel resolution real-time images using Field Programmable Gate Array (FPGA) technology. Although the HT has been studied for many years and a number of algorithms have been developed, it remains a significant bottleneck in many image processing applications. While the general idea of the HT is to locate parameterizable curves in an image, e.g. straight lines, polynomials or circles, in a suitable parameter space, the research presented in this thesis focuses only on the location of straight lines in binary images. The HT algorithm uses an accumulator array (accumulator bins) to detect the existence of a straight line in an image. Because the image must first be binarized, a novel generic synchronization circuit for windowing operations was designed to perform edge detection. An edge detection method of special interest, the Canny method, is used, and its design and hardware implementation are presented in this thesis. Since each image pixel can be processed independently, parallel processing can be exploited. The main disadvantage of the HT, however, is its large storage and computational requirements. This thesis presents new, state-of-the-art hardware implementations that minimize the computational cost by using the Hybrid-Logarithmic Number System (Hybrid-LNS) to calculate the HT for fixed bit-width architectures. It is shown that the Hybrid-LNS minimizes the computational cost while maintaining the precision of the HT algorithm. Advances in FPGA technology now make it possible to implement functions such as the HT in reconfigurable fabrics. Methods for storing large arrays on FPGAs are presented, allowing data from a 1024 x 1024 pixel camera to be processed at rates of up to 25 frames per second.
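    As background to the accumulator-bin description above, the following is a minimal software sketch (not the thesis's hardware architecture) of the standard (rho, theta) Hough accumulator for straight lines, assuming a binary edge image as input; all function names are illustrative.

```python
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=None):
    """Accumulate votes for straight lines in a binary edge image.

    edges: 2-D array of 0/1 values (already binarized, e.g. by Canny).
    Returns the accumulator plus the theta and rho axes.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    n_rho = n_rho or 2 * diag + 1
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.uint32)

    ys, xs = np.nonzero(edges)                 # edge pixels vote independently
    for theta_idx, theta in enumerate(thetas):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        rho_idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        np.add.at(acc, (rho_idx, theta_idx), 1)  # increment accumulator bins

    return acc, thetas, rhos

# Peaks in `acc` correspond to detected lines rho = x*cos(theta) + y*sin(theta).
```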

    Techniques of Energy-Efficient VLSI Chip Design for High-Performance Computing

    Delivering high-quality computing within a limited power budget is the key factor in moving very large scale integration (VLSI) chip design forward. This work introduces various low-power VLSI design techniques used for state-of-the-art computing. From the viewpoint of the power supply, conventional on-chip voltage regulators based on analog blocks bring large power and area overheads to computational chips. Motivated by this, a digital, switchable-pin method that dynamically regulates power at low circuit cost is proposed so that computation can be executed with a stable supply voltage. For the multiplier, one of the most widely used and time-consuming arithmetic units, operation in the logarithmic domain offers advantages over the binary domain in terms of computation latency, power and area. However, the conversion error introduced reduces the reliability of the subsequent computation (e.g. multiplication and division). In this work, a fast calibration method that suppresses the conversion error, together with its VLSI implementation, is proposed. The proposed logarithmic converter can be supplied by DC power to achieve fast conversion, or by clocked power to reduce the power dissipated during conversion. Moving beyond traditional computation methods and widely used static logic, a neuron-like cell is also studied in this work. Using multiple-input floating-gate (MIFG) metal-oxide-semiconductor field-effect transistor (MOSFET) based logic, a 32-bit, 16-operation arithmetic logic unit (ALU) with zipped decoding and a feedback loop is designed. Compared to a static-logic ALU, the proposed ALU reduces switching power and has strong drive capability due to its coupling capacitors. In addition, recent neural computations bring serious challenges to digital VLSI implementation because of their heavy matrix multiplications and non-linear functions. An analog VLSI design that is compatible with an external digital environment is proposed for the long short-term memory (LSTM) network. The entirely analog network computes much faster and has higher energy efficiency than its digital counterpart.
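    The logarithmic-domain multiplier discussed above trades accuracy for cost. As a purely illustrative baseline (not this work's converter or calibration scheme), the sketch below uses Mitchell's approximation; its conversion error is the kind of inaccuracy a calibration stage would suppress.

```python
def mitchell_log2(x: int, width: int = 32) -> float:
    """Approximate log2(x) as k + f, where k is the position of the leading
    one and f is the fraction formed by the remaining bits (Mitchell's
    approximation); exact only when x is a power of two."""
    assert 0 < x < (1 << width)
    k = x.bit_length() - 1
    f = (x - (1 << k)) / (1 << k)           # mantissa in [0, 1)
    return k + f

def approx_multiply(a: int, b: int) -> int:
    """Log-domain multiply: add the approximate logarithms, convert back."""
    s = mitchell_log2(a) + mitchell_log2(b)
    k = int(s)
    return round((1 + (s - k)) * (1 << k))  # inverse of the approximation

# The approximation error (up to roughly -11% on the product) is what a
# calibration/correction stage in a hardware converter would compensate.
print(approx_multiply(100, 200), 100 * 200)
```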

    On FPGA implementations for bioinformatics, neural prosthetics and reinforcement learning problems.

    Mak Sui Tung Terrence. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 132-142). Abstracts in English and Chinese.
    Contents: Abstract; List of Tables; List of Figures; Acknowledgements.
    Chapter 1. Introduction: Bioinformatics; Neural Prosthetics; Learning in Uncertainty; The Field Programmable Gate Array (FPGA); Scope of the Thesis.
    Chapter 2. A Hybrid GA-DP Approach for Searching Equivalence Sets: Introduction; Equivalence Set Criterion; Genetic Algorithm and Dynamic Programming (Genetic Algorithm Formulation; Bounded Mutation; Conditioned Crossover; Implementation); FPGA Implementation of GA-DP (System Overview; Parallel Computation for Transitive Closure; Genetic Operation Realization); Discussion; Limitation and Future Work; Conclusion.
    Chapter 3. An FPGA-based Architecture for Maximum-Likelihood Phylogeny Evaluation: Introduction; Maximum-Likelihood Model; Hardware Mapping for Pruning Algorithm (Related Works; Number Representation; Binary Tree Representation; Binary Tree Traversal; Maximum-Likelihood Evaluation Algorithm); System Architecture (Transition Probability Unit; State-Parallel Computation Unit; Error Computation); Discussion (Hardware Resource Consumption; Delay Evaluation); Conclusion.
    Chapter 4. Field Programmable Gate Array Implementation of Neuronal Ion Channel Dynamics: Introduction; Background (Analog VLSI Model for Hebbian Synapse; A Unifying Model of Bi-directional Synaptic Plasticity; Non-NMDA Receptor Channel Regulation); FPGA Implementation (FPGA Design Flow; Digital Model of NMDA and AMPA Receptors; Synapse Modification); Results (Simulation Results); Discussion; Conclusion.
    Chapter 5. Continuous-Time and Discrete-Time Inference Networks for Distributed Dynamic Programming: Introduction; Background (Markov Decision Processes (MDPs); Learning in the MDPs; Bellman Optimal Criterion; Value Iteration); A Computational Framework for Continuous-Time Inference Network (Binary Relation Inference Network; Binary Relation Inference Network for MDPs; Continuous-Time Inference Network for MDPs); Convergence Consideration; Numerical Simulation (Example 1: Random Walk; Example 2: Random Walk on a Grid; Example 3: Stochastic Shortest Path Problem; Relationships Between λ and γ); Discrete-Time Inference Network (Results); Conclusion.
    Chapter 6. On Distributed Q-Learning Network: Introduction; Distributed Q-Learning Network (Q-Learning Network Architecture); Experimental Results (Random Walk; The Shortest Path Problem); Discussion (Related Work); FPGA Implementation (Distributed Registering Approach; Serial BRAM Storing Approach; Comparison; Discussion); Conclusion.
    Chapter 7. Summary. Bibliography.
    Appendix A. Simplified Floating-Point Arithmetic. Appendix B. Logarithm, Exponential and Division Implementation: Introduction; Approximation Scheme (Logarithm; Exponentiation; Division). Appendix C. Analog VLSI Implementation: Site Function (Multiplication Cell); The Unit Function; The Inference Network Computation; Layout; Fabrication (Testing and Characterization).

    Comparison of logarithmic and floating-point number systems implemented on Xilinx Virtex-II field-programmable gate arrays

    The aim of this thesis is to compare the implementation of parameterisable LNS (logarithmic number system) and floating-point high dynamic range number systems on FPGA. The Virtex/Virtex-II range of FPGAs from Xilinx, which is the most popular FPGA technology, is used to implement the designs. The study focuses on using the low-level primitives of the technology in an efficient way, and so the design issues in implementing fixed-point operators are considered first. The four basic operations of addition, multiplication, division and square root are considered. Carry-free adders, ripple-carry adders, parallel multipliers, and digit-recurrence division and square root are discussed. The floating-point operators use the word format and exceptions described by IEEE Std 754. A dual-path adder implementation is described in detail, as are the floating-point multiplier, divider and square root components. Results and comparisons with other works are given. The efficient implementation of function evaluation methods is considered next. An overview of current FPGA methods is given, and a new piecewise polynomial implementation using the Taylor series is presented and compared with other designs in the literature. In the next section the LNS word format, accuracy and exceptions are described, and two new LNS addition/subtraction function approximations are presented. The algorithms for performing multiplication, division and powering in the LNS domain are also described and compared with other designs in the open literature. Parameterisable conversion algorithms for converting between the fixed-point domain and the LNS and floating-point domains are described, and implementation results are given. In the next chapter, bit-true MATLAB software models are given that have exactly the same functionality as the hardware models. The interfaces of the models are given and a serial communication system for low-speed system tests is described. A comparison of the LNS and floating-point number systems in terms of area and delay is given. Different functions implemented in LNS and floating-point arithmetic are also compared and conclusions are drawn. The results show that when the LNS is implemented with a characteristic of 6 bits or fewer it is superior to floating-point. For larger characteristic lengths, however, the floating-point system is more efficient due to the delay and the exponential area increase of the LNS addition operator. For characteristics larger than 6 bits, the LNS is beneficial only for specialist applications that require a high proportion of division, multiplication, square root and powering operations and few additions.
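    A minimal software sketch (not the thesis's hardware designs) of why LNS multiplication and division are cheap while addition dominates the cost: products and quotients become fixed-point additions and subtractions of exponents, whereas sums require approximating the Gaussian logarithm sb(d) = log2(1 + 2^d), which hardware realizes with look-up tables or piecewise polynomials.

```python
import math

def to_lns(x: float) -> float:
    """Represent |x| by its base-2 logarithm (sign handled separately in a real system)."""
    return math.log2(x)

def lns_mul(la: float, lb: float) -> float:
    """Multiplication in LNS is a fixed-point addition of the exponents."""
    return la + lb

def lns_div(la: float, lb: float) -> float:
    """Division is a fixed-point subtraction of the exponents."""
    return la - lb

def lns_add(la: float, lb: float) -> float:
    """Addition needs the Gaussian logarithm sb(d) = log2(1 + 2**d), d <= 0,
    which is the function a hardware implementation must approximate."""
    hi, lo = max(la, lb), min(la, lb)
    d = lo - hi
    return hi + math.log2(1.0 + 2.0 ** d)

a, b = 3.5, 12.25
la, lb = to_lns(a), to_lns(b)
print(2 ** lns_mul(la, lb), a * b)   # multiplication: exact up to rounding
print(2 ** lns_add(la, lb), a + b)   # addition: cost lies in approximating sb(d)
```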

    Image Processing Using FPGAs

    This book presents a selection of papers representing current research on using field programmable gate arrays (FPGAs) for realising image processing algorithms. These papers are reprints of papers selected for a Special Issue of the Journal of Imaging on image processing using FPGAs. A diverse range of topics is covered, including parallel soft processors, memory management, image filters, segmentation, clustering, image analysis, and image compression. Applications include traffic sign recognition for autonomous driving, cell detection for histopathology, and video compression. Collectively, they represent the current state of the art in image processing using FPGAs.

    Accuracy-Guaranteed Fixed-Point Optimization in Hardware Synthesis and Processor Customization

    Nowadays, computation with fractional numbers is essential in a wide range of signal and image processing applications. For numerical computation, a fractional number can be represented using either fixed-point or floating-point arithmetic. Fixed-point arithmetic is broadly preferred to floating-point in hardware development due to the reduced hardware complexity of fixed-point circuits. In hardware implementation, the bitwidth allocated to the data elements has a significant impact on efficiency metrics for the circuits, including area usage, speed and power consumption. Fixed-point word-length optimization (WLO) is a well-known research area. It aims to optimize fixed-point computational circuits through the adjustment of the bitwidths allocated to their internal and output signals. A fixed-point number is composed of an integer part and a fractional part. There is a minimum number of bits for the integer part that guarantees overflow and underflow avoidance in each signal; this value depends on the range of values that the signal may take. The fractional word-length determines the amount of finite-precision error that is introduced into the computations. There is a trade-off between accuracy and hardware cost in fractional word-length selection. The process of allocating the fractional word-length requires two important procedures: finite-precision error modeling and fractional word-length selection. Existing works on WLO have focused on hardwired circuits as the target implementation platform. In this thesis, we introduce new methodologies, techniques and algorithms to improve the hardware realization of fixed-point computations in hardwired circuits and customizable processors. The thesis proposes an enhanced error modeling approach based on affine arithmetic that addresses some shortcomings of the existing methods and improves their accuracy. The thesis also introduces an acceleration technique and two semi-analytical fractional bitwidth selection algorithms for WLO in hardwired circuit design. While the first algorithm follows a progressive search strategy, the second one uses a tree-shaped search method for fractional width optimization. The algorithms offer two different time-complexity/cost-efficiency trade-off options. The first algorithm has polynomial complexity and achieves results comparable with existing heuristic approaches. The second algorithm has exponential complexity but achieves near-optimal results compared to an exhaustive search. The thesis further proposes a method to combine word-length optimization with application-specific processor customization. The supported datatype word-length, the size of the register files and the architecture of the functional units are the main target objectives to be optimized. A new optimization algorithm is developed to find the best combination of word-length and other customizable parameters in the proposed method. Accuracy requirements, defined as the worst-case error bound, are the key consideration that must be met by any solution. To facilitate evaluation and implementation of the selected solutions, a new processor design environment, called PolyCuSP, was developed. PolyCuSP supports the customization flexibility necessary to realize and evaluate the solutions given by the optimization algorithm, and enables rapid design-space exploration and the modelling of different instruction-set architectures for effective comparisons.
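    For illustration only (the function names and the simple round-to-nearest error model are assumptions, not the thesis's algorithms): a small sketch of the basic trade-off, where the fractional word-length f sets a worst-case rounding error of 2^-(f+1), and the smallest f meeting a worst-case error bound is selected.

```python
def quantize(x: float, frac_bits: int) -> float:
    """Round x to a fixed-point grid with `frac_bits` fractional bits
    (round-to-nearest), giving a worst-case error of 2**-(frac_bits + 1)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def min_frac_bits(worst_case_error: float) -> int:
    """Smallest fractional word-length whose worst-case rounding error
    stays within the required bound."""
    f = 0
    while 2.0 ** -(f + 1) > worst_case_error:
        f += 1
    return f

# Example: a signal that must stay within 1e-3 of its ideal value
f = min_frac_bits(1e-3)
print(f, abs(quantize(0.123456, f) - 0.123456))  # 9 bits -> error <= 2**-10
```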

    Design of a High-Speed Architecture for Stabilization of Video Captured Under Non-Uniform Lighting Conditions

    Video captured under shaky conditions may contain vibrations. A robust algorithm that stabilizes the video by compensating for vibrations arising from the physical setting of the camera is presented in this dissertation. A very high performance hardware architecture based on Field Programmable Gate Array (FPGA) technology is also developed for the implementation of the stabilization system. Stabilization of video sequences captured under non-uniform lighting conditions begins with a nonlinear enhancement process. This improves the visibility of the scene captured by physical sensing devices, which have limited dynamic range. This physical limitation causes the saturated region of the image to shadow out the rest of the scene. It is therefore desirable to recover a more uniform scene that eliminates the shadows to a certain extent. Stabilization of video requires the estimation of global motion parameters. By obtaining reliable background motion, the video can be spatially transformed to the reference sequence, thereby eliminating the unintended motion of the camera. A reflectance-illuminance model for video enhancement is used in this research work to improve the visibility and quality of the scene. With fast color space conversion, the computational complexity is reduced to a minimum. The basic video stabilization model is formulated and configured for hardware implementation. The model involves evaluation of reliable features for tracking, motion estimation, and an affine transformation to map the display coordinates of the stabilized sequence. The multiplications, divisions and exponentiations are replaced by simple arithmetic and logic operations using improved log-domain computations in the hardware modules. On Xilinx's Virtex II 2V8000-5 FPGA platform, the prototype system consumes 59% of the logic slices, 30% of the flip-flops, 34% of the lookup tables, 35% of the embedded RAMs and two ZBT frame buffers. The system is capable of rendering 180.9 million pixels per second (mpps) and consumes approximately 30.6 watts of power at 1.5 volts. With a 1024×1024 frame, the throughput is equivalent to 172 frames per second (fps). Future work will optimize the performance-resource trade-off to meet the specific needs of applications, and will further extend the model to extraction and tracking of moving objects, as the model inherently encapsulates the attributes of spatial distortion and motion prediction to reduce complexity. With these parameters narrowing down the processing range, it is possible to achieve a minimum of 20 fps on desktop computers with Intel Core 2 Duo or Quad Core CPUs and 2 GB of DDR2 memory without dedicated hardware.
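    A minimal sketch (assumed for illustration, not the dissertation's hardware datapath) of the final mapping step described above: once global motion parameters are estimated, each frame is warped onto the reference frame's coordinates with an affine transformation x_ref = A x + t.

```python
import numpy as np

def affine_warp(frame: np.ndarray, A: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map a grayscale `frame` into the reference coordinates using x_ref = A @ x + t.

    A: 2x2 matrix (rotation/scale/shear), t: length-2 translation.
    Nearest-neighbour inverse mapping keeps the sketch short.
    """
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    A_inv = np.linalg.inv(A)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=0).astype(float)
    src = A_inv @ (coords - t.reshape(2, 1))            # inverse map into the source frame
    sx, sy = np.round(src[0]).astype(int), np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)  # drop pixels mapped outside
    out[ys.ravel()[valid], xs.ravel()[valid]] = frame[sy[valid], sx[valid]]
    return out

# Undo a small estimated camera rotation and translation
theta = np.deg2rad(1.5)
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
t = np.array([3.0, -2.0])
stabilized = affine_warp(np.random.rand(480, 640), A, t)
```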

    Topics in Adaptive Optics

    Adaptive optics technology and its applications are advancing at a rapid pace. The basic idea of real-time wavefront compensation has been around since the mid-1970s. The first widely used application of adaptive optics was compensating for atmospheric turbulence effects in astronomical imaging and laser beam propagation. While some topics have been researched and reported for years, even decades, new applications and advances in the supporting technologies occur almost daily. This book brings together 11 original chapters related to adaptive optics, written by an international group of invited authors. Topics include atmospheric turbulence characterization, astronomy with large telescopes, image post-processing, high power laser distortion compensation, adaptive optics and the human eye, wavefront sensors, and deformable mirrors.