
    Hardware implementation of Naive Bayes classifier for malware detection

    The Naïve Bayes classifier is a probabilistic supervised machine learning algorithm that can run on most general-purpose devices to solve a wide range of classification problems. In real-time applications, however, general-purpose devices are limited in computational throughput, so the algorithm cannot be applied there directly. The aim of this project is to accelerate the algorithm in hardware by exposing its hidden concurrency and mapping it onto parallel hardware as an optimised IP package suitable for FPGA-SoC applications, so that it can serve as a middlebox system for real-time malware detection. To meet the requirements of this research, the proposed hardware must handle both the training and the inference phases in hardware, and must accept a flow of 20 features, each 32 bits wide, organised in 4-gram format. To meet these requirements, an enhanced version of the algorithm was first developed and tested in C. An equivalent design with a 5-stage pipelined architecture and single-instruction-multiple-data capabilities was then built in hardware. The proposed hardware was found to be 65 times faster in computational throughput than an existing design, while keeping accuracy as high as 94% under the conditions of the experiment carried out.
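
    The abstract names the algorithm but gives no code; purely for orientation, here is a minimal log-domain Naive Bayes scorer in C. The Gaussian likelihood model, the struct layout, and the two-class setup are illustrative assumptions rather than the thesis design; only the 20-feature flow comes from the abstract. Working in log space turns the product of likelihoods into a sum of independent per-feature terms, the kind of hidden concurrency a pipelined SIMD adder tree can exploit.

```c
/* Minimal sketch of log-domain Naive Bayes inference (illustrative only;
 * the thesis's actual feature model and fixed-point pipeline are not shown).
 * Log space turns products of likelihoods into sums, which map naturally
 * onto parallel adder trees in hardware. */
#include <math.h>
#include <stddef.h>

#define N_FEATURES 20  /* the abstract specifies a flow of 20 features */
#define N_CLASSES  2   /* assumed: malware vs. benign */

typedef struct {
    double log_prior;          /* log P(class) */
    double mean[N_FEATURES];   /* per-feature Gaussian mean (assumption) */
    double var[N_FEATURES];    /* per-feature Gaussian variance */
} class_model_t;

/* Returns the index of the most probable class for one feature vector. */
int nb_predict(const class_model_t model[N_CLASSES],
               const double x[N_FEATURES])
{
    int best = 0;
    double best_score = -INFINITY;
    for (int c = 0; c < N_CLASSES; c++) {
        double score = model[c].log_prior;
        for (size_t f = 0; f < N_FEATURES; f++) {
            double d = x[f] - model[c].mean[f];
            /* log of a Gaussian likelihood, dropping the constant term */
            score += -0.5 * (log(model[c].var[f]) + d * d / model[c].var[f]);
        }
        if (score > best_score) { best_score = score; best = c; }
    }
    return best;
}
```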

    Optimising algorithm and hardware for deep neural networks on FPGAs

    This thesis proposes novel algorithm and hardware optimisation approaches to accelerate Deep Neural Networks (DNNs), including both Convolutional Neural Networks (CNNs) and Bayesian Neural Networks (BayesNNs). The first contribution is an adaptable and reconfigurable hardware design to accelerate CNNs. By analysing the computational patterns of different CNNs, a unified hardware architecture is proposed for both 2D and 3D CNNs. The accelerator is also designed for runtime adaptability, adopting different parallelism strategies for different convolutional layers at runtime. The second contribution is a novel co-optimisation approach for neural network architecture and hardware design, which improves the performance of CNNs at both the algorithm and hardware levels. The proposed three-phase co-design framework decouples network training from design space exploration, significantly reducing the time cost of the co-optimisation process. The third contribution is an algorithm-hardware co-optimisation framework for accelerating BayesNNs. At the algorithmic level, three categories of structured sparsity are explored to reduce the computational complexity of BayesNNs. At the hardware level, a novel hardware architecture is proposed to exploit this structured sparsity. Algorithmic and hardware optimisations are applied jointly to push the performance limit.
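
    The abstract mentions three categories of structured sparsity without detailing them; as a generic sketch of why structure matters to hardware, here is a block-sparse matrix-vector multiply in C. The block size and mask layout are assumptions for illustration: a coarse per-block mask lets an accelerator skip whole blocks of work, and their memory traffic, instead of testing individual weights.

```c
/* Illustrative sketch of structured (block-wise) sparse matrix-vector
 * multiply. The thesis explores three categories of structured sparsity;
 * this shows only the generic idea: a coarse mask lets hardware skip
 * entire blocks rather than testing individual weights. */
#include <stddef.h>

#define ROWS  64
#define COLS  64
#define BLOCK 8                 /* block size, an assumption */
#define CBLKS (COLS / BLOCK)

void spmv_block(const float w[ROWS][COLS],
                const unsigned char block_nonzero[ROWS][CBLKS],
                const float x[COLS], float y[ROWS])
{
    for (size_t r = 0; r < ROWS; r++) {
        float acc = 0.0f;
        for (size_t b = 0; b < CBLKS; b++) {
            if (!block_nonzero[r][b])   /* entire block pruned: skip it */
                continue;
            for (size_t k = 0; k < BLOCK; k++)
                acc += w[r][b * BLOCK + k] * x[b * BLOCK + k];
        }
        y[r] = acc;
    }
}
```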

    Image Processing Using FPGAs

    This book presents a selection of papers representing current research on using field programmable gate arrays (FPGAs) to realise image processing algorithms. The papers are reprints from a Special Issue of the Journal of Imaging on image processing using FPGAs. A diverse range of topics is covered, including parallel soft processors, memory management, image filters, segmentation, clustering, image analysis, and image compression. Applications include traffic sign recognition for autonomous driving, cell detection for histopathology, and video compression. Collectively, the papers represent the current state of the art in image processing using FPGAs.

    On FPGA implementations for bioinformatics, neural prosthetics and reinforcement learning problems.

    Mak Sui Tung, Terrence. Thesis (M.Phil.), Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 132-142). Abstracts in English and Chinese. Contents:
    1. Introduction: Bioinformatics; Neural Prosthetics; Learning in Uncertainty; The Field Programmable Gate Array (FPGA); Scope of the Thesis.
    2. A Hybrid GA-DP Approach for Searching Equivalence Sets: Introduction; Equivalence Set Criterion; Genetic Algorithm and Dynamic Programming (Genetic Algorithm Formulation; Bounded Mutation; Conditioned Crossover; Implementation); FPGA Implementation of GA-DP (System Overview; Parallel Computation for Transitive Closure; Genetic Operation Realization); Discussion; Limitation and Future Work; Conclusion.
    3. An FPGA-based Architecture for Maximum-Likelihood Phylogeny Evaluation: Introduction; Maximum-Likelihood Model; Hardware Mapping for the Pruning Algorithm (Related Works; Number Representation; Binary Tree Representation; Binary Tree Traversal; Maximum-Likelihood Evaluation Algorithm); System Architecture (Transition Probability Unit; State-Parallel Computation Unit; Error Computation); Discussion (Hardware Resource Consumption; Delay Evaluation); Conclusion.
    4. Field Programmable Gate Array Implementation of Neuronal Ion Channel Dynamics: Introduction; Background (Analog VLSI Model for Hebbian Synapse; A Unifying Model of Bi-directional Synaptic Plasticity; Non-NMDA Receptor Channel Regulation); FPGA Implementation (FPGA Design Flow; Digital Model of NMDA and AMPA Receptors; Synapse Modification); Results (Simulation Results); Discussion; Conclusion.
    5. Continuous-Time and Discrete-Time Inference Networks for Distributed Dynamic Programming: Introduction; Background (Markov Decision Processes (MDPs); Learning in MDPs; Bellman Optimality Criterion; Value Iteration); A Computational Framework for Continuous-Time Inference Networks (Binary Relation Inference Network; Binary Relation Inference Network for MDPs; Continuous-Time Inference Network for MDPs); Convergence Considerations; Numerical Simulation (Random Walk; Random Walk on a Grid; Stochastic Shortest Path Problem; Relationships Between λ and γ); Discrete-Time Inference Network (Results); Conclusion.
    6. On Distributed Q-Learning Network: Introduction; Distributed Q-Learning Network (Q-Learning Network Architecture); Experimental Results (Random Walk; The Shortest Path Problem); Discussion (Related Work); FPGA Implementation (Distributed Registering Approach; Serial BRAM Storing Approach; Comparison; Discussion); Conclusion.
    7. Summary. Bibliography. Appendices: A. Simplified Floating-Point Arithmetic; B. Logarithm, Exponential and Division Implementation (Approximation Scheme: Logarithm, Exponentiation, Division); C. Analog VLSI Implementation (Site Function: Multiplication Cell; The Unit Function; The Inference Network Computation; Layout; Fabrication: Testing and Characterization).
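
    The record above is a table of contents rather than prose, but Chapters 5 and 6 centre on value iteration and Q-learning for Markov decision processes. For orientation only, here is a textbook sequential value-iteration loop in C; the thesis instead maps this computation onto distributed inference networks in hardware, and the state/action counts and tolerance below are illustrative assumptions.

```c
/* Textbook value iteration for a small MDP, shown for reference only.
 * The caller initialises V (e.g., to zeros); the loop repeats Bellman
 * backups until the largest update falls below a tolerance. */
#include <math.h>
#include <stddef.h>

#define NS 16   /* number of states  (assumption) */
#define NA 4    /* number of actions (assumption) */

/* P[s][a][s2] = transition probability, R[s][a] = expected reward */
void value_iteration(const double P[NS][NA][NS], const double R[NS][NA],
                     double gamma, double V[NS])
{
    double delta;
    do {
        delta = 0.0;
        for (size_t s = 0; s < NS; s++) {
            double best = -INFINITY;
            for (size_t a = 0; a < NA; a++) {
                double q = R[s][a];
                for (size_t s2 = 0; s2 < NS; s2++)
                    q += gamma * P[s][a][s2] * V[s2];
                if (q > best) best = q;
            }
            double diff = fabs(best - V[s]);
            if (diff > delta) delta = diff;
            V[s] = best;   /* in-place Bellman backup */
        }
    } while (delta > 1e-6);
}
```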

    Application-Specific Number Representation

    Reconfigurable devices, such as Field Programmable Gate Arrays (FPGAs), enable application-specific number representations. Well-known number formats include fixed-point, floating-point, the logarithmic number system (LNS), and the residue number system (RNS). Different number representations lead to different arithmetic designs and error behaviours, and thus to implementations with different performance, accuracy, and cost. To investigate the design options in number representations, the first part of this thesis presents a platform that enables automated exploration of the number representation design space. The second part of the thesis shows case studies that optimise designs for area, latency or throughput from the perspective of number representations. Automated design space exploration in the first part addresses two major issues:
    • Automation requires arithmetic unit generation. This thesis provides optimised arithmetic library generators for logarithmic and residue arithmetic units, which support a wide range of bit widths and achieve significant improvement over previous designs.
    • Generation of arithmetic units requires specifying the bit width of each variable. This thesis describes an automatic bit-width optimisation tool called R-Tool, which combines dynamic and static analysis methods and supports different number systems (fixed-point, floating-point, and LNS numbers).
    Putting it all together, the second part explores the effects of application-specific number representation on practical benchmarks, such as radiative Monte Carlo simulation and seismic imaging computations. Experimental results show that customising the number representation benefits hardware implementations: by selecting a more appropriate number format, the area cost can be reduced by up to 73.5% and the throughput improved by 14.2% to 34.1%; bit-width optimisation further reduces the area cost by 9.7% to 17.3%. On the performance side, hardware implementations with customised number formats achieve 5 to potentially over 40 times speedup over software implementations.
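
    For a flavour of why LNS reshapes the arithmetic design space: an LNS value is stored as its base-2 logarithm, so multiplication and division collapse into fixed-point addition and subtraction, while addition requires a nonlinear correction term. The sketch below emulates this in double precision for clarity (positive values only); it is not the thesis's library generator, which would implement the correction function in fixed point with table-based approximation.

```c
/* Sketch of logarithmic number system (LNS) arithmetic, emulated in double
 * precision for clarity. Real LNS hardware stores log2(x) in fixed point;
 * this only shows the structure of the operations. Positive values only. */
#include <math.h>

typedef struct { double e; } lns_t;   /* represents x = 2^e, x > 0 */

static lns_t  lns_from(double x) { lns_t r = { log2(x) }; return r; }
static double lns_to(lns_t a)    { return exp2(a.e); }

/* Multiplication and division are just exponent add/subtract. */
static lns_t lns_mul(lns_t a, lns_t b) { lns_t r = { a.e + b.e }; return r; }
static lns_t lns_div(lns_t a, lns_t b) { lns_t r = { a.e - b.e }; return r; }

/* Addition needs the correction term log2(1 + 2^-d), the expensive part
 * that LNS hardware approximates with lookup tables. */
static lns_t lns_add(lns_t a, lns_t b)
{
    double hi = a.e > b.e ? a.e : b.e;
    double lo = a.e > b.e ? b.e : a.e;
    lns_t r = { hi + log2(1.0 + exp2(lo - hi)) };
    return r;
}
```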

    Ensemble learning with discrete classifiers on small devices

    Machine learning has become an integral part of everyday life, ranging from AI-powered search queries to (partially) autonomous driving. Many of the advances in machine learning and its applications have been possible due to increases in computation power, i.e., by reducing manufacturing sizes while maintaining or even increasing energy consumption. However, 2-3 nm manufacturing is within reach, making further miniaturization increasingly difficult, while thermal design power limits are simultaneously being reached, rendering entire parts of the chip useless for certain computational loads. In this thesis, we investigate discrete classifier ensembles as a resource-efficient alternative that can be deployed to small devices requiring only small amounts of energy. Discrete classifiers are classifiers that can be applied -- and oftentimes also trained -- without the need for costly floating-point operations. Hence, they are ideally suited for deployment to small devices with limited resources. The disadvantage of discrete classifiers is that their predictive performance often lags behind that of their floating-point siblings. Here, combining multiple discrete classifiers into an ensemble can help improve predictive performance while keeping resource consumption manageable. This thesis studies discrete classifier ensembles from a theoretical, an algorithmic, and a practical point of view. In the theoretical investigation, the bias-variance decomposition and the double-descent phenomenon are examined. The bias-variance decomposition of the mean-squared error is revisited and generalized to an arbitrary twice-differentiable loss function, which serves as a guiding tool throughout the thesis. Similarly, the double-descent phenomenon is -- for the first time -- studied comprehensively in the context of tree ensembles, and specifically random forests. Contrary to established literature, the experiments in this thesis indicate that there is no double descent in random forests. While the training of ensembles is well studied in the literature, deployment to small devices is often neglected, and the training of ensembles on small devices has received little attention so far. Hence, the algorithmic part of this thesis focuses on the deployment of discrete classifiers and the training of ensembles on small devices. First, a novel combination of ensemble pruning (i.e., removing classifiers from the ensemble) and ensemble refinement (i.e., re-training classifiers in the ensemble) is presented, which uses a novel proximal gradient descent algorithm to minimize a combined loss function. The resulting algorithm removes unnecessary classifiers from an already trained ensemble while simultaneously improving the performance of the remaining classifiers. Second, this algorithm is extended to the more challenging setting of online learning, in which the algorithm receives training examples one by one. The resulting shrub ensembles algorithm allows training ensembles in an online fashion while maintaining a strictly bounded memory consumption; it outperforms existing state-of-the-art algorithms under resource constraints and offers competitive performance in the general case. Third, this thesis studies the deployment of decision tree ensembles to small devices by optimizing their memory layout. The key insight here is that decision trees have probabilistic inference times, because different observations take different paths from the root to a leaf. By estimating the probability of visiting a particular node in the tree, one can place it favorably in memory to maximize caching behavior and thus increase performance without changing the model. Finally, several real-world applications of tree ensembles and Binarized Neural Networks are presented.
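
    The node-placement idea is easiest to see against a flat array representation of a tree. Below is a minimal C sketch with an assumed node format: the traversal logic never changes, so reordering nodes in the array so that frequently visited nodes sit close together (as estimated from node-visit probabilities) improves cache behaviour without altering any prediction.

```c
/* Minimal sketch of decision-tree inference over a flat node array. A
 * layout optimization of the kind described above reorders entries in
 * this array; the traversal below is unaffected by the ordering. The
 * node format is an illustrative assumption. */
#include <stdint.h>

typedef struct {
    uint16_t feature;   /* feature index to test */
    float    threshold; /* split threshold */
    int32_t  left;      /* index of left child, or -(label+1) if leaf */
    int32_t  right;     /* index of right child, or -(label+1) if leaf */
} node_t;

int tree_predict(const node_t *nodes, const float *x)
{
    int32_t i = 0;                       /* root stored at index 0 */
    for (;;) {
        const node_t *n = &nodes[i];
        int32_t next = (x[n->feature] <= n->threshold) ? n->left : n->right;
        if (next < 0)                    /* negative index encodes a leaf */
            return -(next + 1);          /* decode class label */
        i = next;
    }
}
```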

    Hardware acceleration of photon mapping

    PhD thesis. The quest for realism in computer-generated graphics has yielded a range of algorithmic techniques, the most advanced of which are capable of rendering images at close to photorealistic quality. Given the realism available, it is now commonplace for computer graphics to be used in the creation of movie sequences, architectural renderings, medical imagery and product visualisations. This work concentrates on the photon mapping algorithm [1, 2], a physically based global illumination rendering algorithm. Photon mapping excels at producing highly realistic, physically accurate images. A drawback of photon mapping, however, is its rendering time, which can be significantly longer than that of other, albeit less realistic, algorithms. Not surprisingly, this increase in execution time is associated with a high computational cost. The computation is usually performed by the general-purpose central processing unit (CPU) of a personal computer (PC), with the algorithm implemented as a software routine. Other options for processing these algorithms include desktop PC graphics processing units (GPUs) and custom-designed acceleration hardware. GPUs tend to be efficient when dealing with less realistic rendering solutions such as rasterisation, but with their recent drive towards increased programmability they can also be used to process more realistic algorithms. A drawback to the use of GPUs is that these algorithms often have to be reworked to make optimal use of the limited resources available. Very few custom hardware devices exist for accelerating the photon mapping algorithm. Ray tracing is the predecessor of photon mapping, and although not capable of the same physical accuracy, and therefore realism, the two algorithms share similarities. Several hardware prototypes, and at least one commercial offering, have been created with the goal of accelerating ray-traced rendering [3]. However, the properties that make many of these proposals suitable for accelerating ray tracing are not shared by photon mapping, and there are even fewer proposals for accelerating the additional functions found only in photon mapping. All of these approaches to algorithm acceleration offer limited scalability: GPUs are inherently difficult to scale, while many of the custom hardware devices proposed so far rely on large processing elements and complex acceleration data structures. In this work we use three novel approaches to design highly scalable, specialised hardware structures for accelerating the photon mapping algorithm. Increased scalability is gained through:
    • The use of a brute-force approach in place of the commonly used smart approach, eliminating much of the data pre-processing, the complex data structures and the large processing units often required.
    • The use of Logarithmic Number System (LNS) arithmetic, which reduces the processing area requirement.
    • A novel redesign of the photon inclusion test, used within the photon search method of the photon mapping algorithm, which allows an intelligent memory structure to be used for the search.
    The design uses two hardware structures, each of which accelerates one core rendering function. Renderings produced using field programmable gate array (FPGA) based prototypes are presented, along with details of 90 nm synthesised versions of the designs, which show that close to an order-of-magnitude speedup over a software implementation is possible. Due to the scalable nature of the design, it is likely that this advantage can be maintained in the face of improving processor speeds. Significantly, the brute-force approach adopted makes it possible to eliminate an often-used software acceleration method, meaning the device can interface almost directly to a front-end modelling package, minimising much of the pre-processing required by most other proposals.
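
    The redesigned photon inclusion test is described only at a high level in the abstract; for orientation, the conventional test it replaces is a squared-distance comparison against the search radius around the sample point, sketched below in C with an assumed data layout. In a brute-force design this test runs over every stored photon, which is what removes the need for a kd-tree or similar acceleration structure.

```c
/* Conventional photon inclusion test, shown for orientation only; the
 * thesis redesigns this test so it can run inside an intelligent memory
 * structure. A photon contributes to the radiance estimate if it lies
 * within the search radius of the sample point. Layout is an assumption. */
typedef struct { float x, y, z; } vec3_t;

/* Compare squared distances to avoid a square root per photon. */
static int photon_included(vec3_t photon_pos, vec3_t sample_pos,
                           float radius)
{
    float dx = photon_pos.x - sample_pos.x;
    float dy = photon_pos.y - sample_pos.y;
    float dz = photon_pos.z - sample_pos.z;
    return dx * dx + dy * dy + dz * dz <= radius * radius;
}
```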

    PC-grade parallel processing and hardware acceleration for large-scale data analysis

    Arguably, modern graphics processing units (GPUs) are the first commodity desktop parallel processors. Although GPU programming originated in interactive rendering for graphical applications such as computer games, researchers in the field of general-purpose computation on GPUs (GPGPU) are showing that the power, ubiquity and low cost of GPUs make them an ideal alternative platform for high-performance computing. This has led to extensive exploration of the GPU for accelerating general-purpose computations in many engineering and mathematical domains outside graphics. However, owing to the development complexity caused by graphics-oriented concepts and development tools for GPU programming, GPGPU has so far been discussed mainly in the academic domain and has not yet fully fulfilled its promise in the real world. This thesis exploits GPGPU in the practical engineering domain and presents a novel contribution to GPGPU-driven linear time-invariant (LTI) systems, as employed by the signal processing techniques used in stylus-based or optical-based surface metrology and data processing. The core contributions of this project can be summarised as follows. First, a thorough survey of the state of the art in GPGPU applications and their development approaches is carried out. In addition, the category of parallel architecture pattern to which GPGPU belongs is specified, forming the foundation of the GPGPU programming framework design in this thesis. Following this specification, a GPGPU programming framework is derived as a general guideline for the various GPGPU programming models applied to a large diversity of algorithms in scientific computing and engineering applications. Reflecting the evolution of GPU hardware architecture, the proposed framework covers both the graphics-originated concepts for GPGPU programming on legacy GPUs and the stream-processing abstraction represented by the compute unified device architecture (CUDA), in which the GPU is considered not only a graphics device but also a streaming coprocessor of the CPU. Second, the proposed GPGPU programming framework is applied to practical engineering applications, namely surface metrology data processing and image processing, to generate programming models that carry out parallel computing for the corresponding algorithms. The acceleration performance of these models is evaluated in terms of speed-up factor and data accuracy, enabling quantifiable benchmarks for evaluating consumer-grade parallel processors. The GPGPU applications outperform the CPU solutions by up to 20 times without significant loss of data accuracy or any noticeable increase in source code complexity, further validating the effectiveness of the proposed general GPGPU programming framework. Third, this thesis devises methods for carrying out result visualisation directly on the GPU, storing processed data in local GPU memory and exploiting the GPU's rendering features to achieve real-time interaction. The algorithms employed include various filtering techniques, the discrete wavelet transform, and the fast Fourier transform, which cover the common operations implemented in most LTI systems in the spatial and frequency domains.
    Considering the hardware designs of the GPUs employed, especially the structure of the rendering pipelines, and the characteristics of the algorithms, the series of proposed GPGPU programming models has proven its feasibility, practicality, and robustness in real engineering applications. The developed GPGPU programming framework and programming models are expected to be adaptable to future consumer-level computing devices and other computationally demanding applications. It is also envisaged that the principles and methods devised in the framework design will have significant benefits outside the sphere of surface metrology.
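
    The thesis centres on LTI signal-processing operations (filtering, the discrete wavelet transform, the FFT). As a scalar reference point for what gets parallelised, here is a minimal FIR filter in C; each output sample depends only on the input sequence, not on other outputs, so on a GPU every sample can be computed by its own thread. The function name and sizes are illustrative, not taken from the thesis.

```c
/* Scalar reference for an LTI FIR filter: y[n] = sum_k h[k] * x[n-k].
 * Output samples are mutually independent, which is what makes the
 * per-sample mapping to thousands of GPU threads straightforward; this
 * sequential version is the kind of CPU baseline speedups are quoted
 * against. */
#include <stddef.h>

void fir_filter(const float *x, size_t n_samples,
                const float *h, size_t n_taps, float *y)
{
    for (size_t n = 0; n < n_samples; n++) {
        float acc = 0.0f;
        /* k <= n handles the start-up region where x[n-k] would underflow */
        for (size_t k = 0; k < n_taps && k <= n; k++)
            acc += h[k] * x[n - k];
        y[n] = acc;
    }
}
```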

    Control Theory in Engineering

    The subject matter of this book ranges from new control design methods to control theory applications in electrical and mechanical engineering and computers. The book covers certain aspects of control theory, including new methodologies, techniques, and applications; it promotes control theory in practical applications of these engineering domains and shows how to disseminate researchers' contributions in the field. It presents applications that improve the properties and performance of control systems in analysis and design, at a high technical level of scientific attainment. The authors have included worked examples and case studies resulting from their research in the field. Readers will benefit from new solutions and answers to questions related to the emerging realm of control theory in engineering applications and its implementation.