593 research outputs found

    A new adaptation of particle swarm optimization applied to modern FPGA placement

    This work presents a new adaptation of the discrete particle swarm optimization method for the FPGA placement problem, a crucial and time-consuming step in the FPGA synthesis flow. We evaluate the performance of the new optimizer against the existing version by embedding both into the publicly available FPGA placer Liquid, replacing the simulated annealing-based optimizer used for hard block optimization. Benchmark testing on the Titan23 circuits shows the runtime efficiency of the new optimizer, with post-routed results comparable to those of Liquid using simulated annealing.

    A Dynamic Programming Approach to Energy-Efficient Scheduling on Multi-FPGA based Partial Runtime Reconfigurable Systems

    This paper studies an important issue: energy-efficient scheduling on multi-FPGA systems. The main challenges are integral allocation, reconfiguration overhead and exclusiveness, and energy minimization under a deadline constraint. To tackle these challenges, we have designed and implemented an energy-efficient scheduler for multi-FPGA systems based on the theory of dynamic programming. In addition, we have presented an MLPF algorithm for task placement on FPGAs. Finally, the experimental results demonstrate that the proposed algorithm can successfully accommodate all tasks without violating the deadline constraint. Additionally, it achieves 13.3% and 26.3% higher energy reduction than Particle Swarm Optimization and a fully balanced algorithm, respectively.

    Cooperative Models of Particle Swarm Optimizers

    Particle Swarm Optimization (PSO) is one of the most effective optimization tools to emerge in the last decade. Although the original aim was to simulate the behavior of a group of birds or a school of fish looking for food, it was quickly realized that the method could be applied to optimization problems. Different directions have been taken to analyze PSO behavior as well as to improve its performance. One approach is the introduction of the concept of cooperation. This thesis focuses on studying this concept in PSO by investigating the different design decisions that influence the performance of cooperative PSO models and by introducing new approaches for information exchange. Firstly, a comprehensive survey of the cooperative PSO models proposed in the literature is compiled, and a definition of what is meant by a cooperative PSO model is introduced. A taxonomy for classifying the surveyed cooperative PSO models is given. This taxonomy classifies the cooperative models based on two aspects: the approach the model uses for decomposing the problem search space, and the method used for placing the particles into the different cooperating swarms. The taxonomy helps in gathering all the proposed models under one roof and in understanding the similarities and differences between them. Secondly, a number of parameters that control the performance of cooperative PSO models are identified. These parameters answer four questions: Which information to share? When to share it? Whom to share it with? And what to do with it? A complete empirical study is conducted on one of the cooperative PSO models in order to understand how its performance changes under the influence of these parameters. Thirdly, a new heterogeneous cooperative PSO model is proposed, based on the exchange of probability models rather than the classical migration of particles.
The model uses two swarms that combine the ideas of PSO and Estimation of Distribution Algorithms (EDAs), and is considered heterogeneous since the cooperating swarms use different approaches to sample the search space. The model is tested using different PSO models to ensure that its performance is robust against changes in the underlying population topology. The experiments show that the model is able to produce better results than its components in many cases. The model also proves to be highly competitive when compared to a number of state-of-the-art cooperative PSO algorithms. Finally, two different versions of the PSO algorithm are applied to the FPGA placement problem. One version is applied entirely in the discrete domain, the first attempt to solve this problem in that domain using a discrete PSO (DPSO). The other version is implemented in the continuous domain. The PSO algorithms are applied to several well-known FPGA benchmark problems of increasing dimensionality. The results are compared to those obtained by the academic Versatile Place and Route (VPR) placement tool, which is based on Simulated Annealing (SA). The results show that these methods are competitive for small and medium-sized problems; for larger problems, the methods provide very close results. The work also proposes different cooperative PSO approaches using the two versions, and their performance is compared to that of a single swarm.

    Automated optimization of reconfigurable designs

    Currently, the optimization of reconfigurable design parameters is typically done manually and often involves a substantial amount of effort. The main focus of this thesis is to reduce this effort: the designer can focus on implementation and design correctness, leaving the tools to carry out optimization. To address this, the thesis makes three main contributions. First, we present an initial investigation of reconfigurable design optimization with the Machine Learning Optimizer (MLO) algorithm. The algorithm is based on surrogate model technology and particle swarm optimization. By using surrogate models, the long hardware generation time is mitigated and automatic optimization is possible. For the first time, to the best of our knowledge, we show how those models can predict both when hardware generation will fail and how well the design will perform. Second, we introduce a new algorithm called Automatic Reconfigurable Design Efficient Global Optimization (ARDEGO), which is based on the Efficient Global Optimization (EGO) algorithm. Compared to MLO, it supports parallelism and uses a simpler optimization loop. As the ARDEGO algorithm uses multiple optimization compute nodes, its optimization speed is greatly improved relative to MLO. Hardware generation time is random in nature: two similar configurations can take vastly different amounts of time to generate, making parallelization complicated. The novelty is the efficient use of the optimization compute nodes, achieved through an extension of the asynchronous parallel EGO algorithm to constrained problems. Third, we show how results of design synthesis and benchmarking can be reused when a design is ported to a different platform or when its code is revised. This is achieved through the new Auto-Transfer algorithm. A methodology to make the best use of available synthesis and benchmarking results is a novel contribution to the design automation of reconfigurable systems.

    FPGA implementation of metaheuristic optimization algorithm

    Metaheuristic algorithms are gaining popularity amongst researchers due to their ability to solve nonlinear optimization problems and to be adapted to a variety of problems. There has been a surge of novel metaheuristics proposed recently; however, it is uncertain whether they are suitable for FPGA implementation. In addition, there exists a variety of design methodologies for implementing metaheuristics on FPGAs which may improve the performance of the implementation. The project begins by researching and identifying metaheuristics which are suitable for FPGA implementation. The selected metaheuristic was the Simulated Kalman Filter (SKF), an algorithm that is low in complexity and uses a small number of steps. A Discrete SKF was then adapted from the original metaheuristic by rounding all floating-point values to integers and setting a fixed Kalman gain of 0.5. The Discrete SKF was then modeled using behavioral modeling to produce the Binary SKF, which was implemented on an FPGA. The design was made modular by producing separate modules that manage different parts of the metaheuristic, and a Parallel-In-Parallel-Out configuration of ports was also implemented. The Discrete SKF was simulated in MATLAB while the Binary SKF was implemented on an FPGA, and their performance was measured in terms of chip utilization, processing speed, and accuracy of results. The Binary SKF ran up to 69 times faster than the Discrete SKF simulation.
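    The SKF's predict-measure-estimate loop with a fixed gain can be sketched as follows. This is an illustrative reconstruction, not the thesis's code: the measurement noise model, bounds handling, and population size are assumptions; only the fixed gain of 0.5 and the integer rounding come from the abstract.

```python
import random

def discrete_skf(fitness, bounds, n_agents=20, iters=100, K=0.5):
    """Sketch of a Discrete Simulated Kalman Filter (SKF) search.

    Each agent carries a state estimate; a simulated measurement is
    drawn around the best solution found so far, and the estimate is
    corrected with a fixed Kalman gain K, then rounded to an integer
    (the hardware-friendly simplification described above)."""
    lo, hi = bounds
    X = [random.randint(lo, hi) for _ in range(n_agents)]
    best = min(X, key=fitness)
    for _ in range(iters):
        for i in range(n_agents):
            pred = X[i]  # prediction step: state carried over unchanged
            # simulated measurement: noisy pull toward the current best,
            # with noise scaled by the distance to it (assumed model)
            meas = pred + random.uniform(-1, 1) * abs(pred - best)
            # estimation step with fixed gain, rounded to an integer
            X[i] = round(pred + K * (meas - pred))
            X[i] = max(lo, min(hi, X[i]))  # clamp to the search bounds
        cand = min(X, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best
```

    With K fixed at 0.5, each estimate simply moves halfway toward its simulated measurement, which is what makes the discrete variant cheap to realize in hardware.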

    Bioinspired Computing: Swarm Intelligence


    Real-Time Electrocardiogram (ECG) Signal Analysis and Heart Rate Determination in FPGA Platform

    Heart disease is one of the leading causes of death globally. According to a recent study by the Indian Council of Medical Research (ICMR), about 25 percent of deaths in the age group of 25 to 69 years occur because of heart disease. The electrocardiogram (ECG) is one of the primary tools for the treatment of heart disease. The ECG is an important biological signal that reflects the electrical activities of the heart. A typical ECG signal consists of five main components, namely the P, Q, R, S and T waves. The amplitude and morphology of each component contain numerous pieces of medical information. The automated detection and delineation of each component in the ECG signal is a challenging task in the biomedical signal processing community. In this research work, a four-stage method based on the Shannon energy envelope has been proposed to detect the QRS complex in ECG signals. Peak detection in the proposed algorithm is free of amplitude thresholds. To evaluate the performance efficiency of the proposed method, the standard MIT-BIH arrhythmia ECG database has been used, yielding an average accuracy of 99.84%, sensitivity of 99.95%, and positive predictivity of 99.88%. To detect and delineate the P and T waves, an algorithm based on the Extended Kalman Filter (EKF) with PSO has been proposed. For performance examination, the standard QT ECG database has been used. The proposed algorithm yields an average sensitivity of 99.61% and positive predictivity of 99.00% for the ECG signals of the QT database. A long-term automatic heart rate monitoring system is essential for the standard supervision of a critical-stage patient. This work also includes a field-programmable gate array (FPGA) implementation of a system that calculates the heart rate from the ECG signal.
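    The Shannon energy transform at the core of such QRS detectors is standard and can be sketched in a few lines; the thesis's full four-stage pipeline (filtering, envelope smoothing, and peak picking) is not reproduced here.

```python
import math

def shannon_energy(samples):
    """Shannon energy of a pre-filtered, normalized ECG segment:
    SE[n] = -x[n]^2 * log(x[n]^2).

    This transform emphasizes medium-amplitude deflections such as the
    QRS complex while attenuating both low-amplitude noise and isolated
    tall spikes, which is why detectors built on it can avoid fixed
    amplitude thresholds."""
    out = []
    for x in samples:
        e = x * x
        # log(0) is undefined; by convention the energy of a zero
        # sample is 0
        out.append(-e * math.log(e) if e > 0 else 0.0)
    return out
```

    The input is assumed to be normalized to [-1, 1] first, so that x^2 lies in (0, 1] and the Shannon energy is non-negative.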

    Particle Swarm Optimization

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GAs): the system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GAs, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book represents the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
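    The particle update described above can be sketched in a few lines of Python; the inertia and acceleration coefficients below are common illustrative values, not taken from the book.

```python
import random

def pso(fitness, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal continuous PSO sketch (to minimize `fitness`).

    Velocity update: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    Position update: x = x + v (clamped to the search bounds)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]               # personal best positions
    g = min(P, key=fitness)[:]          # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if fitness(X[i]) < fitness(P[i]):
                P[i] = X[i][:]          # improved personal best
                if fitness(P[i]) < fitness(g):
                    g = P[i][:]         # improved global best
    return g
```

    Each particle is pulled stochastically toward both its own best position and the swarm's best, which is the "following the current optimum particles" behavior the abstract describes.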

    Influence Distribution Training Data on Performance Supervised Machine Learning Algorithms

    Banknotes are needed in almost all areas of life, and some, such as banks, transportation companies, and casinos, require them in large quantities. Banknotes are therefore an essential component of everyday activities, especially those related to finance. Technological advances such as scanners and copy machines give anyone the opportunity to commit crimes such as banknote counterfeiting. Many people still find it difficult to distinguish a genuine banknote from a counterfeit one, because counterfeit banknotes bear a high degree of resemblance to genuine ones. Against this background, the authors perform a classification to distinguish genuine from counterfeit banknotes. The classification uses supervised learning methods and compares their accuracy across different distributions of training data. The supervised learning methods used are Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Naïve Bayes. K-NN achieves the highest specificity, sensitivity, and accuracy of the three methods at training-data proportions of 30%, 50%, and 80%: with 30% and 50% training data, specificity is 0.99, sensitivity 1.00, and accuracy 0.99, while with 80% training data, specificity, sensitivity, and accuracy are all 1.00. This means that the distribution of training data influences the performance of supervised machine learning algorithms; for K-NN, the more training data, the better the accuracy.
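    The reported experiment, training K-NN on different fractions of the data and measuring accuracy on the remainder, can be sketched as follows; the two-feature synthetic dataset in the test and the value of k are illustrative stand-ins for the banknote data.

```python
import random

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points (squared Euclidean distance). `train` holds (features,
    label) pairs."""
    neighbors = sorted(
        train,
        key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query))
    )[:k]
    labels = [lab for _, lab in neighbors]
    return max(set(labels), key=labels.count)

def accuracy_for_split(data, train_frac, k=3):
    """Shuffle, split off `train_frac` of the data for training, and
    report classification accuracy on the held-out remainder."""
    random.shuffle(data)
    cut = int(len(data) * train_frac)
    train, test = data[:cut], data[cut:]
    hits = sum(knn_predict(train, x, k) == y for x, y in test)
    return hits / len(test)
```

    Running `accuracy_for_split` at 0.3, 0.5, and 0.8 reproduces the shape of the study's comparison: the larger the training fraction, the denser the neighborhood structure K-NN can vote over.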

    Dynamically reconfigurable bio-inspired hardware

    During the last several years, reconfigurable computing devices have experienced an impressive development in their resource availability, speed, and configurability. Currently, commercial FPGAs offer the possibility of self-reconfiguring by partially modifying their configuration bitstream, providing high architectural flexibility while guaranteeing high performance. These configurability features have received special interest from computer architects: one can find several reconfigurable coprocessor architectures for cryptographic algorithms, image processing, automotive applications, and different general-purpose functions. On the other hand, we have bio-inspired hardware, a large research field taking inspiration from living beings in order to design hardware systems, which includes diverse topics: evolvable hardware, neural hardware, cellular automata, and fuzzy hardware, among others. Living beings are well known for their high adaptability to environmental changes, featuring very flexible adaptations at several levels. Bio-inspired hardware systems require such flexibility to be provided by the hardware platform on which the system is implemented. In general, bio-inspired hardware has been implemented on both custom and commercial hardware platforms. Custom platforms are specifically designed for supporting bio-inspired hardware systems, typically featuring special cellular architectures and enhanced reconfigurability capabilities, such as partial and dynamic reconfigurability. These aspects are well appreciated for providing the performance and the high architectural flexibility required by bio-inspired systems. However, the limited availability and the very high costs of such custom devices make them accessible to only a few research groups.
Even though some commercial FPGAs provide enhanced reconfigurability features such as partial and dynamic reconfiguration, their utilization is still in its early stages and not well supported by FPGA vendors, making these features difficult to include in existing bio-inspired systems. In this thesis, I present a set of architectures, techniques, and methodologies for benefiting from the configurability advantages of current commercial FPGAs in the design of bio-inspired hardware systems. Among the presented architectures are neural networks, spiking neuron models, fuzzy systems, cellular automata, and random Boolean networks. For these architectures, I propose several techniques for parametric and topological adaptation, such as Hebbian learning, evolutionary and co-evolutionary algorithms, and particle swarm optimization. Finally, as a case study, I consider the implementation of bio-inspired hardware systems on two platforms: YaMoR (Yet another Modular Robot) and ROPES (Reconfigurable Object for Pervasive Systems), the development of both platforms having been co-supervised in the framework of this thesis.