
    Pipelined genetic propagation

    © 2015 IEEE. Genetic Algorithms (GAs) are a class of numerical and combinatorial optimisers which are especially useful for solving complex non-linear and non-convex problems. However, the required execution time often limits their application to small-scale or latency-insensitive problems, so techniques to increase the computational efficiency of GAs are needed. FPGA-based acceleration has significant potential for speeding up genetic algorithms, but existing FPGA GAs are limited by the generational approaches inherited from software GAs. Many parts of the generational approach do not map well to hardware, such as the large shared population memory and the intrinsic loop-carried dependency. To address this problem, this paper proposes a new hardware-oriented approach to GAs, called Pipelined Genetic Propagation (PGP), which is intrinsically distributed and pipelined. PGP represents a GA solver as a graph of loosely coupled genetic operators, which allows the solution to be scaled to the available resources, and also to dynamically change topology at run-time to explore different solution strategies. Experiments show that pipelined genetic propagation is effective in solving seven different applications. Our PGP design is 5 times faster than a recent FPGA-based GA system, and 90 times faster than a CPU-based GA system.
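
    A minimal software analogue of the pipelined-operator idea may help make the abstract concrete: genetic operators are loosely coupled stages that pass small batches of individuals along a pipeline. The toy one-max problem, the linear topology, and all stage names below are illustrative assumptions, not taken from the paper or its FPGA design.

```python
import random

# Toy problem: maximise the number of 1-bits in a fixed-length genome.
GENOME_LEN = 32

def fitness(genome):
    return sum(genome)

def select(batch):
    # Tournament selection within a small batch (hypothetical stage).
    return [max(random.sample(batch, 2), key=fitness) for _ in batch]

def crossover(batch):
    out = []
    for a, b in zip(batch[::2], batch[1::2]):
        cut = random.randrange(1, GENOME_LEN)
        out += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
    return out

def mutate(batch, rate=1.0 / GENOME_LEN):
    return [[bit ^ (random.random() < rate) for bit in g] for g in batch]

# The "propagation graph": here just a pipeline of loosely coupled operators.
PIPELINE = [select, crossover, mutate]

def run(batches=200, batch_size=16):
    batch = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
             for _ in range(batch_size)]
    best = max(batch, key=fitness)
    for _ in range(batches):
        for stage in PIPELINE:      # in hardware, stages run concurrently
            batch = stage(batch)
        best = max(batch + [best], key=fitness)
    return fitness(best)

if __name__ == "__main__":
    print("best fitness:", run())
```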

    Object-oriented domain specific compilers for programming FPGAs


    Parallelization of SAT on Reconfigurable Hardware

    Though very difficult to solve, the Boolean satisfiability problem (SAT) is extensively used to model various real-world applications and problems. Over the past two decades, researchers have provided tools that are used, to a certain degree, to find solutions to the Boolean satisfiability problem. These tools are broadly divided into software and reconfigurable-hardware solvers. In addition, the main algorithms used to solve this problem have been complemented with heuristics of various levels of sophistication to help overcome some of the NP-hardness of the problem. The end goal of these tools has been to provide solutions to industrial-sized problems of enormous size. Initially, reconfigurable hardware offered a promising avenue to accelerating SAT solving over traditional software-based solutions. However, the level of sophistication of software solvers overcame their hardware counterparts, which remained limited to smaller problem instances. Even so, modern state-of-the-art software solvers still fail unpredictably on some instances. The main focus of this thesis is to explore solving SAT on reconfigurable hardware in order to understand what the essential ingredients of a very efficient hardware SAT solver would be, one that obtains its processing power from the raw parallelism of an FPGA platform. The parallel prototype solver implemented in this work has been found to be comparable with other hardware and software solvers in terms of execution speed, even though no heuristics or other helping techniques were implemented. We thus show that our approach provides a very promising avenue to solving large, industrial SAT instances that might be difficult to handle by software solvers.

    Hardware Acceleration of Electronic Design Automation Algorithms

    With the advances in very large scale integration (VLSI) technology, hardware is going parallel. Software, which was traditionally designed to execute on single-core microprocessors, now faces the tough challenge of taking advantage of this parallelism, made available by the scaling of hardware. The work presented in this dissertation studies the acceleration of electronic design automation (EDA) software on several hardware platforms such as custom integrated circuits (ICs), field programmable gate arrays (FPGAs) and graphics processors. This dissertation concentrates on a subset of EDA algorithms which are heavily used in the VLSI design flow, and also have varying degrees of inherent parallelism in them. In particular, Boolean satisfiability, Monte Carlo based statistical static timing analysis, circuit simulation, fault simulation and fault table generation are explored. The architectural and performance tradeoffs of implementing the above applications on these alternative platforms (in comparison to their implementation on a single-core microprocessor) are studied. In addition, this dissertation also presents an automated approach to accelerate uniprocessor code using a graphics processing unit (GPU). The key idea is to partition the software application into kernels in an automated fashion, such that multiple instances of these kernels, when executed in parallel on the GPU, can maximally benefit from the GPU's hardware resources. The work presented in this dissertation demonstrates that several EDA algorithms can be successfully rearchitected to maximally harness their performance on alternative platforms such as custom-designed ICs, FPGAs and graphics processors, and obtain speedups of up to 800X. The approaches in this dissertation collectively aim to contribute towards enabling the computer aided design (CAD) community to accelerate EDA algorithms on arbitrary hardware platforms.

    Synthesis of FPGA-based accelerators implementing recursive algorithms

    Doctoral dissertation in Informatics Engineering. Design of computational systems is a complex multistage process which requires a deep analysis of the problem, taking into account relevant limitations and constraints as well as software/hardware co-design. Such a task involves exploring competitive techniques and computational algorithms, enabling the system to be optimized while satisfying given requirements. In this context, one of the most important stages is analysis and implementation of computational algorithms. Tremendous progress in the scope of FPGA (Field-Programmable Gate Array) technology has made it possible to design very complicated engineering systems. However, the number of available transistors grows faster than the ability to meaningfully design with them. This situation is a well-known design productivity gap, which was inherited by FPGAs from ASICs (Application-Specific Integrated Circuits) and which is increasing continuously. Developing engineering systems on the basis of high-capacity FPGAs involves a wide variety of design tools, including methods for efficient implementation of computational algorithms. The thesis is intended to provide a contribution in this area by aiming at reuse, high-level abstraction, automation, and clearness of algorithmic specifications. More specifically, it presents research studies which have been carried out in order to obtain criteria regarding implementation of recursive vs. iterative algorithms in hardware. After describing some of the most relevant strategies for implementing recursion in hardware, a selection of algorithms for solving combinatorial search problems (considered as application examples) is described in detail. Iterative and recursive versions of these algorithms have been implemented and tested on FPGAs. Taking into consideration the results obtained, a careful comparative analysis is given. New research-oriented tools and techniques for hardware design which have been developed in the scope of this thesis are also discussed and demonstrated.
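
    One common strategy for turning recursion into something hardware-friendly is to replace the implicit call stack with an explicit stack managed by the design itself. The following is a small software illustration of that transformation under assumptions of our own; the subset-sum search and all identifiers are illustrative and not taken from the thesis.

```python
# Recursive backtracking search: does any subset of `items` sum to `target`?
def subset_sum_rec(items, target, i=0):
    if target == 0:
        return True
    if i == len(items) or target < 0:
        return False
    # Branch: either take items[i] or skip it.
    return (subset_sum_rec(items, target - items[i], i + 1)
            or subset_sum_rec(items, target, i + 1))

# Iterative equivalent with an explicit stack of pending states -- the form
# that maps more directly onto a hardware state machine plus a small memory.
def subset_sum_iter(items, target):
    stack = [(target, 0)]            # each entry: (remaining sum, next index)
    while stack:
        remaining, i = stack.pop()
        if remaining == 0:
            return True
        if i == len(items) or remaining < 0:
            continue
        stack.append((remaining, i + 1))             # skip items[i]
        stack.append((remaining - items[i], i + 1))  # take items[i]
    return False

items = [3, 7, 12, 5]
assert subset_sum_rec(items, 15) == subset_sum_iter(items, 15) == True
assert subset_sum_rec(items, 2) == subset_sum_iter(items, 2) == False
```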

    Counterexample Guided Abstraction Refinement with Non-Refined Abstractions for Multi-Agent Path Finding

    Counterexample guided abstraction refinement (CEGAR) represents a powerful symbolic technique for various tasks such as model checking and reachability analysis. Recently, CEGAR combined with Boolean satisfiability (SAT) has been applied to multi-agent path finding (MAPF), a problem where the task is to navigate agents from their start positions to given individual goal positions so that the agents do not collide with each other. The recent CEGAR approach used an initial abstraction of the MAPF problem in which collisions between agents were omitted; collisions were then eliminated in subsequent abstraction refinements. We propose in this work a novel CEGAR-style solver for MAPF based on SAT in which some abstractions are deliberately left non-refined. This makes it necessary to post-process the answers obtained from the underlying SAT solver, as these answers slightly differ from correct MAPF solutions. Non-refining, however, yields order-of-magnitude smaller SAT encodings than those of the previous approach and speeds up the overall solving process, making the SAT-based solver for MAPF competitive again on relevant benchmarks.
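
    For readers unfamiliar with the technique, a generic CEGAR loop has the shape sketched below. This is a schematic of the general method only; the callables, the way refinements are accumulated, and the toy usage are assumptions for illustration, not the paper's MAPF encoding.

```python
def cegar(encode, solve, find_violations, refine, max_iters=1000):
    """Generic counterexample-guided abstraction refinement loop.

    encode(constraints)   -> abstract problem built from refinements so far
    solve(problem)        -> candidate solution, or None if unsatisfiable
    find_violations(cand) -> constraints the candidate violates in the
                             concrete problem (empty list if none)
    refine(constraints,v) -> constraints extended to rule out violation v
    """
    constraints = []                      # start from the coarsest abstraction
    for _ in range(max_iters):
        candidate = solve(encode(constraints))
        if candidate is None:
            return None                   # abstraction UNSAT => concrete UNSAT
        violations = find_violations(candidate)
        if not violations:
            return candidate              # candidate is a genuine solution
        for v in violations:
            constraints = refine(constraints, v)
    raise RuntimeError("refinement did not converge")

# Toy usage: pick a value from a domain; the concrete requirement "even"
# is only discovered through counterexamples.
domain = [7, 15, 11, 18, 21]
result = cegar(
    encode=lambda cons: [x for x in domain if all(c(x) for c in cons)],
    solve=lambda cands: cands[0] if cands else None,
    find_violations=lambda x: [] if x % 2 == 0 else ["must be even"],
    refine=lambda cons, v: cons + [lambda x: x % 2 == 0],
)
print(result)  # 18
```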

    Effective SAT solving

    A growing number of problem domains are successfully being tackled by SAT solvers. This thesis contributes to that trend by pushing the state of the art of core SAT algorithms and their implementation, but also in several important application areas. It consists of five papers: the first details the implementation of the SAT solver MiniSat, and the other four papers discuss specific issues related to different application domains. In the first paper, catering to the trend of extending and adapting SAT solvers, we present a detailed description of MiniSat, a SAT solver designed for that particular purpose. The description additionally bridges a gap between theory and practice, serving as a tutorial on modern SAT solving algorithms. Among other things, we describe how to solve a series of related SAT problems efficiently, called incremental SAT solving. For finding finite first-order models, the MACE-style method based on SAT solving is well known. In the second paper we improve the basic method with several techniques that can be loosely classified as either transformations that make the reduction to SAT result in fewer clauses, or techniques designed to speed up the search of the SAT solver. The resulting tool, called Paradox, won the SAT/Models division of the CASC competition in 2003 and has not been beaten since by a single general-purpose model finding tool. In the last decade the interest in methods for safety property verification based on SAT solving has been steadily growing. One example of such a method is temporal induction. The method requires a sequence of increasingly stronger induction proofs to be performed. In the third paper we show how this sequence of proofs can be solved efficiently using incremental SAT solving. The last two papers consider two frequently occurring types of encodings: (1) the problem of encoding circuits into CNF, and (2) encoding 0-1 integer linear programming into CNF, and how to use incremental SAT to solve the intended optimization problem. There are several encoding patterns that occur over and over again in this thesis, but also elsewhere. The most noteworthy are: incremental SAT, lazy encoding of constraints, and bit-wise encoding of arithmetic influenced by hardware designs for adders and multipliers. The general conclusion is: deploying SAT solvers effectively requires implementations that are efficient, yet easily adaptable to specific application needs. Moreover, to get the best results, it is worth spending effort to make sure that one uses the best encodings possible for an application. However, it is important to note that this is not absolutely necessary. For some applications naive problem encodings work just fine, which is indeed part of the appeal of using SAT solving.
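
    As a concrete illustration of the incremental SAT solving the abstract mentions: the solver keeps its state (learnt clauses, variable activities) across calls, per-call assumptions answer related queries cheaply, and clauses can be added between calls. The sketch below uses the python-sat (pysat) package's MiniSat binding rather than MiniSat's native C++ interface, and the clauses are a toy example.

```python
from pysat.solvers import Minisat22

# x1..x3 encode a toy constraint set: (x1 or x2) and (not x1 or x3)
clauses = [[1, 2], [-1, 3]]

with Minisat22(bootstrap_with=clauses) as solver:
    # Query 1: satisfiable if we assume x1 is true? (yes, and x3 is forced)
    print(solver.solve(assumptions=[1]), solver.get_model())

    # Query 2: same solver instance, different assumptions; learnt clauses
    # from the first call are reused instead of starting from scratch.
    print(solver.solve(assumptions=[-3]))

    # Incrementally add a clause and ask again.
    solver.add_clause([-2])
    print(solver.solve(assumptions=[-1]))  # x1 false, x2 forbidden -> UNSAT
```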

    A SAT-based approach for index calculus on binary elliptic curves

    Logical cryptanalysis, first introduced by Massacci in 2000, is a viable alternative to common algebraic cryptanalysis techniques over Boolean fields. With XOR operations being at the core of many cryptographic problems, recent research in this area has focused on handling XOR clauses efficiently. In this paper, we investigate solving the point decomposition step of the index calculus method for prime-degree extension fields $\mathbb{F}_{2^n}$ using SAT solving methods. We experimented with different SAT solvers and decided on using WDSat, a solver dedicated to this specific problem. We extend this solver by adding a novel symmetry-breaking technique and optimizing the time complexity of the point decomposition step by a factor of $m!$ for the $(m+1)$-th Semaev summation polynomial. While asymptotically solving the point decomposition problem with this method has exponential worst-case time complexity in the dimension $l$ of the vector space defining the factor base, experimental running times show that the presented SAT solving technique is significantly faster than current algebraic methods based on Gröbner basis computation. For the values of $l$ and $n$ considered in the experiments, the WDSat solver coupled with our symmetry-breaking technique is up to 300 times faster than MAGMA's F4 implementation, and this factor grows with $l$ and $n$.
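
    The $m!$ speed-up can be read as breaking the permutation symmetry of the decomposition. The gloss below is our sketch of that reading, not the paper's own exposition.

```latex
A decomposition of a point $R$ over the factor base,
\[
  P_1 + P_2 + \cdots + P_m = R,
\]
forces the $(m+1)$-th summation polynomial to vanish on the $x$-coordinates,
\[
  S_{m+1}\bigl(x_{P_1}, \ldots, x_{P_m}, x_R\bigr) = 0 .
\]
Since $S_{m+1}$ is symmetric in its first $m$ arguments, every permutation of
$(P_1, \ldots, P_m)$ gives the same equation, so solutions come in orbits of
size up to $m!$. A symmetry-breaking constraint such as
\[
  x_{P_1} \preceq x_{P_2} \preceq \cdots \preceq x_{P_m}
\]
keeps one representative per orbit, shrinking the search space (and hence the
expected solving time) by a factor of up to $m!$.
```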

    SAT and CP: Parallelisation and Applications

    This thesis is concerned with the parallelisation of solvers which search for either an arbitrary, or an optimum, solution to a problem stated in some formal way. We discuss the parallelisation of two solvers, and their applications, in three chapters. In the first chapter, we consider SAT, the decision problem of propositional logic, and algorithms for showing the satisfiability or unsatisfiability of propositional formulas. We sketch some proof-theoretic foundations which are related to the strength of different algorithmic approaches. Furthermore, we discuss details of the implementations of SAT solvers, and show how to improve upon existing sequential solvers. Lastly, we discuss the parallelisation of these solvers with a focus on clause exchange, the communication of intermediate results within a parallel solver. The second chapter is concerned with Constraint Programming (CP) with learning. Contrary to classical Constraint Programming techniques, this incorporates learning mechanisms as they are used in the field of SAT solving. We present results from parallelising CHUFFED, a learning CP solver. As this is both a kind of CP and SAT solver, it is not clear which parallelisation approaches work best here. In the final chapter, we discuss sorting networks, which are data-oblivious sorting algorithms, i.e., the comparisons they perform do not depend on the input data. Their independence of the input data lends them to parallel implementation. We consider the question of how many parallel sorting steps are needed to sort some inputs, and present both lower and upper bounds for several cases.
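
    To make the data-oblivious property concrete, here is a small sketch (not taken from the thesis): a classical 4-input sorting network with five comparators arranged in three parallel layers, verified exhaustively via the 0-1 principle. The comparator schedule is fixed in advance, which is exactly what makes such networks easy to lay out as parallel hardware.

```python
from itertools import product

# A classical 4-input sorting network: 5 comparators in 3 parallel layers.
# Comparators within a layer touch disjoint wires, so they can fire at once.
LAYERS = [
    [(0, 1), (2, 3)],
    [(0, 2), (1, 3)],
    [(1, 2)],
]

def apply_network(values):
    v = list(values)
    for layer in LAYERS:
        for i, j in layer:          # data-oblivious: same comparisons always
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
    return v

# 0-1 principle: a comparator network sorts all inputs iff it sorts every
# 0/1 input, so 2^4 checks suffice for n = 4.
assert all(apply_network(bits) == sorted(bits)
           for bits in product([0, 1], repeat=4))

print(apply_network([3, 1, 4, 1]))   # [1, 1, 3, 4], in 3 parallel steps
```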