8 research outputs found

    Generation of reconfigurable circuits from machine code

    Get PDF
    Integrated master's thesis. Electrical and Computer Engineering. Telecommunications. Universidade do Porto, Faculdade de Engenharia. 201

    Efficient Implementation of Particle Filters in Application-Specific Instruction-Set Processor

    Get PDF
    ABSTRACT This thesis considers the problem of implementing particle filters (PFs) in Application-Specific Instruction-set Processors (ASIPs). Due to the diversity and complexity of PFs, implementing them requires both computational efficiency and design flexibility. ASIP design can offer an interesting degree of design flexibility. Hence, our research focuses on improving the throughput of PFs in this flexible ASIP design environment. A general approach is first proposed to characterize the computational complexity of PFs.
Since PFs can be used for a wide variety of applications, we employ two types of blocks, application-specific and algorithm-specific, to distinguish the properties of PFs. In accordance with profiling results, we identify the likelihood processing and resampling processing blocks as the main bottlenecks among the algorithm-specific blocks. We explore the optimization of these two blocks at the algorithmic and architectural levels. Because the algorithmic level sits highest in the design flow, it has the greatest potential to offer speed and throughput improvements. Hence, in this work we begin at the algorithmic level by analyzing the complexity of the likelihood processing and resampling processing blocks, then proceed with their simplification and modification. We simplify the likelihood processing block by proposing a uniform quantization scheme, Uniform Quantization Likelihood Evaluation (UQLE). The results show a significant improvement in performance without loss of accuracy. The worst case of the UQLE software implementation in fixed-point arithmetic with 32 quantized intervals achieves a 23.7× average speedup over the software implementation of ELE. We also propose two novel resampling algorithms to replace the sequential Systematic Resampling (SR) algorithm in PFs: the reformulated SR and Parallel Systematic Resampling (PSR) algorithms. The reformulated SR algorithm combines a group of loops into a single loop to facilitate parallel implementation in an ASIP. The PSR algorithm makes the loop iterations independent, allowing the resampling algorithm to execute them in parallel. In addition, the proposed PSR algorithm has lower computational complexity than the SR algorithm. At the architectural level, ASIPs are appealing for the implementation of PFs because they strike a good balance between computational efficiency and design flexibility.
They can provide considerable throughput improvement through the inclusion of custom instructions, while retaining the ease of programming of general-purpose processors. Hence, after identifying the bottlenecks of PFs in the algorithm-specific blocks, we describe customized instructions for the UQLE, reformulated SR, and PSR algorithms in an ASIP. These instructions provide significantly higher throughput than a pure software implementation running on a general-purpose processor. The custom-instruction implementation of UQLE with 32 intervals achieves a 34× speedup over the worst case of its software implementation, at a cost of 3.75 K additional gates. An implementation of the reformulated SR algorithm is evaluated with four weights calculated in parallel and eight categories defined by uniformly distributed numbers that are compared simultaneously. It achieves a 23.9× speedup over the sequential SR algorithm on a general-purpose processor, at a cost of only 54 K additional gates. For the PSR algorithm, four custom instructions, configured to support four weights input in parallel, lead to a 53.4× speedup over the floating-point SR implementation on a general-purpose processor, at a cost of 47.3 K additional gates. Finally, we consider the specific application of video tracking and implement a histogram-based PF in an ASIP. We identify the histogram calculation as the main bottleneck in the application-specific blocks. We therefore propose a Parallel Array Histogram Architecture (PAHA) engine for accelerating histogram calculation in ASIPs. Implementation results show that a 16-way PAHA achieves a 43.75× speedup over its software implementation on a general-purpose processor.
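To illustrate why resampling is a bottleneck worth custom instructions, the sequential SR loop that the reformulated SR and PSR algorithms set out to replace can be sketched as follows. This is a textbook systematic-resampling sketch, not code from the thesis; the function name and interface are our own.

```python
import random

def systematic_resampling(weights, u=None):
    """Sequential systematic resampling: one uniform draw u in [0, 1),
    then n evenly spaced pointers (u + i) / n swept across the
    cumulative sum of the normalised weights."""
    n = len(weights)
    total = sum(weights)
    if u is None:
        u = random.random()
    positions = [(u + i) / n for i in range(n)]
    # Running cumulative sum of normalised weights.
    cumulative = []
    acc = 0.0
    for w in weights:
        acc += w / total
        cumulative.append(acc)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    indices = [0] * n
    i = j = 0
    # This sweep is inherently serial: pointer i cannot advance until
    # every earlier pointer has been matched against the cumulative
    # sum, which is the dependence the parallel variants remove.
    while i < n:
        if positions[i] < cumulative[j]:
            indices[i] = j
            i += 1
        else:
            j += 1
    return indices
```

With uniform weights every particle survives once (e.g. `systematic_resampling([0.25] * 4, u=0.5)` gives `[0, 1, 2, 3]`); with all mass on one particle, every output index points to it. The serial `while` loop is exactly what makes SR a poor fit for parallel hardware.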

    Increasing the efficacy of automated instruction set extension

    Get PDF
    The use of Instruction Set Extension (ISE) in customising embedded processors for a specific application has been studied extensively in recent years. The addition of a set of complex arithmetic instructions to a baseline core has proven to be a cost-effective means of meeting design performance requirements. This thesis proposes and evaluates a reconfigurable ISE implementation called “Configurable Flow Accelerators” (CFAs), a number of refinements to an existing Automated ISE (AISE) algorithm called “ISEGEN”, and the effects of source form on AISE. The CFA is demonstrated repeatedly to be a cost-effective design for ISE implementation. A temporal partitioning algorithm called “staggering” is proposed and demonstrated to reduce the area of a CFA implementation by 37% on average, for only an 8% reduction in acceleration. This thesis then turns to concerns within the ISEGEN AISE algorithm. A methodology for finding a good static heuristic weighting vector for ISEGEN is proposed and demonstrated; up to 100% of merit is shown to be lost or gained through the choice of vector. ISEGEN early termination is introduced and shown to improve the runtime of the algorithm by up to 7.26×, and by 5.82× on average. An extension to the ISEGEN heuristic to account for pipelining is proposed and evaluated, increasing acceleration by up to an additional 1.5×. An energy-aware heuristic is added to ISEGEN, which reduces the energy used by a CFA implementation of a set of ISEs by 1.6× on average, and by up to 3.6×. This result directly contradicts the frequently espoused notion that “bigger is better” in ISE. The last stretch of work in this thesis is concerned with source-level transformation: the effect of changing the representation of the application on the quality of the combined hardware-software solution. A methodology for combined exploration of source transformation and ISE is presented, and demonstrated to improve the acceleration of the result by an average of 35% versus ISE alone.
Floating point is demonstrated to perform worse than fixed point for all design concerns and applications studied here, regardless of the ISEs employed.
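The fixed-point advantage reported above comes from replacing floating-point units with plain integer shifts and adds. A minimal sketch of the idea, using a Q16.16 format chosen purely for illustration (the thesis does not specify a format, and these helper names are our own):

```python
# Q16.16 fixed-point: a real value x is stored as round(x * 2**16).
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    # Encode a real number into Q16.16.
    return int(round(x * SCALE))

def from_fixed(f):
    # Decode Q16.16 back to a float.
    return f / SCALE

def fixed_mul(a, b):
    # The product of two Q16.16 numbers carries 32 fractional bits;
    # an arithmetic right shift renormalises it back to Q16.16.
    # Only integer multiply and shift are needed: no FPU.
    return (a * b) >> FRAC_BITS

# 1.5 and 2.25 are exactly representable, so 1.5 * 2.25 == 3.375 exactly.
product = from_fixed(fixed_mul(to_fixed(1.5), to_fixed(2.25)))
```

In hardware terms this is why fixed point wins: the multiply-and-shift maps onto a small integer datapath, whereas floating point needs alignment, normalisation, and rounding logic on every operation.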

    An Architecture Framework for an Adaptive Extensible Processor

    No full text
    NGArch (Next Generation Architecture) Forum 2007 : ć­ŠèĄ“ç·ćˆă‚»ăƒłă‚żăƒŒ, 東
