10 research outputs found

    LSI/VLSI design for testability analysis and general approach

    The incorporation of testability characteristics into large-scale digital designs is both necessary for effective device testing and pertinent to enhancing device reliability. There are at least three major design-for-testability (DFT) techniques, namely self-checking, LSSD, and partitioning, each of which can be incorporated into a logic design to meet a specific set of testability and reliability requirements. A detailed analysis of the design theory, implementation, fault coverage, hardware requirements, and application limitations of each of these techniques is also presented
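
    As a minimal illustration of the scan-path idea behind LSSD (a behavioral sketch, not the paper's design), the Python model below treats the flip-flops as a shift register in test mode: a test pattern is shifted in, one functional clock captures the combinational response, and the result is shifted out for comparison. The ScanChain class and the XOR example logic are assumed names for illustration.

```python
# Minimal behavioral sketch of an LSSD-style scan chain (illustrative only).
# In scan mode the registers form a shift register; in functional mode they
# capture the outputs of the combinational logic under test.

class ScanChain:
    def __init__(self, n_bits, comb_logic):
        self.regs = [0] * n_bits          # the scannable flip-flops
        self.comb_logic = comb_logic      # function: state tuple -> next-state tuple

    def shift_in(self, pattern):
        """Scan mode: shift a test pattern in, one bit per scan clock."""
        for bit in pattern:
            self.regs = [bit] + self.regs[:-1]

    def capture(self):
        """Functional mode: one system clock captures the combinational response."""
        self.regs = list(self.comb_logic(tuple(self.regs)))

    def shift_out(self):
        """Scan mode: shift the captured response out for comparison."""
        out = list(self.regs)
        self.regs = [0] * len(self.regs)
        return out

# Example: a toy 3-bit circuit (each bit XORed with its neighbor).
xor_ring = lambda s: tuple(s[i] ^ s[(i + 1) % 3] for i in range(3))
chain = ScanChain(3, xor_ring)
chain.shift_in([1, 0, 1])    # apply test vector
chain.capture()              # exercise the logic once
print(chain.shift_out())     # observed response, checked against the fault-free value
```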

    High level compilation for gate reconfigurable architectures

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 205-215). A continuing exponential increase in the number of programmable elements is turning management of gate-reconfigurable architectures as "glue logic" into an intractable problem; it is past time to raise this abstraction level. The physical hardware in gate-reconfigurable architectures is all low level: individual wires, bit-level functions, and single-bit registers. Hence one should look to the fetch-decode-execute machinery of traditional computers for higher-level abstractions. Ordinary computers have machine-level architectural mechanisms that interpret instructions, instructions that are generated by a high-level compiler. Efficiently moving up to the next abstraction level requires leveraging these mechanisms without introducing the overhead of machine-level interpretation. In this dissertation, I solve this fundamental problem by specializing architectural mechanisms with respect to input programs. This solution is the key to efficient compilation of high-level programs to gate-reconfigurable architectures. My approach to specialization includes several novel techniques. I develop, with others, extensive bitwidth analyses that apply to registers, pointers, and arrays. I use pointer analysis and memory disambiguation to target devices with blocks of embedded memory. My approach to memory parallelization generates a spatial hierarchy that enables easier-to-synthesize logic state machines with smaller circuits and no long wires. My space-time scheduling approach integrates the techniques of high-level synthesis with the static routing concepts developed for single-chip multiprocessors. Using DeepC, a prototype compiler demonstrating my thesis, I compile a new benchmark suite to Xilinx Virtex FPGAs. The resulting performance is comparable to a custom MIPS processor, with smaller area (40 percent on average), higher evaluation speeds (2.4x), and lower energy (18x) and energy-delay (45x). Specialization of advanced mechanisms results in additional speedup, scaling with hardware area, at the expense of power. For comparison, I also target IBM's standard-cell SA-27E process and the RAW microprocessor. Results include a sensitivity analysis over the different mechanisms specialized and a grand comparison between the alternate targets. by Jonathan William Babb. Ph.D.
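
    As a rough sketch of the bitwidth-analysis idea mentioned in the abstract (not the DeepC implementation itself), the Python fragment below propagates value ranges through a few arithmetic operations and derives the minimal unsigned register width each variable needs; the Range class and its propagation rules are simplifying assumptions.

```python
# Toy interval-based bitwidth analysis (illustrative, unsigned values only).
# Each variable carries a [lo, hi] range; operations propagate ranges, and
# the width of a variable is the number of bits needed to hold its maximum.

def bits_needed(hi):
    """Minimal unsigned width for values in [0, hi]."""
    return max(1, hi.bit_length())

class Range:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Range(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # For non-negative intervals the extrema occur at the corners.
        products = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Range(min(products), max(products))

    @property
    def width(self):
        return bits_needed(self.hi)

# Example: i counts 0..9, x = i * 3, y = x + 100.
i = Range(0, 9)
x = i * Range(3, 3)
y = x + Range(100, 100)
for name, r in [("i", i), ("x", x), ("y", y)]:
    print(f"{name}: range [{r.lo}, {r.hi}] -> {r.width} bits")
# A synthesizer can then allocate 4-, 5-, and 7-bit registers instead of 32-bit ones.
```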

    RTRLib: a high-level modeling tool for dynamically partially reconfigurable systems

    Dissertation (Master's), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2020. Partial dynamic reconfiguration is considered an interesting technique for increasing flexibility in FPGA designs, since hardware modules can be replaced dynamically while the remainder of the circuit stays in operation. It is used in systems with hard requirements such as adaptability, robustness, power consumption, cost, and fault tolerance. However, the complexity of developing dynamically partially reconfigurable systems is considerably higher than that of fully static designs, so new design methodologies and tools are required to reduce the design complexity of such systems. In this context, this work presents RTRLib, a high-level modeling tool for the development of dynamically reconfigurable systems on Xilinx Zynq devices through a simple system specification and the parametrization of a few blocks. Under specific conditions, RTRLib automatically generates the hardware and software scripts that implement the solution using the Vivado Design Suite and the SDK. These scripts cover the sequential design steps from hardware project creation to boot image elaboration. Since RTRLib is composed of pre-characterized IP-Cores, the tool can also be used to analyze system behavior during the design process through early estimation of essential system characteristics such as resource consumption and latency. The present work also includes new functionalities implemented in RTRLib in the context of hardware and software design, such as: hardware script generalization, IO mapping, floorplanning through a GUI, software script creation, a generator for a standalone template application that uses the partial reconfiguration controller (PRC), and a library for FreeRTOS applications. Finally, four case studies were implemented to demonstrate the tool's capabilities: a terrain classification system based on neural networks, a linear regressor system used to control a myokinetic prosthetic hand, and a hypothetical real-time application
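
    To give a flavor of what such script generation can look like (the abstract does not show RTRLib's actual output, so everything here is a hypothetical sketch), the Python fragment below emits a minimal Vivado Tcl sequence from a block parametrization; the function name, file names, and the exact sequence of steps are assumptions, though the individual Tcl commands (create_project, add_files, launch_runs) are standard Vivado ones.

```python
# Hypothetical generator of a Vivado Tcl script from a high-level block spec.
# The flow shown (project creation -> synthesis -> bitstream) is a simplified
# assumption of what a hardware script covers, not RTRLib's real output.

def generate_hw_script(project, part, sources, out_path="build.tcl"):
    lines = [f"create_project {project} ./{project} -part {part}"]
    for src in sources:
        lines.append(f"add_files {src}")
    lines += [
        "update_compile_order -fileset sources_1",
        "launch_runs synth_1 -jobs 4",
        "wait_on_run synth_1",
        "launch_runs impl_1 -to_step write_bitstream -jobs 4",
        "wait_on_run impl_1",
    ]
    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Example parametrization: a static top plus two reconfigurable filter modules
# on a Zynq-7000 part (all names are placeholders).
generate_hw_script(
    project="rtr_demo",
    part="xc7z020clg484-1",
    sources=["static_top.v", "filter_a.v", "filter_b.v"],
)
```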

    Dynamically and partially reconfigurable hardware architectures for high performance microarray bioinformatics data analysis

    The field of Bioinformatics and Computational Biology (BCB) is a multidisciplinary field that has emerged due to the computational demands of current state-of-the-art biotechnology. BCB deals with the storage, organization, retrieval, and analysis of biological datasets, which have grown in size and complexity in recent years, especially after the completion of the human genome project. The advent of microarray technology in the 1990s introduced the concept of the high-throughput experiment, a biotechnology that measures the gene expression profiles of thousands of genes simultaneously. As such, microarray analysis requires high computational power to extract the biological relevance from its high-dimensional data. Current general-purpose processors (GPPs) have been unable to keep up with the increasing computational demands of microarrays and have reached a limit in terms of clock speed. Consequently, Field Programmable Gate Arrays (FPGAs) have been proposed as a low-power, viable solution to overcome the computational limitations of GPPs and other methods. The research presented in this thesis harnesses current state-of-the-art FPGAs and tools to accelerate some of the most widely used data mining methods for the analysis of microarray data, in an effort to investigate the viability of the technology as an efficient, low-power, and economic solution. Three widely used methods were selected for the FPGA implementations: the unsupervised K-means clustering algorithm and two supervised classification methods, namely K-Nearest Neighbour (K-NN) and Support Vector Machines (SVM). These methods are thought to benefit from parallel implementation. This thesis presents detailed designs and implementations of these three BCB applications on FPGA, captured in Verilog HDL, whose performance is compared with equivalent implementations running on GPPs. In addition to acceleration, the benefits of the dynamic partial reconfiguration (DPR) capability of modern Xilinx FPGAs are investigated with reference to the aforementioned data mining methods. Implementing K-means clustering on FPGA using a non-DPR design flow outperformed equivalent GPP and GPU implementations in terms of speed-up by two orders and one order of magnitude, respectively, while being eight times more power-efficient than the GPP implementation and four times more than the GPU one. As for energy efficiency, the FPGA implementation was 615 times more energy-efficient than GPPs and 31 times more than GPUs. Moreover, the FPGA implementation's advantage over the GPP and GPU implementations grows as the dimensionality of the microarray data increases. Additionally, the DPR implementations of K-means clustering showed speed-ups in partial reconfiguration time of ~5x and ~17x over full-chip reconfiguration for the single-core and eight-core implementations, respectively. Two architectures of the K-NN classifier were implemented on FPGA, namely A1 and A2. The K-NN implementation based on the A1 architecture achieved a speed-up of ~76x over an equivalent GPP implementation, whereas the A2 architecture achieved ~68x. Furthermore, the FPGA implementation outperformed the equivalent GPP implementation as the dimensionality of the data increased. In addition, the DPR implementations of the K-NN classifier achieved speed-ups in reconfiguration time between ~4x and ~10x over full-chip reconfiguration when reconfiguring a portion of the classifier or the complete classifier. Similarly, two architectures of the SVM classifier were implemented on FPGA, the first outperforming an equivalent GPP implementation by ~61x and the second by ~49x. As for the DPR implementation of the SVM classifier, it showed a speed-up of ~8x in reconfiguration time when reconfiguring the complete core or when exchanging it with a K-NN core to form a multi-classifier. These implementations clearly show FPGAs to be an efficacious, efficient, and economic solution for bioinformatics microarray data analysis
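
    For reference, the kernel being accelerated in the K-means case is standard Lloyd's iteration; the short Python sketch below is a software baseline (not the thesis's Verilog design) showing the distance-and-assign loop whose per-sample independence is what makes a parallel FPGA implementation attractive.

```python
# Plain Lloyd's K-means as a software reference (illustrative only).
# Each sample's nearest-centroid search is independent of the others,
# which is the parallelism an FPGA implementation can exploit.
import random

def kmeans(data, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(data, k)
    for _ in range(iters):
        # Assignment step: map each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for point in data:
            dists = [sum((p - c) ** 2 for p, c in zip(point, cen)) for cen in centroids]
            clusters[dists.index(min(dists))].append(point)
        # Update step: recompute each centroid as the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    return centroids

# Toy gene-expression-like data: 2-D points in two loose groups.
data = [(0.1, 0.2), (0.0, 0.1), (0.9, 1.0), (1.1, 0.8), (0.2, 0.0), (1.0, 1.1)]
print(kmeans(data, k=2))
```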

    Design and Implementation of a Scalable Hardware Platform for High Speed Optical Tracking

    Optical tracking has been an important subject of research for several decades. Optical tracking systems are used in a wide range of areas, including the military, medicine, industry, and entertainment. In this thesis a complete hardware platform that targets high-speed optical tracking applications is presented. The implemented hardware system contains three main components: a high-speed camera equipped with a 1.3-megapixel image sensor capable of operating at 500 frames per second, a CameraLink grabber able to interface three cameras, and an FPGA+Dual-DSP based image processing platform. The hardware system is designed using a modular approach. The flexible architecture enables the construction of a scalable optical tracking system in which a large number of cameras can be used in the tracking environment. One of the greatest challenges in a multi-camera optical tracking system is the huge amount of image data that must be processed in real time. In this thesis, a study of FPGA-based high-speed image processing is performed. The FPGA implementation of a number of image processing operators is described, and how different levels of parallelism in the algorithms are exploited to achieve high processing throughput is explained in detail. This thesis also presents a new single-pass blob analysis algorithm. With an optimized FPGA implementation, the geometrical features of a large number of blobs can be calculated in real time. At the end of this thesis, a prototype design which integrates all the implemented hardware and software modules is demonstrated to prove the usability of the proposed optical tracking system
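
    The single-pass blob analysis can be sketched in software as follows; this Python version (an illustrative reference, not the thesis's optimized FPGA design) labels 4-connected blobs and accumulates area and centroid sums during the same raster scan, resolving label equivalences with a small union-find table instead of a second pass over the image.

```python
# Single-pass blob analysis sketch (4-connectivity) with union-find.
# Labels and per-blob statistics (area, centroid sums) are accumulated in
# one raster scan; only the small label table needs a final merge step.

def blob_analysis(img):
    h, w = len(img), len(img[0])
    parent = {}                       # union-find over provisional labels
    stats = {}                        # provisional label -> [area, sum_x, sum_y]
    labels = [[0] * w for _ in range(h)]
    next_label = 1

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            left = labels[y][x - 1] if x > 0 else 0
            up = labels[y - 1][x] if y > 0 else 0
            if not left and not up:
                lab = next_label                 # start a new blob
                parent[lab] = lab
                stats[lab] = [0, 0, 0]
                next_label += 1
            else:
                lab = left or up
                if left and up and find(left) != find(up):
                    parent[find(up)] = find(left)   # record the equivalence
            labels[y][x] = lab
            s = stats[lab]
            s[0] += 1; s[1] += x; s[2] += y

    # Tiny post-pass over the label table (not the image): merge by root.
    merged = {}
    for lab, (area, sx, sy) in stats.items():
        root = find(lab)
        m = merged.setdefault(root, [0, 0, 0])
        m[0] += area; m[1] += sx; m[2] += sy
    return {r: (a, sx / a, sy / a) for r, (a, sx, sy) in merged.items()}

# Example: three blobs in a 5x6 binary image.
img = [
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
]
for root, (area, cx, cy) in blob_analysis(img).items():
    print(f"blob {root}: area={area}, centroid=({cx:.2f}, {cy:.2f})")
```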

    Design of Totally Fault Locatable Combinational Networks

    No full text

    A Realist Argument for the Self: Emotions and Consciousness in Self-Making

    The question of the self, of what the self is (or even whether there is a self), is one that has grown alongside humanity and haunted it throughout our collective history. It is the purpose of this study to add to that questioning, and to attempt a small contribution to a field that has been as widely covered as it is perplexing. We will undertake this effort by first examining some common and representative accounts of the self and what they entail, and with that as background we will move into the interdisciplinary areas of psychological and neuroscientific concerns regarding the self. We will discover the central role that emotions and intuition play in self-formation and function. Applying those lessons philosophically, we will build on our (hopefully achieved) foundation and offer a unique definition of the self. Having thereby found phenomenological matters to be of importance, we will next examine two phenomenological accounts of the self as possible objections to our own, and also conduct a review of phenomenological methodology. Taking that as a guide, we will explore consciousness and its relation to the self in some depth before finally proposing a metaphysical manner in which the self, on our definition, might be judged to be realist. With all of the preceding as grounding, we will then analyze time for our self-view and suggest that if one's self is to be a personal work, a creation rather than an accident of happenstance, then it is out of one's perspective on time that it will come to fruition. Throughout these necessarily broad but deeply interrelated considerations we will strive to maintain a practical approach and limit ourselves to the human case