
    Agnostic Validation Test Bench For Efuse Connectivity Verification

    In the semiconductor industry, validation is an important process for discovering design bugs and fixing them before the product is released. Semiconductor integrated circuits are normally refreshed on a yearly cadence, so it is crucial to keep the design and validation cycle short without compromising product quality. Nowadays, the validation process often becomes the bottleneck for product readiness, and the integrated circuit validation flow has to improve to keep up with advances in the integrated circuit design flow. In this work, an improvement to the validation flow is discussed, with particular focus on eFUSE (Electric FUSE) connectivity validation. eFUSE is a feature of an integrated circuit that functions as central storage for important settings and distributes them during the system boot-up process. eFUSE connectivity validation is needed to ensure that each intellectual property is able to retrieve the correct eFUSE value. In this work, the concept of an agnostic validation test bench for eFUSE connectivity validation is developed and tested; the idea is to eliminate manual test development effort, improve validation efficiency, and promote reusability across different projects. Using this methodology, eFUSE connectivity validation time is reduced significantly, with a recorded improvement of 28%, and eFUSE coverage improves by an average of 65%. In summary, the eFUSE connectivity validation time frame is shortened without compromising test quality.
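
    The central idea can be illustrated with a small table-driven check: instead of hand-writing one test per intellectual property, a single generic loop walks a fuse map and compares each IP's read-back value against the value programmed into the central eFUSE. The sketch below is only a minimal illustration of that idea, not the authors' test bench; the fuse map, the read-back helper, and the simulated values are all hypothetical.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical fuse-map entry: which IP consumes which eFUSE field and
     * which value was programmed into the central storage. */
    typedef struct {
        const char *ip_name;     /* consuming intellectual property block   */
        size_t      fuse_index;  /* index of the field in the fuse array    */
        uint32_t    expected;    /* value programmed into the central eFUSE */
    } fuse_map_entry_t;

    /* Stand-in for the platform-specific read-back path; a real bench would
     * query the IP's registers or simulation signals instead of this array. */
    static const uint32_t simulated_ip_view[] = { 0x1A, 0x02, 0xFF };

    static uint32_t read_ip_efuse_value(const char *ip_name, size_t fuse_index)
    {
        (void)ip_name;
        return simulated_ip_view[fuse_index];
    }

    /* Table-driven ("agnostic") connectivity check: one generic loop covers
     * every IP, and only the fuse-map table changes between projects. */
    static int check_efuse_connectivity(const fuse_map_entry_t *map, size_t n)
    {
        int failures = 0;
        for (size_t i = 0; i < n; i++) {
            uint32_t observed = read_ip_efuse_value(map[i].ip_name, map[i].fuse_index);
            if (observed != map[i].expected) {
                printf("MISMATCH %s[%zu]: expected 0x%X, got 0x%X\n",
                       map[i].ip_name, map[i].fuse_index,
                       (unsigned)map[i].expected, (unsigned)observed);
                failures++;
            }
        }
        return failures;
    }

    int main(void)
    {
        const fuse_map_entry_t fuse_map[] = {
            { "ip_core_a", 0, 0x1A },
            { "ip_core_b", 1, 0x02 },
            { "ip_core_c", 2, 0xF0 },  /* deliberately wrong to show a report */
        };
        int bad = check_efuse_connectivity(fuse_map, 3);
        printf("%d mismatch(es) found\n", bad);
        return bad ? 1 : 0;
    }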

    A Survey on FPGA-Based Heterogeneous Clusters Architectures

    In recent years, the most powerful supercomputers have already reached megawatt power consumption levels, an important issue that challenges sustainability and shows the impossibility of maintaining this trend. To date, the prevalent approach to supercomputing is dominated by CPUs and GPUs. Given their fixed architectures with generic instruction sets, they have been favored with many tools and mature workflows, which has led to mass adoption and further growth. However, reconfigurable hardware such as FPGAs has repeatedly proven that it offers substantial advantages over this supercomputing approach concerning performance and power consumption. In this survey, we review the most relevant works that advanced the field of heterogeneous supercomputing using FPGAs, focusing on their architectural characteristics. Each work is divided into three main parts: network, hardware, and software tools. All implementations face challenges that involve all three parts, and these dependencies result in compromises that designers must take into account. The advantages and limitations of each approach are discussed and compared in detail. The classification and study of the architectures illustrate the trade-offs of the solutions and help identify open problems and research lines.

    HyperFPGA: SoC-FPGA Cluster Architecture for Supercomputing and Scientific applications

    Since their inception, supercomputers have addressed problems that far exceed those of a single computing device. Modern supercomputers are made up of tens of thousands of CPUs and GPUs in racks that are interconnected via elaborate and often ad hoc networks. These large facilities provide scientists with unprecedented and ever-growing computing power capable of tackling more complex and larger problems. In recent years, the most powerful supercomputers have already reached megawatt power consumption levels, an important issue that challenges sustainability and shows the impossibility of maintaining this trend. With more pressure on energy efficiency, an alternative to traditional architectures is needed. Reconfigurable hardware, such as FPGAs, has repeatedly been shown to offer substantial advantages over the traditional supercomputing approach with respect to performance and power consumption. Several works that advanced the field of heterogeneous supercomputing using FPGAs are described in this thesis \cite{survey-2002}. Each cluster and its architectural characteristics can be studied from three interconnected domains: network, hardware, and software tools, resulting in intertwined challenges that designers must take into account. The classification and study of the architectures illustrate the trade-offs of the solutions and help identify open problems and research lines, which in turn served as inspiration and background for the HyperFPGA. In this thesis, the HyperFPGA cluster is presented as a way to build scalable SoC-FPGA platforms to explore new architectures for improved performance and energy efficiency in high-performance computing, focusing on flexibility and openness. The HyperFPGA is a modular platform based on a SoM that includes power monitoring tools and high-speed general-purpose interconnects to offer a great level of flexibility and introspection. By exploiting the reconfigurability and programmability offered by the HyperFPGA infrastructure, which combines FPGAs and CPUs with high-speed general-purpose connectors, novel computing paradigms can be implemented. A custom Linux OS and drivers, along with a custom script for hardware definition, provide a uniform interface from application to platform for a programmable framework that integrates existing tools. The development environment is demonstrated using the N-Queens problem, a classic benchmark for evaluating the performance of parallel computing systems. Overall, the results of the HyperFPGA on the N-Queens problem highlight the platform's ability to handle computationally intensive tasks and demonstrate its suitability for supercomputing experiments.
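
    For reference, the N-Queens benchmark mentioned above counts the placements of n mutually non-attacking queens on an n-by-n board. The sequential bitmask counter below is a common formulation of that benchmark, not code from the thesis; a cluster version would typically hand each node a disjoint set of first-row placements and sum the partial counts.

    #include <stdint.h>
    #include <stdio.h>

    /* Counts all N-Queens solutions by backtracking over bitmasks of
     * occupied columns and diagonals. */
    static uint64_t solve(uint32_t cols, uint32_t diag1, uint32_t diag2, uint32_t all)
    {
        if (cols == all)                 /* every column filled: one solution */
            return 1;
        uint64_t count = 0;
        uint32_t free_cells = all & ~(cols | diag1 | diag2);
        while (free_cells) {
            uint32_t bit = free_cells & (0u - free_cells);  /* lowest free cell */
            free_cells ^= bit;
            count += solve(cols | bit, (diag1 | bit) << 1, (diag2 | bit) >> 1, all);
        }
        return count;
    }

    int main(void)
    {
        int n = 12;                                   /* board size */
        uint32_t all = (1u << n) - 1u;
        printf("N=%d -> %llu solutions\n", n, (unsigned long long)solve(0, 0, 0, all));
        return 0;
    }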

    Implementation Of Modular Testing In Bios Development And Debug

    This project presents a modular approach to BIOS development that aims to tackle the long development time of the existing methodologies, namely the hardware platform approach and the virtual platform approach. The proposed approach consists of a previous-generation platform, an FPGA card, and UEFI drivers. The FPGA is loaded with the RTL of one Intellectual Property (IP) from the current company project; the chosen IP is the Low Power Subsystem (LPSS). The card is then plugged into the PCI slot of the platform. In addition, a UEFI Configuration Driver and a UEFI Reset Driver are built to configure and reset the LPSS registers, respectively. Both are stored on a thumb drive, plugged into a USB port of the platform, and executed in the UEFI Shell environment. In this project, the LPSS development time needed by the three methodologies, the hardware platform approach, the virtual platform approach, and the modular approach, is compared. The results indicate that the modular approach is capable of saving up to 90% of the development time in comparison with the other two approaches. At the same time, both UEFI drivers function correctly, and the processing time of both the UEFI Configuration Driver and the UEFI Reset Driver is only about 1 to 2 seconds. In conclusion, the novelty of the modular approach is that the BIOS can be developed on a modular basis, without having to develop the BIOS as a whole, which cuts down the BIOS development time efficiently.
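
    As a rough illustration of what such a driver looks like, the sketch below follows the shape of an EDK II UEFI application that programs and reads back memory-mapped registers. It is only a skeleton under assumed conventions: the base address, register offsets, and written values are placeholders, not the actual LPSS registers used in the project.

    /* Minimal EDK II-style shell application sketch (assumed build environment). */
    #include <Uefi.h>
    #include <Library/UefiLib.h>
    #include <Library/IoLib.h>

    #define IP_MMIO_BASE   0xFED00000ULL   /* hypothetical BAR assigned to the FPGA card */
    #define IP_CTRL_REG    0x00            /* hypothetical control register offset       */
    #define IP_RESET_REG   0x04            /* hypothetical reset register offset         */

    EFI_STATUS
    EFIAPI
    UefiMain (
      IN EFI_HANDLE        ImageHandle,
      IN EFI_SYSTEM_TABLE  *SystemTable
      )
    {
      UINT32 Value;

      /* Program a configuration value, then read it back to confirm the write. */
      MmioWrite32 (IP_MMIO_BASE + IP_CTRL_REG, 0x00000001);
      Value = MmioRead32 (IP_MMIO_BASE + IP_CTRL_REG);
      Print (L"CTRL readback: 0x%08x\n", Value);

      /* Pulse the reset register, mirroring what a separate reset driver would do. */
      MmioWrite32 (IP_MMIO_BASE + IP_RESET_REG, 0x00000001);
      MmioWrite32 (IP_MMIO_BASE + IP_RESET_REG, 0x00000000);

      return EFI_SUCCESS;
    }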

    Operating systems data transfer optimization

    Graduation project (Master's in Computer Engineering), Instituto Tecnológico de Costa Rica, Escuela de Ingeniería en Computación, 2018. Balancing algorithms challenge the state of the art in how data is exchanged as messages between programs that execute in the kernel and applications running on top in user space of a modern Operating System. There is always room to improve the way applications that rely on different spaces of an Operating System interact. Algorithms must always be kept in the picture when thinking about next-generation human interaction problems and the solutions they require. Artificial Intelligence, Computer Vision, the Internet of Things, and Autonomous Driving are all data-centric applications that require data to be transported efficiently and quickly between different programs, whether they reside in the kernel or in user space. Chip designs and physical boundaries are putting pressure on software solutions that can virtualize and optimize how data is exchanged. This research proposes to demonstrate, via experimentation techniques, designs, measurement, and simulation, that in-place solutions for optimizing data transfer between applications residing in different Operating System spaces can be compared and revised to improve their performance in a data-centric technology world. Specifically, it explores the use of a simulated environment to create a set of archetypical scenarios using an experimental design which demonstrates that PF_RING optimizes the exchange of data messages between the Operating System kernel and user-space applications.
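
    For context, the sketch below shows a typical user-space PF_RING capture loop, the mechanism the study evaluates for moving packet data from the kernel to an application with minimal copying. It is a generic illustration, not the experimental setup of the thesis; the interface name, snapshot length, and packet count are placeholders.

    #include <stdio.h>
    #include <stdint.h>
    #include "pfring.h"

    int main(void)
    {
        /* Open the device through PF_RING in promiscuous mode. */
        pfring *ring = pfring_open("eth0", 1536 /* snaplen */, PF_RING_PROMISC);
        if (ring == NULL) {
            fprintf(stderr, "pfring_open failed (is the pf_ring module loaded?)\n");
            return 1;
        }
        pfring_enable_ring(ring);

        uint64_t bytes = 0;
        for (int i = 0; i < 100000; i++) {
            struct pfring_pkthdr hdr;
            u_char *pkt = NULL;
            /* buffer_len = 0 asks for a zero-copy pointer into the ring;
             * the last argument makes the call block until a packet arrives. */
            if (pfring_recv(ring, &pkt, 0, &hdr, 1) > 0)
                bytes += hdr.len;
        }
        printf("received %llu bytes\n", (unsigned long long)bytes);

        pfring_close(ring);
        return 0;
    }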

    Implementation of Ultra-Low Latency and High-Speed Communication Channels for an FPGA-Based HPC Cluster

    An FPGA-based cluster profits from the flexibility and the performance potential that FPGA technology provides. Since price and power consumption are becoming increasingly important elements in the High-Performance Computing market, the multi-FPGA exploration field is getting more popular each year. Network latency has failed to keep up with other improvements in computer performance, and complex communication stacks have sacrificed latency and increased overhead to achieve other goals, often being inefficient and unnecessary. The existing commercial off-the-shelf communication protocols and Network Interface Controllers for FPGA-to-FPGA interconnection lack the performance to support time-critical applications and tightly coupled System-on-Chip partitioning; instead, custom communication approaches are preferred. In this work, ultra-low latency and high-speed communication channels for an FPGA-based cluster are presented. The target platform is composed of two BEE3s grouping 8 Virtex-5 FPGAs interconnected in a ring topology. Our approach exploits Multi-Gigabit Transceiver technology to achieve a reliable 8 Gbps channel bandwidth. The proposed communication IP supports data transfer between coprocessors over the network by means of a direct network implementation with hop-by-hop packet routing capability. Experimental results showed a latency of only 34 clock cycles between two neighboring nodes, one of the lowest reported in the literature. In addition, an architecture suitable for High-Performance Computing is proposed, featuring scalable, parallel, and distributed processing. For an 8-FPGA platform, the architecture provides 35.6 GB/s off-chip memory throughput, 128 Gbps aggregate network bandwidth, and 8.9 GFLOPS of computing power. A large and dense matrix-vector solver is partitioned and implemented across the cluster. We achieved competitive performance and computational efficiency as a result of the low communication overhead among the distributed processing elements. This work contributes to supporting new research in intensive parallel computing and enables large System-on-Chip partitioning and scaling on FPGA-based clusters.
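
    To put the reported figures in perspective, the short calculation below converts the 34-cycle neighbour latency into time and scales it around the 8-node ring. The 250 MHz user-clock figure (8 Gbps over a 32-bit datapath) is an assumption made for illustration, not a number taken from the thesis.

    #include <stdio.h>

    int main(void)
    {
        const double clock_mhz      = 250.0;  /* assumed MGT user-clock frequency        */
        const int    cycles_per_hop = 34;     /* reported neighbour-to-neighbour latency */
        const int    nodes          = 8;      /* two BEE3 boards, 4 FPGAs each           */

        double ns_per_hop = cycles_per_hop / clock_mhz * 1000.0;
        printf("per-hop latency  : %.0f ns\n", ns_per_hop);

        /* In a unidirectional ring the worst case is nodes-1 hops. */
        printf("worst-case (ring): %.0f ns over %d hops\n",
               ns_per_hop * (nodes - 1), nodes - 1);
        return 0;
    }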

    Digital Forensics and Born-Digital Content in Cultural Heritage Collections

    Digital Forensics and Born-Digital Content in Cultural Heritage Collections examines digital forensics and its relevance for contemporary research. The applicability of digital forensics to archivists, curators, and others working with our cultural heritage is not necessarily intuitive. When the shared interests of digital forensics and the responsibilities associated with securing and maintaining our cultural legacy (preservation, extraction, documentation, and interpretation, as this report details) are identified, the correspondence between these fields of study becomes logical and compelling. Council on Library and Information Resources.

    Development of Trigger and Control Systems for CMS

    During the year of 2007, the Large Hadron Collider (LHC) and its four main detectors will begin operation with a view to answering the most pressing questions in particle physics. However, before one can analyse the data produced to find the rare phenomena being looked for, both the detector and readout electronics must be thoroughly tested to ensure that the system will operate in a consistent way. The Compact Muon Solenoid (CMS) is one of the two general-purpose detectors at CERN. The tracking component of the design produces more data than any previous detector used in particle physics, with approximately ten million detector channels. The data from the detector is processed by the tracker Front End Driver (FED). The large data volume necessitated the development of a buffering and throttling system to prevent buffer overflow both on and off the detector. A critical component of this system is the APV emulator (APVe), which vetoes trigger decisions based on buffer status in the tracker. The commissioning of these components, along with a large part of the Timing, Trigger and Control (TTC) system is discussed, including the various modifications that were made to improve the robustness of the full system. Another key piece of the CMS electronics is the calorimeter trigger system, responsible for identifying 'interesting' physical events in a background of well-understood phenomena using calorimetric information. Calorimeter information is processed to identify various trigger objects by the Global Calorimeter Trigger (GCT). The first component of this system is the Source card, which has been developed to transfer data from the Regional Calorimeter Trigger (RCT) to the Leaf card, the processing engine of the GCT. The use of modern programmable logic with high speed optical links is discussed, emphasising its use for data concentration and the benefit it confers to the processing algorithms. Looking forward to Super-LHC, a possible addition to the CMS Level-1 trigger system is discussed, incorporating information from a new pixel detector with an alternative stacked geometry that allows the possibility of on-detector data rate reduction by means of a transverse momentum cut. A toy Monte Carlo was developed to study detector performance. Issues with high-speed reconstruction and the complications of on-detector data rate reduction are also discussed.
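
    As a flavour of what such a toy Monte Carlo does, the sketch below samples tracks from a steeply falling transverse-momentum spectrum and measures the fraction that survives a threshold, which is the quantity an on-detector pT cut trades against data rate. The spectrum shape, its index, and the 2 GeV threshold are illustrative assumptions, not parameters from the study.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    int main(void)
    {
        const int    n_tracks   = 1000000;
        const double pt_min     = 0.5;   /* GeV, lower edge of generated spectrum  */
        const double spectrum_n = 3.0;   /* assumed power-law index dN/dpT ~ pT^-n */
        const double pt_cut     = 2.0;   /* GeV, assumed on-detector threshold     */

        srand(12345);
        long kept = 0;
        for (int i = 0; i < n_tracks; i++) {
            /* Inverse-transform sampling of a pT^-n spectrum above pt_min. */
            double u  = (rand() + 1.0) / (RAND_MAX + 2.0);
            double pt = pt_min * pow(u, -1.0 / (spectrum_n - 1.0));
            if (pt > pt_cut)
                kept++;
        }
        printf("tracks above %.1f GeV: %.2f%% (data-rate reduction ~%.0fx)\n",
               pt_cut, 100.0 * kept / n_tracks,
               (double)n_tracks / (kept ? kept : 1));
        return 0;
    }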