9 research outputs found

    Researching methods for efficient hardware specification, design and implementation of a next generation communication architecture

    The objective of this work is to create and implement a System Area Network (SAN) architecture called EXTOLL, embedded in the current world of systems, software, and standards, and based on the experience gained during the development and testing of the ATOLL project. The topics of this work also cover system design methodology and educational issues in order to provide appropriate human resources and work premises. The scope of this work within the EXTOLL SAN project was:
    • the Xbar architecture and routing (multi-layer routing, virtual channels and their arbitration, routing formats, deadlock avoidance, debug features, automation of reuse)
    • the on-chip module communication architecture and parts of the host communication
    • the network processor architecture and integration
    • the development of the design methodology and the creation of the design flow
    • the team education and work structure.
    In order to successfully leverage student know-how and workflow methodology for this research project, the SEED curriculum changes were governed by the Hochschul Didaktik Zentrum, resulting in a certificate for "Hochschuldidaktik" and excellence in university education. The complexity of the target system required new approaches to concurrent hardware/software codesign. The concept of virtual hardware prototypes was established and used extensively during design space exploration and software interface design.
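    To make the deadlock-avoidance aspect of the Xbar routing concrete, the sketch below shows one common scheme: dimension-order (XY) routing combined with a simple virtual-channel selection rule. It is an illustrative example only, not the EXTOLL routing format; all names and parameters are hypothetical.

    #include <cstdint>
    #include <iostream>

    // Hypothetical 2D-mesh router model: dimension-order (XY) routing is one
    // classic way to avoid deadlock, since packets never turn from Y back to X.
    enum class Port : uint8_t { Local, XPlus, XMinus, YPlus, YMinus };

    struct Coord { int x; int y; };

    // Choose the output port for a packet at 'here' heading to 'dest'.
    Port route_xy(Coord here, Coord dest) {
        if (dest.x > here.x) return Port::XPlus;   // resolve the X dimension first
        if (dest.x < here.x) return Port::XMinus;
        if (dest.y > here.y) return Port::YPlus;   // then resolve Y
        if (dest.y < here.y) return Port::YMinus;
        return Port::Local;                        // arrived: eject to the local port
    }

    // A simple virtual-channel selection rule: keep request and reply traffic in
    // disjoint VCs so that protocol-level (message-dependent) deadlock is avoided.
    int select_vc(bool is_reply) { return is_reply ? 1 : 0; }

    int main() {
        Coord here{1, 2}, dest{3, 0};
        Port p = route_xy(here, dest);
        std::cout << "next port = " << static_cast<int>(p)
                  << ", vc = " << select_vc(/*is_reply=*/false) << "\n";
    }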

    A Fast Cycle-Approximate Simulation Technique for Many-core NoC Architectures

    Doctoral dissertation (Ph.D.), Department of Electrical and Computer Engineering, Seoul National University Graduate School, February 2017. Advisor: Soonhoi Ha. Simulation is a software technique that uses currently available architectures to prototype a future architecture. In computer architecture research, simulation techniques are among the most important tools. They enable us to obtain important performance indicators of new architectures and to perform design space exploration using these metrics. Furthermore, a simulator enables rapid software development and optimization on an architecture that does not yet exist. Despite various known problems, such as slow speed and coverage issues, the reliance on simulation technology in computer architecture research continues to increase. As transistor density increases and the performance improvement of single cores hits a ceiling, newly constructed architectures usually consist of multiple or many cores connected by a network-on-chip, which enables scalable communication. In addition, the implementation of applications themselves has become more complicated in order to effectively utilize these parallel architectures. Thus, simulators for parallel architectures and parallel applications have become extremely complex, and existing sequential simulators can no longer simulate these systems in a realistic time. While many parallel simulation techniques are being developed to solve these problems, they suffer from poor simulation performance or accuracy. In this thesis, we propose and evaluate a novel many-core simulation technique that obtains the best simulation performance at the cost of a minimal simulation error. The proposed parallel many-core simulator is divided into three parts: 1) a core simulator, 2) a network-on-chip simulator, and 3) a simulation backplane. Each core is executed by a core simulator, which communicates with the external simulation backplane via interprocess communication (IPC). Each core simulation is performed individually on a separate host processor. The simulation backplane arranges messages from each core into chronological order, passes them to destination modules, and simulates hardware components other than the cores. If the simulation backplane generates a request requiring NoC communication, the request is forwarded to the network simulator and simulated at the most accurate level. In this thesis, we propose a novel core simulation model that combines analytical and sampled simulation. The core simulator achieves 11.36 to 44.31 MIPS, while the simulation error is approximately 8 percent. The standalone core simulator is released as open source. We confirmed that NoC simulation has a great effect on the reliability of the outputs generated from many-core simulation. First, existing flit-level NoC simulators were analyzed at the source-code level. Based on these observations, various implementations were evaluated and various software optimizations were applied to improve network simulation performance. The proposed NoC simulator delivers more than 100 KCycles/s unless the packet injection rate exceeds 0.00625, which is at least two times faster than state-of-the-art NoC simulators. The speed of the simulation backplane depends greatly on the IPC overhead and the SystemC scheduling overhead. To reduce the IPC overhead, a trace-driven co-simulation technique is used, a faster IPC mechanism is introduced, and a segmented L1 data cache is embedded in the core simulator.
In addition, to reduce the SystemC scheduling overhead, it is important to reduce the number of modules that are awakened simultaneously. To this end, the slave modules are redesigned to be activated only by events. A new scheduler parallelization technique is also studied. Although the newly developed parallel SystemC scheduler showed good performance under limited conditions, we also confirmed that no performance improvement was observed in the TLM-level many-core simulator developed in this thesis. While the proposed many-core simulator uses a conservative synchronization technique, which is free from causality errors, and performs an accurate flit-level NoC simulation, the simulation performance remains acceptable thanks to parallelism and optimizations. Additionally, the simulator is highly scalable in that other modules can easily be added, because the simulation backplane is developed to be compatible with the SystemC TLM 2.0 standard. Although extensive experiments on accuracy were not conducted, they will be complemented when a detailed specification of the target architecture is given. This dissertation can serve as a reference for the development of many-core simulators, which will be even more essential in the future.
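    As a rough illustration of the conservative synchronization the backplane relies on, the sketch below shows a minimal lookahead-based loop in plain C++ (not the thesis code and not SystemC): each core simulator may only advance to the minimum of the other cores' reported times plus a lookahead, which rules out causality errors by construction. All names and the lookahead value are hypothetical.

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Minimal conservative (lookahead-based) synchronization sketch.
    // Each simulated core owns a local clock; the backplane grants every core
    // permission to advance only up to min(other cores' clocks) + lookahead,
    // so no core can ever receive a message "from the past".
    struct CoreSim {
        uint64_t local_time = 0;          // cycles simulated so far
        uint64_t step = 0;                // how far this core advances per grant
    };

    int main() {
        const uint64_t lookahead = 100;   // hypothetical minimum network latency
        std::vector<CoreSim> cores = {{0, 70}, {0, 120}, {0, 95}};

        for (int round = 0; round < 5; ++round) {
            for (size_t i = 0; i < cores.size(); ++i) {
                // Earliest time any other core could still send us a message.
                uint64_t horizon = UINT64_MAX;
                for (size_t j = 0; j < cores.size(); ++j)
                    if (j != i) horizon = std::min(horizon, cores[j].local_time);
                uint64_t grant = horizon + lookahead;

                // Advance core i, but never past its grant.
                cores[i].local_time = std::min(cores[i].local_time + cores[i].step, grant);
            }
        }
        for (size_t i = 0; i < cores.size(); ++i)
            std::cout << "core " << i << " at cycle " << cores[i].local_time << "\n";
    }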

    A Model-based Design Framework for Application-specific Heterogeneous Systems

    The increasing heterogeneity of computing systems enables higher performance and power efficiency. However, these improvements come at the cost of increasing the overall complexity of designing such systems. These complexities include constructing implementations for various types of processors, setting up and configuring communication protocols, and efficiently scheduling the computational work. The process for developing such systems is iterative and time consuming, with no well-defined performance goal. Current performance estimation approaches use source code implementations that require experienced developers and time to produce. We present a framework to aid in the design of heterogeneous systems and the performance tuning of applications. Our framework supports system construction: integrating custom hardware accelerators with existing cores into processors, integrating processors into cohesive systems, and mapping computations to processors to achieve overall application performance and efficient hardware usage. It also facilitates effective design space exploration using processor models (for both existing and future processors) that do not require source code implementations to estimate performance. We evaluate our framework using a variety of applications and implement them in systems ranging from low-power embedded systems-on-chip (SoC) to high-performance systems consisting of commercial-off-the-shelf (COTS) components. We show how the design process is improved, reducing the number of design iterations and unnecessary source code development, ultimately leading to higher-performing, more efficient systems.
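    To illustrate the kind of source-free performance estimation such processor models enable, here is a minimal analytical sketch: an application is described by abstract operation counts, each candidate processor by rough throughput figures, and the exploration simply ranks the candidates. The model form, names, and numbers are hypothetical, not the dissertation's actual models.

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical analytical processor model: estimate execution time from
    // abstract operation counts, without any source-code implementation.
    struct ProcessorModel {
        std::string name;
        double compute_ops_per_s;   // sustained arithmetic throughput
        double bytes_per_s;         // sustained memory bandwidth
    };

    struct AppProfile {
        double compute_ops;         // total arithmetic operations
        double bytes_moved;         // total bytes transferred
    };

    // Simple roofline-style estimate: runtime is bounded by the slower of the
    // compute phase and the memory-traffic phase.
    double estimate_seconds(const AppProfile& app, const ProcessorModel& p) {
        double t_compute = app.compute_ops / p.compute_ops_per_s;
        double t_memory  = app.bytes_moved / p.bytes_per_s;
        return std::max(t_compute, t_memory);
    }

    int main() {
        AppProfile kernel{2.0e9 /*ops*/, 8.0e8 /*bytes*/};
        std::vector<ProcessorModel> candidates = {
            {"embedded-cpu", 4.0e9,  6.0e9},
            {"accelerator",  5.0e10, 2.0e10},
        };
        for (const auto& p : candidates)
            std::cout << p.name << ": ~" << estimate_seconds(kernel, p) << " s\n";
    }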

    Fast, Scalable, and Flexible C++ Hardware Architectures for Network Data Plane Queuing and Traffic Management

    Current and next-generation networks face major challenges in packet switching and routing in the context of software-defined networking (SDN), with a significant increase of traffic from different connected users and devices and tight requirements on high-speed networking devices in terms of throughput and latency. The upcoming fifth-generation communication technology (5G) is expected to enable applications and services such as factory automation, intelligent transportation systems, smart grids, remote surgery, and virtual/augmented reality that require stringent latencies on the order of 1 ms and data rates as high as 1 Gb/s. All traffic passing through the network is handled by the data plane, also called fast-path processing. The data plane contains the networking devices that handle the forwarding and processing of the different traffic flows. The demand for higher Internet bandwidth has increased the need for more powerful processors specifically designed for fast packet processing, namely network processors or network processing units (NPUs). NPUs are networking devices that can span the entire open systems interconnection (OSI) model due to their high-speed capabilities, processing millions of packets per second.
With the high-speed requirements of today's networks, NPUs must accelerate packet processing functionalities to meet the required throughput and latency and to best support the upcoming next-generation networks. NPUs provide various sets of functionalities, from parsing, classification, queuing, and traffic management to buffering of the network traffic. In this thesis, we are mainly interested in the queuing and traffic management functionalities in the context of high-speed networking devices. These functionalities are essential to provide and guarantee quality of service (QoS) to various traffic classes, especially in the network data plane of NPUs, routers, switches, etc., and may represent a bottleneck as they reside in the critical path. We present new architectures for NPUs and networking equipment. These functionalities are integrated as external accelerators that can be used to speed up operation through field-programmable gate arrays (FPGAs). We also propose a high-level coding style that can be used to solve the optimization problems related to the different requirements in networking, such as parallel processing, pipelined operation, and minimizing memory access latency, leading to faster time-to-market and lower development effort from high-level design in comparison to hand-written hardware description language (HDL) coded designs.
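    As a flavour of the high-level C++ coding style advocated for queuing and traffic management hardware, the following sketch implements a tiny weighted round-robin scheduler over per-class packet queues. It is a hypothetical illustration written in plain C++ (no vendor HLS pragmas), not the thesis architecture.

    #include <array>
    #include <cstdint>
    #include <iostream>
    #include <queue>

    // Hypothetical traffic-management sketch: N traffic classes, each with its
    // own queue and weight; a weighted round-robin (WRR) scheduler picks which
    // packet to transmit next. In an HLS flow the queues would map to on-chip
    // FIFOs and the scheduler to a small pipelined state machine.
    constexpr int kClasses = 3;

    struct Packet { uint32_t id; uint16_t length; };

    struct WrrScheduler {
        std::array<std::queue<Packet>, kClasses> queues;
        std::array<int, kClasses> weight{4, 2, 1};  // packets allowed per round
        std::array<int, kClasses> credit{};
        int current = 0;

        WrrScheduler() { credit = weight; }

        bool dequeue(Packet& out) {
            for (int tried = 0; tried < 2 * kClasses; ++tried) {
                if (credit[current] > 0 && !queues[current].empty()) {
                    out = queues[current].front();
                    queues[current].pop();
                    --credit[current];
                    return true;
                }
                // Quantum exhausted or nothing to send: refill this class for
                // the next round and move on to the next class.
                credit[current] = weight[current];
                current = (current + 1) % kClasses;
            }
            return false;  // all queues empty
        }
    };

    int main() {
        WrrScheduler sched;
        for (uint32_t i = 0; i < 9; ++i)
            sched.queues[i % kClasses].push({i, 64});

        Packet p;
        while (sched.dequeue(p))
            std::cout << "tx packet " << p.id << " (class " << p.id % kClasses << ")\n";
    }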

    Investigations into Cost Optimization for Hardware Emulators through the Application of Partial Runtime Reconfiguration Methods

    This volume of the scientific series Eingebettete Selbstorganisierende Systeme is devoted to the optimization of hardware emulators through the application of partial runtime reconfiguration methods. Current circuit and system designs face increasingly divergent requirements: a very short development time for fast market introduction stands against the most comprehensive possible test coverage of the design, which is needed to avoid expensive and laborious re-designs. To reduce test time, FPGA-based hardware emulators are predominantly used. However, the increasing complexity of current designs significantly reduces the performance achievable on the emulator platform. At the same time, the FPGAs used in emulators are increasingly partially reconfigurable at runtime. The approach realized in this work applies runtime reconfiguration methods to the field of hardware emulation. This first requires partitioning the design under test into system parts that are as functionally independent as possible. For an optimized and resource-efficient placement of the individual hardware modules during emulation, a communication network, likewise placed on the FPGA, is implemented. The presented approach is illustrated with several examples, allowing the reader to appreciate the power of the developed methodology and encouraging its transfer to further use cases. Current circuit and system designs comprise very large gate counts and divergent requirements. In contrast to short development and time-to-market schedules, the demands for thorough test coverage and quality are rising. One approach to this problem is the FPGA-based functional test of electronic circuits. State-of-the-art FPGA platforms, however, do not provide enough gates to host complete custom designs. This thesis addresses that problem and presents approaches that use partial dynamic reconfiguration to overcome the size limitation. A fully automated design flow demonstrates the partitioning of designs, the modifications required to use dynamic reconfiguration, and its scheduling. At the end of the work, several examples demonstrate the power of the approach.
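    To make the cost trade-off of time-multiplexed emulation tangible, the sketch below models an FPGA that holds only a few design partitions at once and swaps the rest in via partial reconfiguration, counting the accumulated reconfiguration overhead. The slot count, reconfiguration time, activation sequence, and LRU policy are hypothetical illustrations, not the methodology of the thesis.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Hypothetical cost model: the design under test is split into partitions,
    // only 'kResidentSlots' of which fit on the FPGA at once. Activating a
    // non-resident partition triggers a partial reconfiguration whose accumulated
    // time is the emulation overhead to be minimized. Replacement policy: LRU.
    constexpr int kResidentSlots = 2;
    constexpr double kReconfigMs = 4.5;   // hypothetical time per partial reconfiguration

    int main() {
        // Sequence of partition activations observed during emulation (hypothetical).
        std::vector<int> activations = {0, 1, 0, 2, 0, 1, 3, 0, 1, 2};

        std::vector<int> resident;        // front = most recently used
        int reconfigs = 0;

        for (int part : activations) {
            auto it = std::find(resident.begin(), resident.end(), part);
            if (it != resident.end()) {
                resident.erase(it);       // already on the FPGA: just refresh LRU order
            } else {
                ++reconfigs;              // must load it via partial reconfiguration
                if (static_cast<int>(resident.size()) == kResidentSlots)
                    resident.pop_back();  // evict the least recently used partition
            }
            resident.insert(resident.begin(), part);
        }
        std::cout << reconfigs << " reconfigurations, "
                  << reconfigs * kReconfigMs << " ms overhead\n";
    }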

    Hierarchical Transactions for Hardware/Software Cosynthesis

    Modern heterogeneous devices provide a variety of computationally diverse components holding tremendous performance and power capability. Hardware-software cosynthesis offers system-level synthesis and optimization opportunities to realize the potential of these evolving architectures. Efficiently coordinating high-throughput data to make use of available computational resources requires a myriad of distributed local memories, caching structures, and data motion resources. In fact, storage, caching, and data transfer components comprise the majority of silicon real estate. Conventional automated approaches, unfortunately, do not represent applications in a way that effectively captures data motion and state management, which dictate dominant system costs. Consequently, existing cosynthesis methods suffer from poor utilization of computational resources. Automated cosynthesis tailored towards memory-centric optimizations can address this challenge, adapting partitioning, scheduling, mapping, and binding techniques to maximize overall system utility. This research presents a novel hierarchical transaction model that formalizes state and control management through an abstract data/control encapsulation semantic. It is designed from the ground up to enable efficient synthesis across heterogeneous system components, with an emphasis on memory capacity constraints. It intrinsically encourages a high degree of concurrency and latency tolerance, and provides verification tools to ensure correctness. A unique data/execution hierarchical encapsulation framework guarantees scalable analysis, supporting a novel concept of state and control mobility. A front-end language allows concise expression of designer intent, and is structured with synthesis in mind. Designers express families of valid executions in a minimal format through high-level dependencies, type systems, and computational relationships, allowing synthesis tools to manage lower-level details. This dissertation introduces and exercises the model, discussing language construction, demonstrating control- and data-dominated applications, and presenting a synthesis path that exhibits near-linear scalability with problem size.
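    To give a concrete, if simplified, picture of what a hierarchical transaction might look like, the sketch below models transactions as nodes that encapsulate state and child transactions, with explicit dependencies among siblings. The structure and names are hypothetical, not the dissertation's actual model or front-end language.

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical sketch of a hierarchical transaction: each transaction
    // encapsulates its own state (data it owns) plus child transactions, and
    // declares dependencies on sibling transactions. A scheduler may run
    // independent siblings concurrently; the hierarchy bounds how much state
    // each level must keep resident (memory-capacity awareness).
    struct Transaction {
        std::string name;
        std::size_t state_bytes = 0;              // data this node encapsulates
        std::vector<Transaction> children;
        std::vector<std::string> depends_on;      // names of sibling transactions

        // Total state footprint of this node and everything nested below it.
        std::size_t footprint() const {
            std::size_t total = state_bytes;
            for (const auto& c : children) total += c.footprint();
            return total;
        }
    };

    int main() {
        Transaction root{"frame", 0, {
            {"fetch",   4096, {}, {}},
            {"filter",  8192, {}, {"fetch"}},       // must follow fetch
            {"analyze", 2048, {}, {"fetch"}},       // independent of filter
            {"emit",     512, {}, {"filter", "analyze"}},
        }, {}};

        std::cout << "total encapsulated state: " << root.footprint() << " bytes\n";
    }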

    Online scheduling for real-time multitasking on reconfigurable hardware devices

    Nowadays the ever increasing algorithmic complexity of embedded applications requires designers to turn towards heterogeneous and highly integrated systems denoted as SoCs (Systems-on-a-Chip). These architectures may embed CPU-based processors, dedicated datapaths, as well as reconfigurable units. However, embedded SoCs are subject to stringent requirements in terms of speed, size, cost, power consumption, throughput, etc. Therefore, new computing paradigms are required to fulfil the constraints of the applications and the requirements of the architecture. Reconfigurable computing is a promising paradigm that probably provides the best trade-off between these requirements and constraints. Dynamically reconfigurable architectures are its key enabling technology. They enable the hardware to adapt to the application at runtime. However, these architectures raise new challenges in SoC design. On one hand, designing a system that takes advantage of dynamic reconfiguration is still very time consuming because of the lack of design methodologies and tools. On the other hand, scheduling hardware tasks differs from classical software task scheduling on microprocessor or multiprocessor systems, as it involves an additional, complicated placement problem. This thesis deals with the problem of scheduling online real-time hardware tasks on Dynamically Reconfigurable Hardware Devices (DRHWs). The problem is addressed from two angles: (i) investigating novel algorithms for online real-time scheduling/placement on DRHWs; (ii) providing a scheduling/placement algorithms library for RTOS-driven Design Space Exploration (DSE). Regarding the first point, the thesis proposes two main runtime-aware scheduling and placement techniques and assesses their suitability for online real-time scenarios. The first technique discusses the impact of synthesizing, at design time, several shapes and/or sizes per hardware task (denoted as a multi-shape task) in order to ease the online scheduling process. The second technique combines a looking-ahead scheduling approach with a slot-based reconfigurable area management that relies on a 1D placement. The results show that in both techniques, the scheduling and placement quality is improved without significantly increasing the algorithm time complexity. Regarding the second point, in the process of designing SoCs embedding reconfigurable parts, new design paradigms tend to explore and validate the architectural design space as early as possible, at system level. In this way, the RTOS (Real-Time Operating System) services that manage the reconfigurable parts of the SoC can be refined. In such a context, it is necessary to gather numerous hardware task scheduling and placement algorithms, covering various complexity vs. performance trade-offs, into a kind of library. In this thesis, the proposed algorithms, in addition to some existing ones, are purposely implemented in the C++ language in order to ensure compatibility with any C++/SystemC based SoC design methodology.
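    As a simplified illustration of slot-based 1D placement for online hardware task scheduling, the sketch below admits arriving tasks into the first contiguous run of free columns that fits and frees them on completion. The slot model, task fields, and numbers are hypothetical, not the algorithms proposed in the thesis.

    #include <iostream>
    #include <vector>

    // Hypothetical 1D slot-based placement for online hardware task scheduling:
    // the reconfigurable area is a row of columns; a task occupies a contiguous
    // run of columns for a fixed number of time units.
    struct HwTask { int id; int width; int duration; };
    struct Running { HwTask task; int start_col; int finish_time; };

    constexpr int kColumns = 10;

    // First-fit search for a contiguous free run of 'width' columns.
    int find_slot(const std::vector<bool>& busy, int width) {
        int run = 0;
        for (int c = 0; c < kColumns; ++c) {
            run = busy[c] ? 0 : run + 1;
            if (run == width) return c - width + 1;
        }
        return -1;  // no feasible placement right now
    }

    int main() {
        std::vector<bool> busy(kColumns, false);
        std::vector<Running> running;
        std::vector<HwTask> arrivals = {{1, 4, 3}, {2, 5, 2}, {3, 3, 1}, {4, 6, 2}};

        for (int t = 0; t < 8; ++t) {
            // Release the columns of tasks that finished by time t.
            for (auto it = running.begin(); it != running.end();) {
                if (it->finish_time <= t) {
                    for (int c = 0; c < it->task.width; ++c) busy[it->start_col + c] = false;
                    it = running.erase(it);
                } else ++it;
            }
            // Try to place every still-waiting task (first-fit).
            for (auto it = arrivals.begin(); it != arrivals.end();) {
                int col = find_slot(busy, it->width);
                if (col >= 0) {
                    for (int c = 0; c < it->width; ++c) busy[col + c] = true;
                    running.push_back({*it, col, t + it->duration});
                    std::cout << "t=" << t << ": task " << it->id
                              << " placed at column " << col << "\n";
                    it = arrivals.erase(it);
                } else ++it;
            }
        }
    }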

    Evaluation of Robust Deep Learning Pipelines Targeting Low SWaP Edge Deployment

    The deep learning technique of convolutional neural networks (CNNs) has greatly advanced the state-of-the-art for computer vision tasks such as image classification and object detection. These solutions rely on large systems leveraging power-hungry GPUs to provide the computational capacity to achieve such performance. However, the size, weight and power (SWaP) requirements of these conventional GPU-based deep learning systems are not suitable when a solution requires deployment to so-called Edge environments such as autonomous vehicles, unmanned aerial vehicles (UAVs) and smart security cameras. The objective of this work is to benchmark FPGA-based alternatives to conventional GPU systems that have the potential to offer similar CNN inference performance while being delivered in a low-SWaP platform suitable for Edge deployment. In this thesis we create equivalent pipelines for both GPU and FPGA which implement deep learning models for both image classification and object detection tasks. Beyond baseline benchmarking, we additionally quantify the impact on inference performance of two common real-world image degradation scenarios (simulated contrast-reduced capture and salt-and-pepper sensor noise) and their associated correction methods (gamma correction and median kernel filtering), selected as illustrative examples. The baseline system analysis, coupled with these additional robustness evaluations, provides a statistically significant benchmark comparison targeting a breadth of interest for the computer vision community. We have conducted the following experiments to demonstrate the FPGA as an effective alternative to the GPU implementation when deployed to Edge environments: (1) we developed a hardware video processing architecture with an associated library of hardware processing functions to prototype a base FPGA ecosystem, (2) we established through benchmarking that two common CNN models (ResNet-50 and YOLO version 3) show a mere 1% drop in performance on the FPGA versus the GPU, (3) we present a quantitative baseline analysis of the image degradation and correction on the associated testing datasets, and (4) we demonstrated that our FPGA-based computer vision system is an ideal platform for Edge deployment given its comparable robustness to input degradation when optimal correction is applied. The significance of these findings is the demonstration of our FPGA-based solution as the superior candidate for Edge-deployed vision systems, evidenced by experiments showing inference performance competitive with the conventional GPU solution and equivalent robustness, via correction methods, to noise encountered during in-the-wild imaging, while being delivered with far lower SWaP requirements.
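    For concreteness, the two correction methods named above are standard image operations; a minimal sketch of both on an 8-bit grayscale buffer is shown below (plain C++, hypothetical parameters, not the thesis pipeline code).

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Gamma correction: out = 255 * (in/255)^(1/gamma). A gamma > 1 lifts dark
    // and mid tones, which can partially undo a contrast-reduced capture.
    void gamma_correct(std::vector<uint8_t>& img, double gamma) {
        for (auto& px : img) {
            double v = std::pow(px / 255.0, 1.0 / gamma);
            px = static_cast<uint8_t>(std::clamp(v * 255.0, 0.0, 255.0));
        }
    }

    // 3x3 median filter: replaces each interior pixel with the median of its
    // neighbourhood, which suppresses salt-and-pepper sensor noise.
    std::vector<uint8_t> median3x3(const std::vector<uint8_t>& img, int w, int h) {
        std::vector<uint8_t> out = img;
        for (int y = 1; y + 1 < h; ++y) {
            for (int x = 1; x + 1 < w; ++x) {
                std::array<uint8_t, 9> win;
                int k = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        win[k++] = img[(y + dy) * w + (x + dx)];
                std::nth_element(win.begin(), win.begin() + 4, win.end());
                out[y * w + x] = win[4];
            }
        }
        return out;
    }

    int main() {
        const int w = 4, h = 4;
        std::vector<uint8_t> img = {
            10, 10,  10, 10,
            10, 255, 10, 10,   // a "salt" pixel
            10, 10,  0,  10,   // a "pepper" pixel
            10, 10,  10, 10};

        gamma_correct(img, 2.2);
        auto cleaned = median3x3(img, w, h);
        std::cout << "center pixel after correction: "
                  << static_cast<int>(cleaned[1 * w + 1]) << "\n";
    }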