
    Virtual Cycle-accurate Hardware and Software Co-simulation Platform for Cellular IoT

    Modern embedded development flows often depend on FPGA boards for pre-ASIC system verification. The purpose of this project is instead to explore Electronic System Level (ESL) hardware-software co-simulation, using the ARM SoC Designer tool to create a virtual prototype of a cellular IoT modem, and then to evaluate the benefits of including such a methodology in the early development cycle. The virtual system is developed and executed entirely on a host computer, with no additional hardware required. The virtual prototype hardware is based on ARM-verified, cycle-accurate C++ models generated from RTL hardware descriptions, pre-synthesis SystemC hardware accelerator models from high-level synthesis (HLS), and behavioural models implementing the ARM Cycle-Accurate Simulation Interface (CASI). The virtual system's micro-controller, based on an ARM Cortex-M processor, executes instructions from a memory module. This report documents the virtual prototype implementation and compares both the software performance and the cycle accuracy of various virtual micro-controller configurations against a commercial reference development board. By altering factors such as memory latencies and bus interconnect arbitration in co-simulation, the software cycle-count performance of the development board could be reproduced within a 5% error margin, at the cost of approximately 266 times slower execution speed. Furthermore, two HLS pre-synthesis hardware models are investigated and shown to be functionally accurate, with individual block latencies within three clock cycles of their post-synthesis FPGA implementations. The final virtual prototype consists of the micro-controller and two cellular IoT hardware accelerators. The system runs a FreeRTOS 9.0.0 port, executing a multi-threaded program at an average clock-cycle simulation frequency of 10.6 kHz.

Designing and simulating embedded computer systems virtually: cellular Internet of Things (IoT) is a new technology that will enable the interconnection of everything. From street lights and parking meters to the gas or water meter at home, wireless cellular networks will allow information to be shared between devices. However, for these systems to provide any useful data, they need to include a computer chip with a system to manage the communication itself, enabling the connection to a cellular network and the actual transmission and reception of data. Such a chip is called an embedded chip or system. Traditionally, the design and verification of digital embedded systems, that is, systems with both hardware and software components, has been done in two steps. The first step consists of designing all the hardware, testing it, integrating it and producing it physically on silicon in order to verify the intended functionality of all the components. The second step consists of taking the hardware that has been developed and designing the software: a program that must execute in complete compliance with the previously developed hardware.

This poses two main issues: software engineers cannot properly begin their work until the hardware is finished, which makes the process very long; and because the hardware has been printed on silicon, the possibility of making changes to accommodate late alterations to system requirements is greatly restricted, which is quite likely to be needed for a tailor-made, application-specific system such as a cellular IoT chip. A widespread technology used to mitigate these drawbacks is the field-programmable gate array (FPGA) development board, which often contains a micro-controller (with a processor and some memories) and a gate array connected to it. The FPGA part consists of a lattice of digital logic gates that can be programmed to interconnect and represent the functionality of the hardware being designed. The processor can thus execute software instructions placed in the memories, and the hardware under development can be programmed into the gate array in order to integrate and verify a full hardware and software system. Nevertheless, these boards are expensive and limit the design to the hardware components available commercially in the various off-the-shelf models, e.g. a specific processor that might not be the desired one. Now imagine a way to design hardware components such as processors in the traditional way, but then integrate the implemented hardware with software without printing a physical silicon chip specifically for this purpose. That would be extremely convenient and would save a great deal of time. Fortunately, this is already possible thanks to Electronic System Level (ESL) design, a collection of techniques that allow a digital chip to be designed, simulated and partially verified on an ordinary laptop or desktop computer. Moreover, some ESL tools, such as the one investigated in this project, even allow program code written specifically for this hardware to be simulated; this is known as virtual hardware-software co-simulation. The reliability of simulation must, however, be considered when comparing it to a traditional two-step methodology or FPGA board usage for verifying a full system, because a virtual hardware simulation can have several degrees of accuracy, depending on the specificity of the component models that make up the virtual prototype of the digital system. Therefore, in order to use co-simulation techniques with a high degree of confidence for verification, the highest degree of accuracy should be employed when possible, to guarantee that what is being simulated will match the reality of a silicon implementation. The clock-cycle-accurate level is one of the most accurate system simulation methods available: it consists of representing the digital states of all hardware components, such as signals and registers, in a cycle-by-cycle manner. Using the ARM SoC Designer ESL tool, we have co-designed and co-simulated several micro-controllers at a detailed, cycle-accurate level and confirmed their behaviour by comparing them to a physical reference development board. Finally, a more complex virtual prototype of a cellular IoT system was also simulated, including a micro-controller running a real-time operating system (RTOS), hardware accelerators and serial data interfacing.

Parts of this virtual prototype were compared to an FPGA board to evaluate the pros and cons of incorporating virtual system simulation into the development cycle and to determine to what extent ESL methods can substitute for traditional verification techniques. The ease of interchanging hardware, the simplicity of development, the simulation speed and the debug capabilities available when developing in a virtual environment are some of the aspects of ARM SoC Designer discussed in this thesis. A more in-depth description of the methodology and results can be found in the report titled "Virtual Cycle-accurate Hardware and Software Co-simulation Platform for Cellular IoT".
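The cycle-by-cycle evaluation described above can be pictured with a short sketch. The plain C++ fragment below is a conceptual illustration only (it does not use the ARM SoC Designer or CASI APIs, and all names are invented for the example): each register holds a current value and a next value, and simulated time advances one whole clock cycle at a time. Figures such as the reported 266-times slowdown come from comparing how quickly such simulated cycles advance against the wall-clock rate of the real hardware.

    // Conceptual sketch of cycle-accurate simulation (illustrative names only):
    // every register updates exactly once per clock edge, so the state of the
    // system is known on a cycle-by-cycle basis.
    #include <cstdint>
    #include <iostream>

    struct Register {
        uint32_t q = 0;                      // current, externally visible value
        uint32_t d = 0;                      // value latched at the next clock edge
        void clockEdge() { q = d; }
    };

    int main() {
        Register counter;                    // a simple up-counter
        const uint64_t cyclesToRun = 10;

        for (uint64_t cycle = 0; cycle < cyclesToRun; ++cycle) {
            counter.d = counter.q + 1;       // combinational phase: compute next state
            counter.clockEdge();             // clock edge: one simulated cycle elapses
            std::cout << "cycle " << cycle << ": counter = " << counter.q << '\n';
        }
        return 0;
    }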

    Design and Verification of a DFI-AXI DDR4 Memory PHY Bridge Suitable for FPGA Based RTL Emulation and Prototyping

    System-on-chip (SoC) designers today emphasize a process that can ensure robust silicon at the first tape-out. Given the complexity of modern SoC chips, there is a compelling need to have suitable runtime software, such as the Linux kernel and the necessary drivers, ready once prototype silicon is available. Emulation and FPGA prototyping systems are well suited to running tests on designs, are efficient and perform well, and enable early software development. While useful, one needs to keep in mind that emulation and FPGA prototyping systems do not run at full silicon speed. In fact, the SoC target ported to the FPGA might achieve a clock speed of less than 10 MHz. While still very useful for testing and software development, this low operating speed creates challenges for connecting to external devices such as DDR SDRAM. In this paper, a DDR PHY Interface (DFI) to Advanced eXtensible Interface (AXI) bridge is designed to support a DDR4 memory sub-system. The bridge module is developed based on the DDR PHY Interface version 5.0 specification and, once implemented in an FPGA, transfers command information and data between the SoC DDR memory controller being prototyped and, across the AXI bus, an FPGA-specific memory controller connected to DDR SDRAM or other physical memory external to the FPGA. The bridge module also enables communication with the design under test (DUT) through a synthesizable SCE-MI based infrastructure between the bridge and the logic simulator. SCE-MI provides a direct mechanism to inject specific traffic and to monitor the performance of the DFI-AXI DDR4 memory PHY bridge. Both emulation and FPGA prototyping platforms can use this design and its testbench.
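As a rough illustration of the bridging idea, the sketch below re-expresses a simplified PHY-side command as an AXI-style burst descriptor. The field names, widths and address mapping are assumptions made for the example; they are not taken from the DFI 5.0 or AXI specifications or from the paper's design.

    // Illustrative sketch only: a simplified DFI-like command is translated
    // into an AXI-style burst that an FPGA memory controller could service.
    // Field names, widths and the address mapping are assumed, not normative.
    #include <cstdint>
    #include <iostream>

    struct DfiCommand {            // simplified PHY-side command
        bool     write;            // true = write, false = read
        uint32_t row, column, bank;
        uint32_t burstLength;      // in beats
    };

    struct AxiBurst {              // simplified AXI address-channel descriptor
        uint64_t address;
        uint8_t  len;              // AxLEN convention: beats - 1
        bool     isWrite;
    };

    // Hypothetical address mapping: bank | row | column, 8-byte beats.
    AxiBurst bridge(const DfiCommand& cmd) {
        uint64_t addr = (uint64_t(cmd.bank) << 28) |
                        (uint64_t(cmd.row)  << 13) |
                        (uint64_t(cmd.column) << 3);
        return AxiBurst{addr, uint8_t(cmd.burstLength - 1), cmd.write};
    }

    int main() {
        DfiCommand cmd{true, 0x1A2, 0x40, 2, 8};   // write; row/col/bank; 8 beats
        AxiBurst b = bridge(cmd);
        std::cout << std::hex << (b.isWrite ? "AXI write" : "AXI read")
                  << " @0x" << b.address << ", AxLEN=" << int(b.len) << '\n';
        return 0;
    }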

    Reusing RTL assertion checkers for verification of SystemC TLM models

    The recent trend towards system-level design gives rise to new challenges for reusing existing RTL intellectual properties (IPs) and their verification environments in transaction-level modelling (TLM). While techniques and tools to abstract RTL IPs into TLM models have begun to appear, the problem of reusing, at TLM, a verification environment originally developed for an RTL IP is still under-explored, particularly when assertion-based verification (ABV) is adopted. Some frameworks have been proposed to deal with ABV at TLM, but they assume a top-down design and verification flow, where assertions are defined ex novo at the TLM level. In contrast, the reuse of existing assertions in an RTL-to-TLM bottom-up design flow has not yet been analysed, except by using transactors to create a mixed simulation between the TLM design and the RTL checkers corresponding to the assertions. However, the use of transactors may lead to longer verification times due to the need to develop and verify the transactors themselves. Moreover, simulation time is negatively affected by the presence of transactors, which slow the simulation down to the speed of its slowest parts (i.e., the RTL checkers). This article proposes an alternative methodology that does not require transactors for reusing assertions, originally defined for a given RTL IP, to verify the corresponding TLM model. Experiments have been conducted on benchmarks with different characteristics and complexity to show the applicability and efficacy of the proposed methodology.
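To make the contrast with transactor-based reuse concrete, the sketch below checks an RTL-style handshake property directly over transaction-level events rather than per-cycle signals. This is a hedged, invented example (the event names and the checker are not from the article); it only illustrates why no cycle-level transactor is needed when a property is evaluated at the granularity of transactions.

    // Assumed example: the RTL property "every request is followed by exactly
    // one grant before the next request" re-expressed over TLM-level events,
    // so it can run inside a transaction-level model without a transactor.
    #include <cassert>
    #include <iostream>

    enum class Event { Request, Grant };

    class ReqGrantChecker {
        bool pending_ = false;              // a request is awaiting its grant
    public:
        void observe(Event e) {
            if (e == Event::Request) {
                assert(!pending_ && "new request before previous grant");
                pending_ = true;
            } else {                        // Event::Grant
                assert(pending_ && "grant without a pending request");
                pending_ = false;
            }
        }
    };

    int main() {
        ReqGrantChecker checker;
        // events as a TLM model would report them from its transaction calls
        const Event trace[] = {Event::Request, Event::Grant,
                               Event::Request, Event::Grant};
        for (Event e : trace)
            checker.observe(e);
        std::cout << "assertion held over the transaction stream\n";
        return 0;
    }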

    VERIFICATION AND DEBUG TECHNIQUES FOR INTEGRATED CIRCUIT DESIGNS

    Verification and debug of integrated circuits for embedded applications have grown in importance as functional complexity has increased dramatically over time. Various modeling and debugging techniques have been developed to overcome this challenge. This thesis addresses verification and debug methods by presenting a C model that is accurate at the bit and algorithm level, coupled with an implemented hardware description language (HDL) design. Key concepts such as common signal and variable naming conventions are incorporated, as well as a stepping function within the implemented HDL. Additionally, a common interface between low-level drivers and C models is presented for early firmware development and system debug. Finally, self-checking verification is discussed for delivering multiple test cases, along with testbench portability.
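A small sketch of the pairing described above: a bit-accurate C++ model that mirrors the HDL's port names and exposes a stepping function, so the same driver or testbench code can exercise both the model and the RTL. The block and its names are hypothetical, chosen only to illustrate the naming-convention and stepping ideas.

    // Hypothetical bit-accurate model of a 16-bit registered adder whose
    // member names mirror the RTL ports, with a step() that advances one clock.
    #include <cstdint>
    #include <iostream>

    class AdderModel {
    public:
        uint16_t op_a = 0;     // input port, same name as in the RTL
        uint16_t op_b = 0;     // input port
        uint16_t sum_q = 0;    // registered output port
        void step() {          // one clock: latch the bit-exact 16-bit result
            sum_q = uint16_t((uint32_t(op_a) + op_b) & 0xFFFFu);
        }
    };

    int main() {
        AdderModel dut;
        dut.op_a = 0xFFFF;
        dut.op_b = 2;
        dut.step();
        std::cout << std::hex << "sum_q = 0x" << dut.sum_q << '\n';  // 0x1 (wraps)
        return 0;
    }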

    Automated Hardware Prototyping for 3D Network on Chips

    More than 50 years ago, Intel® co-founder Gordon Moore made a prediction about the development of transistor technology. He predicted that the number of transistors in integrated circuits would double every two years. His statement still holds, but the end of Moore's law is in sight. With the end of Moore's law, new aspects must be investigated in order to keep increasing the performance of integrated circuits. Two possible approaches for "More than Moore" are 3D integration techniques and heterogeneous systems. At the same time, a trend towards multi-core processors based on networks-on-chip (NoCs) is emerging. Beyond the end of Moore's law, ever-shrinking technology nodes, especially beyond 60 nm, bring new challenges. One difficulty is heat dissipation in large-scale integrated circuits and the resulting overheating of the chip. To address this problem in modern multi-core architectures, the power dissipation of the network resources must also be reduced substantially. This work comprises a hardware-controlled combination of frequency scaling and power gating for 3D on-chip networks, including an FPGA prototype. For this purpose, a clock-synchronous 2D network was extended to a three-dimensional asynchronous network with multiple frequency domains. In addition, a scalable online power-management system with low resource overhead was developed. The verification of new hardware components is one of the most time-consuming steps in the development of highly integrated digital circuits. To accelerate this task and to enable parallel software development, an automated and user-friendly tool for creating new hardware projects was developed as part of this work. A graphical user interface covering the entire design flow, from architecture creation, parameter declaration, simulation and synthesis to test, is part of this tool. Furthermore, the size of the architecture poses a particular challenge for prototyping. Previous work has failed to realize fast and straightforward prototyping, especially for architectures with more than 50 processor cores. This work includes a design space exploration and FPGA-based prototypes of various 3D NoC implementations with more than 80 processors.
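As a toy illustration of combining frequency scaling with power gating per router, the sketch below picks a power state from recent traffic figures. The thresholds, state names and inputs are assumptions for the example and do not reproduce the thesis's hardware controller.

    // Illustrative-only policy: choose a router power state from recent
    // traffic. Thresholds and names are assumed, not taken from the thesis.
    #include <cstdint>
    #include <iostream>

    enum class PowerState { Gated, SlowClock, FullClock };

    PowerState decide(uint32_t flitsLastEpoch, uint32_t bufferOccupancy) {
        if (flitsLastEpoch == 0 && bufferOccupancy == 0)
            return PowerState::Gated;      // idle: power-gate the router
        if (flitsLastEpoch < 16)
            return PowerState::SlowClock;  // light traffic: lower frequency
        return PowerState::FullClock;      // heavy traffic: full speed
    }

    int main() {
        std::cout << int(decide(0, 0))  << '\n';   // 0 = Gated
        std::cout << int(decide(5, 2))  << '\n';   // 1 = SlowClock
        std::cout << int(decide(40, 9)) << '\n';   // 2 = FullClock
        return 0;
    }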

    A Middleware-centric design methodology for networked embedded systems

    In recent years, interest has grown considerably in "ambient intelligence": computing systems, typically composed of networked embedded systems (wireless sensor networks, mobile terminals, PDAs, etc.), which are integrated into the human environment with the goal of improving quality of life in the most natural way possible. A Networked Embedded System (NES) application is a distributed application executed on heterogeneous HW/SW platforms that interact through different communication channels. Applications for NES devices are generally developed without the support of system software, forcing the designer to model them directly with the primitives provided by the embedded operating system or with the APIs of the hardware device drivers. With this design methodology, embedded applications are built ad hoc for the HW/SW platform, making them neither portable nor scalable and, consequently, particularly costly. Because of these problems, recent years have seen the adoption of a design flow that introduces a service layer called middleware, which abstracts the peculiarities of the operating system and of the HW components, simplifying the design of these embedded applications. The state of the art offers many middleware implementations with different programming paradigms, and the choice of which one to use for designing an NES application is based on the following criteria: the programming skills and knowledge of the designer, and the available HW/SW platform. The drawbacks of this approach are a more complex design flow and the lack of portability of the same application across different embedded devices. In other words, middleware has never been introduced into the design flow as an explicit design dimension. The goal of this doctoral thesis is the study and realization of a design flow for NES applications in which the middleware is a design dimension, becoming a design variable just as software and hardware are. This is achieved by solving three problems: (1) providing an abstract middleware model that can be used as a component of the design flow; this abstract middleware, called Abstract Middleware Services (AMS), provides a set of abstract services based on the programming paradigms of different real middleware, and with this abstract model the development of NES applications is simplified for the designer; (2) providing a simulation environment in which to validate and simulate the entire model produced by the designer; (3) providing an automatic methodology for translating from AMS to a real middleware, so that the application can be executed on a real HW/SW platform equipped with any middleware. The doctoral work led to the definition of a new design approach based on an abstract middleware model that provides an environment for modelling and validating applications for networked embedded systems, solving the three points above.

Furthermore, in order to produce an efficient simulation and modelling environment, the hardware-software-network co-simulation methodologies currently reported in the literature were analysed. The doctoral work is also an integral part of the ANGEL project funded by the European Community (IST-2005-33506 - Embedded Systems), whose objective is the development of a platform for building heterogeneous systems in which wireless sensor networks (WSNs) and traditional communication networks cooperate to monitor and improve the quality of life in everyday habitats. Within this activity, the design flow that includes the middleware as a design variable, the subject of this doctoral thesis, is demonstrated on WSNs and mobile terminals (e.g., mobile phones) so that they can communicate with one another intelligently.

Ambient intelligence, pervasive and ubiquitous computing are the center of a great deal of attention because of their promise to bring benefits for end-users, higher revenues for manufacturers and new challenges for researchers. Typical computing technologies (such as telemedicine, manufacturing, crisis management) are part of a broader class of Networked Embedded Systems (NES) in which a large number of nodes are connected together and collaborate to perform a common task under a defined set of constraints. Therefore, the key aspects of these applications are their distributed nature and the presence of very limited HW resources, as in the case of WSNs. Their wide adoption requires interoperability across different manufacturers, simplification of application development, simulation tools for functional validation and the fulfilment of tight HW/SW constraints. Interoperability is achieved through the use of standard protocol stacks (e.g., IEEE 802.15.1/Bluetooth and IEEE 802.15.4/ZigBee). Simplification of application development can be achieved through a service layer, named middleware, which abstracts from the peculiarities of the operating system and HW components. Traditionally, many NES applications have been developed without support from system software [1], except for device drivers and operating systems. State-of-the-art techniques [2] for NES focus on simple data-gathering applications, and in most cases the design of the application and the system software are closely coupled, or even combined as a monolithic procedure. Such applications are neither flexible nor scalable, and they must be re-written if the platform changes. Middleware is emerging as an important architectural component in supporting NES applications and facilitating application development. The role of middleware is to present a unified programming model to application designers and to mask the problems of heterogeneity and distribution, providing a basic set of tools and libraries for the low-level handling of technology-specific NES. Several NES middleware have been implemented in recent years, each providing different programming paradigms (e.g., tuple-space, message-oriented, object-oriented, database, etc.) and differing with respect to ease of use, expressiveness, scalability and overhead. However, their diversity makes the development of high-quality middleware-centric software systems complex: software engineering methods and tools should be developed with the use of middleware in mind.
In this vein, Sensation [xxx] presents a middleware platform solution for pervasive applications in WSNs, providing a developer-friendly programming interface. This approach is valid only for WSNs and does not include a network simulator for an exhaustive network evaluation. Model Driven Architecture (MDA) tries to overcome this problem; MDA is a new way of writing specifications, based on a platform-independent model. A complete MDA specification consists of a platform-independent UML model, one or more platform-specific models, and interface definitions, each describing how the base model is implemented on a different middleware platform. MDA focuses primarily on the functionality and behaviour of a distributed application or system, not on the technology in which it will be implemented. Furthermore, MDA does not directly provide a simulation environment. Simulation tools are used for validating the application: there is a range of NES simulators available that focus on the network itself. NS-2 is a pure network simulator, where the nodes are abstracted and do not run real code or operating systems, but rather simple behavioural models or statistical traffic generators. The advantage of NS-2 is its excellent scalability. TOSSIM is a platform-specific simulation environment for sensor networks based on the TinyOS operating system. TOSSIM can compile unchanged TinyOS applications directly into its framework, which means that most of the code written for TOSSIM can be used directly in TinyOS. TOSSIM is a simulator specific to TinyOS and Berkeley motes and cannot be used to simulate a generic NES (e.g., a WSN). Finally, SIMICS is a commercial full-system simulator that can be used to simulate heterogeneous networked and distributed systems; complete SW stacks from real systems can run on the simulator without any modification. Despite these individual contributions, the literature does not report a complete design methodology for NES applications integrating all three aspects. Therefore, in order to fully support the applications of a great variety of users with different needs, a complete NES application modelling and simulation environment has to include two main components: a simulator to validate and explore the application's functional behaviour in a simulated network environment, supporting interoperability between different implementation platforms and ensuring the scalability of the NES technology; and a middleware environment providing different programming paradigms, serving as an abstraction layer that hides the peculiarities of the different NES implementations from end-user applications. The goal of this work is to present a middleware-centric design flow for NES, where the middleware plays a decisive role in the design process. The proposed methodology allows programmers to write NES applications by using the system description language SystemC and the Abstract Middleware Environment (AME) framework for fast simulation. This proposal has three main advantages: (1) it provides a set of abstract services supporting the programming paradigms of different actual middleware implementations in order to match the skills of the designer; AME facilitates the NES design flow by providing a unified, developer-common interface that conceals the peculiarities of the underlying NES, and the simulation environment is modelled so that NES applications can be simulated taking hardware and network effects into account; (2) the application can be simulated at an early stage of the design flow for functional validation; (3) AME applications are automatically mapped onto the actual platform, which guarantees the correct trade-off between level of abstraction and efficiency of implementation. In the following, we classify the actual middleware approaches according to their programming paradigms; then the AME-centric design flow is described, and finally we report the experimental results.
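A tiny sketch of the "abstract service" idea: application code is written against a paradigm-level interface, here publish/subscribe, and a backend for a concrete middleware can be substituted later. The class and method names below are invented for illustration and are not the AME API.

    // Hypothetical paradigm-level service: the application subscribes and
    // publishes through an abstract interface; any real middleware could
    // back it without changing the application code.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    class AbstractPubSub {
        std::map<std::string,
                 std::vector<std::function<void(const std::string&)>>> subs_;
    public:
        void subscribe(const std::string& topic,
                       std::function<void(const std::string&)> cb) {
            subs_[topic].push_back(std::move(cb));
        }
        void publish(const std::string& topic, const std::string& payload) {
            for (auto& cb : subs_[topic]) cb(payload);   // simulated delivery
        }
    };

    int main() {
        AbstractPubSub mw;                   // stands in for a real middleware
        mw.subscribe("temperature", [](const std::string& v) {
            std::cout << "node received temperature = " << v << '\n';
        });
        mw.publish("temperature", "21.5");
        return 0;
    }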

    Cross-Layer Rapid Prototyping and Synthesis of Application-Specific and Reconfigurable Many-accelerator Platforms

    Technological advances of recent years have laid the foundation for the consolidation of the informatisation of society, impacting economic, political, cultural and social dimensions. At the peak of this realization, today, more and more everyday devices are connected to the web, giving rise to the term "Internet of Things". The future holds the full connection and interaction of IT and communications systems with the natural world, marking the transition to cyber-physical systems and offering meta-services in the physical world, such as personalized medical care, autonomous transportation, smart energy cities, etc. Outlining the necessities of this dynamically evolving market, computer engineers are required to implement computing platforms that incorporate increased systemic complexity and also cover a wide range of meta-characteristics, such as cost and design time, reliability and reuse, which are prescribed by a conflicting set of functional, technical and construction constraints. This thesis aims to address these design challenges by developing methodologies and hardware/software co-design tools that enable the rapid implementation and efficient synthesis of architectural solutions providing the operating meta-features required by the modern market. Specifically, this thesis presents a) methodologies to accelerate the design flow for both reconfigurable and application-specific architectures, b) coarse-grain heterogeneous architectural templates for processing and communication acceleration, and c) efficient multi-objective synthesis techniques at both a high abstraction level of programming and the physical silicon level. Regarding the acceleration of the design flow, the proposed methodology employs virtual platforms in order to hide architectural details and drastically reduce simulation time. An extension of this framework introduces systemic co-simulation using reconfigurable acceleration platforms as co-emulation intermediate platforms. Thus, the development cycle of a hardware/software product is accelerated by moving from a vertical serial flow to a circular interactive loop. Moreover, the simulation capabilities are enriched with efficient techniques for detecting and correcting design errors, as well as methods for checking the system's performance metrics against the desired specifications, during all phases of system development. In orthogonal correlation with the aforementioned methodological framework, a new architectural template is proposed, aiming to bridge the gap between design complexity and technological productivity by using specialized hardware accelerators in heterogeneous system-on-chip and network-on-chip platforms. A novel co-design methodology for the hardware accelerators and their respective programming software is presented, including the allocation of tasks to the available resources of the system/network. The introduced framework provides implementation techniques for the accelerators, using either conventional programming flows with a hardware description language or abstract programming model flows using techniques from high-level synthesis.
In any case, the option is provided to optimize systemic measures such as processing speed, throughput, reliability, power consumption and design silicon area. Finally, to address the increased complexity of design tools for reconfigurable systems, novel multi-objective optimization evolutionary algorithms are proposed which exploit modern multicore processors and the coarse-grain nature of multithreaded programming environments (e.g., OpenMP) in order to reduce placement time, while simultaneously grouping applications based on their intrinsic characteristics so as to explore the design space more effectively. The efficiency of the proposed architectural templates, design tools and methodology flows is evaluated against existing leading-edge solutions using applications from typical computing domains, such as digital signal processing, multimedia and arithmetic complexity, as well as from systemic heterogeneous environments, such as a computer vision system for autonomous robotic space navigation and many-accelerator systems for HPC and workstations/datacenters. The results strengthen the author's belief that this thesis provides competitive expertise for addressing complex modern and projected future design challenges.
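The OpenMP-based parallelization mentioned above can be pictured with a small sketch: candidate placements are evaluated concurrently on the host's cores. The cost model, data layout and population handling below are assumptions for illustration, not the thesis's algorithms.

    // Assumed toy example: evaluate a population of candidate placements in
    // parallel with OpenMP, mirroring the idea of using multicore hosts to
    // shorten placement time. Compile with -fopenmp to enable the pragma.
    #include <cstddef>
    #include <cstdlib>
    #include <iostream>
    #include <vector>

    struct Candidate {
        std::vector<int> placement;        // hypothetical resource positions
        double wirelength = 0.0;           // objective 1 (made-up cost model)
        double power = 0.0;                // objective 2 (made-up cost model)
    };

    static void evaluate(Candidate& c) {
        for (std::size_t i = 1; i < c.placement.size(); ++i)
            c.wirelength += std::abs(c.placement[i] - c.placement[i - 1]);
        c.power = 0.1 * static_cast<double>(c.placement.size());
    }

    int main() {
        std::vector<Candidate> population(64);
        for (auto& c : population)
            for (int i = 0; i < 32; ++i)
                c.placement.push_back(std::rand() % 100);

        #pragma omp parallel for           // fitness evaluations are independent
        for (int i = 0; i < static_cast<int>(population.size()); ++i)
            evaluate(population[i]);

        std::cout << "candidate 0: wirelength=" << population[0].wirelength
                  << ", power=" << population[0].power << '\n';
        return 0;
    }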