65 research outputs found

    Modeling and Design of Digital Electronic Systems

    The paper is concerned with modern methodologies for the holistic modeling of electronic systems enabling system-on-chip design. The method deals with the functional modeling of complete electronic systems using the behavioral features of Hardware Description Languages or high-level languages, then targeting programmable devices - mainly Field Programmable Gate Arrays (FPGAs) - for the rapid prototyping of digital electronic controllers. This approach offers major advantages: a single modeling and evaluation environment for complete power systems; the same environment serves for rapid prototyping of the digital controller; fast design development and short time to market; a CAD-platform-independent model; reusability of the model/design; generation of valuable IP; support for high-level hardware/software partitioning of the design; and fulfilment of the basic rules of Concurrent Engineering (a single EDA environment and a common design database). The recent evolution of such design methodologies is traced through references to case studies of electronic system modeling, simulation, controller design and implementation. Pointers to future trends in electronic design strategies and tools are given.
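    As a minimal illustration of the behavioral-modeling style the paper advocates (our sketch, not the paper's: plain C++ stands in for an HDL behavioral description, and the PWM controller is a hypothetical example), one clocked process plus a testbench loop might look like this:

```cpp
// Minimal sketch (not from the paper): a cycle-based behavioral model of a
// hypothetical PWM controller, written in plain C++ in the spirit of an HDL
// behavioral description. The paper itself targets HDLs and FPGAs.
#include <cstdint>
#include <iostream>

// Behavioral model: one call to tick() corresponds to one clock cycle.
struct PwmController {
    uint8_t period  = 100;  // counter wraps at this value
    uint8_t duty    = 25;   // output high while counter < duty
    uint8_t counter = 0;
    bool    out     = false;

    void tick() {                        // models a clocked process
        counter = (counter + 1) % period;
        out = (counter < duty);          // combinational output decode
    }
};

int main() {
    PwmController pwm;
    int high = 0;
    for (int cycle = 0; cycle < 1000; ++cycle) {  // simple testbench loop
        pwm.tick();
        if (pwm.out) ++high;
    }
    // Expect roughly duty/period = 25% high time.
    std::cout << "duty observed: " << high / 10.0 << "%\n";
    return 0;
}
```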

    Pre-validation of SoC via hardware and software co-simulation

    Abstract. System-on-chips (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle. As a result, cooperation between hardware and software development is limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation employs a QEMU CPU emulator linked to a register-transfer level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models. The thesis focuses on two primary applications of co-simulation. First, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Second, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP-block level with actual hardware models, a task previously possible only with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
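    A schematic sketch of the bridging idea (our illustration, not the thesis code: a toy register block stands in for the RTL simulation and a plain function for the driver running under QEMU; register addresses and the DONE bit are assumptions):

```cpp
// Schematic sketch of hardware/software co-simulation: memory-mapped I/O from
// an emulated CPU is forwarded to a cycle-accurate hardware model, and every
// bus access also advances the model's clock.
#include <cassert>
#include <cstdint>
#include <iostream>

// Stand-in for the RTL side: a peripheral with a control and a status register.
struct RtlDeviceModel {
    uint32_t ctrl = 0, status = 0;
    int busy_cycles = 0;

    void clock() {                       // advance one clock cycle
        if (busy_cycles > 0 && --busy_cycles == 0)
            status |= 1;                 // raise DONE when work finishes
    }
    void write(uint32_t addr, uint32_t data) {
        if (addr == 0x0) { ctrl = data; busy_cycles = 4; status &= ~1u; }
    }
    uint32_t read(uint32_t addr) { return addr == 0x4 ? status : ctrl; }
};

// Stand-in for the QEMU-to-RTL bridge.
struct CosimBridge {
    RtlDeviceModel& dev;
    void mmio_write(uint32_t a, uint32_t d) { dev.write(a, d); dev.clock(); }
    uint32_t mmio_read(uint32_t a) { dev.clock(); return dev.read(a); }
};

// "Device driver" unit test, as it might run on the emulated CPU.
int main() {
    RtlDeviceModel dev;
    CosimBridge bus{dev};
    bus.mmio_write(0x0, 1);                        // start the operation
    int polls = 0;
    while ((bus.mmio_read(0x4) & 1) == 0) ++polls; // poll the DONE bit
    std::cout << "done after " << polls << " polls\n";
    assert(polls > 0);
    return 0;
}
```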

    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices are a vital part of our lives, from mobile phones, laptops and computers to home automation systems. Modern designs comprise billions of transistors. With this evolution, however, ensuring that devices meet the designer's expectations under variable conditions has become a great challenge, demanding considerable design time and effort. Whenever an error is encountered, the process is restarted; it is therefore desirable to minimize the number of spins required to achieve an error-free product, as each spin costs time and effort. Software-based simulation is the main technique for verifying a design before fabrication, but a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent invisibility of the hardware. Instead of software simulation of the design in the pre-silicon phase, post-silicon techniques permit designers to verify the functionality of a physical implementation of the design. The main benefit of this methodology is that the implemented design in the post-silicon phase runs many orders of magnitude faster than its pre-silicon counterpart, allowing designers to validate their design more exhaustively. This thesis presents five main contributions towards a fast and automated debugging solution for reconfigurable hardware. Throughout the research, an obstacle avoidance system for robotic vehicles serves as a use case to illustrate how the proposed debugging solution applies in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data, permitting cycle-accurate replay and thus the capture of both permanent and intermittent errors in the implemented design. This contribution also describes a solution to enhance hardware observability: utilizing processor-configurable concentration networks, employing debug-data compression to transmit the data more efficiently, and partially reconfiguring the debugging system at run-time to avoid design recompilation and preserve timing closure. The second contribution presents a solution for communication-centric designs and also discusses designs with multiple clock domains. The third contribution presents a priority-based signal selection methodology to identify the signals most helpful during debugging, together with a connectivity generation tool that maps the identified signals to the debugging system. The fourth contribution presents an automated error detection solution that captures permanent as well as intermittent errors without continuous monitoring of debugging data; the proposed solution works even in the absence of a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging: a novel application of a recurrent neural network for debugging when a golden reference is available for training the network, extended to designs where no golden reference is present.
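    A minimal sketch of the trace-and-replay idea behind the first contribution (ours, not the thesis tooling; the toy counter DUT, the injected fault, and the reference model are all hypothetical):

```cpp
// Minimal sketch: a lossless trace buffer records selected signals each cycle
// so a failure window can be replayed off-chip, cycle-accurately, against a
// golden reference model to locate the first divergence.
#include <cstdint>
#include <iostream>
#include <vector>

struct TraceSample { uint64_t cycle; uint32_t signals; };

// Design under debug: a toy counter with an injected intermittent fault.
uint32_t dut_step(uint64_t cycle, uint32_t state) {
    uint32_t next = state + 1;
    if (cycle == 7) next ^= 0x4;         // injected bit flip (intermittent bug)
    return next;
}

uint32_t golden_step(uint32_t state) { return state + 1; }  // reference model

int main() {
    std::vector<TraceSample> trace;      // stands in for an on-chip trace RAM
    uint32_t state = 0;
    for (uint64_t cycle = 0; cycle < 16; ++cycle) {
        state = dut_step(cycle, state);
        trace.push_back({cycle, state}); // capture every cycle: lossless trace
    }
    // Off-line replay: re-execute the golden model and diff cycle by cycle.
    uint32_t ref = 0;
    for (const auto& s : trace) {
        ref = golden_step(ref);
        if (s.signals != ref) {
            std::cout << "first mismatch at cycle " << s.cycle
                      << ": dut=" << s.signals << " golden=" << ref << "\n";
            break;
        }
    }
    return 0;
}
```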

    A many-core software framework for embedded space computing

    Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 66-67). Space computing has long called for powerful yet power-efficient hardware for on-board computation. The emergence of many-core CPUs on a single die provides one potential solution; the development of processors like Maestro strives for a balance between computational power and energy efficiency. Development in software, however, has not kept up: no dominant programming framework has emerged to let developers easily write applications that take advantage of the new multi-core paradigm. As a result, NASA's technology roadmap lists fault management, programmability, and energy management under the new multi-core paradigm among its top challenges. The goal of this thesis is to develop a framework that streamlines programming for multi-core processors, in the form of a programming model and a C++ programming library. A 49-core Maestro Development Board (MDB) serves as the development and testing hardware platform. The framework's usability is tested through a software simulation of a vision-based crater recognition algorithm for a lunar lander. A parallel version of the algorithm written under the framework achieves a performance gain of about 300% over the RAD750 using 21 Maestro cores. The uniqueness of this framework lies in the principle that task blocks, not CPU cores, are the fundamental abstraction for individual processes. Each task block is allocated an arbitrary number of CPUs, completes one task, and communicates with other task blocks through message passing. Fault tolerance, power management, and communication among task blocks are abstracted away so that programmers can concentrate on the implementation of the application. The resulting programming library provides developers with the tools to design and test parallel programs, port serial applications to parallelized counterparts, and develop new parallel programs with ease. by Eugene Yu-Ting Sun. M.Eng.
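    A minimal sketch of the task-block abstraction (an assumed shape, not the thesis library: the Mailbox channel and the two pipeline stages are our illustration of blocks that interact only through message passing):

```cpp
// Minimal sketch: "task blocks" as independent workers that own their threads
// and communicate only via blocking message channels, never shared state.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

template <typename T>
class Mailbox {                          // blocking channel between task blocks
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T receive() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T v = std::move(q_.front()); q_.pop(); return v;
    }
};

int main() {
    Mailbox<int> to_worker, to_main;
    // Producer task block: sends work items, then a -1 sentinel.
    std::thread producer([&] {
        for (int i = 1; i <= 3; ++i) to_worker.send(i);
        to_worker.send(-1);
    });
    // Worker task block: squares each item (stands in for a pipeline stage).
    std::thread worker([&] {
        for (int v; (v = to_worker.receive()) != -1; )
            to_main.send(v * v);
        to_main.send(-1);
    });
    for (int v; (v = to_main.receive()) != -1; )
        std::cout << "result: " << v << "\n";
    producer.join(); worker.join();
    return 0;
}
```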

    A Company-led Methodology for the Specification of Product Design Capabilities in Small and Medium Sized Electronics Companies

    The aim of the research reported in this thesis is to improve the product design effectiveness of small and medium sized electronics companies in the United Kingdom. It does so by presenting a methodology for use by such firms which will enable them to specify product design capabilities that are resilient to changes in their respective business environments. The research has not, however, concerned itself with the details of particular electronic component technologies or with the advantages of various CAD or CAE products, although these are both important aspects of any design capability; nor is it concerned with the implementation of the product design capability. The methodology, which represents a significant improvement on current practice, is a structured, company-driven approach which draws extensively upon the lessons of international design best practice. It uses well-proven tools and techniques to guide firms through the entire process of creating such capabilities - from the development of an appropriate Mission Statement to the identification of cost-effective and appropriate design system solutions which can readily be translated into action plans for improvement. The work emphasises the importance of adopting a holistic, systems approach which acknowledges the interrelationship between the management of the design process and its operational and supporting activities. The research has been structured around the experiences of companies which have implemented electronics design systems and which "own" the problem in question. Hence, a research strategy was adopted which was based upon a case study approach and upon the development of close collaborative links with two leading design automation tool vendors. Case study interviews were undertaken in 18 U.K. and European electronics companies and in 11 U.S., Japanese and Korean electronics firms. The work proceeded in two distinct phases. First, the author participated with other researchers to jointly develop a functional specification of an electronics designers' toolset to support the process of product design in an integrated manufacturing environment. This first phase provided the context for Phase 2, the development of the AGILITY methodology for specifying product design capabilities, which represents the author's individual contribution. The contribution to knowledge made by the research lies in the creation of a process methodology which, for the first time, will help U.K. electronics companies to define for themselves product design capabilities which are robust and which support their wider business objectives. No such methodology is currently available in a form which is both accessible and affordable to smaller firms; furthermore, the author has uncovered no evidence of the existence of such a methodology even for use by large electronics firms. Validation of the methodology is subject to an ongoing process of feedback. Racal Redac Ltd

    Survey of Template-Based Code Generation

    A critical step in model-driven engineering (MDE) is the automatic synthesis of a textual artifact from models. This is a very useful model transformation, used to generate application code, serialize models into persistent storage, and generate documentation or reports. Among the various model-to-text transformation paradigms, template-based code generation (TBCG) is the most popular in MDE. TBCG is a synthesis technique that produces code from high-level specifications, called templates. It is a popular technique in MDE given that both emphasize abstraction and automation. Given the diversity of tools and approaches, it is necessary to classify and compare existing TBCG techniques in order to provide appropriate support to developers. The goal of this thesis is to better understand the characteristics of TBCG techniques, identify research trends, and assess the importance of the role of MDE in this code synthesis approach. We also evaluate the expressiveness, performance and scalability of the associated tools on a range of models that implement critical patterns. To this end, we conduct a systematic mapping study of the literature that paints an interesting overview of TBCG, together with a comparative study of TBCG tools to better guide developers in their choices. This study shows that model-based tools offer more expressiveness whereas code-based tools perform much faster. Xtend2 offers the best compromise between expressiveness and performance.
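    As a toy illustration of the core TBCG mechanism (ours, not from the survey; the expand function and the {{key}} placeholder syntax are assumptions, not any particular tool's notation):

```cpp
// Toy template-based code generation: a template with placeholders is expanded
// against a "model" of key/value pairs to synthesize source code.
#include <iostream>
#include <map>
#include <string>

// Replace each {{key}} in the template with its value from the model.
std::string expand(std::string tpl,
                   const std::map<std::string, std::string>& model) {
    for (const auto& [key, value] : model) {
        const std::string ph = "{{" + key + "}}";
        for (size_t pos; (pos = tpl.find(ph)) != std::string::npos; )
            tpl.replace(pos, ph.size(), value);
    }
    return tpl;
}

int main() {
    const std::string tpl =
        "class {{name}} {\n"
        "public:\n"
        "    {{type}} get{{field}}() const { return {{field}}_; }\n"
        "private:\n"
        "    {{type}} {{field}}_;\n"
        "};\n";
    // The "model" a real MDE tool would read from a metamodel instance.
    std::cout << expand(tpl, {{"name", "Sensor"},
                              {"type", "double"},
                              {"field", "Value"}});
    return 0;
}
```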

    SUDS : automatic parallelization for raw processors

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 177-181). A computer can never be too fast or too cheap. Computer systems pervade nearly every aspect of science, engineering, communications and commerce because they perform certain tasks at rates unachievable by any other kind of system built by humans. A computer system's throughput, however, is constrained by that system's ability to find concurrency. Given a particular target workload, the computer architect's role is to design mechanisms to find and exploit the available concurrency in that workload. This thesis describes SUDS (Software Un-Do System), a compiler and runtime system that can automatically find and exploit the available concurrency of scalar operations in imperative programs with arbitrary unstructured and unpredictable control flow. The core compiler transformation that enables this is scalar queue conversion. Scalar queue conversion makes scalar renaming an explicit operation through a process similar to closure conversion, a technique traditionally used to compile functional languages. The scalar queue conversion transformation is speculative, in the sense that it may introduce dynamic memory allocation operations into code that would not otherwise dynamically allocate memory. Thus, SUDS also includes a transactional runtime system that periodically checkpoints machine state, executes code speculatively, checks whether the speculative execution produced results consistent with the original sequential program semantics, and then either commits or rolls back the speculative execution path. In addition to safely running scalar-queue-converted code, the SUDS runtime system safely permits threads to run speculatively in parallel and concurrently issue memory operations, even when the compiler is unable to prove that the reordered memory operations will always produce correct results. Using this combination of compile-time and runtime techniques, SUDS can find concurrency in programs where previous compiler-based renaming techniques fail because the programs contain unstructured loops, and where Tomasulo's algorithm fails because it sequentializes mispredicted branches. Indeed, we describe three application programs with unstructured control flow where the prototype SUDS system, running in software on a Raw microprocessor, achieves speedups equivalent to, or better than, an idealized and unrealizable model of a hardware implementation of Tomasulo's algorithm. by Matthew Ian Frank. Ph.D.
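    An illustrative sketch of the scalar-queue-conversion intuition (our reading of the idea, not SUDS itself; the loop and values are hypothetical):

```cpp
// Sketch: scalar queue conversion makes each dynamic definition of a scalar
// explicit by enqueuing it, so code that uses the scalar can be deferred and
// run concurrently with later iterations instead of serializing on a single
// register - analogous to a closure capturing the renamed value.
#include <iostream>
#include <queue>
#include <vector>

int main() {
    std::vector<int> a = {3, 1, 4, 1, 5, 9};

    // Original sequential loop: 'x' is one scalar, so the use of x cannot be
    // separated from the iteration that defined it:
    //   for (int v : a) { int x = v * 2; consume(x); }

    // After conversion: every definition of x becomes a queue entry, one
    // renamed instance per iteration.
    std::queue<int> x_instances;
    for (int v : a)
        x_instances.push(v * 2);         // "define x" -> enqueue renamed copy

    // The deferred use runs later (or in parallel on another tile), reading
    // each dynamic instance of x in program order.
    long sum = 0;
    while (!x_instances.empty()) {
        sum += x_instances.front();      // "use x" -> dequeue its instance
        x_instances.pop();
    }
    std::cout << "sum = " << sum << "\n";  // 2*(3+1+4+1+5+9) = 46
    return 0;
}
```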

    Applications and Techniques for Fast Machine Learning in Science

    In this community review report, we discuss applications of and techniques for fast machine learning (ML) in science - the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery. The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications for fast ML across a number of scientific domains; techniques for training and implementing performant and resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms. We also present overlapping challenges across the multiple scientific domains where common solutions can be found. This community report is intended to give plenty of examples of, and inspiration for, scientific discovery through integrated and accelerated ML solutions, followed by a high-level overview and organization of technical advances, with an abundance of pointers to source material that can enable these breakthroughs.
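    As one concrete example of the resource-efficiency techniques such reports cover (a generic sketch, not taken from the report; the weights, inputs, and single shared scale are illustrative assumptions):

```cpp
// Generic sketch: 8-bit fixed-point quantization of a dense dot product, the
// kind of reduced-precision arithmetic used when deploying ML on FPGAs/ASICs
// in real-time data paths.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> w = {0.42f, -1.3f, 0.07f, 0.9f};  // float weights
    std::vector<float> x = {1.0f, 0.5f, -2.0f, 0.25f};   // input activations

    // Symmetric per-tensor quantization: scale maps max |w| to the int8 range.
    float wmax = 0.f;
    for (float v : w) wmax = std::max(wmax, std::fabs(v));
    const float scale = wmax / 127.0f;
    std::vector<int8_t> wq;
    for (float v : w)
        wq.push_back(static_cast<int8_t>(std::lround(v / scale)));

    // Integer multiply-accumulate, as a DSP block or systolic array would do,
    // with a single dequantization at the end. (A real design would use a
    // separate scale for activations; one scale is reused here for brevity.)
    float acc_f = 0.f;
    int32_t acc_q = 0;
    for (size_t i = 0; i < w.size(); ++i) {
        acc_f += w[i] * x[i];
        acc_q += wq[i] * static_cast<int32_t>(std::lround(x[i] / scale));
    }
    std::cout << "float:     " << acc_f << "\n"
              << "quantized: " << acc_q * scale * scale << "\n";
    return 0;
}
```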

    Evolvable Smartphone-Based Point-of-Care Systems For In-Vitro Diagnostics

    Recent developments in the life-science -omics disciplines, together with advances in micro- and nanoscale technologies, offer unprecedented opportunities to tackle some of the major healthcare challenges of our time. Lab-on-Chip technologies coupled with smart devices, in particular, constitute key enablers for the decentralization of many in-vitro medical diagnostics applications to the point of care, supporting the advent of preventive and personalized medicine. Although the technical feasibility and potential of Lab-on-Chip/smart-device systems have been repeatedly demonstrated, direct-to-consumer applications remain scarce. This thesis addresses this limitation. System evolvability is a key enabler to the adoption and long-lasting success of next-generation point-of-care systems: it favors the integration of new technologies, streamlines the reengineering effort for system upgrades, and limits the risk of premature system obsolescence. Among possible implementation strategies, platform-based design stands out as a particularly suitable entry point. One necessary condition is for change-absorbing and change-enabling mechanisms to be incorporated in the platform architecture at initial design time. Important considerations arise as to where in Lab-on-Chip/smart-device platforms these mechanisms can be integrated, and how to implement them. Our investigation revolves around the silicon-nanowire biological field-effect transistor, a promising biosensing technology for the detection of biological analytes at ultra-low concentrations. We discuss extensively the sensitivity and instrumentation requirements set by the technology before presenting the design and implementation of an evolvable smartphone-based platform capable of interfacing lab-on-chips embedding such sensors. We elaborate on the implementation of various architectural patterns throughout the platform and present how these facilitated the evolution of the system towards one accommodating electrochemical sensing. Model-based development was undertaken throughout the engineering process, and a formal SysML system model fed our evolvability assessment process. In particular, we introduce a model-based methodology enabling the evaluation of modular scalability: the ability of a system to scale the current value of one of its specifications by successively reengineering targeted system modules. The research work presented in this thesis provides a roadmap for the development of evolvable point-of-care systems, including those targeting direct-to-consumer applications, extending from the early identification of anticipated change to the assessment of a system's ability to accommodate these changes. Our research should thus interest industrial players eager not only to disrupt, but also to last in a shifting socio-technical paradigm.