
    Hardware/Software Codesign

    The current state-of-the-art technology in integrated circuits allows the incorporation of multiple processor cores and memory arrays, in addition to application-specific hardware, on a single substrate. As silicon technology has become more advanced, allowing the implementation of more complex designs, systems have begun to incorporate considerable amounts of embedded software [3]. It thus becomes increasingly necessary for system designers to have knowledge of both hardware and software in order to make efficient design tradeoffs. This is where hardware/software codesign comes into play.

    Hybrid Multiresolution Simulation & Model Checking: Network-On-Chip Systems

    Abstract: Designers employ a variety of modeling theories and methodologies to create functional models of discrete network systems. These dynamical models are evaluated using verification and validation techniques throughout incremental design stages. Models created for these systems should directly represent their growing complexity with respect to composition and heterogeneity. Similar to software engineering practice, incremental model design is required for complex system design. As a result, models at early increments are significantly simpler relative to real systems. While experimenting (verification or validation) on models at early increments is computationally less demanding, the results of these experiments are less trustworthy and less rewarding. At any increment of design, a set of tools and techniques is required for controlling the complexity of models and experimentation. A complex system such as a Network-on-Chip (NoC) may benefit from incremental design stages. Current design methods for NoC rely on multiple models developed using various modeling frameworks. It is useful to develop frameworks that can formalize the relationships among these models. Fine-grain models are derived using their coarse-grain counterparts. Moreover, validation and verification capability at various design stages, enabled through disciplined model conversion, is very beneficial. In this research, Multiresolution Modeling (MRM) is used for system-level design of NoC. MRM aids in creating a family of models at different levels of scale and complexity with well-formed relationships. In addition, a variant of the Discrete Event System Specification (DEVS) formalism is proposed which supports model checking. Hierarchical models of Network-on-Chip components may be created at different resolutions, while each model can be validated using discrete-event simulation and verified via state exploration. System property expressions are defined in the DEVS language and developed as Transducers which can be applied seamlessly for model checking and simulation purposes. The Multiresolution Modeling and the verification and validation capabilities of this framework complement one another: MRM manages the scale and complexity of models, which in turn can reduce V&V time and effort, and conversely V&V helps ensure correctness of models at multiple resolutions. This framework is realized by extending the DEVS-Suite simulator, and its applicability is demonstrated for exemplar NoC models.
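
    The approach above pairs atomic DEVS models with transducers that observe simulation traces and evaluate property expressions. As a rough, hypothetical illustration only (the class and method names below are invented and are not the DEVS-Suite or DEVS language API), a minimal Python sketch of an atomic buffer model and a latency-bound transducer could look like this:

```python
# Hypothetical sketch of a DEVS-style atomic model plus a trace-observing
# transducer. Names and structure are illustrative only; they do not
# correspond to the DEVS-Suite simulator mentioned in the abstract.

class AtomicBuffer:
    """One-place buffer: holds a flit and forwards it after a fixed delay."""
    def __init__(self, delay=2.0):
        self.delay = delay
        self.flit = None            # state: currently held flit, or None
        self.sigma = float("inf")   # time advance until the next internal event

    def ext_transition(self, elapsed, flit):
        """External transition: a flit arrives on the input port."""
        if self.flit is None:
            self.flit = flit
            self.sigma = self.delay
        # else: the flit is dropped in this deliberately simple model

    def output(self):
        """Output function, invoked just before the internal transition."""
        return self.flit

    def int_transition(self):
        """Internal transition: the held flit has been emitted."""
        self.flit = None
        self.sigma = float("inf")

    def time_advance(self):
        return self.sigma


class LatencyTransducer:
    """Observes (time, event) pairs and checks a simple latency bound."""
    def __init__(self, bound):
        self.bound = bound
        self.pending = {}            # flit id -> injection time
        self.violations = []

    def observe(self, time, kind, flit_id):
        if kind == "inject":
            self.pending[flit_id] = time
        elif kind == "deliver":
            latency = time - self.pending.pop(flit_id)
            if latency > self.bound:
                self.violations.append((flit_id, latency))


# Tiny hand-driven run (a real DEVS simulator would drive these calls):
buf = AtomicBuffer(delay=2.0)
td = LatencyTransducer(bound=5.0)
td.observe(0.0, "inject", flit_id=1)
buf.ext_transition(elapsed=0.0, flit=1)
t_deliver = buf.time_advance()        # 2.0
delivered = buf.output()
buf.int_transition()
td.observe(t_deliver, "deliver", flit_id=delivered)
print("violations:", td.violations)   # expected: []
```

    In a full DEVS setting, the simulator rather than hand-written driver code would invoke the transition, output, and time-advance functions and feed the transducer, which is what lets the same property expressions serve both simulation and state exploration.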

    Pre-validation of SoC via hardware and software co-simulation

    Abstract. System-on-chips (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle. As a result, cooperation between hardware and software development is also limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation employs a QEMU CPU emulator linked to a register-transfer level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models. The thesis focuses on two primary applications of co-simulation. Firstly, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Secondly, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP block level with actual hardware models, a task previously only possible with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
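
    The co-simulation above routes accesses made by software running on an emulated CPU to a cycle-accurate RTL model. As a loose, hypothetical sketch of that idea only (the RtlSim and MmioBridge classes and the address map are invented, and a real setup would connect QEMU to an actual RTL simulator rather than to a Python stub), the bridge can be viewed as a dispatcher that forwards memory-mapped I/O to the hardware model and everything else to plain memory:

```python
# Hypothetical sketch of an MMIO bridge between a CPU emulator and an RTL
# simulator. RtlSim is a stand-in: a real bridge would drive an actual RTL
# simulation and advance its clock, not read and write a dictionary.

class RtlSim:
    """Placeholder for a cycle-accurate RTL model of one IP block."""
    def __init__(self):
        self.regs = {0x00: 0}          # register file of the modeled device

    def write(self, offset, value):
        self.regs[offset] = value      # real version: schedule a bus write

    def read(self, offset):
        return self.regs.get(offset, 0)


class MmioBridge:
    """Routes CPU loads/stores either to plain memory or to the RTL model."""
    def __init__(self, rtl, base, size):
        self.rtl = rtl
        self.base, self.size = base, size
        self.ram = {}

    def store(self, addr, value):
        if self.base <= addr < self.base + self.size:
            self.rtl.write(addr - self.base, value)
        else:
            self.ram[addr] = value

    def load(self, addr):
        if self.base <= addr < self.base + self.size:
            return self.rtl.read(addr - self.base)
        return self.ram.get(addr, 0)


# A driver-style unit test: write a control register, read it back via the RTL.
bridge = MmioBridge(RtlSim(), base=0x4000_0000, size=0x1000)
bridge.store(0x4000_0000, 0x1)           # the "driver" enables the device
assert bridge.load(0x4000_0000) == 0x1   # value observed through the RTL model
```

    The dispatching point is where software unit tests meet the hardware model early, which is the integration-error detection benefit described above.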

    Timed Chi: Modeling, Simulation and Verification of Hardware Systems

    Timed Chi (chi) is a timed process algebra designed for modeling, simulation, verification, and real-time control. Its application domain consists of large and complex manufacturing systems. Its straightforward syntax and semantics are also highly suited to architects, engineers, and researchers from the hardware design community. There are many tools for timed Chi that support the analysis and manipulation of timed Chi specifications; such tools are the result of software engineering research with a very strong foundation in formal theories and methods. Since timed Chi is a well-developed algebraic theory from the field of process algebras with timing, we expect that timed Chi is also well suited for addressing various aspects of hardware systems (discrete-time systems by nature). To show that timed Chi is useful for the formal specification and analysis of hardware systems, we illustrate its use with several benchmark examples of hardware systems.
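
    Timed Chi describes processes whose actions are separated by explicit delays. The sketch below does not reproduce timed Chi syntax; it is only a plain Python analogue, with invented parameters, of the kind of discrete-time behaviour such a specification might capture: a periodic producer feeding a fixed-latency buffer.

```python
# Plain discrete-time sketch (not timed Chi syntax): a producer emits a value
# every `period` time units, and a buffer delivers each value `lat` units later.

def run(horizon=12, period=4, lat=1):
    in_flight = []                        # (ready_time, value) pairs in the buffer
    next_emit, next_value = 0, 0
    trace = []
    for t in range(horizon + 1):          # advance a discrete clock
        if t == next_emit:                # producer action
            trace.append((t, "send", next_value))
            in_flight.append((t + lat, next_value))
            next_emit, next_value = t + period, next_value + 1
        for ready, value in list(in_flight):   # buffer delivers delayed values
            if ready == t:
                trace.append((t, "recv", value))
                in_flight.remove((ready, value))
    return trace

for entry in run():
    print(entry)    # e.g. (0, 'send', 0), (1, 'recv', 0), (4, 'send', 1), ...
```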

    Framework for simulation of fault tolerant heterogeneous multiprocessor system-on-chip

    Due to the ever-growing requirement for high-performance data computation, current uniprocessor systems fall short of meeting critical real-time performance demands in (i) high throughput, (ii) faster processing time, (iii) low power consumption, (iv) design cost and time-to-market factors, and, more importantly, (v) fault-tolerant processing. Shifting the design trend to MPSoCs is a work-around to meet these challenges. However, developing efficient fault-tolerant task scheduling and mapping techniques requires optimized algorithms that consider the various scenarios in multiprocessor environments. Several works in the past few years have proposed simulation-based frameworks for scheduling and mapping strategies that considered homogeneous systems and error avoidance techniques. However, most of these works inadequately describe today's MPSoC trend because they focused on the network domain and did not consider heterogeneous systems with fault-tolerant capabilities.

    In order to address these issues, this work proposes (i) a performance-driven scheduling algorithm (PD SA) based on the simulated annealing technique, (ii) an optimized Homogenous-Workload-Distribution (HWD) multiprocessor task mapping algorithm which considers the dynamic workload on processors, and (iii) a dynamic fault-tolerant (FT) scheduling/mapping algorithm to provide a robust application processing system. The implementation is accompanied by a heterogeneous multiprocessor system simulation framework developed in SystemC/C++. The proposed framework reads user data, sets up the architecture, executes the input task graph, and finally generates performance variables. This framework addresses the shortcomings of previous work with respect to (i) architectural flexibility in number of processors, processor types, and topology, (ii) optimized scheduling and mapping strategies, and (iii) fault-tolerant processing capability, focusing more on the computational domain.

    A set of random as well as application-specific STG benchmark suites was run on the simulator to evaluate and verify the performance of the proposed algorithms. The simulations were carried out for (i) scheduling policy evaluation, (ii) fault tolerance evaluation, (iii) topology evaluation, (iv) number-of-processors evaluation, (v) mapping policy evaluation, and (vi) processor type evaluation. The results showed that the PD scheduling algorithm performed marginally better than EDF with respect to utilization, execution time, and power. The dynamic fault-tolerant implementation proved to be a viable and efficient strategy for meeting real-time constraints without significant system performance degradation. The torus topology gave better performance than the tile topology with respect to task completion time and power. Executing highly heterogeneous tasks showed higher power consumption and execution time. Finally, increasing the number of processors decreased average utilization but improved task completion time and power consumption.

    Based on the simulation results, the system designer can compare tradeoffs between various design choices with respect to the performance requirement specifications. In general, designing an optimized multiprocessor scheduling and mapping strategy with added fault-tolerant capability will enable the development of efficient multiprocessor systems that meet future performance goals. This is the substance of this work.
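
    The performance-driven scheduler above is based on simulated annealing. A minimal, hypothetical sketch of simulated annealing applied to task-to-processor mapping follows; the cost model (makespan on heterogeneous processors, ignoring task precedence and communication) and all parameters are invented for illustration and are not the PD SA algorithm itself.

```python
# Minimal simulated-annealing sketch for mapping tasks onto heterogeneous
# processors. Cost = makespan only; no precedence, communication, or fault
# tolerance is modeled, unlike the framework described in the abstract.

import math
import random

task_cost = [4, 2, 7, 3, 5, 1]        # work per task (arbitrary units)
proc_speed = [1.0, 1.5, 0.8]          # relative speed of each processor

def makespan(mapping):
    """Finish time of the busiest processor under a task->processor mapping."""
    load = [0.0] * len(proc_speed)
    for task, proc in enumerate(mapping):
        load[proc] += task_cost[task] / proc_speed[proc]
    return max(load)

def anneal(iters=5000, t0=10.0, cooling=0.999):
    random.seed(0)
    mapping = [random.randrange(len(proc_speed)) for _ in task_cost]
    best = list(mapping)
    temp = t0
    for _ in range(iters):
        cand = list(mapping)              # neighbour: remap one random task
        cand[random.randrange(len(cand))] = random.randrange(len(proc_speed))
        delta = makespan(cand) - makespan(mapping)
        # always accept improvements; accept worse moves with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / temp):
            mapping = cand
            if makespan(mapping) < makespan(best):
                best = list(mapping)
        temp *= cooling                   # geometric cooling schedule
    return best, makespan(best)

print(anneal())    # a task->processor mapping and its makespan
```

    The same skeleton extends naturally to richer cost functions (power, deadline misses, fault-recovery overhead), which is where a performance-driven variant would differ from this toy.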

    IEEE/NASA Workshop on Leveraging Applications of Formal Methods, Verification, and Validation

    This volume contains the Preliminary Proceedings of the 2005 IEEE ISoLA Workshop on Leveraging Applications of Formal Methods, Verification, and Validation, with a special track on the theme of Formal Methods in Human and Robotic Space Exploration. The workshop was held on 23-24 September 2005 at the Loyola College Graduate Center, Columbia, MD, USA. The idea behind the workshop arose from the experience and feedback of ISoLA 2004, the 1st International Symposium on Leveraging Applications of Formal Methods, held in Paphos, Cyprus, in October-November 2004. ISoLA 2004 served the need for a forum where developers, users, and researchers could discuss issues related to the adoption and use of rigorous tools and methods for the specification, analysis, verification, certification, construction, testing, and maintenance of systems from the point of view of their different application domains.

    Modeling at a high level of abstraction for embedded systems (Modélisation à haut niveau d'abstraction pour les systèmes embarqués)

    Modern embedded systems have reached a level of complexity such that it is no longer possible to wait for the first physical prototypes to validate choices on the integration of hardware and software components. It is necessary to use models early in the design flow. The work presented in this document contributes to the state of the art in several domains. First, we present verification techniques based on abstract interpretation and SMT solving for programs written in general-purpose languages like C, C++ or Java. Then, we use verification tools on models written in SystemC at the transaction level (TLM). Several approaches are presented, most of them using compilation techniques specific to SystemC to turn the models into a format usable by existing tools. The second part of the document deals with non-functional properties of models: timing performance, power consumption, and temperature. In the context of TLM, we show how functional models can be enriched with non-functional information. Finally, we present contributions to the modular performance analysis (MPA) with real-time calculus (RTC) framework. We describe several ways to connect RTC to more expressive formalisms like timed automata and the synchronous language Lustre. These connections raise the problem of causality, which is defined formally and solved with a new causality closure algorithm.
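
    One theme above is enriching functional TLM models with non-functional information such as timing and power. As a loose illustration in plain Python (not SystemC/TLM and not the actual instrumentation from this work; the per-access costs are invented numbers), a functional memory model can carry accumulated delay and energy estimates alongside its functional behaviour:

```python
# Hypothetical sketch: non-functional annotations carried alongside a
# functional transaction-level model. Costs below are invented.

class Annotations:
    """Non-functional bookkeeping accumulated during a simulation run."""
    def __init__(self):
        self.delay_ns = 0.0
        self.energy_pj = 0.0

    def charge(self, delay_ns, energy_pj):
        self.delay_ns += delay_ns
        self.energy_pj += energy_pj


class Memory:
    """Functional memory model; each access also charges a timing/energy cost."""
    def __init__(self):
        self.data = {}

    def write(self, addr, value, ann):
        self.data[addr] = value
        ann.charge(delay_ns=5.0, energy_pj=2.0)   # invented per-write cost

    def read(self, addr, ann):
        ann.charge(delay_ns=4.0, energy_pj=1.5)   # invented per-read cost
        return self.data.get(addr, 0)


# The functional result is unchanged, while the annotations accumulate an
# estimate of the time and energy spent by the access sequence.
mem, ann = Memory(), Annotations()
mem.write(0x10, 42, ann)
value = mem.read(0x10, ann)
print(value, ann.delay_ns, ann.energy_pj)   # 42 9.0 3.5
```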

    Combining type checking with model checking for system verification

    Type checking is widely used in mainstream programming languages to detect programming errors at compile time. Model checking is gaining popularity as an automated technique for systematically analyzing the behaviors of systems. My research focuses on combining these two software verification techniques synergistically into one platform for the creation of correct models for software designs. This thesis describes two modeling languages, ATS/PML and ATS/Veri, that inherit the advanced type system of the existing programming language ATS, in which both dependent types in the style of Dependent ML and linear types are supported. A detailed discussion is given of the use of advanced types to detect modeling errors at the stage of model construction. Going further, various modeling primitives with well-designed types are introduced into my modeling languages to facilitate a synergistic combination of type checking with model checking. The semantics of ATS/PML is designed to be directly rooted in the well-known modeling language PROMELA. Rules for translation from ATS/PML to PROMELA are designed, and a compiler is developed accordingly, so that the SPIN model checker can be readily employed to perform checking on models constructed in ATS/PML. ATS/Veri is designed as a modeling language that allows a programmer to construct models for real-world multi-threaded software applications in the same way as writing a functional program, with support for synchronization, communication, and scheduling among threads. The semantics of ATS/Veri is formally defined for the development of corresponding model checkers, and a compiler is built to translate ATS/Veri into CSP# and exploit the state-of-the-art verification platform PAT for model checking ATS/Veri models. The correctness of such a transformational approach is illustrated based on the semantics of ATS/Veri and CSP#. In summary, the primary contribution of this thesis lies in the creation of a family of modeling languages with highly expressive types for modeling concurrent software systems, together with the related platform supporting verification via model checking. As such, we can combine type checking and model checking synergistically to ensure software correctness with high confidence.
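
    The thesis translates typed models into PROMELA and CSP# so that existing model checkers can explore their state spaces. As a rough stand-in for what such a checker does on a translated model (this is not the ATS/PML or ATS/Veri toolchain; the toy lock below is invented and deliberately flawed), the following Python sketch exhaustively explores the reachable states of a two-process model and reports a mutual-exclusion violation:

```python
# Hypothetical sketch of explicit-state safety checking, standing in for what
# SPIN or PAT would do on a translated model. The toy "lock" below separates
# its availability check from its entry step, so mutual exclusion can fail.

from collections import deque

INIT = ("I", "I", False, False)   # (pc0, pc1, in_critical0, in_critical1)

def successors(state):
    """All next states under interleaving of the two processes."""
    for pid in (0, 1):
        pc = list(state[:2])
        crit = list(state[2:])
        other = 1 - pid
        if pc[pid] == "I":                         # request the lock
            pc[pid] = "T"
        elif pc[pid] == "T" and not crit[other]:   # observe the lock as free
            pc[pid] = "K"
        elif pc[pid] == "K":                       # enter (not atomic with the check)
            crit[pid], pc[pid] = True, "C"
        elif pc[pid] == "C":                       # leave the critical section
            crit[pid], pc[pid] = False, "I"
        else:
            continue                               # this process is blocked
        yield (pc[0], pc[1], crit[0], crit[1])

def check_mutex():
    """Breadth-first exploration; returns a violating state or None."""
    seen, queue = {INIT}, deque([INIT])
    while queue:
        state = queue.popleft()
        if state[0] == "C" and state[1] == "C":
            return state                           # both in the critical section
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

print(check_mutex())   # expected: a state such as ('C', 'C', True, True)
```

    In the setting described above, the typed front end would already have rejected many malformed models before any such exploration runs, which is the intended synergy between type checking and model checking.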