3,006 research outputs found

    Simulation verification techniques study: Simulation performance validation techniques document

    Techniques and support software for the efficient performance of simulation validation are discussed. Included are the overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check-case formulation, methods for real-time acquisition and formatting of data from an all-up operational simulator, and methods and criteria for the comparison and evaluation of simulation data. Vehicle subsystem modules, module integration, special test requirements, and reference data formats are also described.
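    As one concrete reading of the comparison-and-evaluation methods mentioned above, the Python sketch below accepts a simulated time history only if every sample stays within a tolerance band around the reference data. This is a minimal illustration, not a method taken from the study; the function name, tolerance values, and pass/fail criterion are assumptions.

def compare_check_case(sim, ref, abs_tol=0.01, rel_tol=0.02):
    """Accept a check case if every sample stays within tolerance of the reference."""
    assert len(sim) == len(ref), "samples must be time-aligned"
    worst = 0.0
    for s, r in zip(sim, ref):
        # Allow whichever bound is looser: absolute or relative tolerance.
        allowed = max(abs_tol, rel_tol * abs(r))
        worst = max(worst, abs(s - r) - allowed)
    return worst <= 0.0, worst

# Illustrative check case: simulated vs. reference altitude samples.
sim_alt = [1000.0, 1010.5, 1021.2]
ref_alt = [1000.0, 1010.0, 1020.0]
print(compare_check_case(sim_alt, ref_alt))  # (True, 0.0)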

    Simulation verification techniques study

    Results are summarized of the simulation verification techniques study, which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), a survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data for validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of the verification database impact. An annotated bibliography of all documents generated during this study is provided.

    Phase Locked Loop Test Methodology

    Phase-locked loops are incorporated into almost every large-scale mixed-signal and digital system-on-chip (SoC). Various types of PLL architectures exist, including fully analogue, fully digital, semi-digital, and software-based. Currently the most commonly used PLL architecture for SoC environments and chipset applications is the charge-pump (CP) semi-digital type. This architecture is commonly used for clock synthesis applications, such as the supply of a high-frequency on-chip clock derived from a low-frequency board-level clock. In addition, CP-PLL architectures are now frequently used for demanding RF (radio frequency) synthesis and data synchronization applications. On-chip system blocks that rely on correct PLL operation may include third-party IP cores, ADCs, DACs and user-defined logic (UDL). Essentially, any on-chip function that requires a stable clock is reliant on correct PLL operation. As a direct consequence, it is essential that the PLL function is reliably verified during both the design and debug phases and through production testing. This chapter focuses on test approaches for embedded CP-PLLs used for clock generation in SoCs; however, the methods discussed generally apply to CP-PLLs used for other applications.
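    To make the clock-synthesis role concrete: in a locked integer-N CP-PLL, the feedback divider forces the output to settle at N times the reference frequency. The Python sketch below illustrates only that relationship; the function name and the 25 MHz / divide-by-64 values are assumptions, not figures from the chapter.

def synthesized_frequency(f_ref_hz: float, n_feedback: int) -> float:
    """Locked integer-N synthesis: the VCO settles where f_out / N == f_ref."""
    return f_ref_hz * n_feedback

# Illustrative values: a 25 MHz board-level clock multiplied by N = 64.
f_out = synthesized_frequency(25e6, 64)
print(f"{f_out / 1e9:.2f} GHz on-chip clock")  # 1.60 GHz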

    Space station MSFC-DPD-235/DR no. CM-03 specification, modular space station project, Part 1 CEI

    Contract engineering item specifications for the modular space station are presented. These specifications resulted from the development and allocation of requirements, which are concise statements of performance or of constraints on performance. The specifications contain requirements for functional performance and for the verification of design solutions.

    Verification of the Performance Properties of Embedded Streaming Applications via Constraint-Based Scheduling

    The abilities and, accordingly, the design complexity of embedded systems have expanded enormously in recent years, riding the wave of Moore's law. At the same time, time to market has shrunk, forcing challenges onto designers, who in turn seek to adopt new design methods to increase their productivity. As a response to these new pressures, modern-day systems have moved towards on-chip multiprocessing technologies. New on-chip multiprocessing architectures have emerged in order to exploit the tremendous advances in fabrication technology, and Multiprocessor Systems-on-Chip (MPSoCs) have been adopted as suitable platforms for executing complex embedded applications. To reduce the cost of the hardware platform, applications share resources, which may result in inter-application timing interference due to resource request conflicts.
    The features of a typical SoC impose great challenges on SoC verification in two respects. First, the large scale of hardware integration leads to sophisticated hardware-hardware interactions: since an SoC has multiple components, the interactions between them can give rise to emergent properties that are not present in any single component. Second, the introduction of software into hardware behaviour leads to sophisticated hardware-software interactions: since an SoC has at least one processor, software forms a new dimension of the SoC's behaviour and hence brings a new dimension to verification. This makes verification a challenging task, in particular for communication and multimedia applications, owing to the non-functional constraints of hardware and software modules, such as processor speed, buffer size, energy budget, and scheduling policy, and to the combination of multiple applications.
    This thesis advocates Constraint Programming (CP) as a powerful tool for the verification of performance metrics of MPSoCs. In this work, we formulated the mapping of streaming applications onto a target Multi-Processor System-on-Chip (MPSoC) architecture as a constraint-based scheduling problem, and tested it both separately and in interaction with other application types. The idea is to create a system-level scenario that takes into account the system-level workflow with respect to system resources and performance requirements, namely task deadlines, response time, CPU and memory usage, and buffer size. Specifically, we investigate whether the behaviour of the different interactions among system components executing different tasks can be effectively expressed as a constraint-based scheduling problem over the space of possible inputs to the system, in order to determine whether similar failure cases can be addressed using this model. Solving this problem means finding a better way to inspect the system under verification at a very early design stage and in a much more reasonable time.
    Our proposed approach was tested with various applications, different input streams and different architectures. We built our model around existing architectures on the market running chosen applications, and compared our model's results with the results obtained by running the actual applications on the system. The results show that the methodology is able to identify system failure conditions in a fraction of the time needed by simulation-based verification. It gives the test engineer the ability to explore the design space and deduce the best policy, and it also helps in choosing a proper architecture for the applications to be run.
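    As a hedged illustration of how such a verification question becomes a constraint-based scheduling problem, the Python sketch below asks whether three tasks sharing one processor can all meet their deadlines. It uses Google OR-Tools CP-SAT as one possible CP solver; the thesis does not prescribe this library, and the task names, durations, and deadlines are invented for illustration.

from ortools.sat.python import cp_model

tasks = {  # name: (duration, deadline) in arbitrary time units
    "decode": (4, 10),
    "filter": (3, 12),
    "render": (5, 15),
}

model = cp_model.CpModel()
horizon = sum(d for d, _ in tasks.values())
intervals = []
for name, (dur, deadline) in tasks.items():
    start = model.NewIntVar(0, horizon, f"start_{name}")
    end = model.NewIntVar(0, horizon, f"end_{name}")
    intervals.append(model.NewIntervalVar(start, dur, end, f"iv_{name}"))
    model.Add(end <= deadline)  # performance requirement: task deadline

model.AddNoOverlap(intervals)  # tasks contend for a single shared processor

solver = cp_model.CpSolver()
print(solver.StatusName(solver.Solve(model)))

    An INFEASIBLE verdict is the interesting outcome here: it exposes a failure condition of the configuration without simulating any input stream, which is the time saving the abstract reports.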

    Verification of software-hardware hybrid systems

    Verification of complex systems with multiple processors is difficult because generating test cases for the whole system is quite complex. The system must therefore be verified in parts and sequentially, i.e., verifying the software and the hardware platform separately, and finally the software running on the hardware platform. As verification of the MPSoC (Multiprocessor System-on-Chip) platform is beyond the scope of our research, we assume that the MPSoC hardware platform is already verified. Thus, we focus our research on the verification of the MPSoC software (application software running on the MPSoC platform) and of the system consisting of the MPSoC software running on the MPSoC platform. Researchers have tried to verify the software portion by generating test cases using metaheuristics, constraint programming, and combined metaheuristic-constraint programming approaches. Metaheuristic approaches, however, are not capable of finding good solutions on their own, as they may get stuck in local optima, whereas constraint programming approaches are not able to generate good test cases when the problem is large and complex. Combined metaheuristic-constraint programming approaches overcome these limitations but lose many good test cases when they reduce the domains of the input variables. We want to generate test cases for software while overcoming the limitations mentioned. To this end, we propose to combine metaheuristic and constraint programming approaches: in our approach, a constraint programming solver splits the input variable domains before reducing them further, and the result is fed to the metaheuristic solver that generates the test cases; a sketch of this flow is given below. Finally, at a later stage of our research, we want to verify the whole system consisting of an application (DEMOSAIK or FFMPEG 4) running on an MPSoC architecture simulator, the ReSP platform. We propose to generate test cases from functional test objectives to check the proper functioning of the software running on the hardware platform. We thus frame two research questions: verification of the software by generating test cases that satisfy a given coverage criterion and cause the software to fail, and verification of the functional and structural coverage criteria of the system as a whole. We report the results of preliminary experiments, which help chart a path for the subsequent steps.
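    The Python sketch below is a heavily simplified stand-in for the proposed hybrid flow: an input domain is first split into sub-domains (mimicking the CP solver's domain-splitting step), then a simple hill-climber searches each sub-domain for failure-inducing inputs. The objective function, ranges, and step sizes are all invented for illustration; the actual work targets software running on the ReSP platform.

import random

def failure_score(x: int) -> float:
    """Stand-in objective: higher means closer to a failing behaviour."""
    return -abs(x - 737)  # pretend inputs near 737 trigger the bug

def split_domain(lo: int, hi: int, parts: int):
    """Partition [lo, hi] into equal sub-domains (the CP-splitting stand-in)."""
    step = (hi - lo + 1) // parts
    return [(lo + i * step, lo + (i + 1) * step - 1) for i in range(parts)]

def hill_climb(lo: int, hi: int, iters: int = 200) -> int:
    """Search one sub-domain only; clamping keeps the walk inside it."""
    best = random.randint(lo, hi)
    for _ in range(iters):
        cand = min(hi, max(lo, best + random.randint(-5, 5)))
        if failure_score(cand) > failure_score(best):
            best = cand
    return best

# One local search per sub-domain limits the local-optimum problem.
candidates = [hill_climb(lo, hi) for lo, hi in split_domain(0, 1023, 4)]
print(max(candidates, key=failure_score))  # best test case found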

    7e Nederlandse testdag, Eindhoven, 8 November 2001 : proceedings

    These are the proceedings of the seventh edition of the Nederlandse Testdag (a.k.a. Dutch Testing Day), held on November 8, 2001 in Eindhoven, The Netherlands. The increase in the complexity of software and hardware systems was the predominant concern in software design over the last decades. This increase is still going on today, and mastering this complexity is possible only by investigating, discussing and evaluating methods and techniques for testing such systems. The Nederlandse Testdag serves as a forum in which researchers from industry and academia discuss and present their latest experiences and theories in the area of testing. The initiative for organising the Nederlandse Testdag is, and has always been, the result of the combined efforts of Dutch academia and industry. The Nederlandse Testdag is an annual event which was first held in 1995. This year's edition again consists of one invited presentation, by Jens Grabowski on TTCN-3, and six regular presentations, both from academia and from industry. The presentations capture a broad field of the entire testing spectrum. In the presentation by Martin Gijsen (CMG), test automation for graphical user interface (GUI), dedicated and embedded systems according to the TestFrame methodology is explained. Klaas Mateboer (Collis) presents the test tool Conclusion. René de Vries (University of Twente) reports on specification testing in practice and illustrates this by means of an example. In the presentation by Loe Feijs (Eindhoven University of Technology), testing is related to game theory. Marcel Verhoef (Chess) and Bertil Oving (NLR) present their experiences using real-time simulation, UML and VDM to obtain more reliable spacecraft avionics. Finally, Ben van Buitenen (Baan) provides insight into service pack testing: how to efficiently test customised software components and packages. The organisation of the Nederlandse Testdag is grateful for the sponsorship it received from the Eindhoven University of Technology and the Eindhoven Embedded Systems Institute, and for the financial support from the Dutch Research School IPA. We are very much indebted to CMG's and Telelogic's willingness to sponsor this event financially. Over the years, both companies have profiled themselves as companies investing both time and resources in advancing the current state of testing. Finally, the organisation thanks Marcella de Rooij and Elize Russell for their organisational assistance.

    Universal computer test stand (recommended computer test requirements)

    Techniques are considered for characterizing aerospace computers, with the space shuttle application as the end usage. The system-level digital problems which have been encountered and documented are surveyed. From this large cross-section of tests, an optimum set is recommended that has a high probability of discovering the documented system-level digital problems within laboratory environments. A baseline hardware and software system is defined which is required as a laboratory tool to test aerospace computers. The hardware and software baselines, and the additions necessary to interface the UTE to aerospace computers for test purposes, are outlined.