
    Building Faithful High-level Models and Performance Evaluation of Manycore Embedded Systems

    Performance and functional correctness are key to the successful design of modern embedded systems. Both aspects must be considered early in the design process to enable well-founded decision making towards the final implementation. Nonetheless, building abstract system-level models that faithfully capture performance information alongside functional behavior is a challenging task. In contrast to functional aspects, performance details are rarely available during early design phases, and no clear method is known for characterizing them. Moreover, once such system-level models are built, they are inherently complex, as they usually mix software models, hardware architecture constraints, and environment abstractions. Their analysis using traditional performance evaluation methods is reaching its limits, and the need for more scalable and accurate techniques is becoming urgent. In this paper, we introduce a systematic method for building stochastic abstract performance models using statistical inference and model calibration, and we propose statistical model checking as a performance evaluation technique for the obtained models. We applied our method to a real-life case study and were able to verify different timing properties.
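
    The statistical model checking step at the heart of this method can be illustrated with a minimal Monte Carlo sketch (not the authors' tool): simulate a calibrated stochastic model repeatedly and estimate the probability that a timing property holds, with the sample count taken from the Chernoff-Hoeffding bound. The pipeline model, its gamma-distributed stage times, and the deadline below are hypothetical.

```python
import math
import random

# Hypothetical calibrated model: a 4-stage pipeline whose per-stage execution
# time follows a gamma distribution fitted from low-level measurements.
def simulate_latency(rng, stages=4, shape=2.0, scale=1.5):
    """Return the end-to-end latency (ms) of one simulated run."""
    return sum(rng.gammavariate(shape, scale) for _ in range(stages))

def smc_estimate(property_holds, epsilon=0.01, delta=0.01, seed=42):
    """Estimate P(property) within +/- epsilon at confidence 1 - delta,
    using the Chernoff-Hoeffding sample bound N >= ln(2/delta) / (2*eps^2)."""
    n = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
    rng = random.Random(seed)
    successes = sum(property_holds(rng) for _ in range(n))
    return successes / n, n

if __name__ == "__main__":
    deadline_ms = 15.0  # hypothetical timing requirement
    prob, samples = smc_estimate(lambda rng: simulate_latency(rng) <= deadline_ms)
    print(f"P(latency <= {deadline_ms} ms) ~= {prob:.3f} over {samples} runs")
```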

    Performance Evaluation of Complex Systems Using the SBIP Framework

    In this paper we survey the main experiments performed using the SBIP framework. The latter consists of a stochastic component-based modeling formalism and a probabilistic model checking engine for verification. The modeling formalism is built as an extension of BIP and makes it possible to build complex systems in a compositional way, while the verification engine implements a set of statistical algorithms for the verification of qualitative and quantitative properties. The SBIP framework has been used to model and verify a large set of real-life systems, including various network protocols and multimedia applications.

    Stochastic Modeling and Performance Analysis of Multimedia SoCs

    Quality of video and audio output is a design-time constraint for portable multimedia devices. Unfortunately, a huge cost (e.g. buffer size) is incurred to deterministically guarantee good playout quality; the worst-case workload and timing behavior can be significantly larger than the average case due to the high variability in a multimedia system. In future mobile devices, the playout buffer size is expected to increase, so buffer dimensioning will remain an important problem in system design. We propose a probabilistic analytical framework that enables low-cost system design and provides bounds for acceptable playout quality. We compare our approach with a framework comprising both simulation and statistical model checking, built to simulate large embedded systems in detail. Our results show a significant reduction in output buffer size compared to deterministic frameworks.
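
    The trade-off analyzed here can be sketched with a small Monte Carlo experiment (a simplification, not the paper's analytical framework): decode times vary, the display consumes one frame per period, and the probability of buffer underflow is estimated for several startup buffer sizes. All rates and probabilities below are hypothetical.

```python
import random

def playout_underflows(rng, n_frames=1000, period_ms=40.0, prefill=4,
                       mean_decode_ms=38.0, burst_prob=0.05, burst_extra_ms=60.0):
    """Simulate one playout session; return True if the buffer ever underflows.

    Frames are decoded with variable latency (occasional bursts model scenes
    with high workload variability); the display consumes one frame every
    period_ms once playout has started."""
    buffered = prefill              # frames ready for display at startup
    decode_clock = 0.0              # time at which the next frame is decoded
    display_clock = period_ms       # time of the next display deadline
    for _ in range(n_frames):
        decode_time = rng.expovariate(1.0 / mean_decode_ms)
        if rng.random() < burst_prob:
            decode_time += burst_extra_ms
        decode_clock += decode_time
        # Serve every display deadline that passed while this frame decoded.
        while display_clock <= decode_clock:
            if buffered == 0:
                return True         # underflow: nothing to display
            buffered -= 1
            display_clock += period_ms
        buffered += 1               # the newly decoded frame
    return False

if __name__ == "__main__":
    rng = random.Random(7)
    runs = 500
    for prefill in (2, 4, 8, 16):
        misses = sum(playout_underflows(rng, prefill=prefill) for _ in range(runs))
        print(f"startup buffer = {prefill:2d} frames -> "
              f"P(underflow) ~= {misses / runs:.3f}")
```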

    SBIP 2.0: Statistical Model Checking Stochastic Real-time Systems

    This paper presents a major new release of SBIP, an extensible statistical model checker for Metric Temporal Logic (MTL) and Linear-time Temporal Logic (LTL) properties on Generalized Semi-Markov Process (GSMP), Continuous-Time Markov Chain (CTMC), and Discrete-Time Markov Chain (DTMC) models. The newly added support for MTL, GSMPs, CTMCs, and rare events makes it possible to capture both real-time and stochastic aspects, enabling faithful specification, modeling, and analysis of real-life systems. SBIP is redesigned as an IDE providing project management, model editing, compilation, simulation, and statistical analysis.
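
    To give a concrete flavor of what checking a time-bounded property on a CTMC by simulation involves (a minimal sketch, not SBIP's modeling language or API): draw exponential holding times, follow the embedded jump chain, and test whether a target state is reached before the deadline. The three-state model and its rates are hypothetical.

```python
import random

# Hypothetical 3-state CTMC (0 = idle, 1 = busy, 2 = failed); each entry maps
# a state to its outgoing transition rates.
RATES = {
    0: {1: 2.0},
    1: {0: 1.5, 2: 0.1},
    2: {},                                    # absorbing
}

def reaches_within(rng, target=2, bound=50.0, start=0):
    """Simulate one trajectory and check the time-bounded reachability
    property 'the target state is reached within `bound` time units'."""
    state, clock = start, 0.0
    while True:
        if state == target:
            return True
        outgoing = RATES[state]
        total_rate = sum(outgoing.values())
        if total_rate == 0.0:
            return False                      # absorbed in a non-target state
        clock += rng.expovariate(total_rate)  # exponential holding time
        if clock >= bound:
            return False                      # deadline expired first
        # Embedded jump chain: pick the successor proportionally to its rate.
        r = rng.random() * total_rate
        acc = 0.0
        for nxt, rate in outgoing.items():
            acc += rate
            if r <= acc:
                state = nxt
                break

if __name__ == "__main__":
    rng = random.Random(1)
    n = 20000
    hits = sum(reaches_within(rng) for _ in range(n))
    print(f"P(reach 'failed' within 50 time units) ~= {hits / n:.3f}")
```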

    The performance of stochastic designs in wellbore drilling operations

    Wellbore drilling operations frequently entail the combination of a wide range of variables. This is underpinned by the numerous factors that must be considered in order to ensure safety and productivity. The heterogeneity and sometimes unpredictable behaviour of underground systems increases the sensitivity of drilling activities. Quite often, the operating parameters are set to ensure effective and efficient working processes. However, failings in the management of drilling and operating conditions sometimes result in catastrophes such as well collapse or fluid loss. This study investigates the hypothesis that optimising drilling parameters, for instance mud pressure, is crucial if the margin of safe operating conditions is to be properly defined. This was investigated in two main stages: first a deterministic analysis, where the operating conditions are predicted by conventional modelling procedures, and then a probabilistic analysis via stochastic simulations, where a window of optimised operating conditions can be obtained. The outcome of additional stochastic analyses can be used to improve results derived from deterministic models. The incorporation of stochastic techniques in the evaluation of wellbore instability indicates that the margins of the safe mud weight window are adjustable and can be extended considerably beyond the limits of deterministic predictions. The safe mud window is influenced, and hence can also be amended, based on the degree of uncertainty and the permissible level of confidence. The refinement of results from deterministic analyses by additional stochastic simulations is vital if a more accurate and reliable representation of safe in situ and operating conditions is to be obtained during wellbore operations.
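
    The role the confidence level plays in such a stochastic design can be sketched with a simple Monte Carlo experiment. The collapse and fracture pressure formulas below are deliberately simplified placeholders, not the geomechanical model of the study, and all distributions and coefficients are hypothetical; the point is only how the safe mud-pressure window is read off the simulated scenarios at a chosen confidence level.

```python
import random

def collapse_pressure(pore, ucs, sigma_h):
    """Placeholder lower bound on mud pressure (borehole collapse);
    not the wellbore-stability model of the study."""
    return pore + 0.3 * (sigma_h - ucs / 2.0)

def fracture_pressure(sigma_h, tensile):
    """Placeholder upper bound on mud pressure (formation fracturing)."""
    return sigma_h + tensile

def stochastic_window(confidence, n=20000, seed=3):
    """Monte Carlo over uncertain inputs; return the mud-pressure window that
    is safe against both failure modes in at least `confidence` of scenarios."""
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(n):
        pore = rng.gauss(30.0, 2.0)     # pore pressure (MPa), hypothetical
        ucs = rng.gauss(60.0, 8.0)      # uniaxial compressive strength
        sigma_h = rng.gauss(45.0, 3.0)  # horizontal stress
        tensile = rng.gauss(5.0, 1.0)   # tensile strength
        lows.append(collapse_pressure(pore, ucs, sigma_h))
        highs.append(fracture_pressure(sigma_h, tensile))
    lows.sort()
    highs.sort()
    low = lows[min(int(confidence * n), n - 1)]             # c-quantile
    high = highs[min(int((1.0 - confidence) * n), n - 1)]   # (1-c)-quantile
    return low, high

if __name__ == "__main__":
    for confidence in (0.99, 0.95, 0.80):
        low, high = stochastic_window(confidence)
        print(f"{confidence:.0%} confidence -> safe window [{low:.1f}, {high:.1f}] MPa")
```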

    Performance Modeling and Evaluation for Embedded Systems Design: A Rigorous System-Level Approach

    In the present work, we tackle the problem of modeling and evaluating performance in the context of embedded systems design. Embedded systems have become essential to modern societies and have undergone an important evolution. Due to the growing demand for functionality and programmability, software solutions have gained in importance, although they are known to be less efficient than dedicated hardware. Consequently, considering performance has become a must, especially with the generalization of resource-constrained devices. We present a rigorous and integrated approach for system-level performance modeling and analysis. The proposed method enables faithful high-level modeling, encompassing both functional and performance aspects, and allows for rapid and accurate quantitative performance evaluation. The approach is model-based and relies on the SBIP formalism for stochastic component-based modeling and formal verification. We use statistical model checking to analyze performance requirements and introduce a stochastic abstraction technique to enhance its scalability. Faithful high-level models are built by calibrating functional models with low-level performance information, using automatic code generation and statistical inference. We provide a tool flow that automates most of the steps of the proposed approach and illustrate its use on a real-life case study for image processing: the design and mapping of a parallel version of the HMAX algorithm for object recognition on the STHORM many-core platform. We explored timing aspects, and the obtained results show not only the usability of the approach but also its pertinence for making well-founded decisions in the context of system-level design.
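
    The calibration step described above (characterizing low-level timing by statistical inference and feeding it back into the functional model) can be illustrated as follows; the measurement data, the two candidate distributions, and the likelihood-based selection are hypothetical simplifications, not the inference technique of the thesis.

```python
import math
import random
import statistics

def fit_normal(samples):
    """Moment estimate of a normal timing model and its log-likelihood."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples) or 1e-9
    ll = sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
             - (x - mu) ** 2 / (2 * sigma ** 2) for x in samples)
    return ("normal", (mu, sigma)), ll

def fit_exponential(samples):
    """Maximum-likelihood estimate of an exponential model and its log-likelihood."""
    lam = 1.0 / statistics.fmean(samples)
    ll = sum(math.log(lam) - lam * x for x in samples)
    return ("exponential", (lam,)), ll

def calibrate(samples):
    """Pick the candidate distribution with the higher log-likelihood and
    return a sampler usable as the timing annotation of a functional step."""
    (model, params), _ = max((fit_normal(samples), fit_exponential(samples)),
                             key=lambda fit: fit[1])
    rng = random.Random(0)
    if model == "normal":
        mu, sigma = params
        return model, lambda: max(0.0, rng.gauss(mu, sigma))
    (lam,) = params
    return model, lambda: rng.expovariate(lam)

if __name__ == "__main__":
    # Hypothetical execution-time measurements (ms) of one computation step,
    # as would be collected from code generated for the target platform.
    meas_rng = random.Random(5)
    measured = [meas_rng.gauss(12.0, 1.2) for _ in range(500)]
    model, sample_time = calibrate(measured)
    print(f"calibrated timing model: {model}")
    print("example calibrated delays (ms):",
          [round(sample_time(), 2) for _ in range(5)])
```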

    Synthesizing Distributed Scheduling Implementation for Probabilistic Component-based Systems

    Developing concurrent systems typically involves a lengthy debugging period, due to the huge number of possible intricate behaviors. Using a high-level description formalism at an intermediate level between the specification and the code can vastly reduce the cost of this process and the number of remaining bugs in the deployed code, since verification is much more affordable at this level. An automatic translation of component-based systems into running code that preserves the temporal properties of the design helps synthesize reliable code. We provide here a transformation from a high-level description formalism for component-based systems with probabilistic choices into running code. This transformation involves synchronization using shared variables. The synchronization is component-based rather than interaction-based, because of the need to guarantee a stable view for a component that performs a probabilistic choice. We provide the synchronization algorithm and report on the implementation.
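
    A minimal sketch of the kind of shared-variable, component-based synchronization described here (an illustration, not the paper's algorithm): before resolving its probabilistic choice, a component acquires the locks of all shared variables it observes, so its view of the enabled interactions stays stable until the effect of the choice is applied. The component structure, enabledness condition, and weights below are hypothetical.

```python
import random
import threading

class SharedVar:
    """A shared variable guarded by its own lock."""
    def __init__(self, value=0):
        self.value = value
        self.lock = threading.Lock()

def probabilistic_step(shared_vars, weights, rng):
    """Lock every shared variable the component observes (in a global order
    to avoid deadlock), resolve the probabilistic choice on a stable view,
    apply its effect, then release the locks."""
    ordered = sorted(shared_vars, key=id)     # global order prevents deadlock
    for var in ordered:
        var.lock.acquire()
    try:
        # Stable view: no other component can modify these variables now.
        enabled = [i for i, var in enumerate(shared_vars) if var.value >= 0]
        choice = rng.choices(enabled, weights=[weights[i] for i in enabled])[0]
        shared_vars[choice].value += 1        # effect of the chosen interaction
        return choice
    finally:
        for var in reversed(ordered):
            var.lock.release()

if __name__ == "__main__":
    x, y = SharedVar(), SharedVar()
    counts, counts_lock = [0, 0], threading.Lock()

    def component(seed, steps=1000):
        rng = random.Random(seed)
        local = [0, 0]
        for _ in range(steps):
            local[probabilistic_step([x, y], [0.7, 0.3], rng)] += 1
        with counts_lock:
            counts[0] += local[0]
            counts[1] += local[1]

    threads = [threading.Thread(target=component, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total = sum(counts)
    print("observed interaction frequencies:",
          [round(c / total, 3) for c in counts])
```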