575 research outputs found

    System-Level Modeling, Analysis and Code Generation: Object Recognition Case Study

    One of the most important challenges in complex embedded systems design is developing methods and tools for modeling and analyzing the behavior of application software running on multi-processor platforms. We propose a tool-supported flow for the systematic and compositional construction of mixed software/hardware system models. These models represent, in an operational way, the set of timed executions of parallel application software statically mapped onto a multi-processor platform. As such, system models are used for performance analysis with simulation-based techniques as well as for code generation on specific platforms. The construction of the system model proceeds in two steps. In the first step, an abstract system model is obtained by composition and specific transformations of (1) the (untimed) model of the application software, (2) the model of the platform and (3) the mapping between them. In the second step, the abstract system model is refined into a concrete system model by including specific timing constraints for the execution of the application software, according to the chosen mapping on the platform. We illustrate the system model construction method and its use for performance analysis and code generation on an object recognition application provided by Hellenic Aerospace Industry. This case study is built upon the HMAX models algorithm [RP99] and targets significant speedup factors. This paper reports results obtained on different system model configurations, used to determine the optimal implementation strategy with respect to the available hardware resources.
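The two-step construction described above can be sketched as a toy example. All names and numbers below are hypothetical illustrations, not the paper's tool flow: an untimed application model is composed with a mapping into an abstract system model, which is then refined with platform-specific execution times.

```python
# Hypothetical two-step construction (toy names and values, not the paper's flow).

app = ["grab", "filter", "match"]                      # untimed task pipeline
mapping = {"grab": "cpu0", "filter": "cpu1", "match": "cpu0"}

# Step 1: abstract system model -- compose tasks with their mapped processors.
abstract = [(task, mapping[task]) for task in app]

# Step 2: concrete system model -- refine with per-processor execution times.
exec_time = {"cpu0": 2, "cpu1": 5}                     # assumed timing constraints
concrete = [(task, proc, exec_time[proc]) for task, proc in abstract]

# A simple performance figure: latency of one sequential pass of the pipeline.
makespan = sum(duration for _, _, duration in concrete)
print(concrete, makespan)
```

Comparing such makespans across alternative mappings is, in miniature, the kind of configuration exploration the abstract describes.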

    Component Assemblies in the Context of Manycore

    We present a component-based software design flow for building parallel applications running on top of manycore platforms. The flow is based on the BIP (Behaviour, Interaction, Priority) component framework and its associated toolbox. It provides full support for modeling of application software, validation of its functional correctness, modeling and performance analysis on system-level models, code generation and deployment on target manycore platforms. The paper details some of the steps of the design flow, which is illustrated through the modeling and deployment of two applications, Cholesky factorization and MJPEG decoding, on MPARM, an ARM-based manycore platform. We emphasize the merits of the design flow, notably fast performance analysis as well as code generation and efficient deployment on manycore platforms.

    Contract Aware Components, 10 years after

    The notion of contract-aware components was introduced roughly ten years ago and is now becoming mainstream in several fields where the usage of software components is seen as critical. The goal of this paper is to survey domains, such as Embedded Systems or Service-Oriented Architecture, where the notion of contract-aware components has been influential. For each of these domains we briefly describe what has been done with this idea and discuss the remaining challenges. Comment: In Proceedings WCSI 2010, arXiv:1010.233

    A Methodology and Supporting Tools for the Development of Component-Based Embedded Systems.

    The paper presents a methodology and supporting tools for developing component-based embedded systems running on resource-limited hardware platforms. The methodology combines two complementary component frameworks in an integrated tool chain: BIP and Think. BIP is a framework for model-based development, including a language for the description of heterogeneous systems as well as associated simulation and verification tools. Think is a software component framework for the generation of small-footprint embedded systems. The tool chain allows the generation, from system models described in BIP, of a set of functionally equivalent Think components. From these and from libraries including OS services for a given hardware platform, a minimal system can be generated. We illustrate the results by modeling and implementing a software MPEG encoder on an iPod.

    A Timed-Automata Based Middleware for Time-Critical Multicore Applications

    The goal of our work is to contribute to the unification of design methodologies for multi-core time-critical systems. Various models of computation have been proposed in the literature for this kind of system, but the lack of coherency between them makes a unified design methodology challenging. In addition, there is a significant gap between the models of computation and real-time scheduling and analysis techniques. To overcome this difficulty, we represent both the models of computation and the scheduling policies as timed automata. While, traditionally, such automata are used only for simulation and validation, we use them for programming. We believe that using the same formal language for different design styles and methods is an important step toward closing the gap between them. Our approach is demonstrated using a publicly available toolset, an industrial application use case and a multi-core platform.
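As a rough illustration of executing timed automata rather than merely simulating them, here is a minimal, hypothetical timed-automaton interpreter (not the paper's toolset, which is not named in the abstract): locations, a single clock, and guarded edges with optional clock resets, advanced in discrete time steps.

```python
# A minimal, hypothetical timed-automaton interpreter (illustration only).

class TimedAutomaton:
    def __init__(self, initial):
        self.location = initial
        self.clock = 0
        self.edges = []            # list of (source, guard, reset, target)

    def add_edge(self, source, guard, reset, target):
        self.edges.append((source, guard, reset, target))

    def tick(self):
        """Let one time unit elapse, then take the first enabled edge, if any."""
        self.clock += 1
        for source, guard, reset, target in self.edges:
            if source == self.location and guard(self.clock):
                self.location = target
                if reset:
                    self.clock = 0
                return

# A periodic task: idle for 3 time units, then execute for 2, and repeat.
ta = TimedAutomaton("idle")
ta.add_edge("idle", lambda c: c >= 3, True, "running")
ta.add_edge("running", lambda c: c >= 2, True, "idle")

trace = []
for _ in range(10):
    ta.tick()
    trace.append(ta.location)
print(trace)
```

The same structure of locations, clocks and guards can encode a scheduling policy's states and preemption decisions, which is the sense in which the automaton becomes the program rather than just a validation artifact.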

    Stochastic Modeling and Performance Analysis of Multimedia SoCs

    Quality of video and audio output is a design-time constraint for portable multimedia devices. Unfortunately, a huge cost (e.g. buffer size) is incurred to deterministically guarantee good playout quality; the worst-case workload and timing behavior can be significantly larger than the average case due to the high variability in a multimedia system. In future mobile devices the playout buffer size is expected to increase, so buffer dimensioning will remain an important problem in system design. We propose a probabilistic analytical framework that enables low-cost system design and provides bounds for maintaining acceptable multimedia quality. We compare our approach with a framework combining simulation and statistical model checking, built to simulate large embedded systems in detail. Our results show a significant reduction in output buffer size compared to deterministic frameworks.
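The core trade-off, sizing a buffer for the worst case versus for a probabilistic guarantee, can be illustrated on a synthetic workload. The distribution and the 90% target below are assumptions for illustration, not the authors' framework:

```python
import random

# Synthetic per-frame backlog samples: mostly small, occasionally bursty --
# the high variability the abstract mentions. Values are assumed, not measured.
random.seed(0)
backlog = [random.choice([2, 3, 4]) if random.random() < 0.95
           else random.randint(20, 40)
           for _ in range(10_000)]

worst_case = max(backlog)                             # deterministic guarantee
ordered = sorted(backlog)
quantile_90 = ordered[int(0.90 * len(ordered)) - 1]   # 90% probabilistic guarantee

print("worst-case buffer:", worst_case, "| 90%-quantile buffer:", quantile_90)
```

Because the rare bursts dominate the maximum but not the 90th percentile, the probabilistic bound is far smaller, which is the effect the deterministic-versus-probabilistic comparison in the abstract exploits.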

    Scheduling of Systems with Different Criticality Levels

    Real-time safety-critical systems must complete their tasks within a given time limit. Failure to perform their operations successfully, or missing a deadline, can have severe consequences such as destruction of property and/or loss of life. Examples of such systems include automotive systems, drones and avionics, among others. Safety guarantees must be provided before these systems can be deemed usable; this is usually done through certification performed by a certification authority. Safety evaluation and certification are complicated and costly, even for smaller systems.

    One answer to these difficulties is the isolation of the critical functionality. Executing tasks of different criticalities on separate platforms prevents non-critical tasks from interfering with critical ones, provides a higher guarantee of safety and simplifies the certification process by limiting it to the critical functions only. But this separation, in turn, introduces undesirable effects: inefficient resource utilization and an increase in cost, weight, size and energy consumption, which can put a system at a competitive disadvantage.

    To overcome the drawbacks of isolation, Mixed-Criticality (MC) systems can be used. These systems allow functionalities with different criticalities to execute on the same platform. In 2007, Vestal proposed a model to represent MC-systems in which tasks have multiple Worst-Case Execution Times (WCETs), one for each criticality level. In addition, correctness conditions for scheduling policies were formally defined, allowing lower-criticality jobs to miss deadlines or even be dropped in cases of failure or emergency. The introduction of multiple WCETs and different correctness conditions increased the difficulty of the scheduling problem for MC-systems: conventional scheduling policies and schedulability tests proved inadequate, and the need for new algorithms arose. Since then, a lot of work has been done in this field.

    In this thesis, we contribute to the study of schedulability in MC-systems. The workload of a system is represented as a set of jobs that can describe the execution over the hyper-period of tasks or over a given duration. This model allows us to study the viability of simulation-based correctness tests in MC-systems. We show that simulation tests can still be used in mixed-criticality systems, but that the schedulability of the worst-case scenario is no longer sufficient to guarantee the schedulability of the system, even in the fixed-priority scheduling case. We show that scheduling policies are not predictable in general, and we define the concept of weak predictability for MC-systems. We prove that a specific class of fixed-priority policies is weakly predictable and propose two simulation-based correctness tests that work for weakly predictable policies. We also demonstrate that, contrary to what was believed, testing for correctness cannot be done with only a linear number of preemptions.

    The majority of related work focuses on systems with two criticality levels, due to the difficulty of the problem. For automotive and airborne systems, however, industrial standards define four or five criticality levels, which motivated us to propose a scheduling algorithm that handles mixed-criticality systems with, in theory, any number of criticality levels. We show experimentally that it achieves higher success rates than the state of the art. We illustrate how our scheduling algorithm, or any algorithm that generates a single time-triggered table for each criticality mode, can be used as a recovery strategy to ensure the safety of the system in case of certain failures. Finally, we propose a high-level concurrency language and a model for designing an MC-system with coarse-grained multi-core interference.
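Vestal's model, as summarized above, can be sketched in a few lines. The job parameters and the simple per-mode simulation below are hypothetical illustrations, not the thesis's algorithms: each job carries one WCET per criticality level, and a simulation checks deadlines for a given criticality mode, dropping jobs below that mode.

```python
# Hypothetical sketch of Vestal's mixed-criticality job model (illustration only).
from dataclasses import dataclass

@dataclass
class Job:
    release: int
    deadline: int
    crit: int           # criticality level of the job (0 = lowest)
    wcet: tuple         # one WCET per criticality level, non-decreasing

def simulate_fp(jobs, mode):
    """Simulate job-level fixed-priority scheduling (earliest deadline = highest
    priority) in criticality `mode`: jobs below `mode` are dropped, admitted
    jobs run for their level-`mode` WCET. True iff no admitted job misses."""
    admitted = [j for j in jobs if j.crit >= mode]
    remaining = {id(j): j.wcet[mode] for j in admitted}
    t = 0
    while any(remaining.values()):
        ready = [j for j in admitted if j.release <= t and remaining[id(j)] > 0]
        if not ready:
            t += 1
            continue
        job = min(ready, key=lambda j: j.deadline)   # highest-priority ready job
        remaining[id(job)] -= 1
        t += 1
        if remaining[id(job)] == 0 and t > job.deadline:
            return False
    return True

jobs = [Job(release=0, deadline=4, crit=1, wcet=(2, 3)),   # high criticality
        Job(release=0, deadline=6, crit=0, wcet=(2, 2))]   # low criticality
print(simulate_fp(jobs, 0), simulate_fp(jobs, 1))
```

A correctness test must pass in every mode the system may enter, which hints at why a single worst-case scenario is not sufficient in the mixed-criticality setting.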
    • 

    corecore