11 research outputs found

    Mixed Critical Earliest Deadline First

    No full text
    Using the advances of modern microelectronics technology, safety-critical systems such as avionics can reduce their costs by integrating multiple tasks on one device. This makes such systems essentially mixed-critical, as it brings together tasks whose safety assurance requirements may differ significantly. In the context of mixed-critical scheduling theory, we study the dual-criticality problem of scheduling a finite set of hard real-time jobs. In this work we propose an algorithm that is proved to dominate OCBP, a state-of-the-art algorithm for this problem that is optimal over fixed-job-priority algorithms. We show through empirical studies that our algorithm can reduce the set of non-schedulable instances by a factor of two or, under certain assumptions, by a factor of four compared to OCBP.
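    For context on the baseline being dominated: OCBP assigns job priorities with an Audsley-style argument, repeatedly giving the lowest remaining priority to a job that still meets its deadline when every job is budgeted at its WCET for that job's own criticality level. The sketch below is a minimal single-processor rendition of that rule as commonly described in the literature; the Job fields and the simulation helper are our own illustrative choices, not code from the paper.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str          # assumed unique
    release: float     # release time
    deadline: float    # absolute deadline
    crit: int          # criticality level, e.g. 1 = LO, 2 = HI
    wcet: dict         # WCET per criticality level, e.g. {1: 2.0, 2: 5.0}

def lowest_priority_finish(job, jobs, level):
    """Finish time of `job` when it has the lowest priority among `jobs` on one
    processor and every job is budgeted at its WCET for criticality `level`."""
    work = {j.name: j.wcet[level] for j in jobs}
    t = 0.0
    while work[job.name] > 0:
        ready = [j for j in jobs if j.release <= t and work[j.name] > 0]
        if not ready:
            t = min(j.release for j in jobs if work[j.name] > 0)  # idle gap
            continue
        higher = [j for j in ready if j is not job]
        current = higher[0] if higher else job   # `job` runs only when alone
        releases = [j.release for j in jobs if j.release > t and work[j.name] > 0]
        horizon = min([t + work[current.name]] + releases)
        work[current.name] -= horizon - t
        t = horizon
    return t

def ocbp_priority_order(jobs):
    """Audsley-style OCBP assignment sketch: repeatedly hand the lowest
    remaining priority to a job that still meets its deadline when all jobs are
    budgeted at that job's own criticality level. Returns jobs from highest to
    lowest priority, or None if no such order exists."""
    remaining, order = list(jobs), []
    while remaining:
        for job in remaining:
            if lowest_priority_finish(job, remaining, job.crit) <= job.deadline:
                order.append(job)
                remaining.remove(job)
                break
        else:
            return None    # no job can take the lowest priority
    return list(reversed(order))
```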

    Multiprocessor Scheduling of Precedence-constrained Mixed-Critical Jobs

    No full text
    Real-time system design targeting multiprocessor platforms leads to two important complications in real-time scheduling. First, to ensure deterministic processing by communicating tasks, the scheduler has to respect precedence constraints. The second complicating factor is mixed criticality, i.e., the integration upon a single platform of various subsystems, some of which are safety-critical (e.g., a car's braking system) while others are not (e.g., a car's digital radio). We therefore motivate and study the multiprocessor scheduling problem for a finite set of precedence-related mixed-criticality jobs. To our knowledge, this problem has only been studied under very specific assumptions. The main contribution of our work is an algorithm that, given a global fixed-priority assignment for the jobs, modifies it in order to improve schedulability in the mixed-criticality setting. Our experiments show an increase in schedulable instances of up to 25% compared to classical solutions for this category of scheduling problems.
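    As a point of reference for what "a global fixed-priority assignment for jobs" means here, the sketch below is a plain global fixed-priority list scheduler for a precedence-constrained job set on m identical processors. It is only an illustrative baseline under assumed inputs (job ids, predecessor sets, priorities, budgets); the paper's priority-improvement algorithm itself is not reproduced.

```python
import heapq

def list_schedule(jobs, preds, priority, wcet, m):
    """Global fixed-priority, non-preemptive list scheduling of a DAG of jobs
    on m identical processors. Returns a dict of finish times.
    jobs: job ids; preds[j]: set of predecessor ids; priority[j]: smaller runs
    first among ready jobs; wcet[j]: execution budget."""
    finish, started = {}, set()
    running = []                                  # heap of (finish_time, job)
    time, idle = 0.0, m
    while len(finish) < len(jobs):
        ready = sorted((j for j in jobs
                        if j not in started and preds[j] <= finish.keys()),
                       key=lambda j: priority[j])
        for j in ready[:idle]:                    # fill the free processors
            started.add(j)
            heapq.heappush(running, (time + wcet[j], j))
        idle -= min(idle, len(ready))
        time, j = heapq.heappop(running)          # advance to next completion
        finish[j] = time
        idle += 1
        while running and running[0][0] == time:  # simultaneous completions
            _, j = heapq.heappop(running)
            finish[j] = time
            idle += 1
    return finish

# Hypothetical 4-job diamond on 2 processors:
jobs = ["src", "a", "b", "sink"]
preds = {"src": set(), "a": {"src"}, "b": {"src"}, "sink": {"a", "b"}}
print(list_schedule(jobs, preds, {"src": 0, "a": 1, "b": 2, "sink": 3},
                    {"src": 1, "a": 2, "b": 3, "sink": 1}, m=2))
# {'src': 1.0, 'a': 3.0, 'b': 4.0, 'sink': 5.0}
```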

    Modeling Mixed-critical Systems in Real-time BIP

    No full text
    The proliferation of multi- and many-cores creates an important design problem: design and verification under mixed-criticality timing and safety constraints, taking into account resource sharing and hardware faults. In our work, we aim to contribute to the solution of these problems by using a formal design language, real-time BIP, to model both hardware and software, functionality and scheduling. In this paper we present initial experiments in modeling mixed-criticality systems in BIP.

    A Timed-Automata Based Middleware for Time-Critical Multicore Applications

    No full text
    The goal of our work is to contribute to the unification of design methodologies for multi-core time-critical systems. Various models of computation have been proposed in the literature for this kind of system, but the lack of coherency between them makes a unified design methodology challenging. In addition, there is a significant gap between the models of computation and the real-time scheduling and analysis techniques. To overcome this difficulty, we represent both the models of computation and the scheduling policies by timed automata. While such automata are traditionally only used for simulation and validation, we use them for programming. We believe that using the same formal language for different design styles and methods is an important step towards closing the gap between them. Our approach is demonstrated using a publicly available toolset, an industrial application use case and a multi-core platform.

    Models for Deterministic Execution of Real-Time Multiprocessor Applications

    No full text
    With the proliferation of multi-cores in embedded real-time systems, many industrial applications are being (re-)targeted to multiprocessor platforms. However, exactly reproducible output data values as a function of the data and timing of the inputs are less trivial to realize on multiprocessors, while they can be imperative for various practical reasons. On parallel platforms it is also harder to evaluate task utilization and ensure schedulability, especially under end-to-end communication timing constraints and aperiodic events. Based upon reactive-system extensions of Kahn process networks, we propose a model of computation that employs synchronous events and event priority relations to ensure deterministic execution. For this model, we propose an online scheduling policy and establish a link to a well-developed scheduling theory. We also implement this model in publicly available prototype tools and evaluate them on state-of-the-art multi-core hardware, with a streaming benchmark and an avionics case study.
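    The determinism claim rests on ordering events by logical time and a fixed priority relation rather than by physical arrival order. Below is a minimal sketch of that idea, with hypothetical timestamped streams and per-source priorities; the paper's model of computation is considerably richer.

```python
def deterministic_merge(streams, priority):
    """Merge timestamped event streams into one deterministic sequence.
    Events with equal timestamps are ordered by a fixed priority relation on
    their source, so the result depends only on event data and timing, never on
    the physical arrival order.
    streams: {source: [(timestamp, value), ...]}, each list sorted by time.
    priority: {source: rank}, lower rank wins timestamp ties."""
    events = [(t, priority[src], src, v)
              for src, evs in streams.items() for t, v in evs]
    events.sort(key=lambda e: (e[0], e[1]))   # stable: per-source order kept
    return [(t, src, v) for t, _, src, v in events]

# Hypothetical sensor streams: coefficient updates win ties against samples.
sensors = {"gyro":  [(0, 1.0), (200, 1.1)],
           "coeff": [(200, 0.5)]}
print(deterministic_merge(sensors, {"coeff": 0, "gyro": 1}))
# [(0, 'gyro', 1.0), (200, 'coeff', 0.5), (200, 'gyro', 1.1)]
```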

    Antecedentes y perspectivas de algunas enfermedades prioritarias que afectan a la ganadería bovina en México

    Get PDF
    This review concisely presents the contributions that INIFAP researchers have made, directly or in collaboration with researchers from other institutions, on different aspects of the diseases that affect cattle farming in Mexico. It describes research on viral diseases such as rabies and bovine viral diarrhea; bacterial diseases such as anaplasmosis, brucellosis, tuberculosis, paratuberculosis, leptospirosis and bovine respiratory disease; and, among parasitic diseases, tick infestation and babesiosis. It identifies potential lines of research that can help mitigate the impact of diseases on production. It considers contributions on the development or adaptation of serological and molecular diagnostic techniques and the diagnosis of resistance to ixodicides. In addition, it reports epidemiological parameters of the diseases and refers to the biologics generated, which include vaccines against rabies, anaplasmosis and babesiosis; a bacterin against leptospirosis; and a bacterin-toxoid against pneumonia. It also discusses evaluations of the use of BCG against tuberculosis and of a new-generation vaccine against brucellosis. The review concludes that INIFAP's animal health research must necessarily adopt the omic sciences as a perspective; only in this way can the understanding of disease mechanisms, the development of new diagnostic techniques and the design of effective and safe vaccines be complemented. The great challenge will therefore be involving the animal health area in the concept of "One Health".

    Ordonnancement des systèmes certifiés avec différents niveaux de criticité

    No full text
    Modern real-time systems tend to be mixed-critical, in the sense that they integrate on the same computational platform applications at different levels of criticality. Integration gives the advantages of reduced cost, weight and power consumption, which can be crucial for modern applications like Unmanned Aerial Vehicles (UAVs). On the other hand, it leads to major complications in system design. Moreover, such systems are subject to certification, and different criticality levels need to be certified at different levels of assurance. Among other aspects, the real-time scheduling of certifiable mixed-critical systems has been recognized as a challenging problem. Traditional techniques require complete isolation between criticality levels or global certification to the highest level of assurance, which leads to resource waste and thus loses the advantage of integration. This has led to a new wave of research in the real-time community, and many solutions have been proposed. Among those, one of the most popular methods used to schedule such systems is Audsley's approach. However, this method has some limitations, which we discuss in this thesis. These limitations are even more pronounced in the case of multiprocessor scheduling, where priority-based scheduling loses some important properties. For this reason, scheduling algorithms for multiprocessor mixed-critical systems are not as numerous in the literature as single-processor ones, and are usually built on restrictive assumptions. This is particularly problematic since industrial real-time systems strive to migrate from single-core to multi-core and many-core platforms. We therefore motivate and study a different approach that can overcome these problems.
    A restriction on the practical usability of many mixed-critical and multiprocessor scheduling algorithms is the assumption that jobs are independent. In reality they often have precedence constraints. In this thesis we present the mixed-critical variant of the problem formulation and extend the system load metrics to the case of precedence-constrained task graphs. We also show that our proposed methodology and scheduling algorithm, MCPI, can be extended to the case of dependent jobs without major modification, showing performance similar to that in the independent-jobs case.
    Another topic treated in this thesis is time-triggered scheduling. This class of schedulers is important because they considerably reduce the uncertainty of job execution intervals, thus simplifying the certification of safety-critical systems. They also simplify any auxiliary timing-based analyses that may be required to validate important extra-functional properties of embedded systems, such as interference on shared buses and caches, peak power dissipation and electromagnetic interference. The trivial method of obtaining a time-triggered schedule is to simulate the worst-case scenario of an event-triggered algorithm. However, when applied directly, this method is not efficient for mixed-critical systems, because instead of one worst-case scenario they have multiple corner-case scenarios. For this reason, it was proposed in the literature to merge all scenarios into just a few tables, one per criticality mode. We call this scheduling approach Single Time Table per Mode (STTM) and propose a contribution in this context: we introduce a method that transforms practically any scheduling algorithm into an STTM one. It works optimally on a single core and shows good experimental results for multi-cores.
    Finally, we studied the problem of the practical realization of mixed-critical systems. Our effort in this direction is a design flow that we propose for multi-core mixed-critical systems. In this design flow, the model of computation we propose is a network of deterministic multi-periodic synchronous processes. Our approach is demonstrated using a publicly available toolset, an industrial application use case and a multi-core platform.
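    To make the STTM idea concrete, the sketch below builds one static dispatch table per criticality mode by simulating preemptive EDF with that mode's execution budgets. This is only the naive "simulate the scenario" construction mentioned above as a starting point, not the thesis's transformation method, and the job set in the usage example is invented.

```python
def edf_table(jobs, level, step=1):
    """Build a static dispatch table by simulating preemptive EDF with every
    job budgeted at its WCET for criticality `level`. Calling this once per
    mode yields one table per mode (the STTM shape).
    jobs: list of dicts with 'name', 'release', 'deadline', 'wcet' (per level)."""
    remaining = {j['name']: j['wcet'][level] for j in jobs}
    table = []                                   # entries (start, end, job)
    t = 0
    while any(w > 0 for w in remaining.values()):
        ready = [j for j in jobs if j['release'] <= t and remaining[j['name']] > 0]
        if not ready:
            t += step                            # idle slot
            continue
        j = min(ready, key=lambda j: j['deadline'])      # earliest deadline first
        remaining[j['name']] -= step
        if table and table[-1][2] == j['name'] and table[-1][1] == t:
            table[-1] = (table[-1][0], t + step, j['name'])  # extend last slot
        else:
            table.append((t, t + step, j['name']))
        t += step
    return table

# Invented two-job example with LO and HI budgets:
jobs = [{'name': 'A', 'release': 0, 'deadline': 4, 'wcet': {'LO': 2, 'HI': 3}},
        {'name': 'B', 'release': 1, 'deadline': 3, 'wcet': {'LO': 1, 'HI': 1}}]
print(edf_table(jobs, 'LO'))   # [(0, 1, 'A'), (1, 2, 'B'), (2, 3, 'A')]
```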

    Priority-based scheduling of mixed-critical jobs

    No full text
    Modern real-time systems tend to be mixed-critical, in the sense that they integrate on the same computational platform applications at different levels of criticality (e.g., safety-critical and mission-critical). Scheduling of such systems is a popular topic in the literature due to the complexity and importance of the problem. In this paper we propose two algorithms for job scheduling in mixed-critical systems: Mixed Criticality Earliest Deadline First (MCEDF) and Mixed Critical Priority Improvement (MCPI). MCEDF is a single-processor algorithm that theoretically dominates the state-of-the-art fixed-priority algorithm Own Criticality Based Priority (OCBP), while having better computational complexity. The dominance is achieved by exploiting a common extension of the fixed-priority online policy to mixed criticality. MCPI is a multiprocessor algorithm that supports dependency constraints. Experiments show good schedulability results. We also formally prove that both MCEDF and MCPI are optimal within a particular class of algorithms.
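    The "common extension of the fixed-priority online policy to mixed criticality" usually refers to the run-time rule in which the system starts in low-criticality mode and switches mode when a job exhausts its low-criticality budget. The sketch below simulates that generic rule for a given priority table on one processor; it illustrates the policy the priority assignments are plugged into, not the MCEDF or MCPI assignment algorithms themselves, and all parameter names are assumptions.

```python
def run_dual_criticality(jobs, actual, order, step=1):
    """One-processor simulation of the usual dual-criticality run-time rule:
    start in LO mode, always run the highest-priority ready job from `order`,
    and switch to HI mode (abandoning LO-criticality jobs) as soon as a HI job
    exceeds its LO budget.
    jobs: {name: {'release', 'crit', 'wcet': {'LO': .., 'HI': ..}}}
    actual: {name: actual execution time}
    order: job names, highest priority first."""
    mode, t = 'LO', 0
    done, spent, trace = set(), {n: 0 for n in jobs}, []
    while len(done) < len(jobs):
        ready = [n for n in order
                 if n not in done and jobs[n]['release'] <= t
                 and (mode == 'LO' or jobs[n]['crit'] == 'HI')]
        if not ready:
            t += step                        # idle until something is released
            continue
        n = ready[0]                         # highest-priority ready job
        spent[n] += step
        trace.append((t, n, mode))
        if spent[n] >= actual[n]:
            done.add(n)                      # job signals completion
        elif mode == 'LO' and spent[n] >= jobs[n]['wcet']['LO']:
            if jobs[n]['crit'] == 'HI':
                mode = 'HI'                  # LO budget overrun: mode switch
                done |= {m for m in jobs if jobs[m]['crit'] == 'LO'}
            else:
                done.add(n)                  # LO jobs are cut off at their budget
        t += step
    return trace, mode
```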

    Extended Abstract: Process Networks for Reactive Streaming with Timed-automata Implementation

    No full text
    Most modern academic tool flows for embedded real-time systems support either the streaming or the reactive-control class of application programming models. These two classes have historically developed two different design methodologies. The former, such as CompSoc [9], are typically dataflow-related and based on the analysis and optimization of timing properties in the system's steady state. The latter, such as Prelude [4], are based on synchronous-language compilation and classical real-time schedulability analysis. However, when implementing modern complex applications (such as avionics, satellite and robotics control systems) on many-core platforms, we encounter the disadvantages of this separation into two classes: focusing on only one of them imposes undesirable methodological restrictions that are not necessarily present in the other. We present our current ideas towards unifying these two classes. To this end, in this abstract we discuss a recently developed [11] model of computation: Fixed-Priority Process Networks (FPPN). FPPNs extend streaming models with support for time-dependent (yet deterministic) behavior, real-time task properties (e.g., sporadic/periodic activations with deadlines) for the processes, and channels that are not necessarily FIFOs. These extensions are made possible by decoupling process blocking from inter-process channel accesses. Our public design flow [14], [10] compiles FPPNs to an executable component-based model with timed-automata components. Timed automata are thus used as a 'meta-model' to define the semantics of FPPN and to provide a basis for simulation and deployment. Moreover, automata are a useful means for adding system middleware components that cannot be expressed in higher-level models of computation, such as run-time management, e.g., QoS control [1], and custom scheduling policies [13]. We demonstrate combining such automata with FPPN models in [14] and [12]. An instance of an FPPN is composed of four main entities: Processes (tasks), Data Channels (communication buffers), Event Generators and Functional Priorities. The process-network example in Fig. 1 represents an imaginary signal-processing application with a 200 ms input sample period, reconfigurable filter coefficients and a feedback loop. The filter coefficients are reconfigured by the sporadic process CoefB. The figure shows several periodic processes, annotated with their periods, and a sporadic process, annotated with its minimal inter-arrival time; this process also has a non-default burstiness value m_e = 2. It also shows the inter-process channels (blackboards). This research received funding from the MoSaTT-CMP European Space Agency project and from the CERTAINTY European FP7 project.
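    As a rough illustration of the four FPPN entities, the sketch below encodes them as plain Python data structures and instantiates something loosely resembling the Fig. 1 example. Everything except the process name CoefB, the 200 ms sample period and the burstiness value 2 is guessed (the figure is not reproduced here), and event generators are folded into the process attributes for brevity.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Process:
    name: str
    period: Optional[int] = None            # ms, for periodic processes
    min_interarrival: Optional[int] = None  # ms, for sporadic processes
    burstiness: int = 1                     # max pending activations, m_e

@dataclass
class DataChannel:
    writer: str
    reader: str
    kind: str = "blackboard"                # channels need not be FIFOs

@dataclass
class FPPN:
    processes: List[Process]
    channels: List[DataChannel]
    functional_priority: List[Tuple[str, str]]  # (higher, lower) pairs

# Hypothetical instance: a periodic sampler and filter at 200 ms, plus the
# sporadic coefficient-update process CoefB with burstiness 2; CoefB's
# inter-arrival time and the channel topology are invented for illustration.
example = FPPN(
    processes=[Process("Sampler", period=200),
               Process("Filter", period=200),
               Process("CoefB", min_interarrival=400, burstiness=2)],
    channels=[DataChannel("Sampler", "Filter"),
              DataChannel("CoefB", "Filter"),
              DataChannel("Filter", "Filter", kind="feedback")],
    functional_priority=[("CoefB", "Filter"), ("Sampler", "Filter")])
```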

    DOL-BIP-Critical: a tool chain for rigorous design and implementation of mixed-criticality multi-core systems

    No full text
    Mixed-criticality systems are promoted in industry due to their potential to reduce size, weight, power and cost. Nonetheless, deploying mixed-criticality applications on commercial multi-core platforms remains a highly challenging problem, for several reasons: (i) industrial mixed-criticality applications are usually complex reactive applications that cannot be specified by traditional, e.g., dataflow-based, models of computation, and appropriate mixed-criticality models of computation built upon Vestal's assumptions are missing; (ii) scheduling such applications on multi-cores with shared resources, such as memory buses, requires that any timing interference among applications of different criticality be bounded, in order to guarantee the temporal isolation necessary for certification and to enable incremental design; (iii) the implementation of isolation-preserving mixed-criticality schedulers is itself subject to certification, hence it needs to be not only efficient but also provably correct. This paper proposes, for the first time, a complete design flow covering all aspects from specification, using a novel mixed-criticality-aware model of computation (DOL-Critical), to correct-by-construction implementation, using the principle 'what you verify is what you generate', which is based on a novel variant of task automata (BIP). We demonstrate the applicability of our design flow with an industrial avionics test case on the state-of-the-art Kalray MPPA-256.