5 research outputs found

    A New Perspective on Criticality: Efficient State Abstraction and Run-Time Monitoring of Mixed-Criticality Real-Time Control Systems

    Distributed computational model for shared processing on Cyber-Physical System environments

    Cyber-Physical Systems typically consist of a combination of mobile devices, embedded systems and computers that monitor, sense and act on the surrounding physical world. These computing elements are usually wireless and interconnected, sharing data and interacting with one another, with a server side and with cloud computing services. In such a heterogeneous environment, new applications arise to meet ever-increasing needs, and they pose a significant challenge to the processing capabilities of the devices; examples include automated driving systems, manufacturing environments and smart city management. To meet the requirements of such application contexts, the system can create computing processes and distribute the workload over the network and/or a cloud computing server, which raises the question of which network nodes should execute these processes. This paper addresses that problem by introducing a distributed computational model that dynamically shares these tasks among the computing nodes while accounting for the inherent variability of the context in these environments. Our novel approach integrates the local computing resources with externally supplied cloud services to fulfill modern application requirements. A prototype implementation of the proposed model has been built, and an application example has been designed to validate the proposal in a real working environment.
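    As a rough illustration of the kind of placement decision such a model has to make, the sketch below scores candidate nodes by free CPU, link quality and battery, and falls back to a cloud service when no local node can host a process. The node attributes, the scoring weights and the cloud fallback are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch only: context-aware placement of computing processes
    # on CPS nodes, with a cloud fallback. All attributes and weights are
    # hypothetical assumptions for the sake of the example.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        cpu_free: float      # fraction of CPU currently idle (0..1)
        battery: float       # remaining battery (0..1); 1.0 for mains-powered nodes
        link_quality: float  # wireless link quality estimate (0..1)

    @dataclass
    class Task:
        name: str
        cpu_demand: float    # fraction of one CPU the process needs

    CLOUD = Node("cloud", cpu_free=1.0, battery=1.0, link_quality=0.6)

    def score(node: Node) -> float:
        # Higher is better: prefer idle, well-connected, well-powered nodes.
        return 0.5 * node.cpu_free + 0.3 * node.link_quality + 0.2 * node.battery

    def place(task: Task, nodes: list) -> Node:
        # Pick the best local node that can host the task, else offload to the cloud.
        candidates = [n for n in nodes if n.cpu_free >= task.cpu_demand]
        if not candidates:
            return CLOUD
        best = max(candidates, key=score)
        best.cpu_free -= task.cpu_demand   # reserve the capacity just allocated
        return best

    if __name__ == "__main__":
        nodes = [Node("sensor-gw", 0.2, 0.8, 0.9), Node("edge-box", 0.7, 1.0, 0.7)]
        for t in (Task("fusion", 0.3), Task("planning", 0.6), Task("logging", 0.5)):
            print(t.name, "->", place(t, nodes).name)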

    Semantics-preserving cosynthesis of cyber-physical systems

    Adaptive Mid-term and Short-term Scheduling of Mixed-criticality Systems

    A mixed-criticality real-time system is a real-time system whose tasks are classified according to their criticality. Research on mixed-criticality systems started in order to provide an effective and cost-efficient a priori verification process for safety-critical systems: the higher the criticality of a task within a system, the more strongly the system should guarantee its required level of service. However, such a model poses new challenges with respect to scheduling and fault tolerance within real-time systems. Current mixed-criticality scheduling protocols severely degrade lower-criticality tasks in case of resource shortage in order to provide the required level of service for the most critical ones, so the open research challenge in this field is to devise robust scheduling protocols that minimise the impact on less critical tasks. This dissertation introduces two approaches, one short-term and the other medium-term, to appropriately allocate computing resources to tasks within mixed-criticality systems on both uniprocessor and multiprocessor platforms. The short-term strategy is a protocol named Lazy Bailout Protocol (LBP) for scheduling mixed-criticality task sets on single-core architectures. Scheduling decisions are made about the tasks that are active in the ready queue and have to be dispatched to the CPU; LBP minimises the service degradation of lower-criticality tasks by giving them background execution during system idle time. I then refined LBP with variants that further increase the service level provided to lower-criticality tasks, at the cost of either additional offline analysis or increased runtime complexity. The second approach, the Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), decides at runtime which tasks are allocated to the active cores according to the available resources. ATMP optimises the overall system utility by tuning the system workload when computing capacity runs short at runtime. Unlike the majority of current mixed-criticality approaches, ATMP can also smoothly degrade higher-criticality tasks in order to keep lower-criticality ones allocated.
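    The core idea behind background execution for lower-criticality tasks can be illustrated with a toy single-core simulation: lower-criticality jobs receive the CPU only when no higher-criticality job is ready. The sketch below shows that general idea only; it is not the actual Lazy Bailout Protocol, and the tick-based model and job parameters are assumptions made for illustration.

    # Minimal sketch: lower-criticality jobs run only in the idle time left by
    # higher-criticality jobs on a single core. NOT the actual LBP; the job set
    # and the tick-based simulation are illustrative assumptions.
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Job:
        deadline: int                           # earlier deadline = higher priority
        name: str = field(compare=False)
        remaining: int = field(compare=False)   # remaining execution ticks
        high_crit: bool = field(compare=False)

    def simulate(jobs, horizon):
        hi = [j for j in jobs if j.high_crit]
        lo = [j for j in jobs if not j.high_crit]
        heapq.heapify(hi)
        heapq.heapify(lo)
        trace = []
        for _ in range(horizon):
            # High-criticality jobs always win the CPU when one is ready;
            # low-criticality jobs only get the otherwise-idle ticks.
            queue = hi if hi else lo
            if not queue:
                trace.append("idle")
                continue
            job = queue[0]
            job.remaining -= 1
            trace.append(job.name)
            if job.remaining == 0:
                heapq.heappop(queue)
        return trace

    if __name__ == "__main__":
        jobs = [Job(5, "HI-control", 3, True),
                Job(8, "HI-safety", 2, True),
                Job(20, "LO-logging", 4, False)]
        print(simulate(jobs, 12))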

    Multi-layered scheduling of mixed-criticality cyber-physical systems

    In this paper, we deal with the schedule synthesis problem of mixed-criticality cyber-physical systems (MCCPS), which are composed of hard real-time tasks and feedback control tasks. The real-time tasks are associated with deadlines that must always be satisfied, whereas the feedback control tasks are characterized by their Quality of Control (QoC), which needs to be optimized. A straightforward approach to this scheduling problem is to translate the QoC requirements into deadline constraints and then apply traditional real-time scheduling techniques such as Deadline Monotonic (DM). In this work, we show that such scheduling leads to overly conservative results and hence is not efficient in this context. On the other hand, methods from the mixed-criticality systems (MC) literature mainly focus on tasks with different criticality levels and on certification issues. In MCCPS, however, the tasks may not be fully characterized by criticality levels alone; they may further be classified according to their criticality types, e.g., deadline-critical real-time tasks and QoC-critical feedback control tasks. In contrast to traditional deadline-driven scheduling, scheduling MCCPS requires integrating both deadline-driven and QoC-driven techniques, which gives rise to a challenging scheduling problem. In this paper, we present a multi-layered schedule synthesis scheme for MCCPS that jointly schedules deadline-critical and QoC-critical tasks at different scheduling layers. Our scheduling framework (i) integrates a number of QoC-oriented metrics to capture the QoC requirements in the schedule synthesis, (ii) uses arrival curves from real-time calculus, which allow a more general characterization of task triggering patterns than simple task models such as periodic or sporadic ones, and (iii) has pseudo-polynomial complexity. Finally, we show the applicability of our scheduling scheme through a number of experiments.
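    For reference, the deadline-driven baseline that the paper argues against can be sketched as follows: QoC requirements are mapped to (assumed) relative deadlines and the resulting task set is checked with Deadline Monotonic priority assignment plus standard response-time analysis. The task parameters and the QoC-to-deadline mapping below are illustrative assumptions, not data from the paper.

    # Hedged sketch of the conservative DM baseline: control tasks get tight
    # deadlines derived from their QoC needs, then standard fixed-priority
    # response-time analysis checks schedulability. Parameters are made up.
    import math
    from dataclasses import dataclass

    @dataclass
    class RTTask:
        name: str
        wcet: float      # worst-case execution time C
        period: float    # period / minimum inter-arrival time T
        deadline: float  # relative deadline D (<= T), possibly derived from QoC

    def response_time(task, higher_prio):
        # Iterative response-time analysis for fixed-priority scheduling.
        r = task.wcet
        while True:
            interference = sum(math.ceil(r / h.period) * h.wcet for h in higher_prio)
            r_next = task.wcet + interference
            if r_next == r:
                return r
            if r_next > task.deadline:
                return None   # deadline miss: task set not DM-schedulable
            r = r_next

    def dm_schedulable(tasks):
        # DM: shorter relative deadline means higher priority.
        ordered = sorted(tasks, key=lambda t: t.deadline)
        for i, t in enumerate(ordered):
            r = response_time(t, ordered[:i])
            print(f"{t.name}: worst-case response time = {r}")
            if r is None:
                return False
        return True

    if __name__ == "__main__":
        tasks = [RTTask("rt-brake", 1, 5, 5),
                 RTTask("ctrl-loop", 2, 10, 6),    # tight deadline standing in for a QoC requirement
                 RTTask("rt-telemetry", 3, 20, 20)]
        print("DM schedulable:", dm_schedulable(tasks))

    Tightening a control task's deadline so that its worst-case response still yields acceptable QoC is exactly what makes this route conservative, which is the motivation for scheduling QoC-critical tasks in their own layer instead.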