
    A Review of Priority Assignment in Real-Time Systems

    It is over 40 years since the first seminal work on priority assignment for real-time systems using fixed priority scheduling. Since then, huge progress has been made in the field of real-time scheduling, with more complex models and schedulability analysis techniques developed to better represent and analyse real systems. This tutorial-style review provides an in-depth assessment of priority assignment techniques for hard real-time systems scheduled using fixed priorities. It examines the role and importance of priority in fixed priority scheduling in all of its guises, including pre-emptive and non-pre-emptive scheduling, covering single- and multi-processor systems and networks. A categorisation of optimal priority assignment techniques is given, along with the conditions on their applicability. We examine the extension of these techniques via sensitivity analysis to form robust priority assignment policies that can be used even when there is only partial information available about the system. The review covers priority assignment in a wide variety of settings, including mixed-criticality systems, systems with deferred pre-emption, and probabilistic real-time systems with worst-case execution times described by random variables. It concludes with a discussion of open problems in the area of priority assignment.
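
    As a concrete illustration of one classic technique such a review categorises, here is a minimal sketch of Audsley's optimal priority assignment (OPA) algorithm, paired with the standard response-time analysis for pre-emptive fixed priority scheduling on a single processor with constrained deadlines. The task set is hypothetical.

```python
import math

def response_time(task, higher_prio, limit=10_000):
    """Iterate R = C + sum(ceil(R/T_j) * C_j) over higher-priority tasks to a fixed point."""
    r = task["C"]
    while r <= limit:
        nxt = task["C"] + sum(math.ceil(r / h["T"]) * h["C"] for h in higher_prio)
        if nxt == r:
            return r
        r = nxt
    return math.inf

def audsley_opa(tasks):
    """Assign priorities lowest-first: a task takes the lowest free priority if it is
    schedulable there assuming every still-unassigned task has higher priority."""
    unassigned = list(tasks)
    lowest_first = []
    while unassigned:
        for t in unassigned:
            others = [u for u in unassigned if u is not t]
            if response_time(t, others) <= t["D"]:
                lowest_first.append(t)
                unassigned.remove(t)
                break
        else:
            return None  # no feasible assignment under this schedulability test
    return list(reversed(lowest_first))  # highest priority first

# Hypothetical task set: worst-case execution time C, period T, relative deadline D.
taskset = [
    {"name": "a", "C": 1, "T": 5,  "D": 5},
    {"name": "b", "C": 2, "T": 8,  "D": 8},
    {"name": "c", "C": 3, "T": 20, "D": 12},
]
print([t["name"] for t in audsley_opa(taskset)])  # -> ['c', 'a', 'b']
```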

    220606

    To reduce the latency of time-sensitive flows in Ethernet networks, the IEEE TSN Task Group introduced the IEEE 802.1Qbu Standard, which specifies a 1-level preemption scheme for IEEE 802.1 networks. Recently, serious limitations of this scheme with respect to flow responsiveness were exposed, and the so-called multi-level preemption approach was proposed to address these drawbacks. As with most, if not all, preemptive real-time and time-sensitive systems, an appropriate priority-to-flow assignment policy plays a central role in the resulting performance of both 1-level and multi-level preemption schemes, helping to avoid over-provisioning and sub-optimal use of hardware resources. The multi-level preemption scheme also raises new configuration challenges: the right number of preemption levels to enable for swift transmission of flows and the synthesis of the flow-to-preemption-class assignment both remain open problems. To the best of our knowledge, there is no prior work in the literature addressing these important challenges. In this work, we address all three challenges. We demonstrate the applicability of our proposed solution using both synthetic and real-life use cases. Our experimental results show that multi-level preemption schemes improve the schedulability of flows by over 12% compared to a 1-level preemption scheme and, at a higher abstraction level, that the proposed configuration framework improves the schedulability of flows by up to 6% compared to the dominant Deadline Monotonic Priority Ordering. This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020), and by FCT through the European Social Fund (ESF) and the Regional Operational Programme (ROP) Norte 2020, under grant 2020.09636.BD.
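
    For reference, here is a minimal sketch of the Deadline Monotonic Priority Ordering baseline mentioned above: flows with shorter relative deadlines receive higher priorities. The flow names and deadlines are invented for illustration.

```python
# Hypothetical flow set: name and relative deadline in microseconds.
flows = [
    {"name": "control", "deadline_us": 500},
    {"name": "video",   "deadline_us": 1000},
    {"name": "audio",   "deadline_us": 2000},
    {"name": "bulk",    "deadline_us": 10000},
]

# DMPO: sort by deadline ascending; index 0 receives the highest priority.
for prio, f in enumerate(sorted(flows, key=lambda f: f["deadline_us"])):
    print(f"priority {prio}: {f['name']}")
```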

    Anomalies in Scheduling Control Applications and Design Complexity

    Today, many control applications in cyber-physical systems are implemented on shared platforms. Such resource sharing may lead to complex timing behaviors and, in turn, to instability of control applications. This paper highlights a number of anomalies demonstrating complex timing behaviors caused by resource sharing. If not properly considered, such anomalous scenarios lead to a dramatic increase in design complexity. Here, we demonstrate that these anomalies are, in fact, very improbable. Therefore, design methodologies for these systems should mainly be devised and tuned towards the majority of cases, as opposed to the anomalies, but should also be able to handle such anomalous scenarios.
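
    To make the notion of a timing anomaly concrete, here is a toy example (not one of the paper's control-application scenarios) of a classic anomaly under non-preemptive fixed-priority scheduling: shortening one job increases another job's response time. All job parameters are invented.

```python
def simulate(jobs):
    """Non-preemptive fixed-priority dispatch; a lower 'prio' value means higher
    priority, and a job runs to completion once started. Returns response times."""
    t, finished = 0, {}
    while len(finished) < len(jobs):
        ready = [j for j in jobs if j["release"] <= t and j["name"] not in finished]
        if not ready:  # idle until the next release
            t = min(j["release"] for j in jobs if j["name"] not in finished)
            continue
        j = min(ready, key=lambda j: j["prio"])
        t += j["C"]
        finished[j["name"]] = t - j["release"]
    return finished

base = [
    {"name": "L", "prio": 3, "release": 0, "C": 3},
    {"name": "M", "prio": 2, "release": 1, "C": 4},
    {"name": "H", "prio": 1, "release": 2, "C": 2},
]
shorter_L = [dict(j, C=1) if j["name"] == "L" else j for j in base]

# Shortening L lets M start earlier and block H for longer: H's response time
# grows from 3 to 5 even though less total work was submitted.
print(simulate(base)["H"], simulate(shorter_L)["H"])  # -> 3 5
```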

    Data-Age Analysis and Optimisation for Cause-Effect Chains in Automotive Control Systems

    Automotive control systems typically have latency requirements for certain cause-effect chains. When implementing and integrating these systems, these latency requirements must be guaranteed, e.g., by applying a worst-case analysis that takes all indeterminism and the limited predictability of the timing behaviour into account. In this paper, we address the latency analysis of multi-rate distributed cause-effect chains under static-priority preemptive scheduling of offset-synchronised periodic tasks. We particularly focus on data age as one representative of the two most common latency semantics. Our main contribution is a Mixed Integer Linear Program (MILP) based optimisation to select design parameters (priorities, task-to-processor mapping, offsets) that minimise the data age. In our experimental evaluation, we apply our method to two real-world automotive use cases.
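
    As a simplified illustration of the data-age semantics (not the paper's MILP or its analysis), the sketch below measures data age for a two-task chain of offset-synchronised periodic tasks, under the strong simplifying assumption that each task reads its input at activation and writes its output exactly one execution time later, with no interference. All parameters are invented.

```python
def worst_case_data_age(producer, consumer, horizon):
    """Data age at a chain output = the output's write instant minus the activation
    (sampling) instant of the producer job whose data reached that output."""
    writes = []  # (write_time, sample_time) for each producer job
    k = 0
    while producer["O"] + k * producer["T"] < horizon:
        act = producer["O"] + k * producer["T"]
        writes.append((act + producer["C"], act))
        k += 1
    worst = 0
    j = 0
    while consumer["O"] + j * consumer["T"] < horizon:
        read = consumer["O"] + j * consumer["T"]
        visible = [(w, s) for (w, s) in writes if w <= read]
        if visible:
            _, sample = max(visible)  # latest producer output visible at the read
            worst = max(worst, read + consumer["C"] - sample)
        j += 1
    return worst

cam  = {"T": 10, "O": 0, "C": 2}  # producer: period, offset, execution time
ctrl = {"T": 25, "O": 1, "C": 5}  # consumer
print(worst_case_data_age(cam, ctrl, horizon=200))  # -> 16
```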

    Application analyses of ultra-low-energy processor

    Abstract. Low energy consumption has become a critical design feature in modern systems. The Internet of Things, wearables, and other portable devices create increasing demand for low-power design, where device size is dictated by the battery and low energy consumption means longer battery life and smaller physical size. These are crucial features for wearables and especially for implantable medical devices. Several low-power and energy-efficiency techniques can be applied at different abstraction levels of system design. One technique that typically combines software control and hardware features is dynamic voltage and frequency scaling (DVFS), a dynamic power management technique that decreases the processor clock frequency and supply voltage. A reduction in energy consumption is achieved at the cost of reduced performance. One of the open questions with DVFS is how the execution frequencies should be defined. This thesis presents a method for frequency optimization for applications executed on a single-core processor. Execution trace data is used to profile the application. The FreeRTOS operating system is used, although tracing can be implemented with any real-time operating system that executes tasks as separate threads. Based on the profiling and user-defined data, task execution frequencies are determined under the assumption that execution time scales linearly with frequency. A near-threshold ARM Cortex-M3 with integrated power management and a phase-locked loop is used for the measurements. The measurements show that energy savings can be achieved without affecting correct application execution. However, the reduction in energy consumption depends highly on the system used and the application's execution profile. Iterative testing and frequency optimization are required to ensure adequate performance. For energy-efficiency optimization, energy consumption needs to be considered in every phase of the design.
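
    Here is a minimal sketch of the thesis's central assumption, that execution time scales linearly with clock frequency, applied to picking the lowest discrete frequency that still meets each task's deadline. The frequency table, WCETs and deadlines are hypothetical.

```python
F_MAX = 48_000_000  # Hz, profiling frequency (assumed)
AVAILABLE = [12_000_000, 24_000_000, 32_000_000, 48_000_000]  # PLL steps (assumed)

def min_frequency(wcet_at_fmax_s, deadline_s):
    """Smallest available frequency such that wcet * (F_MAX / f) <= deadline."""
    required = F_MAX * wcet_at_fmax_s / deadline_s
    candidates = [f for f in AVAILABLE if f >= required]
    return min(candidates) if candidates else None  # None: infeasible even at F_MAX

# Hypothetical tasks: (name, WCET in seconds measured at F_MAX, deadline in seconds).
for name, wcet, deadline in [("sense", 0.002, 0.010), ("filter", 0.004, 0.006)]:
    print(name, min_frequency(wcet, deadline))  # -> sense 12000000, filter 32000000
```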

    Compensating Adaptive Mixed Criticality Scheduling

    The majority of prior academic research into mixed-criticality systems assumes that if high-criticality tasks continue to execute beyond the execution time limits at which they would normally finish, then further workload due to low-criticality tasks may be dropped in order to ensure that the high-criticality tasks can still meet their deadlines. Industry, however, takes a different view of the importance of low-criticality tasks, with many practical systems unable to tolerate the abandonment of such tasks. In this paper, we address the challenge of supporting genuinely graceful degradation in mixed-criticality systems, thus avoiding the abandonment problem. We explore the Compensating Adaptive Mixed Criticality (C-AMC) scheduling scheme. C-AMC ensures that both high- and low-criticality tasks meet their deadlines in both normal and degraded modes. Under C-AMC, jobs of low-criticality tasks released in degraded mode execute imprecise versions that provide essential functionality and outputs of sufficient quality, while also reducing the overall workload. This compensates, at least in part, for the overload due to the abnormal behavior of high-criticality tasks. C-AMC is based on fixed-priority preemptive scheduling and hence provides a viable migration path along which industry can make an evolutionary transition from current practice.
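
    The skeleton below sketches the degradation mechanism as the abstract describes it: a mode flag flips when a high-criticality task overruns its normal-mode budget, after which low-criticality jobs run imprecise versions rather than being dropped. All names and structures are illustrative, not the paper's actual scheme.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    name: str
    hi_crit: bool
    precise: Callable[[], None]                      # full-quality version
    imprecise: Optional[Callable[[], None]] = None   # reduced-workload fallback

degraded = False  # system criticality mode

def on_budget_overrun(task: Task) -> None:
    """A high-criticality task exceeding its normal-mode budget triggers degradation."""
    global degraded
    if task.hi_crit:
        degraded = True

def dispatch(task: Task) -> None:
    """Jobs released in degraded mode run imprecise versions instead of being dropped."""
    if degraded and not task.hi_crit and task.imprecise is not None:
        task.imprecise()  # essential functionality, sufficient output quality
    else:
        task.precise()

t = Task("logger", hi_crit=False,
         precise=lambda: print("full log"),
         imprecise=lambda: print("summary log"))
on_budget_overrun(Task("ctrl", hi_crit=True, precise=lambda: None))
dispatch(t)  # -> "summary log" after the mode change
```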