
    Utilization-Based Scheduling of Flexible Mixed-Criticality Real-Time Tasks

    Mixed-criticality models are an emerging paradigm for the design of real-time systems because of their significantly improved resource efficiency. However, formal mixed-criticality models have traditionally been characterized by two impractical assumptions: once *any* high-criticality task overruns, *all* low-criticality tasks are suspended and *all other* high-criticality tasks are assumed to exhibit high-criticality behavior at the same time. In this paper, we propose a more realistic mixed-criticality model, called the flexible mixed-criticality (FMC) model, in which these two issues are addressed in a combined manner. In this new model, only the overrunning task itself is assumed to exhibit high-criticality behavior, while the other high-criticality tasks remain in their previous mode. The guaranteed service levels of low-criticality tasks are gracefully degraded as high-criticality tasks overrun. We derive a utilization-based technique to analyze the schedulability of this new mixed-criticality model under EDF-VD scheduling. At runtime, the proposed test condition serves as an important criterion for dynamic service-level tuning, by means of which the maximum available execution budget for low-criticality tasks can be determined directly, with minimal overhead, while guaranteeing mixed-criticality schedulability. Experiments demonstrate the effectiveness of the FMC scheme compared with state-of-the-art techniques.
    Comment: This paper has been submitted to IEEE Transactions on Computers (TC) on Sept-09th-201
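
    As background for the utilization-based analysis mentioned above, here is a minimal sketch of the classical EDF-VD schedulability test (Baruah et al.), which the paper's FMC-specific condition refines; the task-set values and the function name are invented for illustration.

```python
# Sketch of the classical EDF-VD utilization test (Baruah et al.), shown as
# background for the utilization-based analysis the paper extends. The
# FMC-specific condition in the paper refines this; task values are made up.

def edf_vd_schedulable(tasks):
    """tasks: list of dicts with keys 'crit' ('LO'/'HI'),
    'c_lo', 'c_hi' (budgets) and 'period' (implicit deadline)."""
    u_lo_lo = sum(t['c_lo'] / t['period'] for t in tasks if t['crit'] == 'LO')
    u_hi_lo = sum(t['c_lo'] / t['period'] for t in tasks if t['crit'] == 'HI')
    u_hi_hi = sum(t['c_hi'] / t['period'] for t in tasks if t['crit'] == 'HI')

    if u_lo_lo + u_hi_hi <= 1.0:        # plain EDF suffices, no deadline scaling
        return True
    if u_lo_lo >= 1.0:
        return False
    x = u_hi_lo / (1.0 - u_lo_lo)       # virtual-deadline scaling factor for HI tasks
    return x * u_lo_lo + u_hi_hi <= 1.0 # sufficient condition for HI-mode schedulability

# Illustrative task set: one HI task and one LO task.
tasks = [
    {'crit': 'HI', 'c_lo': 2, 'c_hi': 4, 'period': 10},
    {'crit': 'LO', 'c_lo': 3, 'c_hi': 3, 'period': 10},
]
print(edf_vd_schedulable(tasks))  # True: 0.3 + 0.4 <= 1.0
```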

    A toolset for the development of mixed-criticality partitioned systems

    The development of mixed-criticality virtualized multi-core systems poses new challenges that are the subject of active research. There is an additional complexity: a set of partitions must be identified and applications allocated to them. In doing so, a number of issues have to be considered, such as the criticality level of the application, security and dependability requirements, and the granularity of timing requirements. The MultiPARTES [11] toolset relies on Model-Driven Engineering (MDE), which is a suitable approach in this setting, as it helps to bridge the gap between design issues and partitioning concerns. MDE is changing the way systems are developed nowadays, reducing development time. In general, modelling approaches have shown their benefits when applied to embedded systems. These benefits have been achieved by fostering reuse through an intensive use of abstractions and by automating the generation of boilerplate code.
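
    As a rough illustration of the allocation problem the toolset automates, the sketch below groups applications into partitions by criticality level. The data model and the one-partition-per-level rule are simplifying assumptions for illustration, not the MultiPARTES allocation algorithm.

```python
# Minimal sketch of criticality-aware partition allocation: applications that
# share a criticality level may share a partition, so mixed levels stay
# isolated. The greedy one-partition-per-level rule is an illustrative
# assumption, not the MultiPARTES algorithm.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    criticality: str   # e.g. a DO-178C-style level: 'A' (highest) .. 'E'

def allocate(apps):
    """Return a mapping partition-id -> list of application names."""
    partitions = defaultdict(list)
    for app in apps:
        # Isolation rule: never mix criticality levels inside one partition.
        partitions[f"partition_{app.criticality}"].append(app.name)
    return dict(partitions)

apps = [Application("flight_ctrl", "A"),
        Application("logging", "D"),
        Application("telemetry", "D")]
print(allocate(apps))  # {'partition_A': ['flight_ctrl'], 'partition_D': ['logging', 'telemetry']}
```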

    Secure Virtualization of Latency-Constrained Systems

    Virtualization is a mature technology in server and desktop environments, where multiple systems are consolidated onto a single physical hardware platform, increasing the utilization of today's multi-core systems and saving resources such as energy, space, and cost compared to multiple single systems. Embedded environments, by contrast, often contain multiple separate computing systems with real-time and isolation requirements; for example, modern high-comfort cars use up to a hundred embedded computing systems. Consolidating such diverse configurations promises to save resources such as energy and weight. In my work I propose a secure software architecture that allows consolidating multiple embedded software systems with timing constraints. The architecture is built on a microkernel-based operating system that supports a variety of virtualization approaches through a generic interface, covering hardware-assisted virtualization and paravirtualization as well as multiple processor architectures. Studying latency-constrained guest systems under virtualization showed that standard techniques such as high-frequency time-slicing are not viable. Guest systems are generally a combination of best-effort and real-time work and thus form a mixed-criticality system. Further analysis showed that such systems need to export relevant internal scheduling information to the hypervisor to support multiple guests with latency constraints. I propose a mechanism for exporting those relevant events that is secure, flexible, performant, and easy to use. The thesis concludes with an evaluation covering the virtualization approach on the ARM and x86 architectures and two guest operating systems, Linux and FreeRTOS, as well as an evaluation of the export mechanism.
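
    To make the export idea concrete, here is a conceptual sketch, assuming a simple two-state event model, of a guest notifying the hypervisor when it switches between real-time and best-effort work; the event format and the hypervisor's reaction are illustrative assumptions, not the thesis' actual interface.

```python
# Conceptual sketch of the scheduling-info export idea: the guest tells the
# hypervisor when it switches between real-time and best-effort work, so the
# hypervisor can schedule the vCPU accordingly. The event format and reaction
# below are illustrative assumptions, not the thesis' ABI.
from dataclasses import dataclass
from enum import Enum

class GuestWork(Enum):
    REALTIME = 1     # guest is running latency-constrained work
    BEST_EFFORT = 2  # guest is running background work

@dataclass
class SchedEvent:
    vcpu_id: int
    work: GuestWork

class Hypervisor:
    def __init__(self):
        self.vcpu_priority = {}

    def on_sched_event(self, ev: SchedEvent):
        # Boost the vCPU while the guest runs real-time work; otherwise let it
        # share time with best-effort vCPUs. A real design must validate events
        # so a malicious guest cannot monopolize the CPU (the "secure" part).
        self.vcpu_priority[ev.vcpu_id] = (
            'high' if ev.work is GuestWork.REALTIME else 'normal')

hv = Hypervisor()
hv.on_sched_event(SchedEvent(vcpu_id=0, work=GuestWork.REALTIME))
print(hv.vcpu_priority)   # {0: 'high'}
```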

    A Survey of Research into Mixed Criticality Systems

    This survey covers research into mixed-criticality systems that has been published since Vestal's seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic. These include single-processor analysis (covering fixed-priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed-criticality systems and other topics such as hard and soft time constraints, fault-tolerant scheduling, hierarchical scheduling, cyber-physical systems, probabilistic real-time systems, and industrial safety standards.

    Combining Task-level and System-level Scheduling Modes for Mixed Criticality Systems

    Different scheduling algorithms for mixed-criticality systems have been proposed recently. The common denominator of these algorithms is to discard low-criticality tasks whenever high-criticality tasks lack computation resources. This is achieved upon a switch of the scheduling mode from Normal to Critical. We distinguish two main categories of algorithms: system-level mode switch and task-level mode switch. System-level mode-switch algorithms allow low-criticality (LC) tasks to execute only in Normal mode. Task-level mode-switch algorithms switch the mode of an individual high-criticality (HC) task from low (LO) to high (HI) so that it obtains priority over all LC tasks. This paper investigates an online scheduling algorithm for mixed-criticality systems that supports dynamic mode switches at both the task level and the system level. When an HC job overruns its LC budget, only that particular job is switched to HI mode. If the job cannot be accommodated, the system switches to Critical mode. To free resources for the HC jobs, the LC tasks are degraded by stretching their periods until the job running in Critical mode completes its execution; the stretching is carried out until sufficient resources are available. We have mechanized and implemented the proposed algorithm using Uppaal. To study the efficiency of our scheduling algorithm, we examine a case study and compare our results to state-of-the-art algorithms.
    Comment: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
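
    A minimal sketch of the combined task-level and system-level mode-switch policy described above, assuming a placeholder utilization-based admission test and an invented stretch factor (the paper's exact analysis is not reproduced here):

```python
# Sketch of the two-level mode-switch policy: first switch only the
# overrunning task to HI mode; fall back to a system-wide Critical mode with
# LC period stretching if that cannot be accommodated. The admission test,
# stretch factor, and task parameters are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Task:
    crit: str            # 'HC' or 'LC'
    c_lo: float          # LC-mode budget
    c_hi: float          # HI-mode budget (== c_lo for LC tasks)
    period: float
    mode: str = 'LO'
    nominal_period: float = field(init=False)

    def __post_init__(self):
        self.nominal_period = self.period

@dataclass
class System:
    tasks: list
    mode: str = 'NORMAL'
    stretch_factor: float = 1.5   # illustrative degradation factor

    def utilization(self):
        # Each task is charged its current-mode budget.
        return sum((t.c_hi if t.mode == 'HI' else t.c_lo) / t.period
                   for t in self.tasks)

def on_overrun(task, system):
    """An HC job exhausted its LC budget: switch only this task to HI mode."""
    task.mode = 'HI'                    # task-level switch
    if system.utilization() > 1.0:      # placeholder admission test
        system.mode = 'CRITICAL'        # system-level switch as fallback
        for t in system.tasks:
            if t.crit == 'LC':          # degrade LC tasks by stretching periods
                t.period *= system.stretch_factor

def on_critical_job_done(task, system):
    """The job that triggered Critical mode completed: restore service."""
    task.mode = 'LO'
    if system.mode == 'CRITICAL':
        system.mode = 'NORMAL'
        for t in system.tasks:
            t.period = t.nominal_period

hc = Task('HC', c_lo=2, c_hi=6, period=10)
lc = Task('LC', c_lo=5, c_hi=5, period=10)
sys_ = System([hc, lc])
on_overrun(hc, sys_)
print(sys_.mode, lc.period)   # CRITICAL 15.0  (6/10 + 5/10 = 1.1 > 1)
```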

    A Roadmap for Acquisition of Legacy Parts Through an On-demand Solution Aimed at the Energy Sector on the Norwegian Continental Shelf - A Case Implementation

    Equinor has initiated a Field Life Extension (FLX) project to prolong the end-of-life operational capabilities of their installations, including Statfjord A, by innovative methods. One of these innovative methods is to implement an on-demand solution for re-supplying the installation with spare parts manufactured through alternative methods, such as additive manufacturing (AM) and rapid casting. However, due to the age of specific components, the documentation for design, material specification, and manufacturing may be missing, i.e., legacy parts. The main aim of this thesis is to map the path from notification of a potential failure of a legacy part to the installation of a near-identical part. The life extension implies that mechanical equipment, such as valve bodies for the fire deluge systems, must maintain its integrity throughout the extended life cycle. Unfortunately, this component has exceeded its life expectancy twofold; the resulting degradation and risk of potential accidents introduce the need for acquiring new valve bodies. A literature review investigated the challenges and requirements for implementing the on-demand solution for legacy parts. Standards and manufacturing methods have been studied and compared. An Analytical Hierarchy Process (AHP) was used to analyze the input from experts within AM and rapid casting. Finally, a case review processed the valve body through the Reverse Engineering Process (REP) activities. A roadmap is proposed based on regulations governing the manufacturing of mechanical components used on the Norwegian Continental Shelf (NCS). Furthermore, requirements for implementing the on-demand solution for legacy parts are described, including a proposition for an explicit criticality assessment for metal AM. A recommendation for operational part monitoring and identification, linked with a digital warehouse entry for the corresponding part, is made to finalize the proposed roadmap for acquiring legacy parts on the NCS. The AHP reveals that rapid casting outperforms metal AM for valve-body manufacturing. In addition, metal AM and rapid casting are benchmarked against realistic cost and lead-time procurement constraints. The results, which include the AHP output, indicate that the ordering cost of the valve body favours rapid casting, while the lead time for metal AM is lower than that of rapid casting. The total cost per part for metal AM is nearly equal to the cost of the initially requested batch of 26 valve bodies produced by rapid casting.
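
    For readers unfamiliar with the method, the sketch below shows the standard AHP priority computation via the row geometric mean; the pairwise judgments are invented for illustration and are not the expert inputs used in the thesis.

```python
# Standard AHP priority calculation via the row geometric mean, shown as
# background for the method. The pairwise judgments below are invented for
# illustration and are not the expert inputs used in the thesis.
import math

def ahp_priorities(matrix):
    """matrix[i][j] holds how strongly alternative i is preferred over j
    (Saaty's 1-9 scale; matrix[j][i] == 1 / matrix[i][j])."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Two alternatives compared on a single criterion, e.g. suitability for
# valve-body manufacturing: rapid casting judged 3x preferable to metal AM.
pairwise = [[1.0, 3.0],
            [1.0 / 3.0, 1.0]]
print(ahp_priorities(pairwise))   # [0.75, 0.25] -> rapid casting dominates
```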