
    Precise energy efficient scheduling of mixed-criticality tasks & sustainable mixed-criticality scheduling

    In this thesis, the imprecise mixed-criticality (IMC) model is extended to precise scheduling of tasks and integrated with the dynamic voltage and frequency scaling (DVFS) technique to enable energy minimization. The challenge in precise scheduling of MC systems is to simultaneously guarantee the timing correctness of all tasks, hi and lo, under both pessimistic and optimistic (less pessimistic) assumptions. To the best of our knowledge, this is the first work to address the integration of DVFS energy-conserving techniques with precise scheduling of the lo-tasks of the MC model. The thesis presents utilization-based schedulability tests and sufficient conditions for such systems under the Earliest Deadline First with Virtual Deadlines (EDF-VD) scheduling policy. Quantitative results, in the form of a speedup bound and an approximation ratio, are also proved for the unified model. Extensive experimental studies verify the theoretical results as well as the effectiveness of the proposed algorithm.

    In safety-critical systems, it is essential to perform schedulability analysis prior to run-time. Parameters characterizing the run-time workload are generated by pessimistic techniques; hence, adopting conservative estimates may result in systems performing much better than anticipated at run-time. This thesis therefore also addresses the following questions associated with the better-than-expected performance of the task system: (i) How does a parameter change affect the schedulability of a task set (system)? (ii) If a mixed-criticality system design is deemed schedulable and specific parts of the system are reassigned to low criticality, is the system still safe to run? (iii) If a system is presumed non-schedulable, is it invariably beneficial to reduce the criticality of some task? To answer these questions, we not only study the property of sustainability with regard to criticality levels, but also revisit the sustainability of several uniprocessor and multiprocessor scheduling policies with respect to other parameters. --Abstract, page iii
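
    To make the baseline concrete, the following is a minimal sketch of the classic EDF-VD utilization-based test for dual-criticality implicit-deadline tasks, on which EDF-VD analyses such as this thesis's build. The task-tuple encoding and function name are illustrative, and this is the standard test rather than the thesis's extended conditions for precise lo-task scheduling.

```python
def edf_vd_test(tasks):
    """tasks: list of (criticality, u_lo, u_hi) tuples, criticality in {'LO', 'HI'}.

    Returns (schedulable, x), where x is the virtual-deadline scaling
    factor (None when plain EDF already suffices).
    """
    u_lo_lo = sum(u_lo for crit, u_lo, _ in tasks if crit == 'LO')
    u_hi_lo = sum(u_lo for crit, u_lo, _ in tasks if crit == 'HI')
    u_hi_hi = sum(u_hi for crit, _, u_hi in tasks if crit == 'HI')

    # If the total worst-case utilization already fits, plain EDF suffices.
    if u_lo_lo + u_hi_hi <= 1.0:
        return True, None

    if u_lo_lo >= 1.0:
        return False, None
    # Shrink hi-task deadlines by factor x in lo mode (lo-mode condition) ...
    x = u_hi_lo / (1.0 - u_lo_lo)
    # ... and check that the hi mode remains feasible (hi-mode condition).
    if x * u_lo_lo + u_hi_hi <= 1.0:
        return True, x
    return False, None

# One lo-task and one hi-task; plain EDF fails (0.3 + 0.75 > 1) but EDF-VD works.
print(edf_vd_test([('LO', 0.3, 0.3), ('HI', 0.4, 0.75)]))  # (True, ~0.571)
```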

    Precise Scheduling of DAG Tasks with Dynamic Power Management


    Using Imprecise Computing for Improved Real-Time Scheduling

    Conventional hard real-time scheduling is often overly pessimistic due to worst-case execution time estimation. The pessimism can be mitigated by exploiting imprecise computing in applications where occasional small errors are acceptable. This leverage has been investigated in a few previous works, which are restricted to preemptive cases. We study how to make use of imprecise computing in uniprocessor non-preemptive real-time scheduling, which is known to be more difficult than its preemptive counterpart. Several heuristic algorithms are developed for periodic tasks with independent or cumulative errors due to imprecision. Simulation results show that the proposed techniques can significantly improve task schedulability and achieve the desired accuracy–schedulability tradeoff. The benefit of considering imprecise computing is further confirmed by a prototype implementation in Linux.

    The mixed-criticality system is a popular model for reducing pessimism in real-time scheduling while providing guarantees for critical tasks in the presence of unexpected overruns. However, it is controversial due to some drawbacks. First, all low-criticality tasks are dropped in high-criticality mode, although they are still needed. Second, a single high-criticality job overrun forces the pessimistic high-criticality mode on all high-criticality tasks, and resource utilization consequently becomes inefficient. We attempt to tackle these two limitations simultaneously in multiprocessor scheduling, whereas several recent works address them mostly in uniprocessor scheduling. We study how to achieve graceful degradation of low-criticality tasks by continuing their execution with imprecise computing, or even precise computing if there is sufficient utilization slack. Schedulability conditions under this Variable-Precision Mixed-Criticality (VPMC) system model are investigated for partitioned scheduling and global fpEDF-VD scheduling, and a deferred switching protocol is introduced that significantly reduces the chance of switching to high-criticality mode. Moreover, we develop a precision optimization approach that maximizes precise computing of low-criticality tasks through a 0-1 knapsack formulation. Experiments are performed through both software simulations and Linux prototyping with consideration of overhead. The schedulability of the proposed methods is studied so that the quality of service for low-criticality tasks is improved while all deadline constraints are guaranteed to be satisfied. The proposed precision optimization can largely reduce computing errors compared to constantly executing low-criticality tasks with imprecise computing in high-criticality mode.
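
    As an illustration of the precision-optimization step described above, here is a minimal 0-1 knapsack sketch: choose the subset of low-criticality tasks to keep precise in high-criticality mode so that the total error reduction is maximized without exceeding the spare utilization. The discretization grid and the task data are illustrative assumptions, not the paper's exact formulation.

```python
def maximize_precise_tasks(tasks, slack, grid=1000):
    """tasks: list of (extra_utilization, error_reduction) pairs;
    slack: spare utilization available in high-criticality mode.
    Returns (best total error reduction, indices of tasks kept precise).
    """
    cap = int(slack * grid)  # discretize fractional utilization onto a grid
    weights = [int(round(u * grid)) for u, _ in tasks]
    values = [v for _, v in tasks]

    # dp[c] = best value achievable with capacity c; take[i][c] records
    # whether item i improved dp[c] during its pass, for backtracking.
    dp = [0.0] * (cap + 1)
    take = [[False] * (cap + 1) for _ in tasks]
    for i, (w, v) in enumerate(zip(weights, values)):
        for c in range(cap, w - 1, -1):  # descending: each task used once
            if dp[c - w] + v > dp[c]:
                dp[c] = dp[c - w] + v
                take[i][c] = True

    # Backtrack to recover the chosen task set.
    chosen, c = [], cap
    for i in range(len(tasks) - 1, -1, -1):
        if take[i][c]:
            chosen.append(i)
            c -= weights[i]
    return dp[cap], sorted(chosen)

# Three LO tasks: (extra utilization if run precisely, error reduction).
print(maximize_precise_tasks([(0.2, 5.0), (0.1, 3.0), (0.25, 6.0)],
                             slack=0.3))  # (8.0, [0, 1])
```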

    Preemptive uniprocessor scheduling of dual-criticality implicit-deadline sporadic tasks

    Many reactive systems must be designed and analyzed prior to deployment in the presence of considerable epistemic uncertainty: the precise nature of the external environment the system will encounter, as well as the run-time behavior of the platform upon which it is implemented, cannot be predicted with complete certainty prior to deployment. The widely-studied Vestal model for mixed-criticality workloads addresses uncertainties in estimating the worst-case execution time (WCET) of real-time code. Different estimates of these WCET values are made at different levels of assurance; it is required that all functionalities execute correctly if the less conservative assumptions hold, while only the more critical functionalities are required to execute correctly in the (presumably less likely) event that the less conservative assumptions fail to hold but the more conservative assumptions do. A generalization of the Vestal model is considered here, in which a degraded (but non-zero) level of service is required for the less critical functionalities even in the event of only the more conservative assumptions holding. An algorithm is derived for scheduling dual-criticality implicit-deadline sporadic task systems specified in this more general model upon preemptive uniprocessor platforms, and proved to be speedup-optimal.
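
    The degraded-service model above immediately implies simple per-mode necessary feasibility conditions, sketched below; the paper's speedup-optimal algorithm and its sufficient schedulability test are more involved. The tuple encoding is illustrative: for a LO task, u_hi denotes its degraded (non-zero) utilization.

```python
def mode_feasible(tasks):
    """tasks: list of (criticality, u_lo, u_hi) tuples.

    For HI tasks, u_lo/u_hi are the less/more conservative utilizations;
    for LO tasks, u_hi <= u_lo is the degraded service retained after a
    mode switch. Returns (lo_mode_ok, hi_mode_ok), both of which any
    feasible schedule must satisfy on a unit-speed processor.
    """
    lo_demand = sum(u_lo for _, u_lo, _ in tasks)  # every task at full service
    hi_demand = sum(u_hi for _, _, u_hi in tasks)  # hi full, lo degraded
    return lo_demand <= 1.0, hi_demand <= 1.0

print(mode_feasible([('HI', 0.3, 0.6), ('LO', 0.5, 0.2)]))  # (True, True)
```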

    CSP channels for CAN-bus connected embedded control systems

    A closed-loop control system typically contains a multitude of sensors and actuators operating simultaneously, so such systems are parallel and distributed in essence. When this parallelism is mapped to software, however, many obstacles concerning multithreading, communication, and synchronization arise. To overcome these problems, the CT kernel/library, based on CSP algebra, has been developed. This project (TES.5410) develops a communication extension to the CT library to make it applicable to distributed systems. Since the library is tailored for control systems, the properties and requirements of control systems are given special consideration. The applicability of existing middleware solutions is examined, and applicable fieldbus protocols are compared to determine the most suitable ones; the CAN fieldbus is chosen as the first fieldbus to be supported. A brief overview of CSP and existing CSP-based libraries is given, and a middleware architecture is proposed along with a few novel ideas.
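
    As a language-neutral illustration of the channel-over-fieldbus idea (the CT library itself is a C/C++ kernel), the sketch below maps one CSP-style channel onto one CAN arbitration ID, one value per frame. It assumes the python-can package and a Linux socketcan interface such as vcan0; the framing and class design are illustrative, not the project's actual middleware architecture.

```python
import struct

import can  # assumes the python-can package is installed


class CanChannel:
    """One CSP-style channel mapped to one CAN arbitration ID."""

    def __init__(self, bus, can_id):
        self.bus, self.can_id = bus, can_id

    def write(self, value: float):
        # One 64-bit float fills the full 8-byte classic CAN payload.
        payload = struct.pack('>d', value)
        self.bus.send(can.Message(arbitration_id=self.can_id,
                                  data=payload, is_extended_id=False))

    def read(self, timeout=1.0):
        # Blocking read; returns None on timeout or frames for other channels.
        msg = self.bus.recv(timeout)
        if msg is not None and msg.arbitration_id == self.can_id:
            return struct.unpack('>d', msg.data)[0]
        return None

# Usage on a Linux virtual CAN interface (illustrative):
# bus = can.interface.Bus(channel='vcan0', bustype='socketcan')
# ch = CanChannel(bus, can_id=0x120)
# ch.write(3.14); print(ch.read())
```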

    Software Fault Tolerance in Real-Time Systems: Identifying the Future Research Questions

    Tolerating hardware faults in modern architectures is becoming a prominent problem due to the miniaturization of hardware components, their increasing complexity, and the necessity of reducing costs. Software-Implemented Hardware Fault Tolerance approaches have been developed to improve system dependability against hardware faults without resorting to custom hardware solutions. However, these approaches make it harder, from a scheduling standpoint, to satisfy the timing constraints of the applications. This paper surveys the current state of the art of fault tolerance approaches used in the context of real-time systems, identifying the main challenges and the cross-links between these two topics. We propose a joint scheduling-failure analysis model that highlights the formal interactions between software fault tolerance mechanisms and timing properties. This model allows us to present and discuss many open research questions, with the final aim of spurring future research.

    Energy-Aware Real-Time Scheduling on Heterogeneous and Homogeneous Platforms in the Era of Parallel Computing

    Multi-core processors increasingly appear as an enabling platform for embedded systems, e.g., mobile phones, tablets, and computerized numerical controls. The parallel task model, in which a task can execute on multiple cores simultaneously, can efficiently exploit a multi-core platform's computational ability. Many computation-intensive systems (e.g., self-driving cars) that demand stringent timing requirements often evolve in the form of parallel tasks. Several real-time embedded applications must exhibit predictable timing behavior while satisfying other system constraints, such as energy consumption. Motivated by these facts, this thesis studies how to integrate the dynamic voltage and frequency scaling (DVFS) policy with an application's internal parallelism to reduce the worst-case energy consumption (WCEC), an essential requirement for energy-constrained systems. First, we propose an energy-sub-optimal scheduler, assuming a per-core speed-tuning feature for each processor. We then extend our solution to the clustered multi-core platform, where at any given time all processors in the same cluster run at the same speed. We also present an analysis that exploits a task's probabilistic information to improve the average-case energy consumption (ACEC), a common non-functional requirement of embedded systems.

    Due to the strict requirement of temporal correctness, most real-time system analyses consider the worst-case scenario, leading to resource over-provisioning and cost. The mixed-criticality (MC) framework was proposed to reduce energy consumption and resource over-provisioning. MC scheduling has received considerable attention from the real-time systems research community, as it is crucial to designing safety-critical real-time systems. This thesis further addresses energy-aware scheduling of real-time tasks on an MC platform, where tasks with varying criticality levels (i.e., importance) are integrated on a common platform. We propose the algorithm GEDF-VD for scheduling MC tasks with internal parallelism on a multiprocessor platform, prove its correctness, provide a detailed quantitative evaluation, and report extensive experimental results. Finally, we present an analysis that exploits a task's probabilistic information at each criticality level; the proposed approach reduces average-case energy consumption while satisfying the worst-case timing requirement.
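
    The DVFS reasoning this line of work builds on can be illustrated with the standard convex power model: under EDF on one core, any normalized speed s >= U keeps an implicit-deadline task set feasible, and with P(s) = s**alpha (alpha typically 2 to 3; an assumption here, not the thesis's exact model) the energy for a fixed amount of work scales as s**(alpha - 1), so the slowest feasible speed minimizes energy.

```python
def min_energy_speed(utilizations, alpha=3.0):
    """Return the slowest EDF-feasible normalized speed for an
    implicit-deadline task set on one core, and the relative energy per
    unit of work at that speed: (work/s) * P(s) = s**(alpha - 1)."""
    u = sum(utilizations)
    if u > 1.0:
        raise ValueError("infeasible even at full speed")
    s = u  # slowest speed that keeps EDF feasible for implicit deadlines
    return s, s ** (alpha - 1.0)

print(min_energy_speed([0.2, 0.3, 0.1]))  # speed 0.6, energy factor 0.36
```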

    Parallel Real-Time Scheduling for Latency-Critical Applications

    To provide safety guarantees or quality-of-service guarantees, many of today's systems consist of latency-critical applications, i.e., applications with timing constraints. The problem of scheduling multiple latency-critical jobs on a multiprocessor or multicore machine has been extensively studied for sequential (non-parallelizable) jobs, and different system models and objectives have been considered. However, the computational requirement of a single job is still limited by the capacity of a single core. To provide increasingly complex application functionality, and to meet higher computational demands within the same or even more stringent timing constraints, we must exploit the internal parallelism of jobs, where individual jobs are parallel programs that can potentially utilize more than one core in parallel. However, little work considers scheduling multiple parallel jobs that are latency-critical. This dissertation focuses on developing new scheduling strategies, analysis tools, and practical platform design techniques to enable efficient and scalable parallel real-time scheduling for latency-critical applications on multicore systems. In particular, the research focuses on two types of systems: (1) static real-time systems for tasks with deadlines, where the temporal properties of the tasks are known a priori and the goal is to guarantee their temporal correctness prior to execution; and (2) online systems for latency-critical jobs, where multiple jobs arrive over time and the goal is to optimize a performance objective of the jobs during execution.

    For static real-time systems with parallel tasks, several scheduling strategies, including global earliest deadline first, global rate monotonic, and a novel federated scheduling, are proposed, analyzed, and implemented. These strategies have the best known theoretical performance for parallel real-time tasks under any global strategy, any fixed-priority scheduling, and any scheduling strategy, respectively. In addition, federated scheduling is generalized to systems with multiple criticality levels and systems with stochastic tasks. Both numerical and empirical experiments show that federated scheduling and its variations have good schedulability performance and are efficient in practice. For online systems with multiple latency-critical jobs, different online scheduling strategies are proposed and analyzed for different objectives, including maximizing the number of jobs meeting a target latency, maximizing the profit of jobs, minimizing the maximum latency, and minimizing the average latency. For example, a simple first-in-first-out scheduler is proven to be scalable for minimizing the maximum latency. Based on this theoretical intuition, a more practical work-stealing scheduler is developed, analyzed, and implemented. Empirical evaluations indicate that, on both real-world and synthetic workloads, this work-stealing implementation performs almost as well as an optimal scheduler.
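
    The core-allocation rule at the heart of federated scheduling (Li et al.) for implicit-deadline DAG tasks is compact enough to sketch: each high-utilization task receives n_i = ceil((C_i - L_i) / (D_i - L_i)) dedicated cores, where C_i is the total work and L_i the critical-path length, while low-utilization tasks share the remaining cores. A minimal sketch, with illustrative names:

```python
import math

def federated_cores(work, span, deadline):
    """Dedicated cores for one implicit-deadline DAG task.

    work: total computation C_i; span: critical-path length L_i;
    deadline: relative deadline D_i (= period).
    """
    if span > deadline:
        return None  # infeasible: the critical path alone misses the deadline
    if work <= deadline:
        return 0     # low-utilization task: served by the shared-core pool
    return math.ceil((work - span) / (deadline - span))

print(federated_cores(work=10.0, span=2.0, deadline=4.0))  # -> 4 cores
```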

    Mixed-Criticality Scheduling to Minimize Makespan

    In the mixed-criticality job model, each job is characterized by two execution time parameters, representing a smaller (less conservative) estimate and a larger (more conservative) estimate of its actual, unknown, execution time. Each job is further classified as being either less critical or more critical. The desired execution semantics are that all jobs should execute correctly provided all jobs complete upon being allowed to execute for up to the smaller of their execution time estimates, whereas if some jobs need to execute beyond their smaller execution time estimates (but not beyond their larger execution time estimates), then only the jobs classified as being more critical are required to execute correctly. The scheduling of collections of such mixed-criticality jobs upon identical multiprocessor platforms in order to minimize the makespan is considered here.
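
    A hedged sketch relating this problem to the classic preemptive makespan bound: on m identical processors, any workload needs at least max(total work / m, largest job) time (McNaughton's bound). Evaluating this bound per scenario, all jobs at their smaller estimates versus only the more-critical jobs at their larger estimates, gives simple lower bounds on the achievable mixed-criticality makespan; the paper's actual algorithm and analysis are more involved.

```python
def scenario_makespan_lower_bounds(jobs, m):
    """jobs: list of (criticality, c_lo, c_hi) tuples; m: processor count."""
    def bound(costs):
        # Classic preemptive makespan lower bound on m identical processors.
        return max(sum(costs) / m, max(costs)) if costs else 0.0

    lo = [c_lo for _, c_lo, _ in jobs]                     # LO scenario: all jobs
    hi = [c_hi for crit, _, c_hi in jobs if crit == 'HI']  # HI scenario: HI jobs only
    return bound(lo), bound(hi)

print(scenario_makespan_lower_bounds(
    [('HI', 2, 5), ('LO', 3, 3), ('HI', 1, 4)], m=2))  # -> (3.0, 5)
```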

    Concurrency Platforms for Real-Time and Cyber-Physical Systems

    Parallel processing is an important way to satisfy the increasingly demanding computational needs of modern real-time and cyber-physical systems, but existing parallel computing technologies primarily emphasize high throughput and average-case performance metrics, which are largely unsuitable for direct application to real-time, safety-critical contexts. This work contrasts two concurrency platforms designed to achieve predictable worst-case parallel performance for soft real-time workloads with millisecond periods and higher. One of these platforms is then the basis for the CyberMech platform, which enables parallel real-time computing for a novel yet representative application called Real-Time Hybrid Simulation (RTHS). RTHS combines demanding parallel real-time computation with real-time simulation and control in an earthquake engineering laboratory environment, and the results concerning RTHS constitute a reasonably comprehensive survey of parallel real-time computing in the static context, where the size, shape, timing constraints, and computational requirements of workloads are fixed prior to system runtime. Collectively, these contributions constitute the first published implementations and evaluations of general-purpose concurrency platforms for real-time and cyber-physical systems, explore two fundamentally different design spaces for such systems, and demonstrate the utility and tradeoffs of parallel computing for statically determined real-time and cyber-physical systems.