    On-Device Deep Learning Inference for System-on-Chip (SoC) Architectures

    As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose challenges for resource-constrained target architectures at the “edge”. The realization of machine learning, and of deep learning in particular, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which alleviates some of these constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove critical as these technologies move to the edge. To address some of these challenges, we present a resource management framework designed to provide a dynamic, on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. Such mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified the most time-critical applications, such as control tasks, which maintained low-latency, deterministic behavior even under off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
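
    The abstract does not give the details of the schedulability analysis, so the following is only a minimal sketch of the kind of experiment it describes: randomly generated task sets (here via the standard UUniFast procedure) checked with a simple utilization-based admission test before a hypothetical deep-learning inference task is accepted. The function names, utilization ranges, and the inference load are illustrative assumptions, not the paper's actual framework.

```python
import random

def uunifast(n, total_util):
    """Draw n per-task utilizations summing to total_util (UUniFast)."""
    utils, remaining = [], total_util
    for i in range(1, n):
        next_rem = remaining * random.random() ** (1.0 / (n - i))
        utils.append(remaining - next_rem)
        remaining = next_rem
    utils.append(remaining)
    return utils

def admit(control_utils, inference_util):
    """Toy admission test: accept the inference workload only if the
    control tasks plus the inference task still fit under EDF (U <= 1)."""
    return sum(control_utils) + inference_util <= 1.0

random.seed(0)
trials = 10_000
accepted = 0
for _ in range(trials):
    control = uunifast(4, random.uniform(0.5, 0.9))   # assumed control workload
    accepted += admit(control, inference_util=0.3)    # assumed DL inference load
print(f"inference task admitted in {accepted} of {trials} random task sets")
```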

    An Analytical Solution for Probabilistic Guarantees of Reservation Based Soft Real-Time Systems

    We show a methodology for computing the probability of a deadline miss for a periodic real-time task scheduled by a resource reservation algorithm. We propose a modelling technique for the system that reduces the computation of this probability to that of the steady-state probability of an infinite-state Discrete-Time Markov Chain with a periodic structure. This structure is exploited to develop an efficient numeric solution in which different accuracy/computation-time trade-offs can be obtained by operating on the granularity of the model. More importantly, we offer a closed-form conservative bound for the probability of a deadline miss. Our experiments reveal that the bound remains reasonably close to the experimental probability in one real-time application of practical interest. When this bound is used for the optimisation of the overall Quality of Service for a set of tasks sharing the CPU, it produces a good sub-optimal solution in a small amount of time.
    Comment: IEEE Transactions on Parallel and Distributed Systems, Volume 27, Issue 3, March 201
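
    The paper's analytical DTMC solution and closed-form bound are not reproduced in the abstract. As a rough illustration of the quantity being bounded, the sketch below estimates the deadline-miss probability of a periodic task under a simplified fluid model of a CPU reservation (budget Q granted every server period P). The fluid approximation, the parameters, and the execution-time distribution are all assumptions made for illustration.

```python
import random

def miss_probability(task_period, deadline, budget, server_period,
                     exec_times, jobs=200_000, seed=0):
    """Estimate P(deadline miss) for a periodic task served by a reservation
    that grants `budget` CPU units every `server_period` time units,
    approximated here as a constant service rate (fluid model)."""
    rng = random.Random(seed)
    rate = budget / server_period          # long-run fraction of the CPU granted
    backlog = 0.0                          # unfinished work at each job release
    misses = 0
    for _ in range(jobs):
        backlog += rng.choice(exec_times)  # new job's execution time (assumed distribution)
        if backlog / rate > deadline:      # time needed to drain the backlog
            misses += 1
        backlog = max(0.0, backlog - rate * task_period)
    return misses / jobs

# Assumed parameters: task period 10, implicit deadline, reservation (4 every 5).
print(miss_probability(task_period=10, deadline=10, budget=4, server_period=5,
                       exec_times=[4, 6, 9]))
```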

    Rate Monotonic vs. EDF: Judgment Day

    Since the first results published in 1973 by Liu and Layland on the Rate Monotonic (RM) and Earliest Deadline First (EDF) algorithms, a lot of progress has been made in the schedulability analysis of periodic task sets. Unfortunately, many misconceptions still exist about the properties of these two scheduling methods, which usually tend to favor RM over EDF. Typical incorrect statements, often heard at technical conferences and even in research papers, claim that RM is easier to analyze than EDF, that it introduces less runtime overhead, that it is more predictable under overload conditions, and that it causes less jitter in task execution. Since these statements are either wrong or imprecise, it is time to clarify the issues in a systematic fashion, because the use of EDF allows a better exploitation of the available resources and significantly improves the system's performance. This paper compares RM and EDF from several perspectives, using existing theoretical results, specific simulation experiments, or simple counterexamples to show that many common beliefs are either false or hold only in specific situations.
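
    As a concrete illustration of the schedulability gap the paper discusses, the sketch below checks one implicit-deadline task set (parameters chosen here for illustration) with exact response-time analysis for RM and the utilization test U <= 1 for EDF; the set is schedulable under EDF but not under RM.

```python
import math

def rm_schedulable(tasks):
    """Exact response-time analysis for RM with implicit deadlines.
    tasks: list of (wcet, period), sorted by increasing period (RM priority)."""
    for i, (c, t) in enumerate(tasks):
        r = c
        while True:
            r_next = c + sum(math.ceil(r / tj) * cj for cj, tj in tasks[:i])
            if r_next == r or r_next > t:
                break
            r = r_next
        if r_next > t:
            return False
    return True

def edf_schedulable(tasks):
    """Exact test for implicit-deadline periodic tasks under EDF."""
    return sum(c / t for c, t in tasks) <= 1.0

tasks = [(2, 5), (3, 8), (2, 12)]               # (WCET, period), U ~= 0.94
print("RM :", rm_schedulable(tasks))            # False: the 12-period task misses
print("EDF:", edf_schedulable(tasks))           # True: total utilization <= 1
```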

    A Real-time Calculus Approach for Integrating Sporadic Events in Time-triggered Systems

    In time-triggered systems, where the schedule table is predefined and statically configured at design time, sporadic event-triggered (ET) tasks can only be handled within specially dedicated slots or when time-triggered (TT) tasks finish their execution early. We introduce a new paradigm for synthesizing TT schedules that guarantee the correct temporal behavior of TT tasks and the schedulability of sporadic ET tasks with arbitrary deadlines. The approach first expresses a constraint on the TT task schedule in the form of a maximal affine envelope guaranteeing that, as long as the schedule generation respects this envelope, all sporadic ET tasks meet their deadlines. The second step consists of modeling this envelope as a burst-limiting constraint and building the TT schedule by simulating a modified Least-Laxity-First (LLF) scheduler. Using this technique, we show that we achieve equal or better schedulability and faster schedule generation for most use cases compared to other approaches inspired by, e.g., hierarchical scheduling. Moreover, we present an extension to our method that finds the most favourable schedule for TT tasks with respect to ET schedulability, thus increasing the probability that the computed TT schedule remains feasible when ET tasks are later added or changed.
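
    A minimal sketch of the simulation step mentioned in the abstract: building a static TT schedule table by stepping a Least-Laxity-First scheduler over one hyperperiod. The burst-limiting envelope that protects the sporadic ET tasks is the paper's contribution and is omitted here; the task parameters and slot granularity are assumptions.

```python
from math import gcd
from functools import reduce

def lcm(values):
    """Least common multiple of a list of periods (the hyperperiod)."""
    return reduce(lambda a, b: a * b // gcd(a, b), values)

def build_tt_table(tasks):
    """tasks: list of (wcet, period) with implicit deadlines; returns slot -> task index."""
    hyper = lcm([t for _, t in tasks])
    remaining = [0] * len(tasks)          # work left for the current job of each task
    deadline = [0] * len(tasks)           # absolute deadline of the current job
    table = []
    for slot in range(hyper):
        for i, (c, t) in enumerate(tasks):
            if slot % t == 0:             # new job released at this slot
                remaining[i], deadline[i] = c, slot + t
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if not ready:
            table.append(None)            # idle slot, available for ET tasks
            continue
        # pick the ready task with the least laxity (deadline - now - remaining work)
        pick = min(ready, key=lambda i: deadline[i] - slot - remaining[i])
        remaining[pick] -= 1
        table.append(pick)
    return table

print(build_tt_table([(1, 4), (2, 6)]))   # assumed example task set
```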

    Capacity sharing and stealing in server-based real-time systems

    A dynamic scheduler that supports the coexistence of guaranteed and non-guaranteed bandwidth servers is proposed. Overloads are handled by efficiently reclaiming residual capacities originating from early completions, as well as by allowing the reserved capacity of non-guaranteed bandwidth servers to be stolen. The proposed dynamic budget accounting mechanism ensures that, at any particular time, the currently executing server is using a residual capacity, using its own capacity, or stealing some reserved capacity, eliminating the need for additional server states or unbounded queues. The server against which the budget accounting is performed is determined dynamically at the instant when capacity is needed. This paper describes and evaluates the proposed scheduling algorithm, showing that it can efficiently reduce the mean tardiness of periodic jobs. The achieved results become even more significant when tasks’ computation times have a large variance.
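
    The budget-selection rule described above can be illustrated with a small sketch (the data model, names, and tie-breaking rules are assumptions, not the paper's actual mechanism): when the running server needs capacity, charge an eligible residual capacity first, then the server's own budget, and only then steal reserved capacity from a non-guaranteed server.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    budget: float          # remaining reserved capacity
    guaranteed: bool       # non-guaranteed servers may have capacity stolen

def pick_account(now, running, residuals, servers):
    """Decide which budget the running server should consume right now.
    residuals: list of (deadline, capacity) left over from early completions."""
    eligible = [(d, c) for d, c in residuals if d >= now and c > 0]
    if eligible:
        return ("residual", min(eligible)[0])          # earliest-deadline residual first
    if running.budget > 0:
        return ("own", running.name)
    victims = [s for s in servers if not s.guaranteed and s.budget > 0]
    if victims:
        return ("steal", victims[0].name)
    return ("none", None)

s1 = Server("ctrl", budget=0.0, guaranteed=True)
s2 = Server("best-effort", budget=2.0, guaranteed=False)
print(pick_account(now=10, running=s1, residuals=[], servers=[s1, s2]))
```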

    The Control Server Model for Co-Design of Real-Time Control Systems

    The paper presents the control server, a real-time scheduling mechanism tailored to control and signal processing applications. A control server creates the abstraction of a control task with a specified period and a fixed input-output latency shorter than the period. Individual tasks can be combined into more complex components without losing their individual guaranteed fixed-latency properties. I/O occurs at fixed, predefined points in time, at which inputs are read or controller outputs become visible. The control server model is especially suited for co-design of real-time control systems: the single parameter linking the scheduling design and the controller design is the task utilization factor. The proposed server is an extension of the constant bandwidth server, which is based on the earliest-deadline-first scheduling algorithm. The server has been implemented in a real-time kernel and validated in control experiments on a ball-and-beam process.
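
    A minimal sketch of the co-design link described above, assuming EDF-based reservations with implicit deadlines: each control server is summarized by its utilization factor U = C/T, admission only requires the total utilization to stay at or below one, and I/O happens at fixed instants (input at each release, output a fixed latency later). The parameter values and function names are assumptions for illustration.

```python
def io_points(period, latency, hyperperiod):
    """Fixed I/O instants for one control server: input at each release,
    output a fixed latency later (latency < period)."""
    return [(k * period, k * period + latency)
            for k in range(hyperperiod // period)]

def admit(servers):
    """EDF-style admission over utilization factors U = C / T."""
    return sum(c / t for c, t, _ in servers) <= 1.0

servers = [(2, 10, 4), (5, 20, 8)]     # (WCET, period, input-output latency)
print("admitted:", admit(servers))
print("I/O points of first server:", io_points(10, 4, 40))
```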

    MARACAS: a real-time multicore VCPU scheduling framework

    This paper describes MARACAS, a multicore scheduling and load-balancing framework that addresses shared cache and memory bus contention. It builds upon prior work centered on the concept of virtual CPU (VCPU) scheduling. Threads are associated with VCPUs that have periodically replenished time budgets. VCPUs are guaranteed to receive their periodic budgets even if they are migrated between cores. A load-balancing algorithm ensures that VCPUs are mapped to cores so as to fairly distribute surplus CPU cycles, after VCPU timing guarantees have been ensured. MARACAS uses surplus cycles to throttle the execution of threads running on specific cores when memory contention exceeds a certain threshold. This enables threads on other cores to make better progress without interference from co-runners. Our scheduling framework features a novel memory-aware scheduling approach that uses performance counters to derive an average memory request latency. We show that latency-based memory throttling is more effective than rate-based memory access control in reducing bus contention. MARACAS also supports cache-aware scheduling and migration using page recoloring to improve performance isolation amongst VCPUs. Experiments show how MARACAS reduces multicore resource contention, leading to improved task progress.
    http://www.cs.bu.edu/fac/richwest/papers/rtss_2016.pdf
    Accepted manuscript
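
    A sketch of the latency-based throttling decision described in the abstract; the counter names, sampling scheme, and threshold are illustrative assumptions rather than MARACAS's actual interface.

```python
def avg_memory_latency(outstanding_request_cycles, completed_requests):
    """Average cycles a memory request waits during one sampling interval,
    derived from two assumed hardware performance counters."""
    if completed_requests == 0:
        return 0.0
    return outstanding_request_cycles / completed_requests

def should_throttle(core_samples, threshold_cycles):
    """core_samples: per-core (outstanding_cycles, completed_requests).
    Returns which cores should have their threads throttled this interval."""
    return {core: avg_memory_latency(*sample) > threshold_cycles
            for core, sample in core_samples.items()}

samples = {0: (480_000, 3_000), 1: (90_000, 2_500)}   # hypothetical counter readings
print(should_throttle(samples, threshold_cycles=100))  # {0: True, 1: False}
```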