
    Real-time scheduling of a tertiary-storage juke-box

    We present a jukebox scheduler for real-time data. The scheduler is part of a hierarchical real-time file system to be used over a network. A jukebox is a large tertiary storage device whose removable media (e.g. CD-ROM, DVD-ROM) are loaded and unloaded from one or more drives by a robot. The problem with tertiary storage is that media exchange times are high and the number of drives is limited, which makes scheduling tertiary storage complicated. The media switching time in a jukebox is on the order of tens of seconds, so multiplexing between two files stored on different media is many orders of magnitude slower than doing the same in secondary storage. The goal of the scheduler is to schedule the use of the jukebox devices (arm and drives) in such a way that the system can guarantee the deadlines while minimizing the response time. The problem is similar to that of scheduling multiple processors, with the additional difficulty of having to deal with high switching times and a shared resource (the arm). Finding an optimal schedule is an NP-hard problem. We provide a near-optimal polynomial solution by using heuristics to prune the tree of solutions. The scheduling time is on average less than 100 ms. Incoming requests are scheduled on-line.
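    The abstract does not spell out the pruning heuristics, but the core tension it describes (deadline guarantees versus expensive media switches on a few shared drives) can be illustrated with a small greedy sketch. Everything below — the `Request` type, the 30-second switch cost, the `schedule_requests` helper — is an assumption for illustration, not the paper's actual algorithm.

    ```python
    # Hypothetical sketch of a greedy, switch-aware jukebox scheduler.
    # The names, the switch cost, and the rejection policy are illustrative
    # assumptions, not the paper's implementation.
    from dataclasses import dataclass, field
    import heapq

    SWITCH_TIME = 30.0  # assumed media-exchange cost in seconds (tens of seconds per the abstract)

    @dataclass(order=True)
    class Request:
        deadline: float
        medium: str = field(compare=False)     # which removable medium holds the file
        read_time: float = field(compare=False)

    def schedule_requests(requests, num_drives=2):
        """Greedy EDF-like heuristic: serve requests in deadline order, but
        prefer a drive that already holds the needed medium to avoid a switch."""
        drives = [{"medium": None, "free_at": 0.0} for _ in range(num_drives)]
        heap = list(requests)
        heapq.heapify(heap)                    # earliest deadline first
        plan, missed = [], []
        while heap:
            req = heapq.heappop(heap)
            # Start time on a drive includes a media switch only when the
            # currently loaded medium differs from the one the request needs.
            def start_time(d):
                return d["free_at"] + (0.0 if d["medium"] == req.medium else SWITCH_TIME)
            drive = min(drives, key=start_time)
            start = start_time(drive)
            finish = start + req.read_time
            if finish > req.deadline:
                missed.append(req)             # a real scheduler would reject or renegotiate
                continue
            drive["medium"], drive["free_at"] = req.medium, finish
            plan.append((req, start, finish))
        return plan, missed
    ```

    A branch-and-bound search with heuristic pruning, as the paper describes, would explore alternative drive assignments rather than commit greedily; the sketch only shows the cost model that makes media switches dominate the schedule.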

    On the periodic behavior of real-time schedulers on identical multiprocessor platforms

    This paper proposes a general periodicity result concerning any deterministic and memoryless scheduling algorithm (including non-work-conserving algorithms), for any context, on identical multiprocessor platforms. By context we mean the hardware architecture (uniprocessor, multicore), as well as task constraints like critical sections, precedence constraints, self-suspension, etc. Since the result is based only on the releases and deadlines, it is independent of any other parameter. Note that we do not claim that the given interval is minimal, but it is an upper bound for any cycle of any feasible schedule provided by any deterministic and memoryless scheduler.
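    The paper's exact interval is not given in the abstract. For intuition only, the classical candidate cycle length for a synchronous periodic task set is the hyperperiod (the least common multiple of the task periods); the sketch below computes it and is not the authors' bound.

    ```python
    # Hedged illustration: the hyperperiod is a commonly used candidate cycle
    # length for deterministic, memoryless schedulers of synchronous periodic
    # tasks. It stands in for, and may differ from, the paper's interval.
    from math import lcm

    def hyperperiod(periods):
        """Least common multiple of all task periods, in the same time unit."""
        h = 1
        for p in periods:
            h = lcm(h, p)
        return h

    # Example: tasks with periods 4, 6 and 10 time units repeat every 60 units.
    print(hyperperiod([4, 6, 10]))  # -> 60
    ```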

    k2U: A General Framework from k-Point Effective Schedulability Analysis to Utilization-Based Tests

    To deal with a large variety of workloads in different application domains in real-time embedded systems, a number of expressive task models have been developed. For each individual task model, researchers tend to develop different types of techniques for deriving schedulability tests with different computational complexity and performance. In this paper, we present a general schedulability analysis framework, namely the k2U framework, that can be potentially applied to analyze a large set of real-time task models under any fixed-priority scheduling algorithm, on both uniprocessor and multiprocessor scheduling. The key to k2U is a k-point effective schedulability test, which can be viewed as a "blackbox" interface. For any task model, if a corresponding k-point effective schedulability test can be constructed, then a sufficient utilization-based test can be automatically derived. We show the generality of k2U by applying it to different task models, which results in new and improved tests compared to the state-of-the-art. Analogously, a similar concept of testing only k points, with a different formulation, has been studied by us in another framework, called k2Q, which provides quadratic bounds or utilization bounds based on a different formulation of the schedulability test. With their quadratic and hyperbolic forms, the k2Q and k2U frameworks can be used to provide many quantitative measures, like total utilization bounds, speed-up factors, etc., not only for uniprocessor scheduling but also for multiprocessor scheduling. These frameworks can be viewed as a "blackbox" interface for schedulability tests and response-time analysis.
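    As a concrete instance of the hyperbolic-form utilization tests that k2U generalizes, the classical Bini-Buttazzo hyperbolic bound for rate-monotonic scheduling of implicit-deadline tasks on a uniprocessor deems a task set schedulable if the product of (U_i + 1) over all tasks is at most 2. The sketch below implements that well-known test only; it is not the k2U framework itself.

    ```python
    # Minimal sketch of a hyperbolic-form utilization test (the classical
    # Bini-Buttazzo bound for uniprocessor rate-monotonic scheduling), shown
    # as one example of the kind of test k2U derives; not the framework's code.
    def hyperbolic_bound_ok(utilizations):
        """Sufficient RM schedulability test: prod(U_i + 1) <= 2."""
        product = 1.0
        for u in utilizations:
            product *= (u + 1.0)
        return product <= 2.0

    # Example: three tasks with utilizations 0.25, 0.30 and 0.20.
    print(hyperbolic_bound_ok([0.25, 0.30, 0.20]))  # True -> deemed schedulable
    ```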

    TimeTrader: Exploiting Latency Tail to Save Datacenter Energy for On-line Data-Intensive Applications

    Datacenters running on-line, data-intensive applications (OLDIs) consume significant amounts of energy. However, reducing their energy is challenging due to their tight response time requirements. A key aspect of OLDIs is that each user query goes to all or many of the nodes in the cluster, so the overall time budget is dictated by the tail of the replies' latency distribution; replies see latency variations in both the network and compute. Previous work proposes to achieve load-proportional energy by slowing down the computation at lower datacenter loads based directly on response times (i.e., at lower loads, the proposal exploits the average slack in the time budget provisioned for the peak load). In contrast, we propose TimeTrader to reduce energy by exploiting the latency slack in the sub-critical replies which arrive before the deadline (e.g., 80% of replies are 3-4x faster than the tail). This slack is present at all loads and subsumes the previous work's load-related slack. While the previous work shifts the leaves' response time distribution to consume the slack at lower loads, TimeTrader reshapes the distribution at all loads by slowing down individual sub-critical nodes without increasing missed deadlines. TimeTrader exploits slack in both the network and compute budgets. Further, TimeTrader leverages Earliest Deadline First scheduling to largely decouple critical requests from the queuing delays of sub-critical requests, which can then be slowed down without hurting critical requests. A combination of real-system measurements and at-scale simulations shows that, without adding to missed deadlines, TimeTrader saves 15-19% and 41-49% energy at 90% and 30% loading, respectively, in a datacenter with 512 nodes, whereas previous work saves 0% and 31-37%.
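    The slack-reclaiming idea can be sketched as follows: a sub-critical leaf reply is slowed only as much as its remaining deadline budget allows. The budget split, safety margin, and the `choose_slowdown` helper below are assumptions for illustration, not TimeTrader's actual policy or parameters.

    ```python
    # Hypothetical sketch: pick a slowdown factor (e.g., for a DVFS step) so a
    # sub-critical reply still finishes before the query deadline. All numbers
    # and names are illustrative assumptions, not the paper's mechanism.
    def choose_slowdown(expected_service_s, deadline_budget_s, safety_margin_s=0.002):
        """Return a factor >= 1.0 such that
        expected_service_s * factor + safety_margin_s <= deadline_budget_s."""
        usable = deadline_budget_s - safety_margin_s
        if usable <= expected_service_s:
            return 1.0                      # critical (or near-critical): run at full speed
        return usable / expected_service_s  # sub-critical: stretch into the remaining slack

    # Example: a reply expected to take 5 ms against a 20 ms leaf budget can be
    # slowed about 3.6x and still meet its deadline with a 2 ms margin.
    print(round(choose_slowdown(0.005, 0.020), 2))  # -> 3.6
    ```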