
    Exploiting Task Variability to Minimize Energy Consumption under Real-Time Constraints

    This paper proposes a Markov Decision Process (MDP) approach to compute the optimal on-line speed scaling policy that minimizes the energy consumption of a single processor executing a finite or infinite set of jobs with real-time constraints, in the non-clairvoyant case, i.e., when the actual execution time of the jobs is unknown when they are released. In real-life applications, it is common to know only the Worst-Case Execution Time of a job at release time; the actual execution time is only discovered when the job finishes. Choosing the processor speed purely as a function of the Worst-Case Execution Time is sub-optimal. When the probability distribution of the actual execution time is known, this knowledge can be exploited to choose a lower processor speed and minimize the expected energy consumption, while still guaranteeing that all jobs meet their deadlines. Our MDP solution solves this problem optimally with discrete processor speeds. Compared with approaches from the literature, the gain offered by the new policy ranges from a few percent when the variability of job characteristics is small to more than 50% when the job execution time distributions are far from their worst case.
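    To see why exploiting the execution-time distribution can beat a pure WCET-based speed, here is a minimal sketch (not the paper's MDP): a job whose actual length follows a known two-point distribution is run either at the constant WCET-safe speed, or slowly at first with a catch-up speed only if it turns out to be long. The cubic power model and the specific distribution are assumptions of the example.

```python
# Illustration (not the paper's MDP): with a known execution-time
# distribution, starting slow and accelerating only when the job turns
# out to be long beats the constant WCET-based speed in expectation.
# Assumptions: power model P(s) = s**3, so energy for w cycles at
# speed s is w * s**2; two-point distribution below.

P_SHORT, P_LONG = 0.9, 0.1   # probability the job needs 2 or 10 cycles
W_SHORT, W_LONG = 2.0, 10.0  # cycles (W_LONG is the WCET)
DEADLINE = 10.0              # relative deadline, in time units

def energy(work: float, speed: float) -> float:
    """Energy to execute `work` cycles at `speed` under P(s) = s**3."""
    return work * speed ** 2

# Constant-speed WCET policy: guarantee the deadline in the worst case.
s_wcet = W_LONG / DEADLINE
e_wcet = (P_SHORT * energy(W_SHORT, s_wcet)
          + P_LONG * energy(W_LONG, s_wcet))

# Two-phase policy: run the first W_SHORT cycles at half speed, then
# accelerate just enough to still meet the deadline if the job goes on.
s1 = 0.5
t1 = W_SHORT / s1                          # time spent in phase 1
s2 = (W_LONG - W_SHORT) / (DEADLINE - t1)  # catch-up speed
e_adaptive = (P_SHORT * energy(W_SHORT, s1)
              + P_LONG * (energy(W_SHORT, s1)
                          + energy(W_LONG - W_SHORT, s2)))

print(round(e_wcet, 3), round(e_adaptive, 3))
```

    Both policies meet the deadline in every outcome, yet the adaptive one has lower expected energy because the power function is convex and the long execution is rare.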

    A Pseudo-Linear Time Algorithm for the Optimal Discrete Speed Minimizing Energy Consumption

    We consider the classical problem of minimizing off-line the total energy consumption required to execute a set of n real-time jobs on a single processor with a finite number of available speeds. Each real-time job is defined by its release time, size, and deadline (all bounded integers). The goal is to find a processor speed schedule such that no job misses its deadline and the energy consumption is minimal. We propose a pseudo-linear time algorithm that checks the schedulability of the given set of n jobs and computes an optimal speed schedule. The time complexity of our algorithm is in O(n), to be compared with O(n log(n)) for the best known solution. Beyond the complexity gain, the main interest of our algorithm is that it is based on a completely different idea: instead of computing the critical intervals, it sweeps the set of jobs and uses a dynamic programming approach to compute an optimal speed schedule. Our linear-time algorithm remains valid (with some changes) when arbitrary (non-convex) power functions are used and when switching costs are taken into account.
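    As a small illustration of the discrete-speed setting (this is not the paper's sweep algorithm): for a single job of `size` cycles in a window of length `d − r`, the continuous optimum is `size / (d − r)`, and a finite speed set forces rounding up to the next available speed to keep the deadline. The speed set below is an assumption of the example.

```python
import bisect

# Discrete-speed setting, single-job illustration (not the paper's
# pseudo-linear sweep): the deadline forces rounding the continuous
# optimal speed up to the next available one.

SPEEDS = [0.25, 0.5, 1.0, 1.5, 2.0]  # assumed available processor speeds

def min_feasible_speed(size: float, window: float) -> float:
    """Smallest available speed finishing `size` cycles in `window`."""
    target = size / window                  # continuous-speed optimum
    i = bisect.bisect_left(SPEEDS, target)  # first speed >= target
    if i == len(SPEEDS):
        raise ValueError("job not schedulable at any available speed")
    return SPEEDS[i]

print(min_feasible_speed(3.0, 4.0))  # target 0.75 -> rounds up to 1.0
```

    A classical refinement, which a full schedule can exploit, is to alternate between the two available speeds bracketing the continuous optimum instead of running the higher one throughout.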

    Dynamic Speed Scaling Minimizing Expected Energy Consumption for Real-Time Tasks

    This paper proposes a Discrete Time Markov Decision Process (MDP) approach to compute the optimal on-line speed scaling policy to minimize the energy consumption of a single processor executing a finite or infinite set of jobs with real-time constraints. We provide several qualitative properties of the optimal policy: monotonicity with respect to the jobs parameters, comparison with on-line deterministic algorithms. Numerical experiments in several scenarios show that our proposition performs well when compared with off-line optimal solutions and outperforms on-line solutions oblivious to statistical information on the jobs.
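    The flavor of the dynamic-programming computation behind such a policy can be sketched as follows: a finite-horizon recursion over (remaining work, remaining time) with discrete speeds. The job here is deterministic for brevity, whereas the paper's MDP also averages over the execution-time distribution; the speed set and cubic power model are assumptions of the sketch.

```python
from functools import lru_cache
import math

# Finite-horizon dynamic program in the spirit of the MDP (assumed
# simplifications: one deterministic job, unit time steps, discrete
# speeds, cubic power model so one step at speed s costs s**3).

SPEEDS = [0, 1, 2, 3]  # assumed discrete speeds (cycles per time step)

@lru_cache(maxsize=None)
def min_energy(work: int, steps_left: int) -> float:
    """Least energy to finish `work` cycles within `steps_left` steps."""
    if work <= 0:
        return 0.0
    if steps_left == 0:
        return math.inf  # deadline miss: infeasible branch
    return min(s ** 3 + min_energy(work - s, steps_left - 1)
               for s in SPEEDS)

# 4 cycles in 4 steps: constant speed 1 is optimal (energy 4), far
# cheaper than e.g. speed 2 for two steps (energy 16), since the
# power function is convex.
print(min_energy(4, 4))  # -> 4.0
```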

    A Discrete-Time Markov Decision Process for Minimizing Energy under Deadline Constraints

    This paper proposes a Discrete Time Markov Decision Process (MDP) approach to compute the optimal on-line speed scaling policy to minimize the energy consumption of a single processor executing a finite or infinite set of jobs with real-time constraints. We provide several qualitative properties of the optimal policy: monotonicity with respect to the jobs parameters, comparison with on-line deterministic algorithms. Numerical experiments in several scenarios show that our proposition performs well when compared with off-line optimal solutions and outperforms on-line solutions oblivious to statistical information on the jobs.

    Feasibility of On-Line Policies in Real-Time Systems

    We consider a real-time system where a single processor with variable speed executes an infinite sequence of sporadic and independent jobs. We assume that job sizes and relative deadlines are bounded by C and ∆ respectively. Furthermore, Smax denotes the maximal speed of the processor. In such a real-time system, a speed selection policy dynamically chooses (i.e., on-line) the speed of the processor to execute the current, not yet finished, jobs. We say that an on-line speed policy is feasible if it is able to execute any sequence of jobs while meeting two constraints: the processor speed is always below Smax and no job misses its deadline. In this paper, we compare the feasibility regions of four on-line speed selection policies in single-processor real-time systems, namely Optimal Available (OA) [1], Average Rate (AVR) [1], (BKP) [2], and a Markovian Policy based on dynamic programming (MP) [3]. We prove the following results:
    • (OA) is feasible if and only if Smax ≥ C(h_{∆−1} + 1), where h_n is the n-th harmonic number (h_n = ∑_{i=1}^{n} 1/i ≈ log n).
    • (AVR) is feasible if and only if Smax ≥ C h_∆.
    • (BKP) is feasible if and only if Smax ≥ eC (where e = exp(1)).
    • (MP) is feasible if and only if Smax ≥ C. This is an optimal feasibility condition, because when Smax < C no policy can be feasible.
    This reinforces the interest of (MP), which is not only optimal for energy consumption (on average) but also optimal regarding feasibility.
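    The four bounds above are easy to compare numerically; the snippet below just evaluates the stated formulas for C = 1 and a few values of ∆ (it implements none of the cited policies).

```python
import math

# Evaluate the four feasibility thresholds from the abstract for C = 1.
# (OA):  Smax >= C * (h_{Delta-1} + 1)
# (AVR): Smax >= C * h_Delta
# (BKP): Smax >= e * C
# (MP):  Smax >= C           (optimal: below C nothing is feasible)

def harmonic(n: int) -> float:
    """n-th harmonic number h_n = sum_{i=1}^n 1/i."""
    return sum(1.0 / i for i in range(1, n + 1))

C = 1.0
for delta in (2, 5, 10, 100):
    oa = C * (harmonic(delta - 1) + 1)
    avr = C * harmonic(delta)
    bkp = math.e * C
    mp = C
    print(delta, round(oa, 3), round(avr, 3), round(bkp, 3), mp)
```

    For large ∆ the (OA) and (AVR) thresholds grow like log ∆, while (BKP) stays at the constant eC and (MP) at the optimal C, which is the ordering the abstract emphasizes.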

    Discrete and Continuous Optimal Control for Energy Minimization in Real-Time Systems

    This paper presents a discrete time Markov Decision Process (MDP) to compute the optimal speed scaling policy to minimize the energy consumption of a single processor executing a finite set of jobs with real-time constraints. We further show that the optimal solution is the same when speed change decisions are taken at arrival times of the jobs as well as when decisions are taken in continuous time.
