
    Speed-scaling with no Preemptions

    We revisit the non-preemptive speed-scaling problem, in which a set of jobs has to be executed on a single processor or a set of parallel speed-scalable processors between their release dates and deadlines so that the energy consumption is minimized. We adopt the speed-scaling mechanism first introduced in [Yao et al., FOCS 1995], according to which the power dissipated is a convex function of the processor's speed. Intuitively, the higher the speed of a processor, the higher the energy consumption. For the single-processor case, we improve the best known approximation algorithm by providing a $(1+\epsilon)^{\alpha}\tilde{B}_{\alpha}$-approximation algorithm, where $\tilde{B}_{\alpha}$ is a generalization of the Bell number. For the multiprocessor case, we present an approximation algorithm of ratio $\tilde{B}_{\alpha}((1+\epsilon)(1+\frac{w_{\max}}{w_{\min}}))^{\alpha}$, improving the best known result by a factor of $(\frac{5}{2})^{\alpha-1}(\frac{w_{\max}}{w_{\min}})^{\alpha}$. Notice that our result holds for the fully heterogeneous environment, while the previously known result holds only in the more restricted case of parallel processors with identical power functions.
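    As a rough illustration of the speed-scaling model assumed above (not of the paper's approximation algorithm), the following sketch computes the energy of running a single job non-preemptively at a constant speed inside its window, under the hypothetical power function speed to the power alpha; the function name and parameter values are placeholders.

```python
# Illustrative sketch of the speed-scaling energy model (not the paper's
# approximation algorithm): a job of `work` units run non-preemptively at a
# constant speed inside [release, deadline] consumes duration * speed**alpha.
def energy_constant_speed(work: float, release: float, deadline: float,
                          alpha: float = 3.0) -> float:
    """Energy of running `work` units at the slowest feasible constant speed."""
    duration = deadline - release
    if duration <= 0:
        raise ValueError("empty execution window")
    speed = work / duration          # slowest speed that still meets the deadline
    power = speed ** alpha           # convex power function, as in Yao et al.
    return duration * power          # energy = power * time

# Example: 10 units of work in a window of length 5, alpha = 3
# -> speed 2, power 8, energy 40.
print(energy_constant_speed(10, 0, 5))
```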

    Randomized algorithms for fully online multiprocessor scheduling with testing

    We contribute the first randomized algorithm that is an integration of arbitrarily many deterministic algorithms for the fully online multiprocessor scheduling with testing problem. When there are two machines, we show that with two component algorithms its expected competitive ratio is already strictly smaller than the best proven deterministic competitive-ratio lower bound. Such algorithmic results are rarely seen in the literature. Multiprocessor scheduling is one of the first combinatorial optimization problems to have received numerous studies. Recently, several research groups examined its testing variant, in which each job $J_j$ arrives with an upper bound $u_j$ on the processing time and a testing operation of length $t_j$; one can choose to execute $J_j$ for $u_j$ time, or to test $J_j$ for $t_j$ time to obtain the exact processing time $p_j$, followed by immediately executing the job for $p_j$ time. Our target problem is the fully online version, in which the jobs arrive in sequence, so that both the testing decision and the designated machine need to be determined at the job's arrival. We propose an expected $(\sqrt{\varphi + 3} + 1)$-competitive ($\approx 3.1490$) randomized algorithm as a non-uniform probability distribution over arbitrarily many deterministic algorithms, where $\varphi = \frac{\sqrt{5} + 1}{2}$ is the golden ratio. When there are two machines, we show that our randomized algorithm based on two deterministic algorithms is already expected $\frac{3\varphi + 3\sqrt{13 - 7\varphi}}{4}$-competitive ($\approx 2.1839$). Besides, we use Yao's principle to prove lower bounds of $1.6682$ and $1.6522$ on the expected competitive ratio for any randomized algorithm in the presence of at least three machines and of only two machines, respectively, and prove a lower bound of $2.2117$ on the competitive ratio for any deterministic algorithm when there are only two machines.
    Comment: 21 pages with 1 plot; an extended abstract to be submitted
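    To make the "distribution over deterministic algorithms" idea concrete, here is a minimal sketch in the spirit of the abstract: each deterministic component is a hypothetical threshold rule (test job $J_j$ iff $t_j \le \theta u_j$, then assign greedily), and one component is drawn up front according to a probability vector. The thresholds and probabilities below are illustrative placeholders, not the published choices.

```python
import random

def make_threshold_algorithm(theta: float):
    """Hypothetical deterministic rule: test J_j iff t_j <= theta * u_j, then greedy-assign."""
    def schedule(jobs, num_machines):
        loads = [0.0] * num_machines
        for (u_j, t_j, p_j) in jobs:          # p_j is learned only if we test
            cost = t_j + p_j if t_j <= theta * u_j else u_j
            m = min(range(num_machines), key=loads.__getitem__)
            loads[m] += cost                   # fully online: commit at arrival
        return max(loads)                      # makespan
    return schedule

def randomized_mix(jobs, num_machines, components, probabilities):
    """Pick one deterministic component up front according to `probabilities`."""
    algo = random.choices(components, weights=probabilities, k=1)[0]
    return algo(jobs, num_machines)

# Hypothetical usage with two components and a non-uniform distribution.
jobs = [(4.0, 1.0, 2.5), (6.0, 2.0, 6.0)]      # (u_j, t_j, p_j) triples
components = [make_threshold_algorithm(0.5), make_threshold_algorithm(1.0)]
print(randomized_mix(jobs, 2, components, probabilities=[0.6, 0.4]))
```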

    Parallel algorithms for two processors precedence constraint scheduling

    The final publication is available at link.springer.com. Peer Reviewed. Postprint (author's final draft).

    Improved Rejection Penalty Algorithm with Multiprocessor Rejection Technique

    This paper deals with multiprocessor scheduling with a rejection technique, where each job comes with a processing time and a given penalty cost. If a job satisfies the acceptance condition, it is scheduled on the least loaded of the identical parallel machines; otherwise the job is rejected and its penalty cost is incurred. Our objective is to minimize the makespan of the scheduled jobs plus the sum of the penalties of the rejected jobs. We merge the ‘CHOOSE’ and ‘REJECTION PENALTY’ algorithms to reduce the sum of penalty costs and the makespan. Our proposed ‘Improved Rejection Penalty’ algorithm reduces the competitive ratio, which in turn enhances the efficiency of the online algorithm. Applying our new online technique, we obtain a lower bound of 1.286 for our algorithm, which improves on existing algorithms whose competitive ratio is 1.819. In our approach we consider the non-preemptive scheduling technique.
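    A minimal sketch of the accept-or-reject loop described above, assuming a simple hypothetical acceptance test (accept a job only if its penalty is at least its processing time divided by the number of machines); this is not the paper's 'Improved Rejection Penalty' rule, and the function name and threshold are illustrative only.

```python
def schedule_with_rejection(jobs, num_machines):
    """jobs: list of (processing_time, penalty). Returns makespan + total penalty."""
    loads = [0.0] * num_machines
    total_penalty = 0.0
    for processing_time, penalty in jobs:
        if penalty >= processing_time / num_machines:    # illustrative acceptance test
            m = min(range(num_machines), key=loads.__getitem__)
            loads[m] += processing_time                   # least loaded machine
        else:
            total_penalty += penalty                      # reject and pay the penalty
    return max(loads) + total_penalty

# Example: two machines, three jobs; the first job is rejected.
print(schedule_with_rejection([(4.0, 0.5), (3.0, 5.0), (2.0, 2.0)], 2))
```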

    Improved CRPD analysis and a secure scheduler against information leakage in real-time systems

    Real-time systems are widely applied in time-critical fields. In order to guarantee that all tasks can be completed on time, predictability becomes a necessary factor when designing a real-time system. Due to growing performance requirements, cache memory has been introduced into real-time embedded systems. However, cache behavior is difficult to predict, since data may reside either in the cache or in main memory. To account for this unexpected overhead, execution times are often enlarged by a certain (large) factor, which wastes computation resources. Hence, in this thesis, we first integrate the cache-related preemption delay (CRPD) into the previous global earliest deadline first (G-EDF) schedulability analysis for the direct-mapped cache. Moreover, several analyses for tighter G-EDF schedulability tests are conducted based on a refined estimation of the maximal number of preemptions. An experimental study is conducted to demonstrate the performance of the proposed methods. Furthermore, under classic scheduling mechanisms, the execution patterns of tasks on such a system can be easily derived. Therefore, in the second part of the thesis, a novel scheduler, the roulette wheel scheduler (RWS), is proposed to randomize the task execution pattern. Unlike traditional schedulers, RWS assigns probabilities to each task at predefined scheduling points, and the choice for execution is randomized, so that the execution pattern is no longer fixed. We apply the concept of schedule entropy to measure the amount of uncertainty introduced by any randomized scheduler, which reflects how unlikely such information-leakage attacks are to succeed. Compared to existing randomized schedulers that give all eligible tasks equal likelihood at a given time point, the proposed method adjusts these values so that the entropy can be greatly increased. --Abstract, page iii
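    A minimal sketch of a roulette-wheel-style pick among eligible tasks, as the name RWS suggests: one task is chosen with probability proportional to its weight. The weights here are hypothetical placeholders; the thesis's method of adjusting per-task probabilities at predefined scheduling points to raise schedule entropy is not modeled.

```python
import random

def roulette_wheel_pick(eligible_tasks, weights):
    """Return one task, chosen with probability proportional to its weight."""
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for task, w in zip(eligible_tasks, weights):
        acc += w
        if r <= acc:
            return task
    return eligible_tasks[-1]      # guard against floating-point round-off

# Example: three eligible tasks with unequal (hypothetical) selection weights.
print(roulette_wheel_pick(["tau1", "tau2", "tau3"], [0.5, 0.3, 0.2]))
```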

    Energy-efficient algorithms for non-preemptive speed-scaling

    We improve complexity bounds for energy-efficient speed-scheduling problems, for both the single-processor and multi-processor cases. Energy conservation has become a major concern, so revisiting traditional scheduling problems to take the energy consumption into account has been on the agenda of the scheduling community for the past few years. We consider the energy-minimizing speed-scaling problem introduced by Yao et al., in which we wish to schedule a set of jobs, each with a release date, deadline and work volume, on a set of identical processors. The processors may change speed as a function of time, and the energy they consume is the $\alpha$-th power of their speed. The objective is then to find a feasible schedule which minimizes the total energy used. We show that in the setting with an arbitrary number of processors where all work volumes are equal, there is a $2(1+\varepsilon)(5(1+\varepsilon))^{\alpha-1}\tilde{B}_{\alpha} = O_{\alpha}(1)$ approximation algorithm, where $\tilde{B}_{\alpha}$ is the generalized Bell number. This is the first constant-factor algorithm for this problem. This algorithm extends to general, unequal, processor-dependent work volumes, up to losing a factor of $(\frac{(1+r)r}{2})^{\alpha}$ in the approximation, where $r$ is the maximum ratio between two work volumes. We then show this latter problem is APX-hard, even in the special case when all release dates and deadlines are equal and $r$ is 4. In the single-processor case, we introduce a new linear programming formulation of speed scaling and prove that its integrality gap is at most $12^{\alpha-1}$. As a corollary, we obtain a $(12(1+\varepsilon))^{\alpha-1}$ approximation algorithm for the single processor, improving on the previous best bound of $2^{\alpha-1}(1+\varepsilon)^{\alpha}\tilde{B}_{\alpha}$ when $\alpha \ge 25$.
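    To illustrate why speed scaling saves energy under the power model stated above (and not as part of the paper's algorithms or LP formulation), the sketch below evaluates a piecewise-constant speed profile: by convexity of $s^{\alpha}$, spreading the same work over a longer time at lower speed never costs more energy. All names and values are illustrative.

```python
# Illustrative only: energy of a piecewise-constant speed profile under the
# power model speed**alpha.
def schedule_energy(pieces, alpha: float = 3.0) -> float:
    """pieces: list of (duration, speed). Energy = sum of duration * speed**alpha."""
    return sum(duration * speed ** alpha for duration, speed in pieces)

# Both profiles complete 8 units of work; with alpha = 3 the slower, steadier
# profile uses far less energy (32 vs. 128).
fast_then_idle = [(2.0, 4.0), (2.0, 0.0)]   # energy 2 * 4**3 = 128
slow_and_steady = [(4.0, 2.0)]              # energy 4 * 2**3 = 32
print(schedule_energy(fast_then_idle), schedule_energy(slow_and_steady))
```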