
    Distribution-graph based approach and extended tree growing technique in power-constrained block-test scheduling

    A distribution-graph based scheduling algorithm is proposed, together with an extended tree growing technique, to deal with the problem of unequal-length block-test scheduling under power dissipation constraints. The extended tree growing technique is used in combination with the classical scheduling approach to improve test concurrency under the assigned power dissipation limits. Its goal is to achieve a balanced test power dissipation by employing a least-mean-square-error function, which serves as a distribution-graph based global priority function. Test scheduling examples and experiments illustrate the efficiency of this approach as a step towards a system-level test scheduling algorithm.
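
    To make the idea concrete, the following is a minimal sketch of a distribution-graph based priority of the kind described above, assuming each test block is characterised by a length and a power figure and the "distribution graph" is simply the per-time-slot power total of the partial schedule. The names and the greedy placement loop are illustrative, not the paper's algorithm.

```python
# Hypothetical sketch: least-mean-square-error priority over a power
# distribution graph (per-slot power totals of the partial schedule).

def lmse_priority(dist_graph, start, length, power):
    """How much would placing a (length, power) block at `start`
    unbalance the power distribution? Lower is better (more balanced)."""
    avg = sum(dist_graph) / len(dist_graph)             # balanced power target
    error = 0.0
    for t, p in enumerate(dist_graph):
        p_new = p + power if start <= t < start + length else p
        error += (p_new - avg) ** 2                     # squared deviation
    return error / len(dist_graph)

def place_block(dist_graph, length, power, power_limit):
    """Greedily pick the start slot that keeps the distribution most
    balanced while respecting the power dissipation limit."""
    feasible = [
        (lmse_priority(dist_graph, s, length, power), s)
        for s in range(len(dist_graph) - length + 1)
        if all(p + power <= power_limit for p in dist_graph[s:s + length])
    ]
    if not feasible:
        return None                                     # no slot fits this block
    _, best = min(feasible)
    for t in range(best, best + length):
        dist_graph[t] += power                          # commit the placement
    return best
```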

    A combined tree growing technique for block-test scheduling under power constraints

    A tree growing technique is used here together with classical scheduling algorithms to improve test concurrency under the assigned power dissipation limits. First, the problem of unequal-length block-test scheduling under power dissipation constraints is modeled as a tree growing problem. Then a combination of list and force-directed scheduling algorithms is adapted to tackle it. The goal of this approach is to rapidly reach a test scheduling solution with a near-optimal test application time. This is initially achieved with the list approach; the power dissipation distribution of this solution is then balanced using a force-directed global priority function, which is a distribution-graph based priority. A constant additive model is employed for power dissipation analysis and estimation. Based on test scheduling examples, the efficiency of this approach is discussed and compared with other approaches.
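
    As a rough illustration of the list-scheduling half of this approach, the sketch below packs unequal-length test blocks into sessions under a power limit, assuming each test is a (name, length, power) tuple; the force-directed balancing step is omitted and could reuse a distribution-graph priority like the one sketched above. The session-based packing and longest-first ordering are simplifications, not the paper's exact procedure.

```python
# Simplified session-based list scheduling under a power limit.

def list_schedule(tests, power_limit):
    """tests: iterable of (name, length, power); returns (schedule, makespan),
    where schedule pairs each test name with its start time."""
    assert all(p <= power_limit for _, _, p in tests), "each test must fit the limit"
    pending = sorted(tests, key=lambda t: -t[1])        # longest blocks first
    time, schedule = 0, []
    while pending:
        session, used_power = [], 0.0
        for test in pending[:]:                         # greedily fill one session
            _, _, power = test
            if used_power + power <= power_limit:
                session.append(test)
                used_power += power
                pending.remove(test)
        for name, _, _ in session:
            schedule.append((name, time))               # session tests start together
        time += max(length for _, length, _ in session) # advance to session end
    return schedule, time                               # time = test application time
```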

    How Unsplittable-Flow-Covering helps Scheduling with Job-Dependent Cost Functions

    Generalizing many well-known and natural scheduling problems, scheduling with job-specific cost functions has gained a lot of attention recently. In this setting, each job incurs a cost depending on its completion time, given by a private cost function, and one seeks to schedule the jobs to minimize the total sum of these costs. The framework captures many important scheduling objectives such as weighted flow time or weighted tardiness. Still, the general case as well as the mentioned special cases are far from well understood, even on a single machine. Aiming for a better general understanding of this problem, in this paper we focus on the case of uniform job release dates on one machine, for which the state of the art is a 4-approximation algorithm. This is true even for a special case that is equivalent to the covering version of the well-studied and prominent unsplittable flow on a path problem, which is interesting in its own right. For that covering problem, we present a quasi-polynomial time (1+ε)-approximation algorithm that yields an (e+ε)-approximation for the above scheduling problem. Moreover, for the latter we devise the best possible resource augmentation result regarding speed: a polynomial time algorithm which computes a solution with optimal cost at 1+ε speedup. Finally, we present an elegant QPTAS for the special case where the cost functions of the jobs fall into at most log n many classes. This algorithm even allows the jobs to have up to log n many distinct release dates. Comment: 2 pages, 1 figure
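
    For readers less familiar with the objective, the toy sketch below simply spells out what job-dependent cost functions on one machine with a common release date mean and brute-forces a tiny instance; it is not one of the approximation algorithms from the paper, and weighted tardiness is used only as an example special case.

```python
# Tiny illustration of the objective: minimize sum_j f_j(C_j) on one machine.
from itertools import permutations

def total_cost(order, proc, cost_fns):
    t, total = 0, 0.0
    for j in order:
        t += proc[j]                 # completion time C_j of job j in this order
        total += cost_fns[j](t)      # job-specific cost incurred at C_j
    return total

def brute_force_schedule(proc, cost_fns):
    """Optimal order by exhaustive search; only feasible for very small n."""
    jobs = range(len(proc))
    return min(permutations(jobs), key=lambda o: total_cost(o, proc, cost_fns))

# Example special case: weighted tardiness with weights w and due dates d.
proc = [3, 2, 4]
w, d = [1, 5, 2], [4, 3, 9]
cost_fns = [lambda C, j=j: w[j] * max(0, C - d[j]) for j in range(3)]
best = brute_force_schedule(proc, cost_fns)
print(best, total_cost(best, proc, cost_fns))
```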

    Analysis, classification and comparison of scheduling techniques for software transactional memories

    Transactional Memory (TM) is a practical programming paradigm for developing concurrent applications. Performance is a critical factor for TM implementations, and various studies have demonstrated that specialised transaction/thread scheduling support is essential for implementing performance-effective TM systems. After a decade of research, this article reviews the wide variety of scheduling techniques proposed for Software Transactional Memories. Based on the peculiarities and differences of the adopted scheduling strategies, we propose a classification of the existing techniques and discuss the specific characteristics of each one. We also analyse the results of previous evaluation and comparison studies, and present the results of a new experimental study encompassing techniques based on different scheduling strategies. Finally, we identify potential strengths and weaknesses of the different techniques, as well as the issues that require further investigation.

    Content Distribution by Multiple Multicast Trees and Intersession Cooperation: Optimal Algorithms and Approximations

    In traditional massive content distribution with multiple sessions, the sessions form separate overlay networks and operate independently, so some sessions may suffer from insufficient resources even though other sessions have excess resources. To cope with this problem, we consider the universal swarming approach, which allows multiple sessions to cooperate with each other. We formulate the problem of finding the optimal resource allocation to maximize the sum of the session utilities and present a subgradient algorithm which converges to the optimal solution in the time-average sense. The solution involves an NP-hard subproblem of finding a minimum-cost Steiner tree. We cope with this difficulty by using a column generation method, which reduces the number of Steiner-tree computations. Furthermore, we allow the use of approximate solutions to the Steiner-tree subproblem, and show that the approximation ratio for the overall problem is no less than the reciprocal of the approximation ratio for the Steiner-tree subproblem. Simulation results demonstrate that universal swarming improves the performance of resource-poor sessions with negligible impact on resource-rich sessions. The proposed approach and algorithm are expected to be useful for infrastructure-based content distribution networks with long-lasting sessions and relatively stable network environments.
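
    The sketch below is a heavily simplified, illustrative rendering of the dual/subgradient idea behind this kind of utility-maximization formulation, assuming each session selects among a candidate set of multicast trees (in the paper, a column generation method grows this set, and the cheapest tree comes from a possibly approximate minimum-cost Steiner-tree computation). Variable names and the exact update rule are assumptions, not the authors' formulation.

```python
# Schematic projected-subgradient step on link prices for multi-tree swarming.

def subgradient_step(link_prices, sessions, capacity, step):
    """One dual update: each session routes its demand on its cheapest
    candidate tree under the current prices, then prices increase on
    overloaded links and shrink (toward zero) on underloaded ones."""
    load = {l: 0.0 for l in link_prices}
    for s in sessions:
        # Oracle: cheapest candidate tree under current prices; in the paper
        # this role is played by a (possibly approximate) min-cost Steiner tree.
        tree = min(s["trees"], key=lambda T: sum(link_prices[l] for l in T))
        for l in tree:
            load[l] += s["demand"]
    for l in link_prices:
        link_prices[l] = max(0.0, link_prices[l] + step * (load[l] - capacity[l]))
    return link_prices, load
```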

    Learning Scheduling Algorithms for Data Processing Clusters

    Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems, however, use simple generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly efficient policies automatically. Our system, Decima, uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective such as minimizing average job completion time. Off-the-shelf RL techniques, however, cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves average job completion time over hand-tuned scheduling heuristics by at least 21%, achieving up to a 2x improvement during periods of high cluster load.
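
    As a rough, generic illustration of the reinforcement-learning ingredient (not Decima's actual graph-neural-network architecture or training scheme), the sketch below scores runnable tasks with a small network and applies a plain REINFORCE update, using the negative job completion time as the return; it assumes PyTorch and hand-crafted per-task features.

```python
# Generic policy-gradient skeleton for a learned scheduling policy.
import torch
import torch.nn as nn

class TaskScorer(nn.Module):
    """Scores each runnable task from a small feature vector
    (e.g. remaining work, number of DAG children)."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, task_features):                   # (n_tasks, n_features)
        return self.net(task_features).squeeze(-1)      # one logit per task

def reinforce_update(scorer, optimizer, episodes):
    """episodes: list of (steps, episode_return), where each step is a
    (task_features, chosen_index) pair and the return could be the
    negative job completion time."""
    loss = torch.zeros(())
    for steps, ret in episodes:
        for feats, chosen in steps:
            logp = torch.log_softmax(scorer(feats), dim=0)[chosen]
            loss = loss - logp * ret                     # push up high-return choices
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```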