25 research outputs found

    SRPT optimally utilizes faster machines to minimize flow time

    Online Scheduling on Identical Machines using SRPT

    Due to its optimality on a single machine for the problem of minimizing average flow time, Shortest-Remaining-Processing-Time (SRPT) appears to be the most natural algorithm to consider for the problem of minimizing average flow time on multiple identical machines. It is known that SRPT achieves the best possible competitive ratio on multiple machines up to a constant factor. Using resource augmentation, SRPT is known to achieve total flow time at most that of the optimal solution when given machines of speed 2 - 1/m. Further, it is known that SRPT's competitive ratio improves as the speed increases; SRPT is s-speed (1/s)-competitive when s ≥ 2 - 1/m. However, a gap has persisted in our understanding of SRPT. Before this work, the performance of SRPT was not known when SRPT is given (1+ε)-speed for 0 < ε < 1 - 1/m, even though it had been believed for over a decade that SRPT is (1+ε)-speed O(1)-competitive. Resolving this question was suggested in Open Problem 2.9 of the survey "Online Scheduling" by Pruhs, Sgall, and Torng, and we answer the question in this paper. We show that SRPT is scalable on m identical machines. That is, we show SRPT is (1+ε)-speed O(1/ε)-competitive for any ε > 0. We complement this by showing that SRPT is (1+ε)-speed O(1/ε^2)-competitive for the objective of minimizing the ℓ_k-norms of flow time on m identical machines. Both of our results rely on new potential functions that capture the structure of SRPT. Our results, combined with previous work, show that SRPT is the best possible online algorithm in essentially every aspect when migration is permissible. Comment: Accepted for publication at SODA. This version fixes an error in a preliminary version.
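    Since both results concern the SRPT rule itself, a minimal discrete-time sketch may help fix ideas: at every instant, the m jobs with the least remaining work run, with free preemption and migration. The function name, the simulation harness and the step size dt below are illustrative choices of this summary, not the paper's method; the paper's analysis is via potential functions, not simulation.

    def srpt_total_flow_time(jobs, m, speed=1.0, dt=0.01):
        """Simulate preemptive SRPT on m identical machines of a given speed.

        jobs: list of (release_time, processing_time) pairs.
        Total flow time equals the integral over time of the number of
        unfinished jobs, which is what the loop accumulates.
        """
        arrivals = sorted(jobs)              # pending jobs, by release time
        remaining = []                       # remaining work of alive jobs
        t, i, total_flow = 0.0, 0, 0.0
        while i < len(arrivals) or remaining:
            while i < len(arrivals) and arrivals[i][0] <= t:
                remaining.append(arrivals[i][1])
                i += 1
            if not remaining:                # system idle: jump to next release
                t = arrivals[i][0]
                continue
            remaining.sort()                 # SRPT: least remaining work first
            for k in range(min(m, len(remaining))):
                remaining[k] -= speed * dt   # the m shortest jobs get a quantum
            total_flow += len(remaining) * dt
            remaining = [w for w in remaining if w > 1e-9]
            t += dt
        return total_flow

    On small instances, comparing the value at speed 1 + ε against a brute-force optimum is one way to observe the O(1/ε)-competitive behavior empirically.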

    Online Scheduling on Identical Machines Using SRPT

    Due to its optimality on a single machine for the problem of minimizing average flow time, Shortest-Remaining-Processing-Time (SRPT) appears to be the most natural algorithm to consider for the problem of minimizing average flow time on multiple identical machines. It is known that SRPT achieves the best possible competitive ratio on multiple machines up to a constant factor. Using resource augmentation, SRPT is known to achieve total flow time at most that of the optimal solution when given machines of speed 2 - 1/m. Further, it is known that SRPT's competitive ratio improves as the speed increases; SRPT is s-speed (1/s)-competitive when s ≥ 2 - 1/m. However, a gap has persisted in our understanding of SRPT. Before this work, we did not know the performance of SRPT when given machines of speed 1+ε for any 0 < ε < 1 - 1/m. We answer the question in this thesis. We show that SRPT is scalable on m identical machines. That is, we show SRPT is (1+ε)-speed O(1/ε)-competitive for any ε > 0. We also show that SRPT is (1+ε)-speed O(1/ε^2)-competitive for the objective of minimizing the ℓ_k norms of flow time on m identical machines. Both of our results rely on new potential functions that capture the structure of SRPT. Our results, combined with previous work, show that SRPT is the best possible online algorithm in essentially every aspect when migration is permissible.
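    For reference, the ℓ_k norm objective mentioned above is (Σ_j F_j^k)^(1/k) over the job flow times F_j: k = 1 recovers total flow time, while larger k weights the worst-off jobs more heavily. A minimal illustration (the function name is ours, not the thesis's):

    def lk_norm(flow_times, k):
        """l_k norm of flow times: (sum of F_j**k) ** (1/k).

        k = 1 gives total flow time; larger k trades average
        performance for fairness toward straggling jobs.
        """
        return sum(f ** k for f in flow_times) ** (1.0 / k)

    # lk_norm([1.0, 1.0, 8.0], 1) = 10.0, while
    # lk_norm([1.0, 1.0, 8.0], 2) ≈ 8.12 is dominated by the straggler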

    Extra unit-speed machines are almost as powerful as speedy machines for flow time scheduling

    We study online scheduling of jobs to minimize the flow time and stretch on parallel machines. We consider algorithms that are given extra resources so as to compensate for the lack of future information. Recent results show that a modest increase in machine speed can provide very competitive performance; in particular, using O(1) times faster machines, the algorithm SRPT (shortest remaining processing time) is 1-competitive for both flow time [C. A. Phillips et al., in Proceedings of STOC, ACM, New York, 1997, pp. 140-149] and stretch [W. T. Chan et al., in Proceedings of MFCS, Springer-Verlag, Berlin, 2005, pp. 236-247] and HDF (highest density first) is O(1)-competitive for weighted flow time [L. Becchetti et al., in Proceedings of RANDOM-APPROX, Springer-Verlag, Berlin, 2001, pp. 36-47]. Using extra unit-speed machines instead of faster machines to achieve competitive performance is more challenging, as a faster machine can speed up a job but extra unit-speed machines cannot. This paper gives a nontrivial relationship between the extra-speed and extra-machine analyses. It shows that competitive results via faster machines can be transformed to similar results via extra machines, hence giving the first algorithms that, using O(1) times unit-speed machines, are 1-competitive for flow time and stretch and O(1)-competitive for weighted flow time. © 2008 Society for Industrial and Applied Mathematics.
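    For reference, the two per-job objectives this abstract works with can be stated in a few lines (names ours; the definitions are the standard ones):

    def flow_time(release, completion):
        """Flow time (response time): how long a job spends in the system."""
        return completion - release

    def stretch(release, completion, processing):
        """Stretch: flow time normalized by the job's own length, so a job
        is judged relative to how long it would take in isolation."""
        return (completion - release) / processing

    Stretch weights a job's waiting by its own length, so short jobs are expected to finish quickly while long jobs may wait proportionally longer.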

    Minimizing Flow Time in the Wireless Gathering Problem

    We address the problem of efficient data gathering in a wireless network through multi-hop communication. We focus on the objective of minimizing the maximum flow time of a data packet. We prove that no polynomial time algorithm for this problem can have approximation ratio less than Ω(m^{1/3}) when m packets have to be transmitted, unless P = NP. We then use resource augmentation to assess the performance of a FIFO-like strategy. We prove that this strategy is 5-speed optimal, i.e., its cost remains within the optimal cost if we allow the algorithm to transmit data at a speed 5 times higher than that of the optimal solution we compare to.
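    The FIFO-like strategy is easy to sketch in a toy setting. The code below simulates gathering on a path with the sink at one end: each round, every node forwards its oldest buffered packet one hop toward the sink. It deliberately ignores radio interference, which is the crux of the real problem, so it illustrates only the FIFO rule and the maximum-flow-time objective; all names are ours.

    from collections import deque

    def fifo_gathering_max_flow_time(packets, num_nodes):
        """Toy FIFO gathering on the path 1..num_nodes, sink below node 1.

        packets: list of (release_round, origin_node) with origin >= 1.
        Returns the maximum flow time (delivery round - release round).
        """
        queues = {v: deque() for v in range(1, num_nodes + 1)}
        pending = sorted(packets)
        t, delivered, max_flow = 0, 0, 0
        while delivered < len(packets):
            while pending and pending[0][0] <= t:
                r, v = pending.pop(0)
                queues[v].append(r)            # buffer stores release round
            for v in range(1, num_nodes + 1):  # one forwarding per node, per round
                if not queues[v]:
                    continue
                r = queues[v].popleft()        # FIFO: oldest packet first
                if v == 1:                     # next hop is the sink
                    delivered += 1
                    max_flow = max(max_flow, t + 1 - r)
                else:
                    queues[v - 1].append(r)
            t += 1
        return max_flow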

    A Study of Time and Energy Efficient Algorithms for Parallel and Heterogeneous Computing

    This PhD project is motivated by the need to develop better and more energy-efficient computing through the use of parallelism and heterogeneous systems. Our contribution consists of both theoretical aspects and in-depth, comprehensive empirical studies that aim to provide more insight into parallel and heterogeneous computing.

    Our first problem is a theoretical one that focuses on the scheduling of a special category of jobs known as deteriorating jobs. Jobs of this kind require more effort to complete if postponed to a later time. They are intended to model several industrial processes, including steel production, fire-fighting and financial management. We study the problem in the context of parallel machine scheduling in an online setting where jobs have arbitrary release times. Our main results show that List Scheduling is (1 + b_max)-competitive and that no deterministic algorithm is better than (1 + b_max)^{1 - 1/m}-competitive, where b_max is the largest deteriorating rate. We also extend our results to online deterministic algorithms and show that no deterministic online algorithm is better than (1 + b_max)-competitive.

    Our next study concerns the scheduling of n jobs with precedence constraints on m parallel machines. We are interested in the chain precedence constraint, where each job has at most one predecessor and at most one successor. The jobs are modelled as directed acyclic graphs, where nodes represent the jobs and edges represent the precedence constraints between jobs. Each job has a strict deadline that must be met. The parallel machines are considered to be unrelated, and a communication network connects each pair of machines. Executing the jobs on the machines, as well as communicating across the network, incurs costs in the form of time and energy; these costs are given by cost matrices covering processing and communication. The goal is to construct a feasible schedule that minimizes the total energy required to execute the chain of jobs on the machines, such that all deadlines are met. We present a dynamic programming solution that leads to a pseudo-polynomial time algorithm with running time O(n m^2 d_max), where d_max is the largest deadline, and we show that the algorithm computes an optimal schedule where one exists (a sketch of such a dynamic program appears after this abstract).

    We then proceed to a similar problem that involves scheduling jobs to minimize flow time plus energy. This problem is based on a dynamic speed scaling heuristic from the literature, called AJC, which adjusts the speed of a processor based on the number of active jobs. We present a comprehensive empirical study that covers several job selection, speed selection and processor allocation heuristics, considering both single-processor and multi-processor settings. Our main goal is to investigate the viability of designing a fixed-speed counterpart for AJC that is not as computationally intensive as AJC while remaining very simple. We also evaluate the performance of this fixed-speed heuristic and compare it with that of AJC.

    Our fourth and final study involves the use of the graphics processing unit (GPU) as an accelerator for compute-intensive tasks. The GPU has become a very popular multiprocessor for heterogeneous computing, both from an economical point of view and from a performance standpoint.
    Firstly, we contribute to the development of a bioinformatics tool, called GapsMis, by implementing a heterogeneous version that uses graphics processors for acceleration. GapsMis is a tool designed for the alignment of sequences, such as protein and DNA sequences, and allows for the insertion of gaps in the alignment. We then present a case study that aims to highlight the various aspects, including benefits and challenges, involved in developing vendor-agnostic heterogeneous applications. To do this, we select four algorithms as case studies, including GapsMis and the algorithm presented in our second problem; the other two are based on Velocity-Verlet integration and the Fruchterman-Reingold force-based method for graph layout. We make use of the Open Computing Language (OpenCL) and C++ to implement the algorithms on a range of graphics processors from Advanced Micro Devices (AMD) and NVIDIA Corporation. We evaluate several factors that can affect the performance of these applications on each hardware platform. We also compare the performance of our algorithms in a multi-GPU setting and against single-core and multi-core CPU implementations. Furthermore, several metrics are defined to capture different aspects of performance, including the execution time of application kernel(s), the execution time of the application including communication, throughput, power, and energy consumption.
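    The chain-scheduling dynamic program from the second study is concrete enough to sketch; this is the sketch referenced in the abstract above. The state is "minimum energy for the chain prefix to finish on machine j by time t", which yields the stated O(n m^2 d_max) running time. All identifiers, array shapes and the exact state definition are our assumptions; the thesis's formulation may differ in detail.

    import math

    def min_energy_chain_schedule(p, e, c, w, d):
        """DP sketch: schedule a chain of n jobs on m unrelated machines,
        minimizing total energy subject to per-job deadlines.

        p[i][j], e[i][j]: time and energy to run job i on machine j.
        c[k][j], w[k][j]: time and energy to move the chain's data from
                          machine k to machine j (zero when k == j).
        d[i]: integer deadline of job i.
        Returns the minimum total energy, or None if infeasible.
        """
        n, m, d_max = len(p), len(p[0]), max(d)
        # best[j][t]: min energy with the current prefix done on machine j by time t
        best = [[math.inf] * (d_max + 1) for _ in range(m)]
        for j in range(m):
            if p[0][j] <= d[0]:
                for t in range(p[0][j], d_max + 1):
                    best[j][t] = e[0][j]
        for i in range(1, n):
            nxt = [[math.inf] * (d_max + 1) for _ in range(m)]
            for j in range(m):
                for k in range(m):                  # machine that ran job i-1
                    shift = c[k][j] + p[i][j]
                    for t in range(shift, min(d[i], d_max) + 1):
                        cand = best[k][t - shift] + w[k][j] + e[i][j]
                        if cand < nxt[j][t]:        # job i finishes exactly at t
                            nxt[j][t] = cand
                for t in range(1, d_max + 1):       # "done by t" is monotone in t
                    if nxt[j][t - 1] < nxt[j][t]:
                        nxt[j][t] = nxt[j][t - 1]
            best = nxt
        ans = min(best[j][d_max] for j in range(m))
        return None if math.isinf(ans) else ans

    The three nested loops over the machine pair and the time horizon give O(m^2 d_max) work per job, matching the O(n m^2 d_max) bound quoted above.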

    Intelligent shop scheduling for semiconductor manufacturing

    Semiconductor market sales have expanded massively to more than 200 billion dollars annually, accompanied by increased pressure on manufacturers to provide higher-quality products at lower cost to remain competitive. Scheduling of semiconductor manufacturing is one of the keys to increasing productivity; however, the complexity of manufacturing high-capacity semiconductor devices and the cost considerations mean that it is impossible to experiment within the facility. There is an immense need for effective decision-support models that characterize and analyze the manufacturing process and allow the effect of changes in the production environment to be predicted, in order to increase utilization and enhance system performance. Although many simulation models have been developed within semiconductor manufacturing, very little research on the simulation of the photolithography process has been reported, even though semiconductor manufacturers have recognized that the scheduling of photolithography is one of the most important and challenging tasks, due to the complex nature of the process. Traditional scheduling techniques and existing approaches show some benefits for solving small and medium-sized, straightforward scheduling problems. However, they have had limited success in solving complex scheduling problems with stochastic elements in an economical timeframe. This thesis presents a new methodology that combines advanced solution approaches such as simulation, artificial intelligence, system modeling and Taguchi methods to schedule a photolithography toolset. A new structured approach was developed to effectively support the building of the simulation models. A single-tool model and a complete toolset model were developed using this approach and shown to deviate less than 4% from actual production values. The use of an intelligent scheduling agent with the toolset model shows an average 15% improvement in simulated throughput time, and the agent is currently in use for scheduling the photolithography toolset in a manufacturing plant.