56 research outputs found

    An improved analysis of SRPT scheduling algorithm on the basis of functional optimization

    The competitive performance of the SRPT scheduling algorithm has long remained open beyond the fact that it is 2-competitive, where the objective is to minimize the total completion time. Chung et al. proved that the SRPT algorithm is 1.857-competitive. In this paper we improve their analysis and show 1.792-competitiveness. We note that our result is not the best known, since Sitters recently proved the algorithm is 1.250-competitive. Nevertheless, our analytical method is still well worth reporting; our analysis is based on modern functional optimization, which can scarcely be found in the literature on the analysis of algorithms. Our aim is to illustrate the potential of functional optimization with a concrete application. (C) 2012 Elsevier B.V. All rights reserved.
    Journal article: Information Processing Letters, 112(23): 911-915 (2012).
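    As a concrete illustration of the algorithm under analysis, here is a minimal sketch of preemptive SRPT on a single machine, using unit time steps and integer job data. The simulation is illustrative only; it is not the analytical method of the paper.

```python
import heapq

def srpt_total_completion_time(jobs):
    """Simulate preemptive SRPT on one machine in unit time steps.

    jobs: list of (release_time, processing_time) with integer values.
    Returns the total completion time (sum of completion times).
    """
    pending = sorted(jobs)          # jobs ordered by release time
    remaining = []                  # min-heap of remaining processing times
    t, i, total, done = 0, 0, 0, 0
    while done < len(jobs):
        # admit every job released by the current time
        while i < len(pending) and pending[i][0] <= t:
            heapq.heappush(remaining, pending[i][1])
            i += 1
        if remaining:
            r = heapq.heappop(remaining) - 1   # run shortest remaining job one step
            t += 1
            if r == 0:
                done += 1
                total += t                     # job completes at time t
            else:
                heapq.heappush(remaining, r)
        else:
            t = pending[i][0]                  # idle until the next release
    return total

# Two jobs released at time 0 with sizes 3 and 1: SRPT runs the
# short job first, giving completion times 1 and 4.
print(srpt_total_completion_time([(0, 3), (0, 1)]))  # 5
```

    Running the short job to completion first is exactly what makes SRPT strong for total completion time: delaying the large job costs less than delaying the small one.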

    Efficient algorithms for average completion time scheduling


    Speed Scaling for Energy Aware Processor Scheduling: Algorithms and Analysis

    We present theoretical algorithmic research on processor scheduling in an energy-aware environment using the mechanism of speed scaling. We have two main goals in mind. The first is the development of algorithms that allow more energy-efficient utilization of resources. The second is to further our ability to reason abstractly about energy in computing devices by developing and understanding algorithmic models of energy management. To achieve these goals, we investigate three classic process scheduling problems in the setting of a speed-scalable processor.

    Integer stretch is one of the most obvious classical scheduling objectives that has yet to be considered in the speed scaling setting. For the objective of integer stretch plus energy, we give an online scheduling algorithm that, for any input, produces a schedule whose integer stretch plus energy is competitive with that of any schedule that finishes all jobs.

    Second, we consider the problem of finding the schedule S that minimizes some quality-of-service objective Q plus B times the energy used by the processor. This schedule S is the optimal energy trade-off schedule in the sense that no schedule can have better quality of service given the current investment of energy used by S, and an additional investment of one unit of energy is insufficient to improve the quality of service by more than B. When Q is fractional weighted flow, we show that the optimal energy trade-off schedule is unique and has a simple structure, making it easy to check the optimality of a schedule. We further show that the optimal energy trade-off schedule can be computed with a natural homotopic optimization algorithm.

    Lastly, we consider the speed scaling problem where the quality-of-service objective is deadline feasibility and the power objective is temperature. In the case of batched jobs, we give a simple algorithm to compute the optimal schedule. For general instances, we give a new online algorithm and show that it has a competitive ratio that is an order of magnitude better than the best previously known for this problem.
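    The flow-plus-energy trade-off can be illustrated with a small numeric sketch. It assumes the common power model P(s) = s^alpha and a single job run at one constant speed; the closed-form optimal speed below follows from those assumptions and is not a result claimed by the thesis.

```python
def flow_plus_energy(w, s, alpha=3.0, beta=1.0):
    """Flow time plus beta times energy for one job of work w run at
    constant speed s, assuming the power function P(s) = s**alpha."""
    flow = w / s                   # time to finish the job
    energy = (s ** alpha) * flow   # power integrated over the run
    return flow + beta * energy

def best_speed(alpha=3.0, beta=1.0):
    """Minimizer of w/s + beta*w*s**(alpha-1) in s (w cancels out):
    setting the derivative to zero gives s* = (beta*(alpha-1))**(-1/alpha)."""
    return (beta * (alpha - 1)) ** (-1.0 / alpha)

w, alpha, beta = 2.0, 3.0, 1.0
s_star = best_speed(alpha, beta)
# A coarse grid search over speeds should not beat the closed-form speed.
grid_min = min(flow_plus_energy(w, 0.01 * k, alpha, beta) for k in range(1, 1000))
print(s_star, flow_plus_energy(w, s_star, alpha, beta), grid_min)
```

    Running faster finishes the job sooner but pays superlinearly in energy; the optimum balances the two, which is the intuition behind every speed scaling objective above.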

    Simulation of production scheduling in manufacturing systems

    Research into production scheduling environments has been primarily concerned with developing local priority rules for selecting jobs from a queue to be processed on a set of individual machines. Most of the research deals with scheduling problems in terms of the evaluation of priority rules with respect to given criteria. These criteria have a direct effect on production cost, such as mean makespan, flow time, job lateness, in-process inventory and machine idle time. The project under study consists of two phases. The first deals with the development of computer models for the flow-shop problem, which obtain the optimum makespan and near-optimum solutions for the criteria commonly used in production scheduling priority rules. The second develops experimental analysis, using a simulation technique, for the two main manufacturing systems:

    1. Job-shop
    2. Flexible Manufacturing System (FMS)

    The two manufacturing types were investigated under the following conditions:

    i. dynamic problem conditions;
    ii. different operation time distributions;
    iii. different shop loads;
    iv. seven replications per experiment with different streams of random numbers;
    v. the approximate steady-state point obtained for each replication.

    In the FMS, the material handling system used was automated guided vehicles (AGVs); a buffer station and a load/unload area were also used. The aim of these analyses is to assess the effectiveness of the priority rules on the selected performance criteria. The SIMAN simulation software was used for these studies.
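    The kind of priority-rule comparison these simulations perform can be sketched for a single machine, assuming two of the classic rules (FIFO and SPT) and mean flow time as the criterion; the instance below is illustrative, not taken from the study.

```python
def mean_flow_time(jobs, rule):
    """Mean flow time on one machine under a non-preemptive priority rule.

    jobs: list of (release_time, processing_time).
    rule: 'FIFO' picks jobs in arrival order; 'SPT' picks the shortest
    processing time among the jobs currently waiting.
    """
    jobs = sorted(jobs)            # order by release time
    t, i, queue, flows = 0, 0, [], []
    while i < len(jobs) or queue:
        # admit every job released by the current time
        while i < len(jobs) and jobs[i][0] <= t:
            queue.append(jobs[i])
            i += 1
        if not queue:
            t = jobs[i][0]         # idle until the next release
            continue
        if rule == 'SPT':
            queue.sort(key=lambda j: j[1])
        release, p = queue.pop(0)
        t += p
        flows.append(t - release)  # flow time = completion - release
    return sum(flows) / len(flows)

# One long job first, then a medium and a short job arrive while it runs.
jobs = [(0, 4), (1, 3), (2, 1)]
print(mean_flow_time(jobs, 'FIFO'))  # 16/3: runs the size-3 job before the size-1 job
print(mean_flow_time(jobs, 'SPT'))   # 14/3: runs the size-1 job first
```

    Even on this three-job instance the rule choice shifts mean flow time noticeably, which is why such studies evaluate rules over many replications and load levels.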

    Intelligent shop scheduling for semiconductor manufacturing

    Semiconductor market sales have expanded massively to more than 200 billion dollars annually, accompanied by increased pressure on manufacturers to provide higher-quality products at lower cost to remain competitive. Scheduling of semiconductor manufacturing is one of the keys to increasing productivity; however, the complexity of manufacturing high-capacity semiconductor devices and cost considerations mean that it is impossible to experiment within the facility. There is an immense need for effective decision support models that characterize and analyze the manufacturing process, allowing the effect of changes in the production environment to be predicted in order to increase utilization and enhance system performance. Although many simulation models have been developed within semiconductor manufacturing, very little research on the simulation of the photolithography process has been reported, even though semiconductor manufacturers have recognized that the scheduling of photolithography is one of the most important and challenging tasks due to the complex nature of the process. Traditional scheduling techniques and existing approaches show some benefits for solving small and medium-sized, straightforward scheduling problems. However, they have had limited success in solving complex scheduling problems with stochastic elements in an economic timeframe. This thesis presents a new methodology combining advanced solution approaches such as simulation, artificial intelligence, system modeling and Taguchi methods to schedule a photolithography toolset. A new structured approach was developed to effectively support building the simulation models. A single-tool model and a complete toolset model were developed using this approach and shown to have less than 4% deviation from actual production values. The use of an intelligent scheduling agent for the toolset model shows an average 15% improvement in simulated throughput time, and the agent is currently in use for scheduling the photolithography toolset in a manufacturing plant.

    Filter Scheduling Function Model In Internet Server: Resource Configuration, Performance Evaluation And Optimal Scheduling

    Abstract of "Filter Scheduling Function Model in Internet Server: Resource Configuration, Performance Evaluation and Optimal Scheduling" by Minghua Xu, August 2010. Advisor: Dr. Cheng-Zhong Xu. Major: Computer Engineering. Degree: Doctor of Philosophy.

    Internet traffic often exhibits a structure with rich high-order statistical properties like self-similarity and long-range dependency (LRD). This greatly complicates the problem of server performance modeling and optimization. On the other hand, the popularity of the Internet has created numerous client-server and peer-to-peer applications, most of which, such as online payment, purchasing, trading, searching, publishing and media streaming, are timing-sensitive and/or financially critical. The scheduling policy in Internet servers plays a central role in satisfying service level agreements (SLAs) and achieving savings and efficiency in operations. The increasing popularity of high-volume, performance-critical Internet applications is a challenge for servers that must provide individual response-time guarantees. Existing tools like queueing models in most cases hold only in mean value analysis under the assumption of simplified traffic structures. Considering the fact that most Internet applications can tolerate a small percentage of deadline misses, we define a decay function model that characterizes the relationship between the request delay constraint, deadline misses, and server capacity in a transfer-function-based filter system. The model is general for any time-series-based or measurement-based process. Within the model framework, a relationship between server capacity, scheduling policy, and service deadline is established formally. Time-invariant (non-adaptive) resource allocation policies are designed and analyzed in the time domain. For an important class of fixed-time allocation policies, optimality conditions with respect to the correlation of input traffic are established.

    The upper bounds for server capacity and service level are derived with the general Chebyshev inequality, and tightened for unimodal distributions using the Vysochanskij-Petunin inequality. For traffic with strong LRD, the decay function model is designed and analyzed in the frequency domain. Most Internet traffic has a monotonically decreasing strength-of-variation function over frequency. For this type of input traffic, it is proved that optimal schedulers must have a convex structure. Uniform resource allocation is an extreme case of this convexity and is proved to be optimal for Poisson traffic. With an integration of the convex-structure principle, an enhanced GPS policy improves the service quality significantly. Furthermore, it is shown that the presence of LRD in the input traffic shifts variation strength from higher to lower frequency bands, leading to a degradation of the service quality. The model is also extended to support servers with different deadlines, and to derive an optimal time-variant (adaptive) resource allocation policy that minimizes server load variance and server resource demands. Simulation results show that the time-variant scheduling algorithm indeed outperforms the time-invariant optimal decay function scheduler. Internet traffic has two major dynamic factors: the distribution of request sizes and the correlation of the request arrival process. When the decay function model is applied as a scheduler to a random point process, two corresponding influences on the server workload process are revealed: first, a sizing factor, the interaction between the request size distribution and the scheduling function; second, a correlation factor, the interaction between the power spectrum of the arrival process and the scheduling function. For the correlation factor, this thesis shows that a convex scheduling function minimizes its impact on the server workload.

    Under the assumption of a homogeneous scheduling function for all requests, it is shown that uniform scheduling is optimal for the sizing factor. Furthermore, by analyzing the impact of queueing delay on the scheduling function, it is shown that queueing larger tasks rather than smaller ones leads to less reduction in the sizing factor, but with the benefit of a greater decrease in the correlation factor of the server workload process. This reveals the origin of the optimality of the shortest remaining processing time (SRPT) scheduler.
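    The Chebyshev-style capacity bound mentioned above can be illustrated numerically: the empirical probability of a workload exceeding a capacity threshold never exceeds sigma^2 / (threshold - mu)^2. The workload model below (sums of exponential request sizes) is a hypothetical stand-in, not the traffic model of the thesis.

```python
import random
import statistics

def tail_vs_chebyshev(samples, threshold):
    """Empirical tail probability P(X >= threshold) alongside the
    Chebyshev bound var(X) / (threshold - mean(X))**2."""
    mu = statistics.mean(samples)
    var = statistics.pvariance(samples)
    empirical = sum(1 for x in samples if x >= threshold) / len(samples)
    bound = var / (threshold - mu) ** 2 if threshold > mu else 1.0
    return empirical, bound

random.seed(0)
# Hypothetical per-interval workload: 20 exponential request sizes per interval.
load = [sum(random.expovariate(1.0) for _ in range(20)) for _ in range(5000)]
emp, bnd = tail_vs_chebyshev(load, threshold=30.0)
print(emp, bnd)  # empirical misses stay below the distribution-free bound
```

    The bound is loose (it uses only the mean and variance), which is why the thesis tightens it for unimodal distributions with the Vysochanskij-Petunin inequality.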

    A Design of a Generic Profile-Based Queue System

    Website and server hosting accounts impose resource limits which restrict the processing power available to applications. One technique to bypass these restrictions is to split up large jobs into smaller tasks that can then be queued and processed task by task. This is a fairly common need. However, different application jobs can differ widely in nature and in their requirements, so a queue system built for one job type may not be entirely suitable for another. This situation could result in having to implement separate, additional queue systems for different needs. This research proposes a generic queue core design that can accommodate a large variety of job types by providing a basic set of features which can be easily extended to add specificity. The design includes a detailed discussion of queue implementation, scheduling, directory structure and business-tier logic. Furthermore, it features highly configurable, time-sensitive performance management that can be customized for any job type, provided as the ability to indicate desired performance profiles for any given slot of time during the week. Actual performance data based on the usage of a prototype is also included to demonstrate the significant advantage of using the queue system.
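    A profile-based queue of the kind described can be sketched as follows. The class name and profile format (a mapping from weekly time slots to batch sizes) are hypothetical names chosen for illustration, not the design from the thesis.

```python
import collections
import datetime

class ProfileQueue:
    """Minimal sketch of a generic profile-based task queue.

    A 'profile' maps weekly time slots (weekday, hour) to a batch size:
    how many queued tasks may be processed when that slot runs.
    """
    def __init__(self, profile, default_batch=1):
        self.profile = profile              # {(weekday, hour): batch_size}
        self.default_batch = default_batch
        self.tasks = collections.deque()

    def enqueue(self, task):
        self.tasks.append(task)

    def run_slot(self, when, handler):
        """Process up to the slot's batch size of tasks, FIFO order."""
        slot = (when.weekday(), when.hour)
        batch = self.profile.get(slot, self.default_batch)
        done = []
        for _ in range(min(batch, len(self.tasks))):
            task = self.tasks.popleft()
            handler(task)
            done.append(task)
        return done

# Allow bigger batches during a quiet Sunday 3am slot.
q = ProfileQueue({(6, 3): 10}, default_batch=2)
for n in range(5):
    q.enqueue(n)
busy = q.run_slot(datetime.datetime(2024, 1, 1, 9), print)   # Monday 9am: 2 tasks
quiet = q.run_slot(datetime.datetime(2024, 1, 7, 3), print)  # Sunday 3am: the rest
print(len(busy), len(quiet))  # 2 3
```

    Throttling work per time slot rather than per job type is what lets one queue core serve many applications: each deployment only supplies its own profile.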

    Scheduling for today’s computer systems: bridging theory and practice

    Scheduling is a fundamental technique for improving performance in computer systems. From web servers to routers to operating systems, how the bottleneck device is scheduled has an enormous impact on the performance of the system as a whole. Given the immense literature studying scheduling, it is easy to think that we already understand enough about scheduling. But modern computer system designs have highlighted a number of disconnects between traditional analytic results and the needs of system designers. In particular, the idealized policies, metrics, and models used by analytic researchers do not match the policies, metrics, and scenarios that appear in real systems. The goal of this thesis is to take a step towards modernizing the theory of scheduling in order to provide results that apply to today's computer systems, and thus ease the burden on system designers.

    To accomplish this goal, we provide new results that help to bridge each of the disconnects mentioned above. We move beyond the study of idealized policies by introducing a new analytic framework where the focus is on scheduling heuristics and techniques rather than individual policies. By moving beyond the study of individual policies, our results apply to the complex hybrid policies that are often used in practice. For example, our results enable designers to understand how policies that favor small job sizes are affected by the fact that real systems have only estimates of job sizes. In addition, we move beyond the study of mean response time and provide results characterizing the distribution of response time and the fairness of scheduling policies. These results allow us to understand how scheduling affects QoS guarantees and whether favoring small job sizes results in large job sizes being treated unfairly. Finally, we move beyond the simplified models traditionally used in scheduling research and provide results characterizing the effectiveness of scheduling in multiserver systems and when users are interactive. These results allow us to answer questions about how to design multiserver systems and how to choose a workload generator when evaluating new scheduling designs.