    EUROPEAN CONFERENCE ON QUEUEING THEORY 2016

    This booklet contains the proceedings of the second European Conference on Queueing Theory (ECQT), held from the 18th to the 20th of July 2016 at the engineering school ENSEEIHT in Toulouse, France. ECQT is a biennial event where scientists and practitioners in queueing theory and related areas get together to promote research, encourage interaction and exchange ideas. The spirit of the conference is to be a queueing event organized from within Europe, but open to participants from all over the world. The technical program of the 2016 edition consisted of 112 presentations organized in 29 sessions covering all trends in queueing theory, including the development of the theory, methodology advances, computational aspects and applications. Another exciting feature of ECQT 2016 was the institution of the Takács Award for an outstanding PhD thesis on "Queueing Theory and its Applications".

    Dynamic allocation in multi-dimensional inventory models

    Autonomous grid scheduling using probabilistic job runtime scheduling

    Computational Grids are evolving into a global, service-oriented architecture: a universal platform for delivering future computational services to a range of applications of varying complexity and resource requirements. The thesis focuses on developing a new scheduling model for general-purpose, utility clusters based on the concept of user-requested job completion deadlines. In such a system, a user would be able to request that each job finish by a certain deadline, and possibly at a certain monetary cost. Implementing deadline scheduling depends on the ability to predict the execution time of each queued job, and on an adaptive scheduling algorithm able to use those predictions to maximise deadline adherence. The thesis proposes novel solutions to these two problems and documents their implementation in a largely autonomous and self-managing way. The starting point of the work is an extensive analysis of a representative Grid workload, revealing consistent workflow patterns, usage cycles and correlations between the execution times of jobs and the properties commonly collected by the Grid middleware for accounting purposes. An automated approach is proposed to identify these dependencies and use them to partition the highly variable workload into subsets of more consistent and predictable behaviour. A range of time-series forecasting models, applied in this context for the first time, were used to model job execution times as a function of their historical behaviour and associated properties. Based on the resulting predictions of job runtimes, a novel scheduling algorithm is able to estimate the latest job start time necessary to meet the requested deadline and sort the queue accordingly to minimise the amount of deadline overrun. The testing of the proposed approach was done using an actual job trace collected from a production Grid facility. The best-performing execution time predictor (the auto-regressive moving average method) coupled with workload partitioning based on three simultaneous job properties returned a median absolute percentage error centroid of only 4.75%. This level of prediction accuracy enabled the proposed deadline scheduling method to reduce the average deadline overrun time ten-fold compared to the benchmark batch scheduler. Overall, the thesis demonstrates that deadline scheduling of computational jobs on the Grid is achievable using statistical forecasting of job execution times based on historical information. The proposed approach is easily implementable, substantially self-managing and better matched to the human workflow, making it well suited for implementation in the utility Grids of the future.
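
    A minimal Python sketch of the queue-ordering step described above, assuming each job already carries a user-requested deadline and a runtime forecast (e.g. from the ARMA predictor); the Job fields and function names are illustrative, not the thesis's actual implementation:

```python
# Order a job queue by latest feasible start time: each job's deadline
# minus its predicted runtime. Jobs with the least slack run first.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deadline: float           # absolute time by which the job must finish
    predicted_runtime: float  # forecast execution time, e.g. from an ARMA model

def latest_start(job: Job) -> float:
    """Latest time the job can start and still meet its deadline."""
    return job.deadline - job.predicted_runtime

def deadline_order(queue: list[Job]) -> list[Job]:
    """Sort the queue by ascending latest start time."""
    return sorted(queue, key=latest_start)

queue = [
    Job("sim-a", deadline=500.0, predicted_runtime=120.0),  # can wait until t=380
    Job("sim-b", deadline=200.0, predicted_runtime=180.0),  # must start by t=20
    Job("sim-c", deadline=300.0, predicted_runtime=60.0),   # can wait until t=240
]
print([j.name for j in deadline_order(queue)])  # ['sim-b', 'sim-c', 'sim-a']
```

    Sorting by latest feasible start time is equivalent to a least-slack-first ordering: the jobs whose deadlines leave the least room for delay move to the head of the queue.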

    Filter Scheduling Function Model in Internet Server: Resource Configuration, Performance Evaluation and Optimal Scheduling

    By Minghua Xu, August 2010. Advisor: Dr. Cheng-Zhong Xu. Major: Computer Engineering. Degree: Doctor of Philosophy.

    Internet traffic often exhibits a structure with rich high-order statistical properties such as self-similarity and long-range dependence (LRD). This greatly complicates the problem of server performance modeling and optimization. On the other hand, the popularity of the Internet has created numerous client-server and peer-to-peer applications, most of which, such as online payment, purchasing, trading, searching, publishing and media streaming, are timing-sensitive and/or financially critical. The scheduling policy in Internet servers plays a central role in satisfying service level agreements (SLAs) and achieving savings and efficiency in operations. The increasing popularity of high-volume, performance-critical Internet applications makes it challenging for servers to provide individual response-time guarantees. Existing tools like queueing models in most cases only hold in mean-value analysis under the assumption of simplified traffic structures. Considering the fact that most Internet applications can tolerate a small percentage of deadline misses, we define a decay function model that characterizes the relationship between the request delay constraint, deadline misses, and server capacity in a transfer-function-based filter system. The model is general for any time-series-based or measurement-based process. Within the model framework, a relationship between server capacity, scheduling policy, and service deadline is established formally. Time-invariant (non-adaptive) resource allocation policies are designed and analyzed in the time domain. For an important class of fixed-time allocation policies, optimality conditions with respect to the correlation of the input traffic are established. Upper bounds for server capacity and service level are derived with the general Chebyshev inequality, and extended to tighter bounds for unimodal distributions using the Vysochanskij-Petunin inequality. For traffic with strong LRD, the design and analysis of the decay function model are carried out in the frequency domain. Most Internet traffic has a monotonically decreasing strength-of-variation function over frequency. For this type of input traffic, it is proved that optimal schedulers must have a convex structure. Uniform resource allocation is an extreme case of this convexity and is proved to be optimal for Poisson traffic. With an integration of the convex-structure principle, an enhanced GPS policy improves the service quality significantly. Furthermore, it is shown that the presence of LRD in the input traffic shifts variation strength from high-frequency to lower-frequency bands, leading to a degradation of the service quality. The model is also extended to support servers with different deadlines, and to derive an optimal time-variant (adaptive) resource allocation policy that minimizes server load variance and server resource demands. Simulation results show that the time-variant scheduling algorithm indeed outperforms the time-invariant optimal decay function scheduler.
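
    The capacity and service-level bounds mentioned above can be made concrete with a small numerical sketch. Assuming only that the mean and variance of the scheduled workload are known, provisioning capacity at k standard deviations above the mean bounds the fraction of deadline misses via Chebyshev's inequality, and more tightly via the Vysochanskij-Petunin inequality when the workload distribution is unimodal (the numbers and function names below are illustrative, not results from the thesis):

```python
# Distribution-free bounds on the deadline-miss fraction when capacity
# is provisioned at c = mu + k*sigma above the workload mean.
import math

def chebyshev_miss_bound(k: float) -> float:
    """P(|W - mu| >= k*sigma) <= 1/k^2, valid for any distribution."""
    return min(1.0, 1.0 / k**2)

def vysochanskij_petunin_miss_bound(k: float) -> float:
    """P(|W - mu| >= k*sigma) <= 4/(9 k^2) for unimodal W,
    valid for k > sqrt(8/3)."""
    assert k > math.sqrt(8.0 / 3.0), "bound requires k > sqrt(8/3)"
    return 4.0 / (9.0 * k**2)

mu, sigma = 100.0, 20.0  # hypothetical workload mean and std deviation
for k in (2.0, 3.0):
    c = mu + k * sigma
    print(f"capacity {c:.0f}: miss fraction <= {chebyshev_miss_bound(k):.3f} "
          f"(Chebyshev), <= {vysochanskij_petunin_miss_bound(k):.3f} (V-P, unimodal)")
```

    For k = 3, for example, capacity mu + 3*sigma guarantees at most an 11.1% miss fraction with no distributional assumptions, tightened to about 4.9% under unimodality.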
    Internet traffic has two major dynamic factors: the distribution of request sizes and the correlation of the request arrival process. When applying the decay function model as a scheduler to a random point process, two corresponding influences on the server workload process are revealed: first, a sizing factor, the interaction between the request size distribution and the scheduling function; and second, a correlation factor, the interaction between the power spectrum of the arrival process and the scheduling function. For the correlation factor, the thesis shows that a convex scheduling function minimizes its impact on the server workload. Under the assumption of a homogeneous scheduling function for all requests, it is shown that uniform scheduling is optimal for the sizing factor. Furthermore, an analysis of the impact of queueing delay on the scheduling function shows that queueing larger tasks rather than smaller ones yields a smaller reduction in the sizing factor, but at the benefit of a greater reduction in the correlation factor of the server workload process. This reveals the origin of the optimality of the shortest remaining processing time (SRPT) scheduler.
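
    To illustrate the SRPT policy that this analysis motivates, here is a small, self-contained Python sketch (an illustrative simulation, not the thesis's code): at every decision point the server runs the job with the shortest remaining processing time, preempting whenever a newly arrived job is shorter.

```python
# Preemptive SRPT on a single server: always run the job with the
# shortest remaining processing time.
import heapq

def srpt_completion_times(jobs):
    """jobs: list of (arrival_time, size); returns a dict mapping each
    (arrival_time, size) pair (assumed distinct) to its completion time
    under preemptive SRPT scheduling."""
    jobs = sorted(jobs)        # by arrival time
    ready = []                 # heap of (remaining, arrival, size)
    t, i, done = 0.0, 0, {}
    while i < len(jobs) or ready:
        if not ready:          # server idle: jump to the next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, (jobs[i][1], jobs[i][0], jobs[i][1]))
            i += 1
        rem, arr, size = heapq.heappop(ready)
        # run until this job finishes or the next arrival, whichever is first
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, horizon - t)
        t += run
        if run < rem:
            heapq.heappush(ready, (rem - run, arr, size))  # preempted
        else:
            done[(arr, size)] = t
    return done

# Three jobs: a long one first, then two short ones that preempt it.
print(srpt_completion_times([(0.0, 10.0), (1.0, 2.0), (2.0, 1.0)]))
```

    In this example the two short jobs preempt the long one and complete at t = 3 and t = 4, while the long job finishes at t = 13. Favouring the shortest remaining work minimizes the number of jobs in the system at any time, which is why SRPT is optimal for mean response time.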