
    Probabilistic alternatives for competitive analysis

    In the last 20 years, competitive analysis has become the main tool for analyzing the quality of online algorithms. Despite this, competitive analysis has also been criticized: it sometimes cannot discriminate between algorithms that exhibit significantly different empirical behavior, or it even favors an algorithm that is worse from an empirical point of view. Therefore, there have been several approaches to circumvent these drawbacks. In this survey, we discuss probabilistic alternatives for competitive analysis.

    08071 Abstracts Collection -- Scheduling

    From 10.02. to 15.02., the Dagstuhl Seminar 08071 "Scheduling" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Effiziente Algorithmen (Efficient Algorithms)

    [no abstract available]

    On robust online scheduling algorithms

    While standard parallel machine scheduling is concerned with good assignments of jobs to machines, we aim to understand how the quality of an assignment is affected if the jobs' processing times are perturbed and therefore turn out to be longer (or shorter) than declared. We focus on online scheduling with perturbations occurring at any time, such as in railway systems when trains are late. For a variety of conditions on the severity of perturbations, we present bounds on the worst-case ratio of two makespans. For the first makespan, we let the online algorithm assign jobs to machines, based on the non-perturbed processing times. We compute the makespan by replacing each job's processing time with its perturbed version while still sticking to the computed assignment. The second makespan is that of an optimal offline solution for the perturbed processing times. The deviation of this ratio from the competitive ratio of the online algorithm tells us about the "price of perturbations". We analyze this setting for Graham's algorithm, and among other bounds show a competitive ratio of 2 for perturbations decreasing the processing time of a job arbitrarily, and a competitive ratio of less than 2.5 for perturbations doubling the processing time of a job. We complement these results by providing lower bounds for any online algorithm in this setting. Finally, we propose a risk-aware online algorithm tailored for the possible bounded increase of the processing time of one job, and we show that this algorithm can be worse than Graham's algorithm in some cases.
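
    As an illustration of how such a ratio could be measured, here is a minimal Python sketch of greedy list scheduling in the style of Graham's algorithm, evaluated under perturbed processing times. The job data, the perturbation, and the brute-force offline optimum are illustrative assumptions, not the paper's experimental setup:

```python
# Sketch: greedy list scheduling (Graham-style), then the makespan of the
# fixed assignment under perturbed processing times, compared against a
# brute-force optimal offline makespan for the perturbed times.
# Job data and the perturbation below are illustrative assumptions.
from itertools import product

def graham_assign(times, m):
    """Assign each job (in order) to the currently least-loaded machine."""
    loads = [0.0] * m
    assignment = []
    for t in times:
        i = min(range(m), key=lambda k: loads[k])
        loads[i] += t
        assignment.append(i)
    return assignment

def makespan(times, assignment, m):
    loads = [0.0] * m
    for t, i in zip(times, assignment):
        loads[i] += t
    return max(loads)

def opt_makespan(times, m):
    """Brute force over all assignments; feasible for tiny instances only."""
    return min(makespan(times, a, m) for a in product(range(m), repeat=len(times)))

declared = [3.0, 1.0, 2.0, 2.0]   # processing times the online algorithm sees
perturbed = [6.0, 1.0, 2.0, 2.0]  # e.g. the first job doubles in length
m = 2

assignment = graham_assign(declared, m)       # online decision, non-perturbed data
online = makespan(perturbed, assignment, m)   # evaluated with perturbed times
offline = opt_makespan(perturbed, m)          # optimal for perturbed times
print(f"ratio = {online / offline:.3f}")      # the "price of perturbations"
```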

    On smoothed analysis of quicksort and Hoare's find

    We provide a smoothed analysis of Hoare's find algorithm, and we revisit the smoothed analysis of quicksort. Hoare's find algorithm - often called quickselect or one-sided quicksort - is an easy-to-implement algorithm for finding the k-th smallest element of a sequence. While the worst-case number of comparisons that Hoare's find needs is Theta(n^2), the average-case number is Theta(n). We analyze what happens between these two extremes by providing a smoothed analysis. In the first perturbation model, an adversary specifies a sequence of n numbers in [0,1], and then, to each number of the sequence, we add a random number drawn independently from the interval [0,d]. We prove that Hoare's find needs Theta(n/(d+1) sqrt(n/d) + n) comparisons in expectation if the adversary may also specify the target element (even after seeing the perturbed sequence), and slightly fewer comparisons for finding the median. In the second perturbation model, each element is marked with probability p, and then a random permutation is applied to the marked elements. We prove that the expected number of comparisons to find the median is Omega((1-p) n/p log n). Finally, we provide lower bounds for the smoothed number of comparisons of quicksort and Hoare's find for the median-of-three pivot rule, where the pivot is the median of the first, middle, and last element of the sequence; this rule usually yields faster algorithms than always selecting the first element. We show that median-of-three does not yield a significant improvement over the classic rule.
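
    For readers unfamiliar with the algorithm under analysis, here is a minimal sketch of Hoare's find (quickselect) with the median-of-three pivot rule. The partition scheme and index conventions are illustrative choices, not taken from the paper:

```python
# Sketch of Hoare's find (quickselect) with the median-of-three pivot rule:
# the pivot is the median of the first, middle, and last element.

def median_of_three(a, lo, hi):
    mid = (lo + hi) // 2
    trio = sorted([(a[lo], lo), (a[mid], mid), (a[hi], hi)])
    return trio[1][1]                      # index of the median value

def quickselect(a, k):
    """Return the k-th smallest element (0-based k) of sequence a."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        p = median_of_three(a, lo, hi)
        a[p], a[hi] = a[hi], a[p]          # move pivot to the end
        pivot, store = a[hi], lo
        for i in range(lo, hi):            # Lomuto-style partition
            if a[i] < pivot:
                a[i], a[store] = a[store], a[i]
                store += 1
        a[store], a[hi] = a[hi], a[store]  # place pivot at its final position
        if k == store:
            return a[store]
        elif k < store:                    # recurse only into one side
            hi = store - 1
        else:
            lo = store + 1

print(quickselect([5, 1, 4, 2, 3], 2))     # -> 3 (the median)
```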

    Assessment of Response Time for New Multi Level Feedback Queue Scheduler

    Response time, one of the key characteristics of a scheduler, is a prominent attribute of any CPU scheduling algorithm. The proposed New Multi Level Feedback Queue (NMLFQ) scheduler is compared with the dynamic, real-time Dependent Activity Scheduling Algorithm (DASA) and Locke's Best Effort Scheduling Algorithm (LBESA). We demonstrate the beneficial results of the NMLFQ scheduler in comparison with dynamic best-effort schedulers with respect to response time. (7 pages, 5 figures)
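
    As a minimal sketch of the metric under comparison, the following simulates a simple round-robin time-sharing scheduler and reports per-job response time, here defined as the delay from arrival until first CPU service (one common definition). The job data and quantum are illustrative; this is not NMLFQ, DASA, or LBESA itself:

```python
# Round-robin simulation measuring response time (arrival -> first CPU service).
from collections import deque

def rr_response_times(jobs, quantum=2):
    """jobs: list of (arrival, burst) tuples. Returns job-id -> response time."""
    order = sorted(enumerate(jobs), key=lambda item: item[1][0])
    pending = deque(order)                # jobs not yet arrived
    ready, remaining, first_run = deque(), {}, {}
    t = 0

    def admit(now):
        while pending and pending[0][1][0] <= now:
            jid, (arr, burst) = pending.popleft()
            ready.append(jid)
            remaining[jid] = burst

    while pending or ready:
        admit(t)
        if not ready:                     # CPU idle: jump to next arrival
            t = pending[0][1][0]
            continue
        jid = ready.popleft()
        first_run.setdefault(jid, t)      # first time this job gets the CPU
        run = min(quantum, remaining[jid])
        t += run
        remaining[jid] -= run
        admit(t)                          # admit jobs that arrived meanwhile
        if remaining[jid] > 0:
            ready.append(jid)             # unfinished: back of the queue

    arrivals = {jid: arr for jid, (arr, _) in order}
    return {jid: first_run[jid] - arrivals[jid] for jid in first_run}

print(rr_response_times([(0, 5), (1, 3), (2, 8)]))   # -> {0: 0, 1: 1, 2: 2}
```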

    On Intelligent Mitigation of Process Starvation In Multilevel Feedback Queue Scheduling

    CPU time-share process schedulers for computer operating systems have existed since Corbato published his paper on the Compatible Time Sharing System in 1962 [8]. With this new type of scheduler came the need to effectively divide CPU time between N processes, where N could be 2 or more. Modern time-sharing process schedulers developed in the decades since have been designed to favor shorter, interactive processes over long-running processes, especially when incoming demand for CPU time exceeds supply and process starvation is inevitable. These schedulers, including Linux CFS, FreeBSD ULE, and the Solaris Fair Share Scheduler, are all effective at favoring interactive processes under starvation conditions. However, it is not always desirable that long-running processes be sacrificed altogether, and none of these schedulers have safeguards for them under starvation conditions. This thesis revisits and extends the research conducted in [13], in which it was demonstrated that starvation of long-running processes could be safely and effectively mitigated without adversely affecting the performance of shorter, interactive processes. The questions this thesis answers are: Can MLFQ-NS, proposed in [13], be compared to other modern process schedulers? Can MLFQ-NS be improved? To answer the first question, a scheduler must be found which is similar enough to MLFQ for a direct comparison; this requires a survey of current schedulers. To answer the second question, the research conducted in [13] must be duplicated for MLFQ-NS to ascertain the following: How much diverted time is actually used? Why does MLFQ-NS become ineffective past a certain system-load threshold, i.e., stop reallocating time to long-running processes? In this research, the original work was duplicated in simulations to validate previous results and to determine why MLFQ-NS became ineffective after incoming CPU time demand exceeded a threshold. Research was conducted to determine whether starvation mitigation in MLFQ-NS could be compared to that of process schedulers used in production, with the conclusion that the recent emphasis on priority scheduling and heuristic interactivity determination makes such a comparison impossible. Research then continued with simulations in which MLFQ-NS was given different run-time arguments than in the original simulations. Investigation of those results led to an algorithmic modification to MLFQ-NS, called MLFQ-IM, and to analysis of simulations of MLFQ-IM. Conclusions about the effectiveness of MLFQ-IM are explored. Finally, ideas for future research are offered.
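
    To make the scheduler family concrete, here is a sketch of a generic multilevel feedback queue with a periodic priority boost as an anti-starvation safeguard. This illustrates the general mechanism only; it is not the MLFQ-NS time-diversion scheme from [13], whose details the abstract does not give:

```python
# Generic multilevel feedback queue (MLFQ) with a periodic priority boost.
# Level counts, quanta, and the boost period are illustrative assumptions.
from collections import deque

NUM_LEVELS = 3
QUANTA = [2, 4, 8]         # longer quanta at lower priority (illustrative)
BOOST_PERIOD = 50          # periodically promote everything back to the top

def mlfq(jobs):
    """jobs: dict name -> CPU burst. Returns name -> completion time."""
    queues = [deque(jobs)] + [deque() for _ in range(NUM_LEVELS - 1)]
    remaining = dict(jobs)
    t, done, next_boost = 0, {}, BOOST_PERIOD

    while any(queues):
        if t >= next_boost:                        # anti-starvation boost
            for lvl in range(1, NUM_LEVELS):
                while queues[lvl]:
                    queues[0].append(queues[lvl].popleft())
            next_boost += BOOST_PERIOD
        lvl = next(i for i, q in enumerate(queues) if q)
        name = queues[lvl].popleft()
        run = min(QUANTA[lvl], remaining[name])    # run one quantum (or less)
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = t
        else:                                      # used its quantum: demote
            queues[min(lvl + 1, NUM_LEVELS - 1)].append(name)
    return done

print(mlfq({"interactive": 3, "batch": 60}))  # -> {'interactive': 5, 'batch': 63}
```

    Without the boost, the long "batch" job would stay pinned to the lowest queue whenever shorter jobs keep arriving; the periodic promotion is one standard safeguard against exactly the starvation this thesis studies.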

    The Overlooked Potential of Generalized Linear Models in Astronomy - I: Binomial Regression

    Revealing hidden patterns in astronomical data is often the path to fundamental scientific breakthroughs; meanwhile, the complexity of scientific inquiry increases as more subtle relationships are sought. Contemporary data analysis problems often elude the capabilities of classical statistical techniques, suggesting the use of cutting-edge statistical methods. In this light, astronomers have overlooked a whole family of statistical techniques for exploratory data analysis and robust regression, the so-called Generalized Linear Models (GLMs). In this paper -- the first in a series aimed at illustrating the power of these methods in astronomical applications -- we elucidate the potential of a particular class of GLMs for handling binary/binomial data, the so-called logit and probit regression techniques, from both a maximum likelihood and a Bayesian perspective. As a case in point, we present the use of these GLMs to explore the conditions of star formation activity and metal enrichment in primordial minihaloes from cosmological hydro-simulations including detailed chemistry, gas physics, and stellar feedback. We predict that for a dark mini-halo with metallicity ≈ 1.3 × 10^{-4} Z_⊙, an increase of 1.2 × 10^{-2} in the gas molecular fraction increases the probability of star formation occurrence by a factor of 75%. Finally, we highlight the use of receiver operating characteristic curves as a diagnostic for binary classifiers, and ultimately we use these to demonstrate the competitive predictive performance of GLMs against the popular technique of artificial neural networks. (20 pages, 10 figures, 3 tables; accepted for publication in Astronomy and Computing)
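
    As a sketch of the kind of binomial GLM the paper advocates, the following fits a logit model with statsmodels on synthetic stand-in data. The predictor, the "true" coefficients, and the sample size are assumptions for illustration, not the paper's simulation data:

```python
# Binomial GLM (logistic regression) on synthetic data with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1.0, n)            # stand-in predictor, e.g. gas molecular fraction
true_logit = -2.0 + 4.0 * x             # assumed "true" coefficients for the simulation
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))   # binary outcome, e.g. star formation

X = sm.add_constant(x)                  # design matrix with intercept column
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()  # logit is the Binomial default link
print(fit.params)                       # estimated intercept and slope

# Predicted event probabilities at two illustrative predictor values
X_new = sm.add_constant(np.array([0.3, 0.5]))
print(fit.predict(X_new))
```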