    Worst case and probabilistic analysis of the 2-Opt algorithm for the TSP

    2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on “real world” Euclidean instances both with respect to running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. Not even its worst-case running time on 2-dimensional Euclidean instances was known so far. We clarify this issue by presenting, for every $p \in \mathbb{N}$, a family of $L_p$ instances on which 2-Opt can take an exponential number of steps. Previous probabilistic analyses were restricted to instances in which $n$ points are placed uniformly at random in the unit square $[0,1]^2$, where it was shown that the expected number of steps is bounded by $\tilde{O}(n^{10})$ for Euclidean instances. We consider a more advanced model of probabilistic instances in which the points can be placed independently according to general distributions on $[0,1]^d$, for an arbitrary $d \ge 2$. In particular, we allow different distributions for different points. We study the expected number of local improvements in terms of the number $n$ of points and the maximal density $\phi$ of the probability distributions. We show an upper bound on the expected length of any 2-Opt improvement path of $\tilde{O}(n^{4+1/3} \cdot \phi^{8/3})$. When starting with an initial tour computed by an insertion heuristic, the upper bound on the expected number of steps improves even to $\tilde{O}(n^{4+1/3-1/d} \cdot \phi^{8/3})$. If the distances are measured according to the Manhattan metric, then the expected number of steps is bounded by $\tilde{O}(n^{4-1/d} \cdot \phi)$. In addition, we prove an upper bound of $O(\phi \sqrt{d})$ on the expected approximation factor with respect to all $L_p$ metrics. Let us remark that our probabilistic analysis covers as special cases the uniform input model with $\phi = 1$ and a smoothed analysis with Gaussian perturbations of standard deviation $\sigma$ with $\phi \sim 1/\sigma^d$.
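
    To make the local search concrete, here is a minimal Python sketch of the 2-Opt heuristic analyzed above: it repeatedly replaces two tour edges by two shorter ones until no improving move remains. The function names and the use of the Euclidean metric (via math.dist) are illustrative choices, not taken from the paper.

    ```python
    import math

    def tour_length(points, tour):
        """Total Euclidean length of a closed tour (a list of point indices)."""
        return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def two_opt(points, tour):
        """Apply improving 2-Opt moves until a local optimum is reached.

        A 2-Opt move removes edges (tour[i], tour[i+1]) and (tour[j], tour[j+1])
        and reconnects the tour by reversing the segment between them.
        """
        improved = True
        while improved:
            improved = False
            n = len(tour)
            for i in range(n - 1):
                # Skip j values that would pick two edges sharing an endpoint.
                for j in range(i + 2, n - (i == 0)):
                    a, b = points[tour[i]], points[tour[i + 1]]
                    c, d = points[tour[j]], points[tour[(j + 1) % n]]
                    # Reversing tour[i+1..j] swaps edges (a,b),(c,d) for (a,c),(b,d).
                    if math.dist(a, c) + math.dist(b, d) < math.dist(a, b) + math.dist(c, d):
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        return tour
    ```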

    Fast Algorithm for Partial Covers in Words

    A factor $u$ of a word $w$ is a cover of $w$ if every position in $w$ lies within some occurrence of $u$ in $w$. A word $w$ covered by $u$ thus generalizes the idea of a repetition, that is, a word composed of exact concatenations of $u$. In this article we introduce a new notion of $\alpha$-partial cover, which can be viewed as a relaxed variant of cover, that is, a factor covering at least $\alpha$ positions in $w$. We develop a data structure of $O(n)$ size (where $n = |w|$) that can be constructed in $O(n \log n)$ time, which we apply to compute all shortest $\alpha$-partial covers for a given $\alpha$. We also employ it for an $O(n \log n)$-time algorithm computing a shortest $\alpha$-partial cover for each $\alpha = 1, 2, \ldots, n$.
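
    The definition of an $\alpha$-partial cover can be checked naively in quadratic time, which may help fix intuition; the following Python sketch (with function names of our own choosing, and without the paper's $O(n \log n)$ data structure) marks every position lying within some occurrence of a candidate factor.

    ```python
    def covered_positions(w: str, u: str) -> int:
        """Number of positions of w lying within some occurrence of u in w."""
        covered = [False] * len(w)
        start = w.find(u)
        while start != -1:
            for i in range(start, start + len(u)):
                covered[i] = True
            start = w.find(u, start + 1)  # occurrences may overlap
        return sum(covered)

    def is_alpha_partial_cover(w: str, u: str, alpha: int) -> bool:
        """u is an alpha-partial cover of w if it covers >= alpha positions."""
        return covered_positions(w, u) >= alpha

    # "aba" occurs at positions 0 and 2 of "ababa", covering all 5 positions,
    # so it is a 5-partial cover and hence a (full) cover of "ababa".
    assert covered_positions("ababa", "aba") == 5
    ```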

    Simple optimality proofs for Least Recently Used in the presence of locality of reference

    It is well known that competitive analysis yields results that do not reflect the observed performance of online paging algorithms. Many deterministic paging algorithms achieve the same competitive ratio, ranging from inefficient strategies such as flush-when-full to the well-performing least-recently-used (LRU). In this paper, we study this fundamental online problem from the viewpoint of stochastic dominance. We give simple proofs that when sequences are drawn from distributions modelling locality of reference, LRU stochastically dominates any other online paging algorithm. As a byproduct, we obtain simple proofs of some earlier results.
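
    For readers unfamiliar with the policy under study, here is a minimal Python sketch of LRU paging: on a fault with a full cache, the least recently requested page is evicted. The fault-counting framing is our own illustration, not the paper's stochastic dominance machinery.

    ```python
    from collections import OrderedDict

    def lru_faults(requests, k):
        """Serve a request sequence with a cache of k pages under LRU;
        return the number of page faults."""
        cache = OrderedDict()  # keys ordered from least to most recently used
        faults = 0
        for page in requests:
            if page in cache:
                cache.move_to_end(page)        # page becomes most recently used
            else:
                faults += 1
                if len(cache) == k:
                    cache.popitem(last=False)  # evict least recently used page
                cache[page] = True
        return faults

    # A sequence with strong locality of reference incurs few faults under LRU.
    print(lru_faults([1, 2, 1, 1, 2, 2, 3, 3, 3], k=2))  # -> 3
    ```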

    Online Bin Covering: Expectations vs. Guarantees

    Bin covering is a dual version of classic bin packing. Thus, the goal is to cover as many bins as possible, where covering a bin means packing items of total size at least one in the bin. For online bin covering, competitive analysis fails to distinguish between most algorithms of interest; all "reasonable" algorithms have a competitive ratio of 1/2. Thus, in order to get a better understanding of the combinatorial difficulties in solving this problem, we turn to other performance measures, namely relative worst order, random order, and max/max analysis, as well as analyzing input with restricted or uniformly distributed item sizes. In this way, our study also supplements the ongoing systematic studies of the relative strengths of various performance measures. Two classic algorithms for online bin packing that have natural dual versions are Harmonic and Next-Fit. Even though the algorithms are quite different in nature, the dual versions are not separated by competitive analysis. We make the case that when guarantees are needed, even under restricted input sequences, dual Harmonic is preferable. In addition, we establish quite robust theoretical results showing that if items come from a uniform distribution or even if just the ordering of items is uniformly random, then dual Next-Fit is the right choice.
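
    As an illustration of one of the two algorithms compared, here is a minimal Python sketch of dual Next-Fit as described above: items are packed into the current bin until it is covered, then a fresh bin is opened. The function name is ours.

    ```python
    def dual_next_fit(items):
        """Online dual Next-Fit for bin covering: keep filling the current
        bin until its total size reaches 1 (the bin is covered), then open
        a new bin. Returns the number of covered bins."""
        covered = 0
        current = 0.0
        for size in items:      # items arrive online, sizes in (0, 1]
            current += size
            if current >= 1.0:  # bin covered; later items go to a new bin
                covered += 1
                current = 0.0
        return covered

    print(dual_next_fit([0.6, 0.5, 0.4, 0.7, 0.3]))  # -> 2
    ```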