
    Worst case and probabilistic analysis of the 2-Opt algorithm for the TSP

    2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on “real world” Euclidean instances, both with respect to running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. Not even its worst-case running time on 2-dimensional Euclidean instances was known so far. We clarify this issue by presenting, for every p ∈ ℕ, a family of L_p instances on which 2-Opt can take an exponential number of steps. Previous probabilistic analyses were restricted to instances in which n points are placed uniformly at random in the unit square [0,1]^2, where it was shown that the expected number of steps is bounded by Õ(n^{10}) for Euclidean instances. We consider a more general model of probabilistic instances in which the points can be placed independently according to general distributions on [0,1]^d, for an arbitrary d ≥ 2. In particular, we allow different distributions for different points. We study the expected number of local improvements in terms of the number n of points and the maximal density φ of the probability distributions. We show an upper bound of Õ(n^{4+1/3} · φ^{8/3}) on the expected length of any 2-Opt improvement path. When starting with an initial tour computed by an insertion heuristic, the upper bound on the expected number of steps improves even to Õ(n^{4+1/3−1/d} · φ^{8/3}). If the distances are measured according to the Manhattan metric, then the expected number of steps is bounded by Õ(n^{4−1/d} · φ). In addition, we prove an upper bound of O(φ^{1/d}) on the expected approximation factor with respect to all L_p metrics. Let us remark that our probabilistic analysis covers as special cases the uniform input model with φ = 1 and a smoothed analysis with Gaussian perturbations of standard deviation σ, with φ ∼ 1/σ^d.
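
    As a concrete reference point, here is a minimal sketch of the 2-Opt local search analyzed above: starting from some tour, it repeatedly replaces two edges with two shorter ones (a segment reversal) until no improving step remains. The Euclidean metric and the first-improvement pivot rule are illustrative choices, not a setup prescribed by the paper.

    ```python
    import math

    def tour_length(points, tour):
        """Total length of the closed tour (a list of indices into points)."""
        return sum(
            math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
            for i in range(len(tour))
        )

    def two_opt(points, tour):
        """First-improvement 2-Opt: replace tour edges (a,b), (c,d) by
        (a,c), (b,d) whenever that shortens the tour (i.e. reverse a
        segment), until a local optimum is reached."""
        n = len(tour)
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                # Skip j = n-1 when i = 0: those two edges are adjacent.
                for j in range(i + 2, n if i > 0 else n - 1):
                    a, b = tour[i], tour[i + 1]
                    c, d = tour[j], tour[(j + 1) % n]
                    delta = (math.dist(points[a], points[c])
                             + math.dist(points[b], points[d])
                             - math.dist(points[a], points[b])
                             - math.dist(points[c], points[d]))
                    if delta < -1e-12:  # strictly improving step
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        return tour
    ```

    The number of iterations of the outer loop is exactly the length of the improvement path whose expectation the paper bounds.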

    Optimal Online Edge Coloring of Planar Graphs with Advice

    Using the framework of advice complexity, we study the amount of knowledge about the future that an online algorithm needs to color the edges of a graph optimally, i.e., using as few colors as possible. For graphs of maximum degree Δ, it follows from Vizing's Theorem that O(m log Δ) bits of advice suffice to achieve optimality, where m is the number of edges. We show that for graphs of bounded degeneracy (a class of graphs including, e.g., trees and planar graphs), only O(m) bits of advice are needed to compute an optimal solution online, independently of how large Δ is. On the other hand, we show that Ω(m) bits of advice are necessary just to achieve a competitive ratio better than that of the best deterministic online algorithm without advice. Furthermore, we consider algorithms which use a fixed number of advice bits per edge (our algorithm for graphs of bounded degeneracy belongs to this class of algorithms). We show that for bipartite graphs, any such algorithm must use at least Ω(m log Δ) bits of advice to achieve optimality.
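
    For contrast with the advice-based algorithms studied above, the following is a sketch of the standard greedy online edge colorer that uses no advice at all: each arriving edge receives the smallest color free at both endpoints, which needs at most 2Δ − 1 colors since each endpoint blocks at most Δ − 1 of them. This baseline is included only for illustration; it is not the paper's advice algorithm.

    ```python
    from collections import defaultdict

    def greedy_online_edge_coloring(edges):
        """Color edges in arrival order with the smallest color that is
        free at both endpoints. Each endpoint blocks at most Delta - 1
        colors, so at most 2*Delta - 1 colors are ever used."""
        used = defaultdict(set)  # vertex -> colors already on its edges
        coloring = {}
        for u, v in edges:
            color = 0
            while color in used[u] or color in used[v]:
                color += 1
            coloring[(u, v)] = color
            used[u].add(color)
            used[v].add(color)
        return coloring
    ```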

    Fast Algorithm for Partial Covers in Words

    A factor u of a word w is a cover of w if every position in w lies within some occurrence of u in w. A word w covered by u thus generalizes the idea of a repetition, that is, a word composed of exact concatenations of u. In this article we introduce the new notion of an α-partial cover, which can be viewed as a relaxed variant of a cover, that is, a factor covering at least α positions in w. We develop a data structure of O(n) size (where n = |w|) that can be constructed in O(n log n) time, which we apply to compute all shortest α-partial covers for a given α. We also employ it for an O(n log n)-time algorithm computing a shortest α-partial cover for each α = 1, 2, …, n.
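
    To make the central definition concrete, here is a naive sketch that counts how many positions of w are covered by occurrences of a candidate factor u; u is an α-partial cover exactly when this count is at least α. The straightforward scan is for illustration only; the paper's data structure avoids recomputing this per candidate.

    ```python
    def covered_positions(w, u):
        """Count positions of w that lie inside some occurrence of u.
        The factor u is an alpha-partial cover of w iff the result is
        at least alpha. Occurrences are merged left to right so no
        position is counted twice."""
        m = len(u)
        covered = 0
        right = -1  # rightmost position counted so far
        start = w.find(u)
        while start != -1:
            lo = max(start, right + 1)
            hi = start + m - 1
            if hi >= lo:
                covered += hi - lo + 1
                right = hi
            start = w.find(u, start + 1)
        return covered
    ```

    For instance, covered_positions("abaababa", "aba") returns 8 = |w|, so "aba" is a full cover of that word; for any α ≤ 8 it is also an α-partial cover.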

    Online Bin Covering: Expectations vs. Guarantees

    Bin covering is a dual version of classic bin packing. Thus, the goal is to cover as many bins as possible, where covering a bin means packing items of total size at least one in the bin. For online bin covering, competitive analysis fails to distinguish between most algorithms of interest; all "reasonable" algorithms have a competitive ratio of 1/2. Thus, in order to get a better understanding of the combinatorial difficulties in solving this problem, we turn to other performance measures, namely relative worst order, random order, and max/max analysis, as well as analyzing input with restricted or uniformly distributed item sizes. In this way, our study also supplements the ongoing systematic studies of the relative strengths of various performance measures. Two classic algorithms for online bin packing that have natural dual versions are Harmonic and Next-Fit. Even though the algorithms are quite different in nature, the dual versions are not separated by competitive analysis. We make the case that when guarantees are needed, even under restricted input sequences, dual Harmonic is preferable. In addition, we establish quite robust theoretical results showing that if items come from a uniform distribution or even if just the ordering of items is uniformly random, then dual Next-Fit is the right choice.
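
    Dual Next-Fit, one of the two algorithms compared above, is simple enough to state in a few lines. This is a minimal sketch assuming item sizes arrive online as numbers in (0, 1].

    ```python
    def dual_next_fit(items):
        """Online bin covering: pack each arriving item into the current
        bin; once the bin's total size reaches 1 it counts as covered
        and a fresh bin is opened. Returns the number of covered bins."""
        covered, load = 0, 0.0
        for size in items:  # sizes assumed to lie in (0, 1]
            load += size
            if load >= 1.0:
                covered += 1
                load = 0.0
        return covered
    ```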

    Probabilistic alternatives for competitive analysis

    In the last 20 years, competitive analysis has become the main tool for analyzing the quality of online algorithms. Despite this, competitive analysis has also been criticized: it sometimes cannot discriminate between algorithms that exhibit significantly different empirical behavior, or it even favors an algorithm that is worse from an empirical point of view. Therefore, there have been several approaches to circumvent these drawbacks. In this survey, we discuss probabilistic alternatives for competitive analysis.

    Dynamic Balanced Graph Partitioning

    This paper initiates the study of the classic balanced graph partitioning problem from an online perspective: given an arbitrary sequence of pairwise communication requests between n nodes, with patterns that may change over time, the objective is to service these requests efficiently by partitioning the nodes into ℓ clusters, each of size k, such that frequently communicating nodes are located in the same cluster. The partitioning can be updated dynamically by migrating nodes between clusters. The goal is to devise online algorithms which jointly minimize the amount of inter-cluster communication and the migration cost. The problem features interesting connections to other well-known online problems. For example, scenarios with ℓ = 2 generalize online paging, and scenarios with k = 2 constitute a novel online variant of maximum matching. We present several lower bounds and algorithms for settings both with and without cluster-size augmentation. In particular, we prove that any deterministic online algorithm has a competitive ratio of at least k, even with significant augmentation. Our main algorithmic contributions are an O(k log k)-competitive deterministic algorithm for the general setting with constant augmentation, and a constant-competitive algorithm for the maximum matching variant.
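
    The following sketch is only meant to pin down the online cost model described above, under the common convention that a request between nodes in different clusters costs 1 and each node migration costs α. Both the convention and the callback interface are assumptions for illustration, not the paper's algorithms.

    ```python
    def serve_requests(requests, assign, algorithm, alpha=1):
        """Account the cost of an online partitioning algorithm:
        `assign` maps each node to its current cluster, a request
        (u, v) between different clusters costs 1, and each node
        migration costs `alpha`. The callback algorithm(assign, u, v)
        returns a list of (node, new_cluster) migrations; respecting
        the cluster-size bound k is the callback's responsibility."""
        total = 0
        for u, v in requests:
            if assign[u] != assign[v]:
                total += 1  # inter-cluster communication cost
            for node, cluster in algorithm(assign, u, v):
                assign[node] = cluster  # apply migration
                total += alpha
        return total
    ```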