
    Optimal Online Edge Coloring of Planar Graphs with Advice

    Using the framework of advice complexity, we study the amount of knowledge about the future that an online algorithm needs to color the edges of a graph optimally, i.e., using as few colors as possible. For graphs of maximum degree $\Delta$, it follows from Vizing's Theorem that $O(m \log \Delta)$ bits of advice suffice to achieve optimality, where $m$ is the number of edges. We show that for graphs of bounded degeneracy (a class of graphs including, e.g., trees and planar graphs), only $O(m)$ bits of advice are needed to compute an optimal solution online, independently of how large $\Delta$ is. On the other hand, we show that $\Omega(m)$ bits of advice are necessary just to achieve a competitive ratio better than that of the best deterministic online algorithm without advice. Furthermore, we consider algorithms which use a fixed number of advice bits per edge (our algorithm for graphs of bounded degeneracy belongs to this class of algorithms). We show that for bipartite graphs, any such algorithm must use at least $\Omega(m \log \Delta)$ bits of advice to achieve optimality.
    Comment: CIAC 201
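    The scheme behind the Vizing-based upper bound is easy to picture: an offline oracle that already knows an optimal coloring with $C$ colors writes down each edge's color using $\lceil \log_2 C \rceil$ bits, in arrival order, and the online algorithm simply replays the advice. The Python sketch below illustrates this encoding (names are invented for illustration; this is not the paper's bounded-degeneracy algorithm):

    ```python
    from math import ceil, log2

    def encode_advice(colors_in_arrival_order, num_colors):
        """Oracle side: pack each edge's optimal color into
        ceil(log2(num_colors)) bits, in the order edges will arrive."""
        bits_per_edge = max(1, ceil(log2(num_colors)))
        return [format(c, f"0{bits_per_edge}b") for c in colors_in_arrival_order]

    def color_online(edge_stream, advice):
        """Algorithm side: on each edge arrival, read the next advice
        word and use it verbatim as the edge's color."""
        return {edge: int(word, 2) for edge, word in zip(edge_stream, advice)}

    # Toy run: the path a-b-c needs 2 colors, i.e., 1 advice bit per edge.
    edges = [("a", "b"), ("b", "c")]
    advice = encode_advice([0, 1], num_colors=2)
    print(color_online(edges, advice))  # {('a', 'b'): 0, ('b', 'c'): 1}
    ```

    With $C = \Delta + 1$ colors guaranteed by Vizing's Theorem, this uses $m \lceil \log_2(\Delta+1) \rceil = O(m \log \Delta)$ advice bits in total.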

    The Sampling-and-Learning Framework: A Statistical View of Evolutionary Algorithms

    Evolutionary algorithms (EAs), a large class of general-purpose optimization algorithms inspired by natural phenomena, are widely used in industrial optimization and often show excellent performance. This paper presents an attempt to reveal their general power from a statistical view. By summarizing a large range of EAs into the sampling-and-learning framework, we show that the framework directly admits a general analysis of the probable-absolute-approximate (PAA) query complexity. We particularly focus on the framework with the learning subroutine restricted to binary classification, which results in the sampling-and-classification (SAC) algorithms. With the help of learning theory, we obtain a general upper bound on the PAA query complexity of SAC algorithms. We further compare SAC algorithms with uniform search in different situations. Under the error-target independence condition, we show that SAC algorithms can achieve polynomial speedup over uniform search, but not super-polynomial speedup. Under the one-side-error condition, we show that super-polynomial speedup can be achieved. This work only touches the surface of the framework; its power under other conditions is still open.
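    As a rough illustration of the sampling-and-classification pattern (a sketch of the general idea, not the paper's formal model), the code below uses the simplest conceivable binary classifier, an axis-aligned bounding box fitted around the best samples, and draws the next round of samples from the region it classifies as positive; all names are invented:

    ```python
    import random

    def sac_minimize(f, dim, rounds=20, n_samples=100, keep=0.2):
        """Sampling-and-classification sketch: in each round, label the
        best `keep` fraction of samples as positive, "learn" their
        bounding box, and sample the next round from that box."""
        box = [(0.0, 1.0)] * dim          # current positive region
        best = None
        for _ in range(rounds):
            pts = [[random.uniform(lo, hi) for lo, hi in box]
                   for _ in range(n_samples)]
            pts.sort(key=f)
            if best is None or f(pts[0]) < f(best):
                best = pts[0]
            good = pts[:max(1, int(keep * n_samples))]   # positive class
            box = [(min(p[i] for p in good), max(p[i] for p in good))
                   for i in range(dim)]
        return best

    # Toy run: minimize the sphere function on [0, 1]^2.
    print(sac_minimize(lambda x: sum(v * v for v in x), dim=2))
    ```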

    Fast Algorithm for Partial Covers in Words

    A factor $u$ of a word $w$ is a cover of $w$ if every position in $w$ lies within some occurrence of $u$ in $w$. A word $w$ covered by $u$ thus generalizes the idea of a repetition, that is, a word composed of exact concatenations of $u$. In this article we introduce the new notion of an $\alpha$-partial cover, which can be viewed as a relaxed variant of a cover, that is, a factor covering at least $\alpha$ positions in $w$. We develop a data structure of $O(n)$ size (where $n = |w|$) that can be constructed in $O(n \log n)$ time, which we apply to compute all shortest $\alpha$-partial covers for a given $\alpha$. We also employ it in an $O(n \log n)$-time algorithm computing a shortest $\alpha$-partial cover for each $\alpha = 1, 2, \ldots, n$.
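    For intuition, how many positions a single candidate factor covers can be checked directly; the quadratic sketch below is this naive check, not the paper's $O(n \log n)$ data structure:

    ```python
    def covered_positions(w, u):
        """Count positions of w lying inside some occurrence of u:
        a direct O(|w| * |u|) check."""
        n, m = len(w), len(u)
        covered = [False] * n
        for i in range(n - m + 1):
            if w[i:i + m] == u:
                for j in range(i, i + m):
                    covered[j] = True
        return sum(covered)

    # "aba" covers all 8 positions of "abaababa", so it is a full cover
    # (and an alpha-partial cover for every alpha <= 8).
    print(covered_positions("abaababa", "aba"))  # 8
    ```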

    Partial match queries in relaxed K-dt trees

    The study of partial match queries on random hierarchical multidimensional data structures dates back to Ph. Flajolet and C. Puech’s 1986 seminal paper on partial match retrieval. It was not until recently that fixed (as opposed to random) partial match queries were studied for random relaxed K-d trees, random standard K-d trees, and random 2-dimensional quad trees. Based on those results, it seemed natural to classify the general form of the cost of fixed partial match queries into two families: that of either random hierarchical structures or perfectly balanced structures, as conjectured by Duch, Lau and Martínez (On the Cost of Fixed Partial Match Queries in K-d Trees, Algorithmica, 75(4):684–723, 2016). Here we show that this conjecture does not hold, by introducing relaxed K-dt trees and providing the average-case analysis of random partial match queries on them, as well as some advances on the average-case analysis of fixed partial match queries. In fact, this cost for fixed partial match queries does not follow the conjectured forms.
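    For readers unfamiliar with the setting, the sketch below builds a plain relaxed K-d tree (each node's discriminant is chosen uniformly at random) and answers a partial match query by descending both subtrees whenever the discriminant coordinate is unspecified; the K-dt refinement introduced in the paper is omitted:

    ```python
    import random

    class Node:
        def __init__(self, point, disc):
            self.point, self.disc = point, disc   # disc = splitting coordinate
            self.left = self.right = None

    def insert(root, point, K):
        """Relaxed K-d tree insertion: the discriminant is random rather
        than cycling through the coordinates by level."""
        if root is None:
            return Node(point, random.randrange(K))
        d = root.disc
        if point[d] < root.point[d]:
            root.left = insert(root.left, point, K)
        else:
            root.right = insert(root.right, point, K)
        return root

    def partial_match(root, query, out):
        """query holds a value for each specified coordinate and None
        elsewhere; an unspecified discriminant forces both subtrees."""
        if root is None:
            return
        if all(q is None or q == p for q, p in zip(query, root.point)):
            out.append(root.point)
        q = query[root.disc]
        if q is None or q < root.point[root.disc]:
            partial_match(root.left, query, out)
        if q is None or q >= root.point[root.disc]:
            partial_match(root.right, query, out)

    # Toy run: report all points whose first coordinate equals 0.5.
    root = None
    for p in [(0.5, 0.2), (0.1, 0.9), (0.5, 0.7)]:
        root = insert(root, p, K=2)
    hits = []
    partial_match(root, (0.5, None), hits)
    print(hits)  # [(0.5, 0.2), (0.5, 0.7)], in some order
    ```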

    Online Bin Covering: Expectations vs. Guarantees

    Bin covering is a dual version of classic bin packing: the goal is to cover as many bins as possible, where covering a bin means packing items of total size at least one in the bin. For online bin covering, competitive analysis fails to distinguish between most algorithms of interest; all "reasonable" algorithms have a competitive ratio of 1/2. Thus, in order to get a better understanding of the combinatorial difficulties in solving this problem, we turn to other performance measures, namely relative worst order, random order, and max/max analysis, as well as analyzing input with restricted or uniformly distributed item sizes. In this way, our study also supplements the ongoing systematic studies of the relative strengths of various performance measures. Two classic algorithms for online bin packing that have natural dual versions are Harmonic and Next-Fit. Even though the algorithms are quite different in nature, the dual versions are not separated by competitive analysis. We make the case that when guarantees are needed, even under restricted input sequences, dual Harmonic is preferable. In addition, we establish quite robust theoretical results showing that if items come from a uniform distribution, or even if just the ordering of items is uniformly random, then dual Next-Fit is the right choice.
    Comment: IMADA-preprint-c
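    Dual Next-Fit itself fits in a few lines; a minimal sketch, assuming item sizes in $(0, 1]$:

    ```python
    def dual_next_fit(items):
        """Online bin covering, dual Next-Fit: keep adding items to the
        current bin; once its total reaches 1, the bin is covered and a
        fresh bin is started."""
        covered, current = 0, 0.0
        for size in items:
            current += size
            if current >= 1.0:
                covered += 1
                current = 0.0
        return covered

    # Toy run: four items of size 0.6 cover two bins.
    print(dual_next_fit([0.6, 0.6, 0.6, 0.6]))  # 2
    ```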

    A $2k$-Vertex Kernel for Maximum Internal Spanning Tree

    We consider the parameterized version of the maximum internal spanning tree problem, which, given an $n$-vertex graph and a parameter $k$, asks for a spanning tree with at least $k$ internal vertices. Fomin et al. [J. Comput. System Sci., 79:1-6] crafted a very ingenious reduction rule and showed that a simple application of this rule is sufficient to yield a $3k$-vertex kernel. Here we propose a novel way to use the same reduction rule, resulting in an improved $2k$-vertex kernel. Our algorithm first applies a greedy procedure consisting of a sequence of local exchange operations, which ends with a locally optimal spanning tree, and then uses this special tree to find a reducible structure. As a corollary of our kernel, we obtain a deterministic algorithm for the problem running in time $4^k \cdot n^{O(1)}$.
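    The flavor of the local exchange phase can be conveyed by a simplified rule (written for illustration; not the paper's actual procedure): reattach a tree leaf $u$ to a leaf neighbor $v$ whenever $u$'s current tree neighbor keeps degree at least 2, so $v$ becomes internal and no new leaf appears. Each exchange strictly increases the number of internal vertices, so the loop terminates:

    ```python
    def internal_count(tree):
        return sum(1 for nbrs in tree.values() if len(nbrs) > 1)

    def improve_tree(graph, tree):
        """graph, tree: symmetric adjacency dicts of sets; tree is a
        spanning tree of graph. Apply the simplified exchange rule
        until no exchange applies."""
        changed = True
        while changed:
            changed = False
            for u in list(tree):
                if len(tree[u]) != 1:
                    continue                      # u must be a tree leaf
                (p,) = tree[u]                    # u's unique tree neighbor
                for v in graph[u]:
                    # v must be a tree leaf, and p must stay internal.
                    if v != p and len(tree[v]) == 1 and len(tree[p]) >= 3:
                        tree[u].remove(p)
                        tree[p].remove(u)
                        tree[u].add(v)
                        tree[v].add(u)
                        changed = True
                        break
        return tree

    # A star tree centered at c has 1 internal vertex; reattaching leaf a
    # to leaf b (an edge of the graph) yields 2 internal vertices.
    graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    tree  = {"a": {"c"}, "b": {"c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    print(internal_count(improve_tree(graph, tree)))  # 2
    ```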

    Simple optimality proofs for Least Recently Used in the presence of locality of reference

    It is well known that competitive analysis yields results that do not reflect the observed performance of online paging algorithms. Many deterministic paging algorithms achieve the same competitive ratio, ranging from inefficient strategies such as flush-when-full to the well-performing least-recently-used (LRU). In this paper, we study this fundamental online problem from the viewpoint of stochastic dominance. We give simple proofs that when sequences are drawn from distributions modelling locality of reference, LRU stochastically dominates any other online paging algorithm. As a byproduct, we obtain simple proofs of some earlier results.
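    For reference, LRU itself is only a few lines; the stochastic dominance result compares the distribution of fault counts of simulations like this one across algorithms:

    ```python
    from collections import OrderedDict

    def lru_faults(requests, k):
        """Simulate LRU paging with a cache of size k; return the
        number of page faults."""
        cache, faults = OrderedDict(), 0
        for page in requests:
            if page in cache:
                cache.move_to_end(page)        # mark most recently used
            else:
                faults += 1
                if len(cache) == k:
                    cache.popitem(last=False)  # evict least recently used
                cache[page] = True
        return faults

    # A request sequence with locality of reference, cache size 2.
    print(lru_faults("aababcbcac", k=2))  # 4
    ```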