4,224 research outputs found
Optimal Online Edge Coloring of Planar Graphs with Advice
Using the framework of advice complexity, we study the amount of knowledge
about the future that an online algorithm needs to color the edges of a graph
optimally, i.e., using as few colors as possible. For graphs of maximum degree
$\Delta$, it follows from Vizing's Theorem that $O(m \log \Delta)$ bits of
advice suffice to achieve optimality, where $m$ is the number of edges. We show
that for graphs of bounded degeneracy (a class of graphs including e.g. trees
and planar graphs), only $O(m)$ bits of advice are needed to compute an optimal
solution online, independently of how large $\Delta$ is. On the other hand, we
show that $\Omega(m)$ bits of advice are necessary just to achieve a
competitive ratio better than that of the best deterministic online algorithm
without advice. Furthermore, we consider algorithms which use a fixed number of
advice bits per edge (our algorithm for graphs of bounded degeneracy belongs to
this class of algorithms). We show that for bipartite graphs, any such
algorithm must use at least $\lceil \log_2 3 \rceil = 2$ bits of advice per
edge to achieve optimality.
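
For context, the $O(m \log \Delta)$ upper bound follows from a standard counting argument; the sketch below is reconstructed from Vizing's Theorem and is not taken verbatim from the paper.

```latex
% Sketch: advice upper bound via Vizing's Theorem.
% Vizing: every graph of maximum degree \Delta admits a proper edge
% coloring with at most \Delta + 1 colors, so an optimal coloring
% uses at most \Delta + 1 colors. An oracle can fix one optimal
% coloring in advance and reveal, for each arriving edge, the index
% of its color in \{1, \dots, \Delta + 1\}:
\[
  m \cdot \lceil \log_2 (\Delta + 1) \rceil \;=\; O(m \log \Delta)
  \quad \text{bits of advice in total.}
\]
```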
The Sampling-and-Learning Framework: A Statistical View of Evolutionary Algorithms
Evolutionary algorithms (EAs), a large class of general-purpose optimization
algorithms inspired by natural phenomena, are widely used in industrial
optimization and often show excellent performance. This paper presents an
attempt at revealing their general power from a statistical view of EAs. By
summarizing a large range of EAs into the sampling-and-learning framework, we
show that the framework directly admits a general analysis of the
probable-absolute-approximate (PAA) query complexity. We particularly focus on
the framework with the learning subroutine restricted to binary
classification, which results in the sampling-and-classification (SAC)
algorithms. With the help of learning theory, we obtain a general upper
bound on the PAA query complexity of SAC algorithms. We further compare SAC
algorithms with uniform search in different situations. Under the
error-target independence condition, we show that SAC algorithms can achieve a
polynomial speedup over uniform search, but not a super-polynomial speedup.
Under the one-side-error condition, we show that a super-polynomial speedup can
be achieved. This work only touches the surface of the framework; its power
under other conditions is still open.
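
As a concrete, deliberately crude illustration of the framework, the Python sketch below instantiates the sampling-and-classification loop with an elite-memory stand-in for the binary classifier. All names, hyper-parameters, and the sampling heuristic are illustrative assumptions, not the paper's formal model.

```python
import random

def sac_minimize(f, dim, n_iters=50, batch=100, elite_frac=0.1, explore=0.3):
    """Toy sampling-and-classification (SAC) loop.

    Each round: sample candidates, label the best elite_frac of them
    as the positive class, and bias the next batch toward that
    learned region (keeping some uniform exploration)."""
    best_x, best_y = None, float("inf")
    positive = []  # crude stand-in for the learned positive region

    def uniform():
        return [random.uniform(-5.0, 5.0) for _ in range(dim)]

    def from_positive():
        # "Sampling from the classifier": perturb a positive example.
        x = random.choice(positive)
        return [xi + random.gauss(0.0, 0.5) for xi in x]

    for _ in range(n_iters):
        pop = [from_positive() if positive and random.random() > explore
               else uniform() for _ in range(batch)]
        pop.sort(key=f)
        if f(pop[0]) < best_y:
            best_x, best_y = pop[0], f(pop[0])
        positive = pop[:max(1, int(elite_frac * batch))]  # learning step
    return best_x, best_y

# Example: minimize the sphere function in 3 dimensions.
if __name__ == "__main__":
    print(sac_minimize(lambda v: sum(t * t for t in v), dim=3))
```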
Fast Algorithm for Partial Covers in Words
A factor $u$ of a word $w$ is a cover of $w$ if every position in $w$ lies
within some occurrence of $u$ in $w$. A word $w$ covered by $u$ thus
generalizes the idea of a repetition, that is, a word composed of exact
concatenations of $u$. In this article we introduce a new notion of an
$\alpha$-partial cover, which can be viewed as a relaxed variant of a cover, that
is, a factor covering at least $\alpha$ positions in $w$. We develop a data
structure of size $O(n)$ (where $n = |w|$) that can be constructed in
$O(n \log n)$ time, which we apply to compute all shortest $\alpha$-partial
covers for a given $\alpha$. We also employ it for an $O(n \log n)$-time
algorithm computing a shortest $\alpha$-partial cover for each
$\alpha = 1, 2, \ldots, n$.
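
To make the definition concrete, here is a naive Python baseline (an illustrative sketch, not the paper's $O(n \log n)$ algorithm) that counts the positions covered by a candidate factor and finds a shortest $\alpha$-partial cover by brute force.

```python
def covered_positions(w: str, u: str) -> int:
    """Number of positions of w lying within some occurrence of u."""
    covered = [False] * len(w)
    start = w.find(u)
    while start != -1:
        for i in range(start, start + len(u)):
            covered[i] = True
        start = w.find(u, start + 1)  # occurrences may overlap
    return sum(covered)

def shortest_partial_cover(w: str, alpha: int):
    """Shortest factor covering at least alpha positions (brute force)."""
    n = len(w)
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            u = w[i:i + length]
            if covered_positions(w, u) >= alpha:
                return u
    return None

# "aba" covers all 8 positions of "abaababa", so it is a shortest
# 7-partial cover, while no factor of length <= 2 covers 7 positions.
if __name__ == "__main__":
    print(shortest_partial_cover("abaababa", 7))  # -> "aba"
```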
Partial match queries in relaxed K-dt trees
The study of partial match queries on random hierarchical multidimensional data structures dates back to Ph. Flajolet and C. Puech's 1986 seminal paper on partial match retrieval. It was not until recently that fixed (as opposed to random) partial match queries were studied for random relaxed K-d trees, random standard K-d trees, and random 2-dimensional quad trees. Based on those results, it seemed natural to classify the general form of the cost of fixed partial match queries into two families: that of either random hierarchical structures or perfectly balanced structures, as conjectured by Duch, Lau and Martínez (On the Cost of Fixed Partial Match Queries in K-d Trees, Algorithmica, 75(4):684–723, 2016). Here we show that the conjecture just mentioned does not hold by introducing relaxed K-dt trees and providing the average-case analysis for random partial match queries, as well as some advances on the average-case analysis for fixed partial match queries on them. In fact, this cost (for fixed partial match queries) does not follow the conjectured forms.
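
For readers unfamiliar with the query model, the sketch below runs a partial match query on a plain K-d tree (the standard variant, for illustration only; relaxed K-dt trees choose discriminants differently and store several points per leaf). A query fixes some coordinates and leaves the others as wildcards; nodes discriminating on an unspecified coordinate force both subtrees to be visited, which is the source of the fractional-power costs analyzed in this line of work.

```python
from typing import Optional

class Node:
    def __init__(self, point, disc):
        self.point = point  # stored K-dimensional point
        self.disc = disc    # coordinate this node discriminates on
        self.left: Optional["Node"] = None
        self.right: Optional["Node"] = None

def insert(root: Optional[Node], point, depth=0, k=2) -> Node:
    """Standard K-d tree insertion, cycling discriminants with depth."""
    if root is None:
        return Node(point, depth % k)
    if point[root.disc] < root.point[root.disc]:
        root.left = insert(root.left, point, depth + 1, k)
    else:
        root.right = insert(root.right, point, depth + 1, k)
    return root

def partial_match(root: Optional[Node], query, out):
    """Collect points matching query, where None marks a wildcard."""
    if root is None:
        return
    if all(q is None or q == p for q, p in zip(query, root.point)):
        out.append(root.point)
    q = query[root.disc]
    if q is None or q < root.point[root.disc]:   # wildcard: go left too
        partial_match(root.left, query, out)
    if q is None or q >= root.point[root.disc]:  # wildcard: go right too
        partial_match(root.right, query, out)

if __name__ == "__main__":
    root = None
    for p in [(3, 7), (1, 4), (5, 2), (2, 9), (4, 4)]:
        root = insert(root, p)
    hits = []
    partial_match(root, (None, 4), hits)  # x unspecified, y = 4
    print(hits)  # [(1, 4), (4, 4)]
```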
Online Bin Covering: Expectations vs. Guarantees
Bin covering is a dual version of classic bin packing. Thus, the goal is to
cover as many bins as possible, where covering a bin means packing items of
total size at least one in the bin.
For online bin covering, competitive analysis fails to distinguish between
most algorithms of interest; all "reasonable" algorithms have a competitive
ratio of 1/2. Thus, in order to get a better understanding of the combinatorial
difficulties in solving this problem, we turn to other performance measures,
namely relative worst order, random order, and max/max analysis, as well as
analyzing input with restricted or uniformly distributed item sizes. In this
way, our study also supplements the ongoing systematic studies of the relative
strengths of various performance measures.
Two classic algorithms for online bin packing that have natural dual versions
are Harmonic and Next-Fit. Even though the algorithms are quite different in
nature, the dual versions are not separated by competitive analysis. We make
the case that when guarantees are needed, even under restricted input
sequences, dual Harmonic is preferable. In addition, we establish quite robust
theoretical results showing that if items come from a uniform distribution or
even if just the ordering of items is uniformly random, then dual Next-Fit is
the right choice.
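
As a point of reference, dual Next-Fit is simple enough to state in a few lines. The sketch below is illustrative, with bin capacity normalized to 1; a bin is covered once its total item size reaches 1, at which point a fresh bin is opened.

```python
def dual_next_fit(items):
    """Dual Next-Fit for online bin covering: pour every arriving
    item into the current bin; once the load reaches 1 the bin is
    covered and a new one is opened. Returns the covered-bin count."""
    covered, load = 0, 0.0
    for size in items:
        load += size
        if load >= 1.0:
            covered += 1
            load = 0.0
    return covered

# Uniformly random item sizes, the regime where the abstract argues
# dual Next-Fit is the right choice in expectation.
if __name__ == "__main__":
    import random
    print(dual_next_fit(random.random() for _ in range(10_000)))
```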
A $2k$-Vertex Kernel for Maximum Internal Spanning Tree
We consider the parameterized version of the maximum internal spanning tree
problem, which, given an $n$-vertex graph and a parameter $k$, asks for a
spanning tree with at least $k$ internal vertices. Fomin et al. [J. Comput.
System Sci., 79:1–6] crafted a very ingenious reduction rule, and showed that a
simple application of this rule is sufficient to yield a $3k$-vertex kernel.
Here we propose a novel way to use the same reduction rule, resulting in an
improved $2k$-vertex kernel. Our algorithm first applies a greedy procedure
consisting of a sequence of local exchange operations, which ends with a
locally optimal spanning tree, and then uses this special tree to find a
reducible structure. As a corollary of our kernel, we obtain a deterministic
algorithm for the problem running in time $O^*(4^k)$.
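
To fix the objective, a vertex of a spanning tree is internal if it is not a leaf, i.e., has degree at least two in the tree; the tiny helper below (illustrative only, not the kernelization itself) computes the quantity the problem asks to make at least $k$.

```python
from collections import Counter

def internal_vertices(tree_edges):
    """Count internal (degree >= 2) vertices of a tree
    given as a list of edges."""
    deg = Counter()
    for u, v in tree_edges:
        deg[u] += 1
        deg[v] += 1
    return sum(1 for d in deg.values() if d >= 2)

# A path on 5 vertices has 3 internal vertices; a 5-vertex star has 1,
# which is why the choice of spanning tree matters.
if __name__ == "__main__":
    path = [(1, 2), (2, 3), (3, 4), (4, 5)]
    star = [(0, 1), (0, 2), (0, 3), (0, 4)]
    print(internal_vertices(path), internal_vertices(star))  # 3 1
```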
Simple optimality proofs for Least Recently Used in the presence of locality of reference
It is well known that competitive analysis yields results that do not reflect the observed performance of online paging algorithms. Many deterministic paging algorithms achieve the same competitive ratio, ranging from inefficient strategies such as flush-when-full to the well-performing least-recently-used (LRU). In this paper, we study this fundamental online problem from the viewpoint of stochastic dominance. We give simple proofs that when sequences are drawn from distributions modelling locality of reference, LRU stochastically dominates any other online paging algorithm. As a byproduct, we obtain simple proofs of some earlier results.
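
The simulation sketch below contrasts LRU with FIFO on request sequences from a toy locality-of-reference model; the request distribution here is an illustrative assumption, not the distribution class used in the paper's dominance proofs, and a single run compares fault counts rather than the full distributions that stochastic dominance is about.

```python
import random
from collections import OrderedDict

def faults(policy, requests, k):
    """Page faults of 'LRU' or 'FIFO' with a cache of size k."""
    cache = OrderedDict()
    misses = 0
    for page in requests:
        if page in cache:
            if policy == "LRU":
                cache.move_to_end(page)      # refresh recency on a hit
        else:
            misses += 1
            if len(cache) >= k:
                cache.popitem(last=False)    # evict the oldest entry
            cache[page] = True
    return misses

def locality_sequence(n, pages=50, p_repeat=0.8, window=4):
    """Toy locality model: with probability p_repeat the next request
    is one of the last `window` distinct pages."""
    seq, recent = [], []
    for _ in range(n):
        if recent and random.random() < p_repeat:
            page = random.choice(recent)
        else:
            page = random.randrange(pages)
        seq.append(page)
        if page in recent:
            recent.remove(page)
        recent = (recent + [page])[-window:]
    return seq

if __name__ == "__main__":
    reqs = locality_sequence(100_000)
    print("LRU :", faults("LRU", reqs, k=8))
    print("FIFO:", faults("FIFO", reqs, k=8))
```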