177 research outputs found
Interval Selection in the Streaming Model
A set of intervals is independent when the intervals are pairwise disjoint.
In the interval selection problem we are given a set I of intervals
and we want to find an independent subset of intervals of largest cardinality.
Let α(I) denote the cardinality of an optimal solution. We
discuss the estimation of α(I) in the streaming model, where we
only have one-time, sequential access to the input intervals, the endpoints of
the intervals lie in {1,...,n}, and the amount of memory is
constrained.
For intervals of different sizes, we provide an algorithm in the data stream
model that, with constant probability, computes an estimate of α(I) that is
correct to within a constant factor. For same-length intervals, we provide
another algorithm in the data stream model that, again with constant
probability, computes an estimate of α(I) with a better approximation factor.
The space used by our algorithms is bounded by a polynomial in 1/ε and
log n. We also show that no better estimations can be achieved using o(n)
bits of storage.
We also develop new, approximate solutions to the interval selection problem,
where we want to report a feasible solution, that use O(α(I))
space. Our algorithms for the interval selection problem match the optimal
results by Emek, Halldórsson and Rosén [Space-Constrained Interval
Selection, ICALP 2012], but are much simpler.
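The quantity α(I) being estimated above is, offline, computed exactly by the textbook earliest-right-endpoint greedy. A minimal sketch of that offline baseline (this is not the paper's streaming algorithm, which must work in one pass and small space):

```python
def max_independent_intervals(intervals):
    """Classic offline greedy for interval selection: sort by right
    endpoint and keep each interval that is disjoint from the last one
    kept. Returns an optimal independent (pairwise-disjoint) subset."""
    chosen = []
    last_end = float("-inf")
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_end:  # disjoint from everything chosen so far
            chosen.append((left, right))
            last_end = right
    return chosen
```

Intervals are treated as closed here, so "disjoint" means the next left endpoint strictly exceeds the last kept right endpoint.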
Optimal Algorithms for Free Order Multiple-Choice Secretary
Suppose we are given an integer k and n boxes labeled
by an adversary, each containing a number chosen from an unknown distribution.
We have to choose an order in which to sequentially open these boxes, and each
time we open the next box in this order, we learn its number. If we reject a
number in a box, the box cannot be recalled. Our goal is to accept the k largest
of these numbers, without necessarily opening all boxes. This is the free order
multiple-choice secretary problem. Free order variants have been studied
extensively for the secretary and prophet problems. Kesselheim, Kleinberg, and
Niazadeh (KKN, STOC'15) initiated the study of randomness-efficient algorithms
(with the cheapest order in terms of random bits used) for free order secretary
problems.
We present an algorithm for free order multiple-choice secretary that is
simultaneously optimal in its competitive ratio and in the amount of randomness
it uses. That is, we construct a distribution on orders with optimal entropy
such that a deterministic multiple-threshold algorithm run on these orders
achieves a near-optimal competitive ratio. This improves the previous best
construction by KKN in three ways: our competitive ratio is (near-)optimal for
the multiple-choice secretary problem; it works for an exponentially larger
parameter k; and our algorithm is a simple deterministic multiple-threshold
algorithm, whereas that of KKN is randomized. We also prove a corresponding
lower bound on the entropy of optimal solutions for the multiple-choice
secretary problem, matching the entropy of our algorithm; no such lower bound
was previously known.
We obtain our algorithmic results with a host of new techniques, and with
these techniques we also significantly improve the previous results of KKN
on constructing entropy-optimal distributions for the classic free order
secretary problem.
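The multiple-threshold algorithms discussed above generalize the classic single-choice secretary rule. As a point of reference, here is that classic k = 1 rule (observe roughly an n/e prefix, then accept the first record); this is an illustration only, not the entropy-optimal construction of the paper:

```python
import math

def secretary_choice(values):
    """Classic wait-then-pick rule for the k = 1 secretary problem:
    observe the first floor(n/e) values without accepting, then accept
    the first value that beats the best seen so far (falling back to
    the last box if no value does). Returns the accepted box's index."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for i in range(cutoff, n):
        if values[i] > best_seen:
            return i
    return n - 1
```

This rule accepts the overall maximum with probability at least 1/e; the multiple-choice setting uses several thresholds instead of one.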
Syntactic Separation of Subset Satisfiability Problems
Variants of the Exponential Time Hypothesis (ETH) have been used to derive lower bounds on the time complexity of certain problems, so that the hardness results match long-standing algorithmic results. In this paper, we consider a syntactically defined class of problems and give conditions for when problems in this class require strongly exponential time to approximate to within a factor of (1 - epsilon), for some constant epsilon > 0, assuming the Gap Exponential Time Hypothesis (Gap-ETH), versus when they admit a PTAS. Our class includes a rich set of problems from additive combinatorics, computational geometry, and graph theory. Our hardness results also match the best known algorithmic results for these problems.
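The "strongly exponential time" baseline that these hardness results (conditionally) say cannot be beaten is the brute-force 2^n scan over all subsets. A generic sketch, with the satisfiability predicate left abstract since the class is syntactically defined:

```python
from itertools import combinations

def best_satisfying_subset(items, predicate):
    """Exhaustive 2^n search: return a largest subset of `items` on
    which `predicate` holds (the empty subset is the fallback). This is
    the brute force whose exponential running time Gap-ETH-style
    hardness says is necessary, even for (1 - epsilon)-approximation,
    for the hard problems in the class."""
    for size in range(len(items), -1, -1):
        for subset in combinations(items, size):
            if predicate(subset):
                return list(subset)
    return []
```

For example, with a sum-freeness predicate (no two elements summing to a third) this computes a largest sum-free subset, one of the additive-combinatorics problems of the flavor the class covers.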
Local antithetic sampling with scrambled nets
We consider the problem of computing an approximation to the integral of a
function f over the unit cube [0,1]^d. Monte Carlo (MC) sampling typically
attains a root mean squared error (RMSE) of O(n^{-1/2}) from n independent
random function evaluations. By contrast, quasi-Monte Carlo (QMC) sampling
using carefully equispaced evaluation points can attain a rate of O(n^{-1+ε})
for any ε > 0, and randomized QMC (RQMC) can attain an RMSE of O(n^{-3/2+ε}),
both under mild conditions on f. Classical variance reduction methods for MC
can be adapted to QMC. Published results combining QMC with importance sampling
and with control variates have found worthwhile improvements, but no change in
the error rate. This paper extends the classical variance reduction method of
antithetic sampling and combines it with RQMC. One such method is shown to
bring a modest improvement in the RMSE rate for smooth enough f.
Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) at
http://dx.doi.org/10.1214/07-AOS548 by the Institute of Mathematical
Statistics (http://www.imstat.org).
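The classical (global) antithetic idea that the paper localizes within scrambled nets is easy to state in one dimension: pair each sample u with its reflection 1 - u, which cancels the linear component of f. A plain-MC sketch, not the local scrambled-net method itself:

```python
import random

def antithetic_mc(f, n):
    """Plain antithetic Monte Carlo estimate of the integral of f over
    [0, 1]: average f over antithetic pairs (u, 1 - u). For f with a
    strong linear component the pair averages have far lower variance
    than i.i.d. sampling; they are exact for linear f."""
    total = 0.0
    for _ in range(n):
        u = random.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / n
```

For f(x) = x every antithetic pair averages to exactly 1/2, so the estimate has (essentially) zero variance; the paper's contribution is applying this cancellation locally inside the cells of a scrambled net.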
Making recommendations bandwidth aware
This paper asks how much we can gain in terms of bandwidth and user
satisfaction if recommender systems became bandwidth aware and took into
account not only user preferences, but also the fact that they may need to
serve these users under bandwidth constraints, as is the case over wireless
networks. We formulate this as a new problem in the context of index coding: we
relax the index coding requirements to capture scenarios where each client has
preferences associated with messages. A client is satisfied to receive any
message she does not already have, with satisfaction proportional to her
preference for that message. We consistently find, over a number of scenarios
we sample, that although the optimization problems are in general NP-hard,
significant bandwidth savings are possible even when restricted to
polynomial-time algorithms.
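The index-coding mechanism behind such bandwidth savings shows up already in the smallest example: two clients, each holding the message the other wants, can both be served by one coded packet. A toy sketch (messages modeled as integers, names hypothetical):

```python
def xor_broadcast(m1: int, m2: int):
    """Toy index-coding gain: client A holds m2 and wants m1; client B
    holds m1 and wants m2. Broadcasting the single coded packet
    m1 XOR m2 satisfies both clients, halving the bandwidth of sending
    m1 and m2 in separate transmissions."""
    coded = m1 ^ m2
    recovered_by_a = coded ^ m2  # A XORs in its side information m2
    recovered_by_b = coded ^ m1  # B XORs in its side information m1
    return recovered_by_a, recovered_by_b
```

The paper's relaxation keeps this coding structure but scores each satisfied client by her preference for the message she decodes, rather than requiring fixed demands.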