A Randomness Threshold for Online Bipartite Matching, via Lossless Online Rounding
Over three decades ago, Karp, Vazirani and Vazirani (STOC'90) introduced the
online bipartite matching problem. They observed that deterministic algorithms'
competitive ratio for this problem is no greater than $1/2$, and proved that
randomized algorithms can do better. A natural question thus arises: \emph{how
random is random}? I.e., how much randomness is needed to outperform
deterministic algorithms? The \textsc{ranking} algorithm of Karp et
al.~requires $\tilde{O}(n)$ random bits, which, ignoring polylog terms,
remained unimproved. On the other hand, Pena and Borodin (TCS'19) established a
lower bound of $\Omega(\log\log n)$ random bits for any $(1/2+\Omega(1))$
competitive ratio.
We close this doubly-exponential gap, proving that, surprisingly, the lower
bound is tight. In fact, we prove a \emph{sharp threshold} of
$\Theta(\log\log n)$ random bits for the randomness necessary and sufficient to
outperform deterministic algorithms for this problem, as well as its
vertex-weighted generalization. This implies the same threshold for the advice
complexity (nondeterminism) of these problems.
Similar to recent breakthroughs in the online matching literature, for
edge-weighted matching (Fahrbach et al.~FOCS'20) and adwords (Huang et
al.~FOCS'20), our algorithms break the barrier of $1/2$ by randomizing matching
choices over two neighbors. Unlike these works, our approach does not rely on
the recently-introduced OCS machinery, nor the more established randomized
primal-dual method. Instead, our work revisits a highly-successful online
design technique, which was nonetheless under-utilized in the area of online
matching, namely (lossless) online rounding of fractional algorithms. While
this technique is known to be hopeless for online matching in general, we show
that it is nonetheless applicable to carefully designed fractional algorithms
with additional (non-convex) constraints.
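As background for the randomness accounting above, the \textsc{ranking} algorithm of Karp et al.~is short enough to sketch. The following minimal Python sketch (variable names and input format are our own choices, not the paper's) shows where the permutation, and hence the entire randomness budget, enters:

```python
import random

def ranking(num_offline, online_neighbors, rng=random):
    # Draw one uniformly random ranking (permutation) of the offline side;
    # this permutation is the algorithm's only source of randomness.
    rank = list(range(num_offline))
    rng.shuffle(rank)
    matched = [False] * num_offline
    matching = {}
    for v, nbrs in enumerate(online_neighbors):
        # Match arriving online vertex v to its highest-ranked free neighbor.
        free = [u for u in nbrs if not matched[u]]
        if free:
            u = min(free, key=lambda x: rank[x])
            matched[u] = True
            matching[v] = u
    return matching
```

Drawing the permutation costs $\Theta(\log n!) = \Theta(n \log n)$ random bits, which is where the near-linear (up to polylog factors) randomness requirement discussed above comes from.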
On Conceptually Simple Algorithms for Variants of Online Bipartite Matching
We present a series of results regarding conceptually simple algorithms for
bipartite matching in various online and related models. We first consider a
deterministic adversarial model. The best approximation ratio possible for a
one-pass deterministic online algorithm is $1/2$, which is achieved by any
greedy algorithm. D\"urr et al. recently presented a $2$-pass algorithm called
Category-Advice that achieves approximation ratio $3/5$. We extend their
algorithm to multiple passes. We prove the exact approximation ratio for the
$k$-pass Category-Advice algorithm for all $k \ge 1$, and show that the
approximation ratio converges to the inverse of the golden ratio
$2/(1+\sqrt{5}) \approx 0.618$ as $k$ goes to infinity. The convergence is
extremely fast --- the $5$-pass Category-Advice algorithm is already within
$0.01\%$ of the inverse of the golden ratio.
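The convergence claim above can be illustrated numerically. As a purely illustrative model (an assumption on our part, not a formula from the paper), suppose the $k$-pass ratio follows the consecutive-Fibonacci pattern suggested by the one-pass value $1/2$ and the two-pass value $3/5$; the ratios $F_{2k}/F_{2k+1}$ then approach the inverse golden ratio very quickly:

```python
import math

def fib(n):
    # Iterative Fibonacci: fib(1) = fib(2) = 1, fib(3) = 2, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

inv_phi = (math.sqrt(5) - 1) / 2  # inverse golden ratio, ~0.6180

for k in range(1, 6):
    ratio = fib(2 * k) / fib(2 * k + 1)
    rel_err = abs(ratio - inv_phi) / inv_phi
    print(f"{k}-pass (hypothetical): {ratio:.6f}, relative error {rel_err:.2e}")
```

Under this model the $k=5$ value $55/89$ already sits within $0.01\%$ of $1/\varphi$, matching the convergence speed quoted in the abstract.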
We then consider a natural greedy algorithm in the online stochastic IID
model---MinDegree. This algorithm is an online version of a well-known and
extensively studied offline algorithm MinGreedy. We show that MinDegree cannot
achieve an approximation ratio better than $1-\frac{1}{e}$, which is guaranteed
by any consistent greedy algorithm in the known IID model.
Finally, following the work of Besser and Poloczek, we depart from an
adversarial or stochastic ordering and investigate a natural randomized
algorithm (MinRanking) in the priority model. Although the priority model
allows the algorithm to choose the input ordering in a general but well defined
way, this natural algorithm cannot obtain the approximation of the Ranking
algorithm in the ROM model.
Active Learning with Expert Advice
Conventional learning with expert advice methods assume that a learner always
receives the outcome (e.g., class labels) of every incoming training instance
at the end of each trial. In real applications, acquiring the outcome from the
oracle can be costly or time consuming. In this paper, we address a new problem
of active learning with expert advice, where the outcome of an instance is
disclosed only when it is requested by the online learner. Our goal is to learn
an accurate prediction model while asking the oracle as few questions as
possible. To address this challenge, we propose a framework of active
forecasters for online active learning with expert advice, which attempts to
extend two regular forecasters, i.e., Exponentially Weighted Average Forecaster
and Greedy Forecaster, to tackle the task of active learning with expert
advice. We prove that the proposed algorithms satisfy the Hannan consistency
under some proper assumptions, and validate the efficacy of our technique by an
extensive set of experiments.
Comment: Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI 2013)
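For background, the Exponentially Weighted Average Forecaster that the framework above extends can be sketched as follows. This is a minimal illustrative version with squared loss on $[0,1]$ outcomes; the names are ours, and the paper's active label-querying component is deliberately omitted:

```python
import math

def ewa_forecast(expert_preds, outcomes, eta=0.5):
    # Exponentially Weighted Average forecaster.
    # expert_preds[t][i]: expert i's prediction at trial t (in [0, 1]);
    # outcomes[t]: the outcome revealed at the end of trial t.
    n = len(expert_preds[0])
    weights = [1.0] * n
    history = []
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        forecast = sum(w * p for w, p in zip(weights, preds)) / total
        history.append(forecast)
        # Multiplicative update: each expert pays an exponential penalty
        # proportional to its squared loss on this trial.
        weights = [w * math.exp(-eta * (p - y) ** 2)
                   for w, p in zip(weights, preds)]
    return history
```

With enough trials, the weight mass concentrates on the best expert, which is the behavior that Hannan-consistency results such as the one in the paper formalize.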
Cascading Randomized Weighted Majority: A New Online Ensemble Learning Algorithm
With the increasing volume of data in the world, the best approach for
learning from this data is to exploit an online learning algorithm. Online
ensemble methods are online algorithms which take advantage of an ensemble of
classifiers to predict labels of data. Prediction with expert advice is a
well-studied problem in the online ensemble learning literature. The Weighted
Majority algorithm and the randomized weighted majority (RWM) are the most
well-known solutions to this problem, aiming to converge to the best expert.
Since, among a set of experts, the best one does not necessarily have the
minimum error in all regions of the data space, defining specific regions and
converging to the best expert in each of these regions will lead to a better
result. In this
paper, we aim to resolve this defect of RWM algorithms by proposing a novel
online ensemble algorithm for the problem of prediction with expert advice. We
propose a cascading version of RWM to achieve not only better experimental
results but also a better error bound for sufficiently large datasets.
Comment: 15 pages, 3 figures
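For reference, the standard RWM building block that a cascading, per-region scheme would compose can be sketched as follows. This is an illustrative version only (function names are ours, and the paper's cascading/region logic is not reproduced here):

```python
import random

def rwm_predict(weights, expert_preds, rng=random):
    # Randomized Weighted Majority: follow expert i's advice with
    # probability proportional to its current weight.
    r = rng.random() * sum(weights)
    acc = 0.0
    for w, p in zip(weights, expert_preds):
        acc += w
        if r < acc:
            return p
    return expert_preds[-1]

def rwm_update(weights, expert_preds, label, beta=0.5):
    # Multiply the weight of every mistaken expert by beta in (0, 1),
    # leaving correct experts' weights unchanged.
    return [w * (beta if p != label else 1.0)
            for w, p in zip(weights, expert_preds)]
```

A cascading variant along the lines described above would maintain one such weight vector per region of the data space, so that each region can converge to its own locally best expert.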