
    A Randomness Threshold for Online Bipartite Matching, via Lossless Online Rounding

    Over three decades ago, Karp, Vazirani and Vazirani (STOC'90) introduced the online bipartite matching problem. They observed that deterministic algorithms' competitive ratio for this problem is no greater than $1/2$, and proved that randomized algorithms can do better. A natural question thus arises: \emph{how random is random}? I.e., how much randomness is needed to outperform deterministic algorithms? The \textsc{ranking} algorithm of Karp et al.~requires $\tilde{O}(n)$ random bits, which, ignoring polylog terms, remained unimproved. On the other hand, Pena and Borodin (TCS'19) established a lower bound of $(1-o(1))\log\log n$ random bits for any $1/2+\Omega(1)$ competitive ratio. We close this doubly-exponential gap, proving that, surprisingly, the lower bound is tight. In fact, we prove a \emph{sharp threshold} of $(1\pm o(1))\log\log n$ random bits for the randomness necessary and sufficient to outperform deterministic algorithms for this problem, as well as its vertex-weighted generalization. This implies the same threshold for the advice complexity (nondeterminism) of these problems. Similar to recent breakthroughs in the online matching literature, for edge-weighted matching (Fahrbach et al.~FOCS'20) and adwords (Huang et al.~FOCS'20), our algorithms break the barrier of $1/2$ by randomizing matching choices over two neighbors. Unlike these works, our approach does not rely on the recently-introduced OCS machinery, nor the more established randomized primal-dual method. Instead, our work revisits a highly-successful online design technique, which was nonetheless under-utilized in the area of online matching, namely (lossless) online rounding of fractional algorithms. While this technique is known to be hopeless for online matching in general, we show that it is nonetheless applicable to carefully designed fractional algorithms with additional (non-convex) constraints.
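    For context, here is a minimal Python sketch of the classic \textsc{ranking} algorithm of Karp et al.~that this abstract builds on: it draws one uniformly random permutation of the offline side up front (the source of the $\tilde{O}(n)$ random bits mentioned above) and matches each arriving vertex to its free neighbor of minimum rank. The function name and the (vertex, neighbor-list) input format are illustrative assumptions, not taken from the paper.

```python
import random

def ranking(offline_vertices, arrivals):
    """Sketch of RANKING for online bipartite matching.

    offline_vertices: the offline side of the graph.
    arrivals: iterable of (online_vertex, neighbors) pairs in arrival order.
    """
    # Fix one uniformly random permutation ("ranking") of the offline side
    # before any online vertex arrives.
    order = list(offline_vertices)
    random.shuffle(order)
    rank = {v: r for r, v in enumerate(order)}

    matching = {}    # online vertex -> offline vertex
    matched = set()  # offline vertices already used
    for u, neighbors in arrivals:
        free = [v for v in neighbors if v not in matched]
        if free:
            # Match the arrival to its free neighbor of minimum rank.
            v = min(free, key=rank.get)
            matching[u] = v
            matched.add(v)
    return matching

# Tiny usage example on a 2x2 graph.
print(ranking([1, 2], [("a", [1, 2]), ("b", [1])]))
```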

    On Conceptually Simple Algorithms for Variants of Online Bipartite Matching

    We present a series of results regarding conceptually simple algorithms for bipartite matching in various online and related models. We first consider a deterministic adversarial model. The best approximation ratio possible for a one-pass deterministic online algorithm is $1/2$, which is achieved by any greedy algorithm. D\"urr et al. recently presented a $2$-pass algorithm called Category-Advice that achieves approximation ratio $3/5$. We extend their algorithm to multiple passes. We prove the exact approximation ratio for the $k$-pass Category-Advice algorithm for all $k \ge 1$, and show that the approximation ratio converges to the inverse of the golden ratio $2/(1+\sqrt{5}) \approx 0.618$ as $k$ goes to infinity. The convergence is extremely fast: the $5$-pass Category-Advice algorithm is already within $0.01\%$ of the inverse of the golden ratio. We then consider a natural greedy algorithm, MinDegree, in the online stochastic IID model. This algorithm is an online version of the well-known and extensively studied offline algorithm MinGreedy. We show that MinDegree cannot achieve an approximation ratio better than $1-1/e$, which is guaranteed by any consistent greedy algorithm in the known IID model. Finally, following the work of Besser and Poloczek, we depart from an adversarial or stochastic ordering and investigate a natural randomized algorithm (MinRanking) in the priority model. Although the priority model allows the algorithm to choose the input ordering in a general but well-defined way, this natural algorithm cannot obtain the approximation ratio of the Ranking algorithm in the ROM model.
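    As a point of reference for the $1/2$ baseline discussed above, the sketch below shows a one-pass deterministic greedy algorithm; any rule for picking a free neighbor achieves the $1/2$ ratio. Roughly speaking, each extra Category-Advice pass reruns such a sweep using the matched/unmatched categories from the previous pass to prioritize neighbors; the details of that priority rule are in the paper, not in this sketch, and the input format here is an assumption.

```python
def greedy_matching(arrivals):
    """One-pass deterministic greedy: match each arriving online vertex
    to the first free neighbor found. Any such rule is 1/2-competitive.

    arrivals: iterable of (online_vertex, neighbor_list) pairs, in
    adversarial arrival order.
    """
    matching = {}    # online vertex -> offline vertex
    matched = set()  # offline vertices already used
    for u, neighbors in arrivals:
        for v in neighbors:
            if v not in matched:
                matching[u] = v
                matched.add(v)
                break
    return matching

print(greedy_matching([("a", [1, 2]), ("b", [1])]))  # {'a': 1}; 'b' stays unmatched
```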

    Active Learning with Expert Advice

    Conventional learning-with-expert-advice methods assume the learner always receives the outcome (e.g., class label) of every incoming training instance at the end of each trial. In real applications, acquiring the outcome from the oracle can be costly or time-consuming. In this paper, we address a new problem of active learning with expert advice, where the outcome of an instance is disclosed only when it is requested by the online learner. Our goal is to learn an accurate prediction model while asking the oracle as few questions as possible. To address this challenge, we propose a framework of active forecasters for online active learning with expert advice, which extends two regular forecasters, the Exponentially Weighted Average Forecaster and the Greedy Forecaster, to tackle the task of active learning with expert advice. We prove that the proposed algorithms satisfy Hannan consistency under proper assumptions, and validate the efficacy of our technique with an extensive set of experiments. Comment: Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI 2013).
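    For reference, here is a minimal sketch of the standard full-information Exponentially Weighted Average Forecaster that the paper's active forecasters extend; in the active setting described above, the loss update would run only on rounds where the learner queries the oracle. The squared loss, the learning rate eta, and the array-based interface are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def ewa_forecaster(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster (full-information version).

    expert_preds: (T, N) array; expert_preds[t, i] is expert i's prediction
        at round t, assumed to lie in [0, 1].
    outcomes: length-T array of revealed outcomes in [0, 1].
    """
    T, N = expert_preds.shape
    cum_loss = np.zeros(N)
    predictions = np.empty(T)
    for t in range(T):
        # Weight each expert by exp(-eta * cumulative loss), normalized.
        w = np.exp(-eta * cum_loss)
        w /= w.sum()
        predictions[t] = w @ expert_preds[t]  # weighted-average forecast
        # Full-information update: the outcome is observed every round.
        cum_loss += (expert_preds[t] - outcomes[t]) ** 2
    return predictions
```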

    Cascading Randomized Weighted Majority: A New Online Ensemble Learning Algorithm

    With the increasing volume of data in the world, the best approach for learning from this data is to exploit an online learning algorithm. Online ensemble methods are online algorithms that take advantage of an ensemble of classifiers to predict labels of data. Prediction with expert advice is a well-studied problem in the online ensemble learning literature. The Weighted Majority algorithm and Randomized Weighted Majority (RWM) are the most well-known solutions to this problem, aiming to converge to the best expert. Since the best expert among a set of experts does not necessarily have the minimum error in all regions of the data space, defining specific regions and converging to the best expert in each of these regions leads to a better result. In this paper, we aim to resolve this defect of RWM algorithms by proposing a novel online ensemble algorithm for the problem of prediction with expert advice. We propose a cascading version of RWM that achieves not only better experimental results but also a better error bound for sufficiently large datasets. Comment: 15 pages, 3 figures.
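    For context, here is a minimal sketch of the baseline RWM algorithm that the cascading variant builds on: each round, follow one expert drawn with probability proportional to its weight, then multiplicatively penalize every expert that erred. The dict-based interface and the choice beta=0.5 are illustrative assumptions; the paper's cascading idea, roughly, applies such learners per region of the data space rather than globally.

```python
import random

def rwm_predict(weights, expert_preds):
    """Sample one expert with probability proportional to its weight and
    follow its prediction (the randomized step of RWM)."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for expert, w in weights.items():
        r -= w
        if r <= 0:
            return expert_preds[expert]
    return expert_preds[expert]  # fallback for floating-point rounding

def rwm_update(weights, expert_preds, true_label, beta=0.5):
    """Multiply the weight of every mistaken expert by beta, 0 < beta < 1."""
    for expert, pred in expert_preds.items():
        if pred != true_label:
            weights[expert] *= beta

# Tiny usage example with two experts over one round.
weights = {"e1": 1.0, "e2": 1.0}
preds = {"e1": 0, "e2": 1}
print(rwm_predict(weights, preds))
rwm_update(weights, preds, true_label=1)
print(weights)  # e1 penalized: {'e1': 0.5, 'e2': 1.0}
```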

    Connexions Direct: user satisfaction survey
