    Competitive algorithms for online conversion problems with interrelated prices

    The classical uni-directional conversion algorithms are based on the assumption that prices are chosen arbitrarily from a fixed price interval [m, M], where m and M (0 < m < M) are the estimated lower and upper bounds on possible prices. The estimated interval can be erroneous, and the algorithms make no attempt to update the erroneous estimates. We consider a real-world setting where prices are interrelated, i.e., each price depends on its preceding price. Under this assumption, we derive a lower bound on the competitive ratio of randomized non-preemptive algorithms. Motivated by the fixed and erroneous price bounds, we present an update model that progressively improves the bounds. Based on the update model, we propose a non-preemptive reservation price algorithm RP* and analyze it under competitive analysis. Finally, we report the findings of an experimental study conducted on real-world stock index data. We observe that RP* consistently outperforms the classical algorithm.
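    The abstract does not spell out RP*, but the classical reservation price rule it builds on is easy to state: with prices assumed to lie in [m, M], convert at the first price that reaches sqrt(m*M), which yields a competitive ratio of sqrt(M/m). The Python sketch below illustrates that rule together with a purely hypothetical bound-update step standing in for the paper's update model; the names and the update rule are illustrative, not the paper's RP*.

        import math

        def reservation_price_trader(prices, m, M):
            """One-way conversion with the classical reservation price rule.

            Sketch only: accept the first price >= sqrt(m * M), the classical
            policy when prices are known to lie in [m, M]. The bound update
            below is a hypothetical illustration of refining erroneous
            estimates online; it is not the paper's RP*.
            """
            for t, price in enumerate(prices):
                # Hypothetical update: widen the estimated bounds whenever an
                # observed price falls outside them.
                m, M = min(m, price), max(M, price)
                if price >= math.sqrt(m * M):   # reservation price reached
                    return t, price             # convert everything now
            return len(prices) - 1, prices[-1]  # forced conversion at the end

        # Example: estimated bounds m=1, M=4 give reservation price 2.
        print(reservation_price_trader([1.5, 1.8, 2.2, 3.0], m=1.0, M=4.0))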

    Fast Algorithms for Online Stochastic Convex Programming

    We introduce the online stochastic Convex Programming (CP) problem, a very general version of stochastic online problems that allows arbitrary concave objectives and convex feasibility constraints. Many well-studied problems, such as online stochastic packing and covering and online stochastic matching with concave returns, form special cases of online stochastic CP. We present fast algorithms for these problems that achieve near-optimal regret guarantees for both the i.i.d. and the random permutation models of stochastic inputs. When applied to the special case of online packing, our ideas yield a simpler and faster primal-dual algorithm for this well-studied problem, which achieves the optimal competitive ratio. Our techniques make explicit the connection of the primal-dual paradigm and online learning to online stochastic CP. Comment: To appear in SODA 201
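    For the online packing special case, the primal-dual / online-learning connection can be illustrated with a simple threshold rule: keep a dual price per resource that grows with the fraction of the budget already used, and accept a request only if its value covers the dual cost of its consumption. The sketch below is a generic version of this idea with an illustrative exponential price update; it is not the paper's tuned algorithm, and the step size eta is an assumption.

        import math

        def online_packing_primal_dual(requests, budgets, eta=0.1):
            """Threshold-based primal-dual sketch for online packing.

            requests: list of (value, consumption_vector) arriving online.
            budgets:  total capacity of each resource.
            The dual price of a resource grows exponentially with its usage,
            and a request is accepted only if its value exceeds the priced
            cost of what it consumes.
            """
            used = [0.0] * len(budgets)
            accepted, total_value = [], 0.0
            for i, (value, cons) in enumerate(requests):
                prices = [math.exp(eta * used[j] / budgets[j]) - 1.0
                          for j in range(len(budgets))]
                cost = sum(p * c for p, c in zip(prices, cons))
                fits = all(used[j] + cons[j] <= budgets[j]
                           for j in range(len(budgets)))
                if fits and value >= cost:
                    accepted.append(i)
                    total_value += value
                    used = [u + c for u, c in zip(used, cons)]
            return accepted, total_value

        # Toy run: two resources with unit budgets.
        reqs = [(1.0, [0.5, 0.1]), (0.2, [0.4, 0.4]), (0.9, [0.3, 0.6])]
        print(online_packing_primal_dual(reqs, budgets=[1.0, 1.0]))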

    Constrained Online Two-stage Stochastic Optimization: Algorithm with (and without) Predictions

    We consider online two-stage stochastic optimization with long-term constraints over a finite horizon of T periods. At each period, we take the first-stage action, observe a realization of the model parameter, and then take the second-stage action from a feasible set that depends on both the first-stage decision and the model parameter. We aim to minimize the cumulative objective value while guaranteeing that the long-term average second-stage decision belongs to a given set. We develop online algorithms for the online two-stage problem from adversarial learning algorithms, and the regret bound of our algorithm reduces to the regret bound of the embedded adversarial learning algorithms. Based on this framework, we obtain new results under various settings. When the model parameters are drawn from unknown non-stationary distributions and we are given machine-learned predictions of the distributions, we develop a new algorithm from our framework with regret O(W_T + \sqrt{T}), where W_T measures the total inaccuracy of the machine-learned predictions. We then develop another algorithm that works when no machine-learned predictions are given and show its performance. Comment: arXiv admin note: substantial text overlap with arXiv:2302.0099
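    A minimal way to picture the reduction to adversarial learning is the dual-descent skeleton below: the long-term constraint is priced by a dual vector that an online projected-gradient step updates from the observed violation, and the regret of that inner update feeds into the overall bound. The interfaces solve_stage1, solve_stage2, sample_scenario, and violation are assumed for illustration; this is a sketch of the general pattern, not the paper's algorithm.

        import numpy as np

        def two_stage_dual_descent(T, d, sample_scenario, solve_stage1,
                                   solve_stage2, violation, eta=0.05):
            """Skeleton of online two-stage optimization with a long-term
            constraint handled by dual descent (illustrative interfaces).

            solve_stage1(lam)        -> first-stage action given dual lam
            sample_scenario(t)       -> model parameter revealed at period t
            solve_stage2(x, xi, lam) -> second-stage action feasible for (x, xi)
            violation(y)             -> d-dim vector, zero when y is in the target set
            """
            lam = np.zeros(d)                       # dual for the long-term constraint
            history = []
            for t in range(T):
                x_t = solve_stage1(lam)             # first-stage action
                xi_t = sample_scenario(t)           # parameter realization
                y_t = solve_stage2(x_t, xi_t, lam)  # second-stage action
                g_t = violation(y_t)                # observed constraint violation
                lam = np.maximum(lam + eta * g_t, 0.0)  # projected gradient ascent
                history.append((x_t, y_t))
            return history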

    Secretary and online matching problems with machine learned advice

    The classic analysis of online algorithms, due to its worst-case nature, can be quite pessimistic when the input instance at hand is far from worst-case. In contrast, machine learning approaches shine in exploiting patterns in past inputs in order to predict the future. However, such predictions, although usually accurate, can be arbitrarily poor. Inspired by a recent line of work, we augment three well-known online settings with machine-learned predictions about the future, and develop algorithms that take these predictions into account. In particular, we study the following online selection problems: (i) the classic secretary problem, (ii) online bipartite matching, and (iii) the graphic matroid secretary problem. Our algorithms still come with a worst-case performance guarantee in the case that predictions are subpar, while obtaining an improved competitive ratio (over the best-known classic online algorithm for each problem) when the predictions are sufficiently accurate. For each algorithm, we establish a trade-off between the competitive ratios obtained in the two respective cases.
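    The abstract leaves the algorithms themselves unspecified; as a flavour of how a prediction can be blended with the classical 1/e rule while retaining a worst-case guarantee, the sketch below simply randomizes between trusting the advice and running the classical secretary strategy. This interpolation, the 0.9 discount, and the trust parameter are illustrative choices, not the algorithms from the paper.

        import math
        import random

        def secretary_with_prediction(values, predicted_max, trust=0.5):
            """Illustrative prediction-augmented secretary rule (a sketch,
            not the paper's algorithm).

            With probability `trust`, follow the advice: accept the first
            value reaching a discounted predicted maximum. Otherwise run the
            classical rule: observe the first n/e values, then accept the
            first value beating the sample maximum. The randomization keeps
            a worst-case guarantee when the prediction is poor.
            """
            n = len(values)
            if random.random() < trust:
                threshold = 0.9 * predicted_max     # illustrative discount
                for t, v in enumerate(values):
                    if v >= threshold or t == n - 1:
                        return t, v
            cutoff = max(1, int(n / math.e))
            best_seen = max(values[:cutoff])
            for t in range(cutoff, n):
                if values[t] > best_seen:
                    return t, values[t]
            return n - 1, values[-1]

        # Example: a good prediction of the maximum value.
        print(secretary_with_prediction([3, 7, 2, 9, 5], predicted_max=9))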

    Semi-online algorithms for power-aware load balancing

    Online Optimization with Uncertain Information

    We introduce a new framework for designing online algorithms that can incorporate additional information about the input sequence while maintaining a reasonable competitive ratio if the additional information is incorrect. Within this framework, we present online algorithms for several problems, including the allocation of online advertisement space, load balancing, and facility location.
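    One concrete instance of this pattern, consulting the extra information but capping how far it can pull the algorithm away from a safe online rule, is sketched below for load balancing with a predicted machine per job. The slack parameter and the fallback-to-greedy rule are illustrative assumptions, not the paper's construction.

        def balance_with_advice(jobs, predicted_machine, num_machines, slack=1.5):
            """Illustrative load-balancing rule with uncertain advice (a sketch).

            Follow the predicted machine for a job unless doing so would make
            its load exceed `slack` times the load the plain greedy (least
            loaded) rule would produce; otherwise fall back to greedy. The
            fallback keeps the makespan bounded when the advice is wrong.
            """
            loads = [0.0] * num_machines
            assignment = []
            for j, size in enumerate(jobs):
                greedy = min(range(num_machines), key=lambda m: loads[m])
                advised = predicted_machine[j]
                if loads[advised] + size <= slack * (loads[greedy] + size):
                    choice = advised
                else:
                    choice = greedy
                loads[choice] += size
                assignment.append(choice)
            return assignment, max(loads)

        # Example: advice sends everything to machine 0 until the cap kicks in.
        print(balance_with_advice([4, 4, 4], [0, 0, 0], num_machines=2))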
