DISPATCH: An Optimally-Competitive Algorithm for Maximum Online Perfect Bipartite Matching with i.i.d. Arrivals
This work presents an optimally-competitive algorithm for the problem of
maximum weighted online perfect bipartite matching with i.i.d. arrivals. In
this problem, we are given a known set of workers, a distribution over job
types, and non-negative utility weights for each pair of worker and job types.
At each time step, a job is drawn i.i.d. from the distribution over job types.
Upon arrival, the job must be irrevocably assigned to a worker and cannot be
dropped. The goal is to maximize the expected sum of utilities after all jobs
are assigned.
We introduce DISPATCH, a 0.5-competitive randomized algorithm, and we prove
that a competitive ratio of 0.5 is the best possible. DISPATCH first selects a "preferred
worker" and assigns the job to this worker if it is available. The preferred
worker is determined based on an optimal solution to a fractional
transportation problem. If the preferred worker is not available, DISPATCH
randomly selects a worker from the available workers. We show that DISPATCH
maintains a uniform distribution over the workers even when the distribution
over the job types is non-uniform.
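The dispatch rule described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function and variable names are ours, the optimal fractional transportation solution is assumed to be precomputed, and we sample the preferred worker proportionally to that solution's mass for the arriving job type.

```python
import random

def dispatch(workers, job_seq, frac_assign):
    """Sketch of the DISPATCH rule (illustrative names, not from the paper).

    frac_assign[j] maps job type j to a dict {worker: fractional mass},
    assumed to come from an optimal fractional transportation solution.
    """
    available = set(workers)
    matching = []
    for job in job_seq:
        if not available:
            break
        frac = frac_assign[job]
        # Select the "preferred worker" by sampling proportionally to the
        # fractional solution for this job type.
        choices = list(frac)
        weights = [frac[w] for w in choices]
        preferred = random.choices(choices, weights=weights, k=1)[0]
        if preferred in available:
            chosen = preferred
        else:
            # Preferred worker is taken: fall back to a uniformly random
            # available worker, preserving the uniform distribution.
            chosen = random.choice(sorted(available))
        available.remove(chosen)
        matching.append((chosen, job))
    return matching
```

Since jobs cannot be dropped, every arrival is assigned as long as a worker remains free; the fallback step is what keeps the distribution over workers uniform.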
Secretary and Online Matching Problems with Machine Learned Advice
The classical analysis of online algorithms, due to its worst-case nature, can be quite pessimistic when the input instance at hand is far from worst-case. Often this is not an issue with machine learning approaches, which shine in exploiting patterns in past inputs in order to predict the future. However, such predictions, although usually accurate, can be arbitrarily poor. Inspired by a recent line of work, we augment three well-known online settings with machine learned predictions about the future, and develop algorithms that take them into account. In particular, we study the following online selection problems: (i) the classical secretary problem, (ii) online bipartite matching and (iii) the graphic matroid secretary problem. Our algorithms still come with a worst-case performance guarantee in the case that predictions are subpar while obtaining an improved competitive ratio (over the best-known classical online algorithm for each problem) when the predictions are sufficiently accurate. For each algorithm, we establish a trade-off between the competitive ratios obtained in the two respective cases.
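The prediction-augmented approach for the secretary problem can be illustrated with a toy rule. This is not the paper's algorithm, only a hedged sketch of the general idea: trust the prediction when a candidate clears a prediction-based threshold, and otherwise fall back to the classical 1/e observe-then-select rule as a worst-case safeguard.

```python
import math

def secretary_with_advice(values, predicted_best, slack=0.0):
    """Illustrative prediction-augmented secretary rule (not the paper's
    exact algorithm). Accept the first candidate whose value reaches the
    prediction minus `slack`; after the classical 1/e observation phase,
    fall back to the standard best-so-far rule. Returns the chosen index.
    """
    n = len(values)
    threshold = predicted_best - slack
    cutoff = max(1, int(n / math.e))   # classical observation phase length
    best_seen = float("-inf")
    for i, v in enumerate(values):
        if v >= threshold:
            return i                    # trust the prediction
        if i < cutoff:
            best_seen = max(best_seen, v)   # observe only
        elif v > best_seen:
            return i                    # classical fallback
    return n - 1                        # forced to take the last candidate
```

The `slack` parameter plays the role of the trade-off the abstract describes: a small slack leans on the prediction, while the fallback branch preserves a worst-case guarantee when the prediction is arbitrarily poor.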
Online Stochastic Matching with Edge Arrivals
Online bipartite matching with edge arrivals remained a major open question for a long time until a recent negative result by Gamlath et al., who showed that no online policy is better than the straightforward greedy algorithm, i.e., no online algorithm has a worst-case competitive ratio better than 0.5. In this work, we consider the bipartite matching problem with edge arrivals in a natural stochastic framework, i.e., Bayesian setting where each edge of the graph is independently realized according to a known probability distribution.
We focus on a natural class of prune & greedy online policies motivated by practical considerations from a multitude of online matching platforms. Any prune & greedy algorithm consists of two stages: first, it decreases the probabilities of some edges in the stochastic instance, and then it runs the greedy algorithm on the pruned graph. We propose prune & greedy algorithms that are 0.552-competitive on instances that can be pruned to a 2-regular stochastic bipartite graph, and 0.503-competitive on arbitrary stochastic bipartite graphs. The algorithms and our analysis deviate significantly from prior work. We first obtain an analytically manageable lower bound on the size of the matching, which leads to a non-linear optimization problem. We further reduce this problem to a continuous optimization with a constant number of parameters that can be solved using standard software tools.
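The two-stage structure of a prune & greedy policy can be sketched as follows. This is an illustrative simplification, not the paper's tuned algorithm: we model pruning by thinning each arriving edge to a reduced realization probability, and the choice of pruning fractions (which the paper optimizes) is left as an input.

```python
import random  # used for the seeded generator in the example below

def prune_and_greedy(edges, probs, prune, rng):
    """Sketch of a prune & greedy policy (illustrative, not the paper's
    optimized version). Edges arrive online as (u, v) pairs; probs[e] is
    the known realization probability and prune[e] in [0, 1] is the kept
    fraction, so an edge survives with probability probs[e] * prune[e].
    """
    matched = set()
    matching = []
    for e in edges:
        u, v = e
        # Stage 1 (prune): keep the edge only with the reduced probability.
        if rng.random() >= probs[e] * prune[e]:
            continue
        # Stage 2 (greedy): match if both endpoints are still free.
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append(e)
    return matching
```

Setting `prune[e] = 1` everywhere recovers the plain greedy baseline, whose 0.5 worst-case ratio the pruned variants improve upon in the stochastic setting.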
Dynamic Stochastic Matching Under Limited Time
In centralized matching markets such as car-pooling platforms and kidney exchange schemes, new participants constantly enter the market and remain available for potential matches during a limited period of time. To reach an efficient allocation, the “timing” of the matching decisions is a critical aspect of the platform’s operations. There is a fundamental trade-off between increasing market thickness and mitigating the risk that participants abandon the market. Nonetheless, the dynamic properties of matching markets have been mostly overlooked in the algorithmic literature. In this paper, we introduce a general dynamic matching model over edge-weighted graphs, where the agents’ arrivals and abandonments are stochastic and heterogeneous. Our main contribution is to design simple matching algorithms that admit strong worst-case performance guarantees for a broad class of graphs. In contrast, we show that the performance of widely used batching algorithms can be arbitrarily bad on certain graph-theoretic structures motivated by car-pooling services. Our approach involves the development of a host of new techniques, including linear programming benchmarks, value function approximations, and proxies for continuous-time Markov chains, which may be of broader interest. In extensive experiments, we simulate the matching operations of a car-pooling platform using real-world taxi demand data. The newly developed algorithms can significantly improve cost efficiency against batching algorithms.
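The batching policies the paper critiques can be sketched in a few lines. This is a toy illustration with our own names and simplifications: agents are pooled over fixed windows and matched greedily by weight at each window's close, and unmatched agents simply abandon (the paper instead models stochastic, heterogeneous abandonment).

```python
def greedy_match(pool, weight):
    """Greedily pair pooled agents by decreasing edge weight.
    weight(a, b) returns the pair's value, or None if incompatible."""
    pairs = [(weight(a, b), a, b)
             for i, a in enumerate(pool)
             for b in pool[i + 1:]
             if weight(a, b) is not None]
    pairs.sort(key=lambda p: p[0], reverse=True)
    used, out = set(), []
    for w, a, b in pairs:
        if a not in used and b not in used:
            used.update((a, b))
            out.append((a, b, w))
    return out

def batching_policy(arrivals, window, weight):
    """Sketch of a fixed-window batching policy (illustrative).
    arrivals is a list of (time, agent) pairs; each window's pool is
    matched when the window closes, and leftovers abandon."""
    matches, pool, batch_end = [], [], window
    for t, agent in sorted(arrivals):
        while t >= batch_end:               # close out finished windows
            matches += greedy_match(pool, weight)
            pool, batch_end = [], batch_end + window
        pool.append(agent)
    matches += greedy_match(pool, weight)   # final partial window
    return matches
```

Note how agents arriving close in time but on opposite sides of a window boundary never see each other, which is exactly the kind of structure on which batching can perform arbitrarily badly.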
Competition and Yield Optimization in Ad Exchanges
Ad Exchanges are emerging Internet markets where advertisers may purchase display ad placements, in real-time and based on specific viewer information, directly from publishers via a simple auction mechanism. The presence of such channels presents a host of new strategic and tactical questions for publishers. How should the supply of impressions be divided between bilateral contracts and exchanges? How should auctions be designed to maximize profits? What is the role of user information and to what extent should it be disclosed? In this thesis, we develop a novel framework to address some of these questions. We first study how publishers should allocate their inventory in the presence of these new markets when traditional reservation-based ad contracts are available. We then study the competitive landscape that arises in Ad Exchanges and the implications for publishers' decisions. Traditionally, an advertiser would buy display ad placements by negotiating deals directly with a publisher, and signing an agreement, called a guaranteed contract. These deals usually take the form of a specific number of ad impressions reserved over a particular time horizon. In light of the growing market of Ad Exchanges, publishers face new challenges in choosing between the allocation of contract-based reservation ads and spot market ads. In this setting, the publisher should take into account the tradeoff between short-term revenue from an Ad Exchange and the long-term impact of assigning high quality impressions to the reservations (typically measured by the click-through rate). In the first part of this thesis, we formalize this combined optimization problem as a stochastic control problem and derive an efficient policy for online ad allocation in settings with general joint distribution over placement quality and exchange bids, where the exchange bids are assumed to be exogenous and independent of the decisions of the publishers. 
We prove asymptotic optimality of this policy in terms of any arbitrary trade-off between quality of delivered reservation ads and revenue from the exchange, and provide a bound for its convergence rate to the optimal policy. We also give experimental results on data derived from real publisher inventory, showing that our policy can achieve any Pareto-optimal point on the quality vs. revenue curve. In the second part of this thesis, we relax the assumption of exogenous bids in the Ad Exchange and study in more detail the competitive landscape that arises in Ad Exchanges and the implications for publishers' decisions. Typically, advertisers join these markets with a pre-specified budget and participate in multiple second-price auctions over the length of a campaign. We introduce the novel notion of a Fluid Mean Field Equilibrium (FMFE) to study the dynamic bidding strategies of budget-constrained advertisers in these repeated auctions. This concept is based on a mean field approximation to relax the advertisers' informational requirements, together with a fluid approximation to handle the complex dynamics of the advertisers' control problems. Notably, we are able to derive a closed-form characterization of FMFE, which we use to study the auction design problem from the publisher's perspective focusing on three design decisions: (1) the reserve price; (2) the supply of impressions to the Exchange versus an alternative channel such as bilateral contracts; and (3) the disclosure of viewers' information. Our results provide novel insights with regard to key auction design decisions that publishers face in these markets. In the third part of this thesis, we justify the use of the FMFE as an equilibrium concept in this setting by proving that the FMFE provides a good approximation to the rational behavior of agents in large markets. To do so, we consider a sequence of scaled systems with increasing market size.
In this regime we show that, when all advertisers implement the FMFE strategy, the relative profit obtained from any unilateral deviation that keeps track of all available information in the market becomes negligible as the scale of the market increases. Hence, an FMFE strategy indeed becomes a best response in large markets.
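The repeated second-price auctions with budget-constrained bidders studied here can be illustrated with a toy simulation. All names and the fixed shading multipliers below are our own illustrative choices: the thesis derives equilibrium shading from the FMFE characterization rather than fixing multipliers in advance.

```python
def run_second_price(bids, reserve=0.0):
    """Toy second-price auction with a reserve price (standard mechanism).
    Returns (winner_index, price), or (None, 0.0) if no bid clears the
    reserve; the winner pays the second-highest eligible bid."""
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not eligible:
        return None, 0.0
    eligible.sort(reverse=True)
    winner = eligible[0][1]
    price = eligible[1][0] if len(eligible) > 1 else reserve
    return winner, price

def simulate_campaign(values, budgets, mus, reserve):
    """Hedged sketch: budget-constrained advertisers in repeated
    second-price auctions, each shading its value by a fixed multiplier
    (bid = v / (1 + mu)), the functional form suggested by fluid
    approximations. values[t][i] is advertiser i's value at time t."""
    spend = [0.0] * len(budgets)
    for vals in values:
        # An advertiser with exhausted budget drops out (bids 0).
        bids = [v / (1.0 + mu) if spend[i] < budgets[i] else 0.0
                for i, (v, mu) in enumerate(zip(vals, mus))]
        winner, price = run_second_price(bids, reserve)
        if winner is not None:
            spend[winner] += price
    return spend
```

In an FMFE, each advertiser best-responds to the stationary bid distribution of the market rather than to individual opponents, which is what makes the closed-form characterization tractable.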
LIPIcs, Volume 274, ESA 2023, Complete Volume