9 research outputs found
Online Nash Welfare Maximization Without Predictions
Nash welfare maximization is widely studied because it balances efficiency
and fairness in resource allocation problems. Banerjee, Gkatzelis, Gorokh, and
Jin (2022) recently introduced the model of online Nash welfare maximization
with predictions for divisible items and agents with additive
utilities. They gave online algorithms whose competitive ratios are
logarithmic. We initiate the study of online Nash welfare maximization
\emph{without predictions}, assuming either that the agents' utilities for
receiving all items differ by a bounded ratio, or that their utilities for the
Nash welfare maximizing allocation differ by a bounded ratio. We design online
algorithms whose competitive ratios depend only on the logarithms of the
aforementioned utility ratios and on the number of agents.
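To make the online setting concrete, here is a minimal Python sketch of a myopic greedy rule that hands each arriving item to the agent whose gain in log Nash welfare is largest. This is an illustrative baseline under simplifying assumptions (integral allocation, utilities seeded with a small epsilon), not the paper's algorithm:

```python
import math

def greedy_online_nash(values, eps=1e-9):
    """Myopic online heuristic for Nash welfare: give each arriving item
    entirely to the agent whose increase in log-utility is largest.
    values[t][i] is agent i's (additive) value for item t; eps seeds the
    utilities so log(0) never occurs."""
    n = len(values[0])
    utility = [eps] * n
    allocation = []
    for v in values:
        # marginal increase in sum_i log(u_i) from giving the item to agent i
        best = max(range(n),
                   key=lambda i: math.log(utility[i] + v[i]) - math.log(utility[i]))
        utility[best] += v[best]
        allocation.append(best)
    return allocation, utility
```

Because items here are divisible, the paper's algorithms may split them; this integral rule only illustrates how the Nash objective trades efficiency against fairness online.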
Greedy-Based Online Fair Allocation with Adversarial Input: Enabling Best-of-Many-Worlds Guarantees
We study an online allocation problem with sequentially arriving items and
adversarially chosen agent values, with the goal of balancing fairness and
efficiency. In particular, we ask how algorithms that achieve strong
guarantees under other input models, such as stochastic inputs, perform on
adversarial inputs, so as to obtain guarantees that are robust across a
variety of inputs. To that end, we study
the PACE (Pacing According to Current Estimated utility) algorithm, an existing
algorithm designed for stochastic input. We show that in the equal-budgets
case, PACE is equivalent to the integral greedy algorithm. We go on to show
that with natural restrictions on the adversarial input model, both integral
greedy allocation and PACE have asymptotically bounded multiplicative envy as
well as competitive ratio for Nash welfare, with the multiplicative factors
either constant or with optimal order dependence on the number of agents. This
completes a "best-of-many-worlds" guarantee for PACE, since past work showed
that PACE achieves guarantees for stationary and stochastic-but-non-stationary
input models.
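The equal-budgets equivalence mentioned above can be seen in a few lines: with equal budgets, PACE's bid v_a / u_a and the integral greedy's Nash welfare gain log(1 + v_a / u_a) are maximized by the same agent, since log1p is monotone. A toy sketch under assumed simplifications (one winner per item, utilities seeded with a small epsilon):

```python
import math

def pace_winners(values, eps=1e-9):
    """PACE with equal budgets (sketch): each agent bids its value for the
    item divided by its current cumulative utility; the item goes to the
    highest bidder."""
    util = [eps] * len(values[0])
    winners = []
    for v in values:
        i = max(range(len(util)), key=lambda a: v[a] / util[a])
        util[i] += v[i]
        winners.append(i)
    return winners

def greedy_winners(values, eps=1e-9):
    """Integral greedy (sketch): give the item to the agent maximizing the
    increase in log Nash welfare, i.e. log(1 + v_a / u_a)."""
    util = [eps] * len(values[0])
    winners = []
    for v in values:
        i = max(range(len(util)), key=lambda a: math.log1p(v[a] / util[a]))
        util[i] += v[i]
        winners.append(i)
    return winners
```

Since both rules take the argmax of monotonically related quantities, they pick the same winner at every step, which is the core of the equivalence.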
Competitive Equilibrium for Chores: from Dual Eisenberg-Gale to a Fast, Greedy, LP-based Algorithm
We study the computation of competitive equilibrium for Fisher markets with
agents and divisible chores. Prior work showed that competitive
equilibria correspond to the nonzero KKT points of a non-convex analogue of the
Eisenberg-Gale convex program. We introduce an analogue of the Eisenberg-Gale
dual for chores: we show that all KKT points of this dual correspond to
competitive equilibria, and while it is not a dual of the non-convex primal
program in a formal sense, the objectives touch at all KKT points. Similar to
the primal, the dual has problems from an optimization perspective: there are
many feasible directions where the objective tends to positive infinity. We
then derive a new constraint for the dual, which restricts optimization to a
hyperplane that avoids all these directions. We show that restriction to this
hyperplane retains all KKT points, and surprisingly, does not introduce any new
ones. This enables, for the first time, the application of iterative
optimization methods over a convex region for computing competitive equilibria
for chores.
We next introduce a greedy Frank-Wolfe algorithm for optimization over our
program and show a state-of-the-art convergence rate to competitive
equilibrium. In the case of equal incomes, we show a rate of convergence, which improves over the two prior
state-of-the-art rates of for an
exterior-point method and for a
combinatorial method. Moreover, our method is significantly simpler: each
iteration of our method only requires solving a simple linear program. We show
through numerical experiments on simulated data and a paper review bidding
dataset that our method is extremely practical. This is the first highly
practical method for solving competitive equilibrium for Fisher markets with
chores.

Comment: 25 pages, 17 figures
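The Frank-Wolfe scheme described above reduces each iteration to a linear program over the feasible region. As a generic illustration (not the paper's chores program), here is Frank-Wolfe over a polytope given by its vertex list, so the per-iteration "LP" is just a scan of the vertices:

```python
def frank_wolfe(grad, vertices, x0, iters=5000):
    """Generic Frank-Wolfe: each iteration solves a linear minimization
    over the feasible set (here a polytope listed by its vertices) and
    moves toward the minimizer with the classic step size 2/(t+2)."""
    x = list(x0)
    for t in range(iters):
        g = grad(x)
        # linear oracle: vertex s minimizing <g, s>
        s = min(vertices, key=lambda v: sum(gi * vi for gi, vi in zip(g, v)))
        step = 2.0 / (t + 2.0)
        x = [(1 - step) * xi + step * si for xi, si in zip(x, s)]
    return x
```

In the paper's setting the linear oracle is an actual (simple) linear program over the constrained dual region rather than a vertex scan; the iteration structure is the same.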
Proportionally Fair Online Allocation of Public Goods with Predictions
We design online algorithms for the fair allocation of public goods to a set
of agents over a sequence of rounds and focus on improving their
performance using predictions. In the basic model, a public good arrives in
each round, the algorithm learns every agent's value for the good, and must
irrevocably decide the amount of investment in the good without exceeding a
total budget across all rounds. The algorithm can utilize (potentially
inaccurate) predictions of each agent's total value for all the goods to
arrive. We measure the performance of the algorithm using a proportional
fairness objective, which informally demands that every group of agents be
rewarded in proportion to its size and the cohesiveness of its preferences.
In the special case of binary agent preferences and a unit budget, we show
that a strong proportional fairness guarantee can be achieved without using any
predictions, and that this guarantee is optimal even if perfectly accurate
predictions were available. However, for general preferences and budgets, no
algorithm without predictions can achieve more than a weak approximation of
proportional fairness. We show that algorithms with (reasonably accurate)
predictions can do much better, achieving a significantly stronger proportional
fairness guarantee. We also extend this result to a general model in which a
batch of public goods arrives in each round. Our exact bounds are parametrized
as a function of the prediction error, and performance degrades gracefully as
the error increases.
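Informally, an allocation is proportionally fair if no alternative outcome offers the agents a positive aggregate relative utility gain. A small checker makes this condition concrete for a finite list of alternatives (a simplification for illustration; the paper quantifies over all feasible investments and relaxes the condition group-wise):

```python
def is_proportionally_fair(u_x, alternatives, tol=1e-12):
    """u_x[i] is agent i's utility under candidate allocation x.  x is
    proportionally fair against the listed alternatives if no alternative
    y achieves sum_i (u_i(y) - u_i(x)) / u_i(x) > 0."""
    return all(sum((uy - ux) / ux for ux, uy in zip(u_x, u_y)) <= tol
               for u_y in alternatives)
```

The condition rewards every group of agents in proportion to its size: a large, cohesive group can generate a large aggregate relative gain from a deviation, so a proportionally fair outcome must already serve it well.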
Fairness-aware Network Revenue Management with Demand Learning
In addition to maximizing the total revenue, decision-makers in many
industries would like to guarantee fair consumption across different resources
and avoid saturating certain resources. Motivated by these practical needs,
this paper studies the price-based network revenue management problem with both
demand learning and fairness concern about the consumption across different
resources. We introduce the regularized revenue, i.e., the total revenue with a
fairness regularization, as our objective to incorporate fairness into the
revenue maximization goal. We propose a primal-dual-type online policy with the
Upper-Confidence-Bound (UCB) demand learning method to maximize the regularized
revenue. We adopt several innovative techniques to make our algorithm a unified
and computationally efficient framework for the continuous price set and a wide
class of fairness regularizers. Our algorithm achieves a worst-case regret
bound parametrized by the number of products and the number of time periods.
Numerical experiments on a few NRM examples demonstrate the effectiveness of
our algorithm in balancing revenue and fairness.
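As a toy illustration of the demand-learning ingredient only: a UCB rule over a finite price grid that repeatedly posts the price with the highest optimistic revenue. The paper's actual policy is primal-dual over a continuous price set with a fairness regularizer; the Bernoulli demand model and price grid below are assumptions made for the sketch.

```python
import math
import random

def ucb_pricing(true_demand, prices, T, seed=0):
    """Toy UCB pricing: at each round, post the price p maximizing the
    optimistic revenue p * (mean demand + confidence radius), observe a
    Bernoulli demand realization, and update the estimate for that price."""
    rng = random.Random(seed)
    n = [0] * len(prices)
    mean = [0.0] * len(prices)
    revenue = 0.0
    for t in range(1, T + 1):
        ucb = [p * (mean[j] + (math.sqrt(2 * math.log(t) / n[j])
                               if n[j] else float('inf')))
               for j, p in enumerate(prices)]
        j = max(range(len(prices)), key=lambda k: ucb[k])
        d = 1.0 if rng.random() < true_demand(prices[j]) else 0.0
        n[j] += 1
        mean[j] += (d - mean[j]) / n[j]
        revenue += prices[j] * d
    return revenue / T
```

The optimism bonus shrinks as a price is tried more often, so the rule eventually concentrates on near-revenue-optimal prices while still exploring, which is the mechanism behind sublinear regret in bandit-style demand learning.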
Statistical Inference for Fisher Market Equilibrium
Statistical inference under market equilibrium effects has attracted
increasing attention recently. In this paper we focus on the specific case of
linear Fisher markets, which have been widely used in fair resource allocation
of food and blood donations and in budget management for large-scale Internet
ad auctions.
In resource allocation, it is crucial to quantify the variability of the
resource received by the agents (such as blood banks and food banks) in
addition to fairness and efficiency properties of the systems. For ad auction
markets, it is important to establish statistical properties of the platform's
revenues in addition to their expected values. To this end, we propose a
statistical framework based on the concept of infinite-dimensional Fisher
markets. In our framework, we observe a market formed by a finite number of
items sampled from an underlying distribution (the "observed market") and aim
to infer several important equilibrium quantities of the underlying long-run
market. These equilibrium quantities include individual utilities, social
welfare, and pacing multipliers. Through the lens of sample average
approximation (SAA), we derive a collection of statistical results and show
that the observed market provides useful statistical information of the
long-run market. In other words, the equilibrium quantities of the observed
market converge to the true ones of the long-run market with strong statistical
guarantees. These include consistency, finite-sample bounds, asymptotics, and
confidence intervals. As an extension, we discuss revenue inference in
quasilinear Fisher markets.
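The sample-average-approximation viewpoint can be shown in its simplest form: estimate a long-run (population) quantity by its average over sampled items. This illustrates only the SAA principle, not the equilibrium computation itself; the item distribution and the quantity function below are placeholders.

```python
import random
import statistics

def saa_estimate(sample_item, quantity, m, seed=0):
    """Estimate E[quantity(item)] under the item distribution by averaging
    quantity over m sampled items (the 'observed market')."""
    rng = random.Random(seed)
    return statistics.fmean(quantity(sample_item(rng)) for _ in range(m))
```

As the number of sampled items m grows, the observed-market estimate concentrates around the long-run value, mirroring the consistency and finite-sample results established in the paper for equilibrium quantities.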