A maximum entropy approach to the newsvendor problem with partial information
In this paper, we consider the newsvendor model under partial information, i.e., where the demand distribution D is partly unknown. We focus on the classical case where the retailer only knows the expectation and variance of D. The standard approach is then to determine the order quantity using conservative rules such as minimax regret or Scarf's rule. We compute instead the most likely demand distribution in the sense of maximum entropy. We then compare the performance of the maximum entropy approach with minimax regret and Scarf's rule on large samples of randomly drawn demand distributions. We show that the average performance of the maximum entropy approach is considerably better than either alternative and, more surprisingly, that it is in most cases a better hedge against bad results.
Keywords: newsvendor model; entropy; partial information
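For concreteness, a minimal numerical sketch of the two order rules being compared. It assumes demand supported on the whole real line, where the maximum-entropy distribution with known mean and variance is the normal; on nonnegative support (the more realistic newsvendor setting) the maximum-entropy family differs, so the Gaussian shortcut and the cost parameters below are illustrative assumptions, not the authors' procedure.

```python
# Sketch: compare a maximum-entropy (here: normal) order quantity against
# Scarf's distribution-free min-max rule, both using only mean and variance.
# Assumption: normal = max-entropy holds exactly only for real-line support.
from scipy.stats import norm

def max_entropy_order(mu, sigma, cu, co):
    """Critical fractile of the max-entropy (normal) demand distribution."""
    q = cu / (cu + co)                      # newsvendor critical fractile
    return mu + sigma * norm.ppf(q)

def scarf_order(mu, sigma, cu, co):
    """Scarf's min-max order quantity for known mean and variance."""
    return mu + (sigma / 2.0) * ((cu / co) ** 0.5 - (co / cu) ** 0.5)

# Illustrative parameters: mean 100, std 30, underage cost 3, overage cost 1.
print(max_entropy_order(100, 30, 3, 1))     # ~120.2
print(scarf_order(100, 30, 3, 1))           # ~117.3
```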
Robust Solutions of Optimization Problems Affected by Uncertain Probabilities
In this paper we focus on robust linear optimization problems with uncertainty regions defined by φ-divergences (for example, chi-squared, Hellinger, Kullback-Leibler). We show how uncertainty regions based on φ-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization problems in inventory control or finance that involve terms containing moments of random variables, expected utility, etc. We show that the robust counterpart of a linear optimization problem with φ-divergence uncertainty is tractable for most of the choices of φ typically considered in the literature. We extend the results to problems that are nonlinear in the optimization variables. Several applications, including an asset pricing example and a numerical multi-item newsvendor example, illustrate the relevance of the proposed approach.
Keywords: robust optimization; φ-divergence; goodness-of-fit statistics
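As a concrete illustration of why such uncertainty regions are tractable, the sketch below evaluates the worst-case expectation over a Kullback-Leibler ball around an empirical probability vector via the standard one-dimensional convex dual $\sup_{p:\,KL(p\|\hat p)\le\rho} p^\top x = \inf_{a>0}\, a\log\sum_i \hat p_i e^{x_i/a} + a\rho$; the radius, the scenario costs, and the probability vector are illustrative assumptions, not taken from the paper.

```python
# Sketch: worst-case expectation over a KL-divergence uncertainty set,
# solved through its one-dimensional convex dual (a standard reduction).
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_expectation(x, p_hat, rho):
    """Upper bound on E_p[x] over {p : KL(p || p_hat) <= rho}."""
    def dual(log_a):
        a = np.exp(log_a)                   # parametrize a > 0
        z = x / a
        m = z.max()                         # log-sum-exp for stability
        lse = m + np.log(p_hat @ np.exp(z - m))
        return a * lse + a * rho
    return minimize_scalar(dual).fun        # minimize over log_a in R

x = np.array([1.0, 2.0, 5.0])               # per-scenario cost (illustrative)
p_hat = np.array([0.5, 0.3, 0.2])           # empirical distribution
print(worst_case_expectation(x, p_hat, rho=0.05))  # ~2.6 vs. nominal 2.1
```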
Optimal No-regret Learning in Repeated First-price Auctions
We study online learning in repeated first-price auctions with censored feedback, where a bidder, observing only the winning bid at the end of each auction, learns to bid adaptively in order to maximize her cumulative payoff. To achieve this goal, the bidder faces a challenging dilemma: if she wins the auction (the only way to achieve positive payoffs), then she is not able to observe the highest bid of the other bidders, which we assume is drawn i.i.d. from an unknown distribution. This dilemma, despite being reminiscent of the exploration-exploitation trade-off in contextual bandits, cannot be addressed directly by the existing UCB or Thompson sampling algorithms in that literature, mainly because, contrary to the standard bandit setting, when a positive reward is obtained here, nothing about the environment can be learned. In this paper, by exploiting the structural properties of first-price auctions, we develop the first learning algorithm that achieves a $\widetilde{O}(\sqrt{T})$ regret bound when the bidder's private values are stochastically generated. We do so by providing an algorithm for a general class of problems, which we call monotone group contextual bandits, for which the same regret bound is established under stochastically generated contexts. Further, by a novel lower bound argument, we characterize an $\Omega(T^{2/3})$ lower bound for the case where the contexts are adversarially generated, thus highlighting the impact of the context-generation mechanism on the fundamental learning limit. Despite this, we further exploit the structure of first-price auctions and develop a learning algorithm that operates sample-efficiently (and computationally efficiently) in the presence of adversarially generated private values. We establish an $\widetilde{O}(T^{2/3})$ regret bound for this algorithm, hence providing a complete characterization of optimal learning guarantees for this problem.
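The one-sided feedback described above can be made concrete with a toy simulation. The sketch below is not the paper's algorithm; it is a simple greedy bid-grid learner that only illustrates the monotone structure the abstract exploits: winning with bid b shows that every bid at or above b would also have won, while losing reveals the highest competing bid exactly. The Beta-distributed competing bids, the grid, and the greedy rule are my illustrative assumptions.

```python
# Toy simulation (illustrative assumptions, not the paper's algorithm):
# exploit the monotone feedback of first-price auctions on a bid grid.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 51)      # candidate bids
wins = np.ones_like(grid)             # pseudo-counts for estimating P(win | bid)
trials = 2.0 * np.ones_like(grid)

T, value = 5000, 0.8                  # horizon and (fixed) private value
total = 0.0
for t in range(T):
    p_win = wins / trials
    bid = grid[np.argmax((value - grid) * p_win)]  # greedy on estimated payoff
    m = rng.beta(2, 5)                # unknown highest competing bid
    if bid >= m:                      # win: one-sided update (bids >= bid also win)
        total += value - bid
        mask = grid >= bid
        wins[mask] += 1
        trials[mask] += 1
    else:                             # lose: m observed, every grid bid resolved
        trials += 1
        wins[grid >= m] += 1
print("average payoff:", total / T)
```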
Quality vs. Quantity of Data in Contextual Decision-Making: Exact Analysis under Newsvendor Loss
When building datasets, one needs to invest time, money, and energy either to aggregate more data or to improve their quality. The most common practice favors quantity over quality without necessarily quantifying the trade-off that emerges. In this work, we study data-driven contextual decision-making and the performance implications of the quality and quantity of data. We focus on contextual decision-making with a newsvendor loss. This loss is that of a central capacity planning problem in Operations Research, and also the loss associated with quantile regression. We consider a model in which outcomes observed in similar contexts have similar distributions, and analyze the performance of a classical class of kernel policies, which weight data according to their similarity in a contextual space. We develop a series of results that lead to an exact characterization of the worst-case expected regret of these policies. This exact characterization applies to any sample size and any observed contexts. The model we develop is flexible and captures the case of partially observed contexts. This exact analysis unveils new structural insights into the learning behavior of uniform kernel methods: i) the specialized analysis yields much sharper quantifications of performance than state-of-the-art general-purpose bounds; ii) we show an important non-monotonicity of performance as a function of data size that is not captured by previous bounds; and iii) we show that in some regimes, a small increase in the quality of the data can dramatically reduce the number of samples required to reach a performance target. All in all, our work demonstrates that the interplay of data quality, data quantity, and performance in a central problem class can be quantified precisely. It also highlights the need for problem-specific bounds in order to understand the trade-offs at play.
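A minimal sketch of the uniform-kernel policy class analyzed above; the bandwidth, the Euclidean distance, and the synthetic data are illustrative assumptions.

```python
# Sketch: uniform-kernel contextual newsvendor policy, i.e., order the
# empirical critical-fractile quantile of demands observed in nearby contexts.
import numpy as np

def uniform_kernel_order(x, contexts, demands, h, cu, co):
    """Quantile of demands whose contexts lie within distance h of x."""
    mask = np.linalg.norm(contexts - x, axis=1) <= h
    if not mask.any():                # no nearby data: fall back to all samples
        mask[:] = True
    q = cu / (cu + co)                # newsvendor critical fractile
    return np.quantile(demands[mask], q)

rng = np.random.default_rng(1)
contexts = rng.uniform(size=(200, 2))
demands = 50 + 40 * contexts[:, 0] + rng.normal(0, 5, size=200)
print(uniform_kernel_order(np.array([0.7, 0.3]), contexts, demands,
                           h=0.2, cu=3, co=1))
```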
Aggregate constrained inventory systems with independent multi-product demand: control practices and theoretical limitations
In practice, inventory managers are often confronted with the need to consider one or more aggregate constraints. These aggregate constraints arise from limits on available workspace, workforce, maximum investment, or target service levels. We consider independent multi-item inventory problems with aggregate constraints and one of the following characteristics: deterministic lead-time demand, the newsvendor model, or a base-stock, (r,Q), or (s,S) policy. We review recent relevant references and examine the problem variants considered, the proposed model formulations, and the algorithmic approaches. Finally, we highlight the practical limitations of these models and point out possible directions for future improvement.
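To fix ideas for one of the listed settings, a hedged sketch of the textbook Lagrangian treatment of an aggregate budget constraint in a multi-item newsvendor with independent normal demands: relax the constraint with a multiplier, solve each item at an adjusted fractile, and bisect on the multiplier until the aggregate constraint binds. All parameters are illustrative, and the surveyed papers may use different formulations.

```python
# Sketch: multi-item newsvendor with aggregate budget sum_i c_i * Q_i <= B,
# via Lagrangian relaxation: F_i(Q_i) = (cu_i - lam*c_i) / (cu_i + co_i).
import numpy as np
from scipy.stats import norm

mu = np.array([100.0, 80.0, 120.0])   # demand means (illustrative)
sigma = np.array([20.0, 15.0, 30.0])  # demand standard deviations
cu = np.array([4.0, 3.0, 5.0])        # underage costs
co = np.array([1.0, 1.0, 2.0])        # overage costs
c = np.array([1.0, 1.0, 1.0])         # budget usage per unit ordered
B = 280.0                             # aggregate budget

def orders(lam):
    q = np.clip((cu - lam * c) / (cu + co), 1e-6, 1 - 1e-6)
    return mu + sigma * norm.ppf(q)

lo, hi = 0.0, float(cu.max() / c.min())
for _ in range(60):                   # bisection on the multiplier
    lam = 0.5 * (lo + hi)
    if c @ orders(lam) > B:           # still over budget: raise the multiplier
        lo = lam
    else:
        hi = lam
print(orders(lam), c @ orders(lam))   # budget-feasible order quantities
```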
Stochastic Stackelberg equilibria with applications to time dependent newsvendor models
In this paper we prove a sufficient maximum principle for general stochastic differential Stackelberg games, and apply the theory to continuous-time newsvendor problems. In the newsvendor problem a manufacturer sells goods to a retailer, and the objective of both parties is to maximize expected profits under a random demand rate. Our demand rate is an Itô-Lévy process, and, to increase realism, information is delayed, e.g., due to production time. We provide complete existence and uniqueness proofs for a series of special cases, including geometric Brownian motion and the Ornstein-Uhlenbeck process, both with time-variable coefficients. Moreover, these results are operational because we are able to offer explicit solution formulas. An interesting finding is that more precise information may be a considerable disadvantage for the retailer.
Keywords: stochastic differential games; newsvendor model; delayed information; Itô-Lévy processes
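To fix ideas, a generic sketch of the leader-follower structure such a continuous-time newsvendor game can take, written for the geometric-Brownian-motion special case mentioned in the abstract; the notation, the wholesale-price and order-rate controls, and the delay parameter $\delta$ are illustrative assumptions rather than the paper's exact model.

```latex
% Sketch (assumptions mine): manufacturer (leader) sets a wholesale price w_t,
% the retailer (follower) responds with an order rate q_t, under delayed information.
\begin{align*}
  dD_t &= \mu(t)\, D_t\, dt + \sigma(t)\, D_t\, dB_t
         && \text{demand rate (GBM special case)} \\
  J_R(q) &= \mathbb{E}\left[ \int_0^T \bigl( p\,(D_t \wedge q_t) - w_t\, q_t \bigr)\, dt \right]
         && \text{retailer's expected profit} \\
  J_M(w) &= \mathbb{E}\left[ \int_0^T (w_t - c)\, q_t^{*}(w)\, dt \right]
         && \text{manufacturer's profit at the retailer's best response } q^{*}(w)
\end{align*}
% Delayed information: the retailer's control q_t must be measurable with
% respect to the delayed filtration \mathcal{F}_{t - \delta}.
```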