
    A maximum entropy approach to the newsvendor problem with partial information

    In this paper, we consider the newsvendor model under partial information, i.e., where the demand distribution D is partly unknown. We focus on the classical case where the retailer only knows the expectation and variance of D. The standard approach is then to determine the order quantity using conservative rules such as minimax regret or Scarf's rule. We compute instead the most likely demand distribution in the sense of maximum entropy. We then compare the performance of the maximum entropy approach with minimax regret and Scarf's rule on large samples of randomly drawn demand distributions. We show that the average performance of the maximum entropy approach is considerably better than either alternative, and more surprisingly, that it is in most cases a better hedge against bad results.
    Keywords: newsvendor model; entropy; partial information
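
    The comparison described above can be illustrated with a minimal sketch. The snippet below contrasts Scarf's min-max rule with an order quantity derived from the maximum-entropy distribution given mean and variance; for simplicity the max-entropy law is taken as a plain normal (the paper works with nonnegative demand, where it is a truncated normal), and all parameter names and numbers are illustrative rather than taken from the paper.

```python
import math
from scipy.stats import norm

def scarf_order_quantity(mu, sigma, cost, price):
    """Scarf's distribution-free (min-max) rule for known mean and variance."""
    beta = (price - cost) / price  # critical fractile
    return mu + (sigma / 2.0) * (math.sqrt(beta / (1 - beta))
                                 - math.sqrt((1 - beta) / beta))

def max_entropy_order_quantity(mu, sigma, cost, price):
    """Maximum-entropy heuristic: with only mean and variance fixed (and support
    on the whole real line), the max-entropy distribution is normal, so order at
    the normal critical fractile."""
    beta = (price - cost) / price
    return mu + sigma * norm.ppf(beta)

# Illustrative numbers, not from the paper.
print(scarf_order_quantity(mu=100, sigma=30, cost=4, price=10))
print(max_entropy_order_quantity(mu=100, sigma=30, cost=4, price=10))
```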

    Confidence-based Optimization for the Newsvendor Problem

    We introduce a novel strategy to address the issue of demand estimation in single-item single-period stochastic inventory optimisation problems. Our strategy analytically combines confidence interval analysis and inventory optimisation. We assume that the decision maker is given a set of past demand samples, and we employ confidence interval analysis in order to identify a range of candidate order quantities that, with prescribed confidence probability, includes the real optimal order quantity for the underlying stochastic demand process with unknown stationary parameter(s). In addition, for each candidate order quantity that is identified, our approach can produce an upper and a lower bound for the associated cost. We apply our novel approach to three demand distributions in the exponential family: binomial, Poisson, and exponential. For two of these distributions we also discuss the extension to the case of unobserved lost sales. Numerical examples are presented in which we show how our approach complements existing frequentist (e.g., based on maximum likelihood estimators) or Bayesian strategies.
    Comment: Working draft
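
    As a rough illustration of the idea (not the paper's exact procedure), the sketch below builds an exact confidence interval for a Poisson demand rate from past samples and maps its endpoints to newsvendor quantiles, yielding a range of candidate order quantities. Function names, cost parameters, and the demand samples are hypothetical.

```python
import numpy as np
from scipy.stats import chi2, poisson

def poisson_rate_ci(samples, confidence=0.95):
    """Exact (Garwood) confidence interval for a Poisson rate from i.i.d. counts."""
    n, k = len(samples), int(np.sum(samples))
    alpha = 1 - confidence
    lower = chi2.ppf(alpha / 2, 2 * k) / (2 * n) if k > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * k + 2) / (2 * n)
    return lower, upper

def candidate_order_quantities(samples, cost, price, confidence=0.95):
    """Map each endpoint of the rate interval to its newsvendor quantile,
    giving a range of candidate order quantities."""
    beta = (price - cost) / price  # critical fractile
    lo, hi = poisson_rate_ci(samples, confidence)
    return int(poisson.ppf(beta, lo)), int(poisson.ppf(beta, hi))

demand = [8, 12, 9, 11, 10, 7, 13]  # made-up past demand samples
print(candidate_order_quantities(demand, cost=4, price=10))
```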

    Robust Solutions of Optimization Problems Affected by Uncertain Probabilities

    In this paper we focus on robust linear optimization problems with uncertainty regions defined by φ-divergences (for example, chi-squared, Hellinger, Kullback-Leibler). We show how uncertainty regions based on φ-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization problems in inventory control or finance that involve terms containing moments of random variables, expected utility, etc. We show that the robust counterpart of a linear optimization problem with φ-divergence uncertainty is tractable for most of the choices of φ typically considered in the literature. We extend the results to problems that are nonlinear in the optimization variables. Several applications, including an asset pricing example and a numerical multi-item newsvendor example, illustrate the relevance of the proposed approach.
    Keywords: robust optimization; φ-divergence; goodness-of-fit statistics
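
    For intuition on how such uncertainty sets are used, the sketch below evaluates the worst-case expected newsvendor loss over a Kullback-Leibler ball around an empirical distribution, using the standard one-dimensional dual of the KL-constrained problem. This only illustrates the flavour of φ-divergence robustness; it is not the paper's tractable robust-counterpart derivation, and all names and numbers are made up.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_expectation_kl(losses, p, rho):
    """Worst-case expected loss over {q : KL(q || p) <= rho}, computed via the
    dual  min_{a > 0}  a*rho + a*log E_p[exp(loss / a)]."""
    losses = np.asarray(losses, dtype=float)
    p = np.asarray(p, dtype=float)

    def dual(a):
        m = losses.max()  # log-sum-exp shift for numerical stability
        return a * rho + m + a * np.log(np.sum(p * np.exp((losses - m) / a)))

    res = minimize_scalar(dual, bounds=(1e-6, 1e3), method="bounded")
    return res.fun

# Newsvendor loss (negative profit) for order q on a discrete demand grid.
demand = np.arange(0, 21)
p_hat = np.full(len(demand), 1 / len(demand))  # empirical (here uniform) weights
cost, price, q = 4, 10, 12
loss = cost * q - price * np.minimum(q, demand)
print(worst_case_expectation_kl(loss, p_hat, rho=0.05))
```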

    TECHNICAL NOTE—Robust Newsvendor Competition Under Asymmetric Information

    We generalize the analysis of competition among newsvendors to a setting in which competitors possess asymmetric information about future demand realizations, and this information is limited to knowledge of the support of the demand distribution. In such a setting, traditional expectation-based optimization criteria are not adequate, and we therefore focus on the alternative criterion used in the robust optimization literature: absolute regret minimization. We show existence of, and derive closed-form expressions for, the robust optimization Nash equilibrium solution for a game with an arbitrary number of players. This solution allows us to gain insight into the nature of robust asymmetric newsvendor competition. We show that the competitive solution in the presence of information asymmetry is an intuitive extension of the robust solution for the monopolistic newsvendor problem, which allows us to distill the impact of both competition and information asymmetry. In addition, we show that, contrary to intuition, a competing newsvendor does not necessarily benefit from having better information about its own demand distribution than its competitor has.
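
    The monopolistic building block referred to above can be written down directly: with only the support of demand known, the order quantity minimising the worst-case absolute regret equates the two worst-case regret terms. The sketch below shows this single-newsvendor rule (the competitive equilibrium derived in the paper is more involved); the cost figures are illustrative.

```python
def minimax_regret_order(support_lo, support_hi, underage_cost, overage_cost):
    """Min-max absolute-regret order for a newsvendor who only knows that demand
    lies in [support_lo, support_hi]: the worst-case regret
    max(c_u * (hi - q), c_o * (q - lo)) is minimised where the two terms are equal."""
    cu, co = underage_cost, overage_cost
    return (cu * support_hi + co * support_lo) / (cu + co)

# Illustrative: price 10, cost 4 -> underage cost 6, overage cost 4, support [50, 150].
print(minimax_regret_order(50, 150, underage_cost=6, overage_cost=4))  # 110.0
```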

    Bootstrap Robust Prescriptive Analytics

    We address the problem of prescribing an optimal decision in a framework where its cost depends on uncertain problem parameters $Y$ that need to be learned from data. Earlier work by Bertsimas and Kallus (2014) transforms classical machine learning methods that merely predict $Y$ from supervised training data $[(x_1, y_1), \dots, (x_n, y_n)]$ into prescriptive methods taking optimal decisions specific to a particular covariate context $X = \bar x$. Their prescriptive methods factor in additional observed contextual information on a potentially large number of covariates $X = \bar x$ to take context-specific actions $z(\bar x)$ which are superior to any static decision $z$. Any naive use of limited training data may, however, lead to gullible decisions over-calibrated to one particular data set. In this paper, we borrow ideas from distributionally robust optimization and the statistical bootstrap of Efron (1982) to propose two novel prescriptive methods based on (nw) Nadaraya-Watson and (nn) nearest-neighbors learning which safeguard against overfitting and lead to improved out-of-sample performance. Both resulting robust prescriptive methods reduce to tractable convex optimization problems and enjoy a limited disappointment on bootstrap data. We illustrate the data-driven decision-making framework and our novel robustness notion on a small newsvendor problem as well as a small portfolio allocation problem.
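
    To make the prescriptive (nw) idea concrete, the sketch below implements a plain Nadaraya-Watson weighted newsvendor in the spirit of Bertsimas and Kallus: past demands are weighted by a Gaussian kernel on a one-dimensional covariate and the order is the weighted critical-fractile quantile. The bootstrap-robust correction proposed in the paper is not reproduced here, and the covariate/demand history is invented for illustration.

```python
import numpy as np

def nadaraya_watson_newsvendor(x_query, X, Y, bandwidth, cost, price):
    """Order at the kernel-weighted critical-fractile quantile of past demand."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)  # Gaussian kernel weights
    w /= w.sum()
    beta = (price - cost) / price  # critical fractile
    order = np.argsort(Y)
    cum = np.cumsum(w[order])
    return Y[order][np.searchsorted(cum, beta)]  # weighted quantile of demand

# Hypothetical one-dimensional covariate (e.g. temperature) and demand history.
X_hist = np.array([15, 18, 22, 25, 30, 12, 20])
Y_hist = np.array([80, 95, 120, 140, 170, 70, 105])
print(nadaraya_watson_newsvendor(x_query=24, X=X_hist, Y=Y_hist,
                                 bandwidth=3.0, cost=4, price=10))
```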

    How Big Should Your Data Really Be? Data-Driven Newsvendor and the Transient of Learning

    We study the classical newsvendor problem in which the decision-maker must trade off underage and overage costs. In contrast to the typical setting, we assume that the decision-maker does not know the underlying distribution driving uncertainty but only has access to historical data. In turn, the key questions are how to map existing data to a decision and what type of performance to expect as a function of the data size. We analyze the classical setting with access to past samples drawn from the distribution (e.g., past demand), focusing not only on asymptotic performance but also on what we call the transient of learning, i.e., performance for arbitrary data sizes. We evaluate the performance of any algorithm through its worst-case relative expected regret, compared to an oracle with knowledge of the distribution. We provide the first finite-sample exact analysis of the classical Sample Average Approximation (SAA) algorithm for this class of problems across all data sizes. This allows us to uncover novel fundamental insights on the value of data: it reveals that tens of samples are sufficient to perform very efficiently, but also that more data can lead to worse out-of-sample performance for SAA. We then focus on the general class of mappings from data to decisions without any restriction on the set of policies and derive an optimal algorithm as well as characterize its associated performance. This leads to significant improvements for limited data sizes and allows us to quantify exactly the value of historical information.
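
    A rough Monte-Carlo illustration of the SAA behaviour discussed above: order at the empirical critical-fractile quantile of the observed samples and compare the resulting expected profit with that of an oracle that knows the distribution. This is not the paper's worst-case relative-regret analysis; the exponential demand model and all numbers are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_order(samples, cost, price):
    """Sample Average Approximation: order at the empirical quantile of past
    demand at the critical fractile."""
    beta = (price - cost) / price
    return float(np.quantile(samples, beta))

def expected_profit(q, sampler, cost, price, n_eval=100_000):
    d = sampler(n_eval)
    return float(np.mean(price * np.minimum(q, d) - cost * q))

# Illustrative: exponential demand with mean 100, critical fractile 0.6.
sampler = lambda n: rng.exponential(100, n)
oracle_q = -100 * np.log(1 - 0.6)  # true 0.6-quantile of the exponential
oracle_profit = expected_profit(oracle_q, sampler, 4, 10)
for n in (10, 100, 1000):
    q = saa_order(sampler(n), cost=4, price=10)
    print(n, 1 - expected_profit(q, sampler, 4, 10) / oracle_profit)  # relative regret
```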

    Practice-driven solutions for inventory management problems in data-scarce environments

    Many firms are challenged to make inventory decisions with limited data and high customer service-level requirements. This thesis focuses on heuristic solutions for inventory management problems in data-scarce environments, employing rigorous mathematical frameworks and taking advantage of information that is available in practice but often ignored in the literature. We define a class of inventory models and solutions with demonstrable value in helping firms solve these challenges.