    How Big Should Your Data Really Be? Data-Driven Newsvendor and the Transient of Learning

    We study the classical newsvendor problem in which the decision-maker must trade off underage and overage costs. In contrast to the typical setting, we assume that the decision-maker does not know the underlying distribution driving uncertainty but has access only to historical data. In turn, the key questions are how to map existing data to a decision and what type of performance to expect as a function of the data size. We analyze the classical setting with access to past samples drawn from the distribution (e.g., past demand), focusing not only on asymptotic performance but also on what we call the transient of learning, i.e., performance for arbitrary data sizes. We evaluate the performance of any algorithm through its worst-case relative expected regret, compared to an oracle with knowledge of the distribution. We provide the first finite-sample exact analysis of the classical Sample Average Approximation (SAA) algorithm for this class of problems across all data sizes. This allows us to uncover novel fundamental insights on the value of data: it reveals that tens of samples are sufficient to perform very efficiently, but also that more data can lead to worse out-of-sample performance for SAA. We then focus on the general class of mappings from data to decisions without any restriction on the set of policies, derive an optimal algorithm, and characterize its associated performance. This leads to significant improvements for limited data sizes and allows us to exactly quantify the value of historical information.
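
    A minimal sketch of the SAA decision rule discussed above: order the empirical critical-ratio quantile of the observed demand samples and estimate its out-of-sample regret against an oracle quantile. The exponential demand distribution, the cost values, and the sample size of 30 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def saa_newsvendor(demand_samples, underage_cost, overage_cost):
    """SAA order quantity: the empirical critical-ratio quantile of past demand."""
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    return np.quantile(demand_samples, critical_ratio)

def newsvendor_cost(order_qty, demand, underage_cost, overage_cost):
    """Average underage/overage cost of an order quantity against demand draws."""
    underage = np.maximum(demand - order_qty, 0.0)
    overage = np.maximum(order_qty - demand, 0.0)
    return np.mean(underage_cost * underage + overage_cost * overage)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b, h = 3.0, 1.0                              # hypothetical underage/overage costs
    draw_demand = lambda n: rng.exponential(scale=100.0, size=n)  # assumed demand law

    history = draw_demand(30)                    # "tens of samples" of past demand
    q_saa = saa_newsvendor(history, b, h)

    test = draw_demand(200_000)                  # fresh draws for out-of-sample cost
    q_oracle = np.quantile(test, b / (b + h))    # oracle's (approximate) true quantile
    c_saa = newsvendor_cost(q_saa, test, b, h)
    c_oracle = newsvendor_cost(q_oracle, test, b, h)
    print(f"relative regret of SAA with 30 samples: {c_saa / c_oracle - 1:.3%}")
```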

    Revenue Optimization for a Make-to-Order Queue in an Uncertain Market Environment

    We consider a revenue-maximizing make-to-order manufacturer that serves a market of price- and delay-sensitive customers and operates in an environment in which the market size varies stochastically over time. A key feature of our analysis is that no model is assumed for the evolution of the market size. We analyze two main settings: (i) the size of the market is observable at any point in time; and (ii) the size of the market is not observable and hence cannot be used for decision making. We focus on high-volume systems that are characterized by large processing capacities and market sizes, in which the latter fluctuate on a slower timescale than that of the underlying production system dynamics. We develop an approach to tackle such problems that is based on an asymptotic analysis and that yields near-optimal policy recommendations for the original system via the solution of a stochastic fluid model.
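
    As a loose illustration of the fluid-model viewpoint in setting (i) with an observable market size, the toy sketch below solves a deterministic fluid relaxation for a static price. The exponential demand curve, the utilization cap used to proxy delay sensitivity, and all numbers are assumptions of this example rather than the paper's model.

```python
import numpy as np

def fluid_price(market_size, capacity, price_sensitivity=0.05, max_utilization=0.95):
    """Toy deterministic-fluid relaxation: demand rate lambda(p) =
    market_size * exp(-price_sensitivity * p), with delay sensitivity crudely
    proxied by capping utilization at max_utilization. Returns the static price
    maximizing the resulting revenue rate over a grid of candidate prices."""
    prices = np.linspace(0.0, 200.0, 4001)
    rates = np.minimum(market_size * np.exp(-price_sensitivity * prices),
                       max_utilization * capacity)
    revenue_rates = prices * rates
    return prices[np.argmax(revenue_rates)]

if __name__ == "__main__":
    # Setting (i): the market size is observable; re-solve whenever it moves.
    for observed_market_size in (50.0, 80.0, 120.0):
        p = fluid_price(observed_market_size, capacity=100.0)
        print(f"market size {observed_market_size:6.1f} -> fluid price {p:6.2f}")
```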

    Quality vs. Quantity of Data in Contextual Decision-Making: Exact Analysis under Newsvendor Loss

    When building datasets, one needs to invest time, money, and energy to either aggregate more data or improve their quality. The most common practice favors quantity over quality without necessarily quantifying the trade-off that emerges. In this work, we study data-driven contextual decision-making and the performance implications of the quality and quantity of data. We focus on contextual decision-making with a Newsvendor loss. This loss is that of a central capacity planning problem in Operations Research, but it is also the loss associated with quantile regression. We consider a model in which outcomes observed in similar contexts have similar distributions, and analyze the performance of a classical class of kernel policies which weight data according to their similarity in a contextual space. We develop a series of results that lead to an exact characterization of the worst-case expected regret of these policies. This exact characterization applies to any sample size and any observed contexts. The model we develop is flexible and captures the case of partially observed contexts. This exact analysis enables us to unveil new structural insights on the learning behavior of uniform kernel methods: (i) the specialized analysis leads to very large improvements in the quantification of performance compared to state-of-the-art general-purpose bounds; (ii) we show an important non-monotonicity of performance as a function of data size that is not captured by previous bounds; and (iii) we show that in some regimes, a small increase in data quality can dramatically reduce the number of samples required to reach a performance target. All in all, our work demonstrates that it is possible to precisely quantify the interplay of data quality, data quantity, and performance in a central problem class. It also highlights the need for problem-specific bounds in order to understand the trade-offs at play.
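
    The sketch below shows a uniform-kernel policy of the kind analyzed here: it weights (here, simply selects) past samples whose contexts fall within a bandwidth of the query context and orders the empirical critical-ratio quantile of those demands. The context distribution, demand model, bandwidth, and costs are invented for illustration.

```python
import numpy as np

def uniform_kernel_newsvendor(contexts, demands, query_context,
                              underage_cost, overage_cost, bandwidth):
    """Uniform-kernel policy sketch: keep only samples whose context lies within
    `bandwidth` of the query context, then order the empirical critical-ratio
    quantile of their demands (falling back to all samples if none are close)."""
    distances = np.linalg.norm(contexts - query_context, axis=1)
    close = distances <= bandwidth
    local_demands = demands[close] if close.any() else demands
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    return np.quantile(local_demands, critical_ratio)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, b, h = 200, 4.0, 1.0                      # hypothetical data size and costs
    contexts = rng.uniform(0.0, 1.0, size=(n, 2))
    # Outcomes observed in similar contexts have similar distributions:
    demands = rng.gamma(shape=2.0, scale=20.0 * (1.0 + contexts[:, 0]))
    q = uniform_kernel_newsvendor(contexts, demands, np.array([0.8, 0.3]),
                                  b, h, bandwidth=0.2)
    print(f"kernel order quantity at the query context: {q:.1f}")
```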

    Dynamic Pricing Without Knowing the Demand Function: Risk Bounds and Near-Optimal Algorithms

    We consider a single-product revenue management problem where, given an initial inventory, the objective is to dynamically adjust prices over a finite sales horizon to maximize expected revenues. Realized demand is observed over time, but the underlying functional relationship between price and mean demand rate that governs these observations (otherwise known as the demand function or demand curve) is not known. We consider two instances of this problem: (i) a setting where the demand function is assumed to belong to a known parametric family with unknown parameter values; and (ii) a setting where the demand function is assumed to belong to a broad class of functions that need not admit any parametric representation. In each case we develop policies that learn the demand function “on the fly” and optimize prices accordingly. The performance of these algorithms is measured in terms of the regret: the revenue loss relative to the maximal revenues that can be extracted when the demand function is known prior to the start of the selling season. We derive lower bounds on the regret that hold for any admissible pricing policy, and then show that our proposed algorithms achieve a regret that is “close” to this lower bound. The magnitude of the regret can be interpreted as the economic value of prior knowledge of the demand function, manifested as the revenue loss due to model uncertainty.
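
    For intuition only, here is a generic learn-then-earn pricing sketch, not the near-optimal algorithms developed in the paper: it explores a small grid of prices to estimate mean demand at each, then commits to the price with the highest estimated revenue rate while respecting inventory. The demand curve and all parameters are hypothetical.

```python
import numpy as np

def learn_then_earn(true_demand_rate, prices, horizon, inventory, explore_periods, rng):
    """Generic learn-then-earn sketch (not the paper's policy): try each candidate
    price for a few periods to estimate its mean demand rate, then commit to the
    price with the highest estimated revenue rate, stopping when stock runs out."""
    revenue, sold = 0.0, 0
    estimates = {}
    for p in prices:                                     # exploration phase
        sales = rng.poisson(true_demand_rate(p), size=explore_periods)
        estimates[p] = sales.mean()
        take = min(sales.sum(), inventory - sold)
        sold, revenue = sold + take, revenue + p * take
    best_price = max(prices, key=lambda p: p * estimates[p])
    remaining = horizon - explore_periods * len(prices)  # exploitation phase
    sales = rng.poisson(true_demand_rate(best_price), size=max(remaining, 0))
    take = min(sales.sum(), inventory - sold)
    return revenue + best_price * take, best_price

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    demand_rate = lambda p: 2.0 * np.exp(-0.1 * p)       # unknown to the seller
    rev, p_star = learn_then_earn(demand_rate, prices=[5, 10, 15, 20, 25],
                                  horizon=500, inventory=400,
                                  explore_periods=10, rng=rng)
    print(f"committed price {p_star}, realized revenue {rev:.0f}")
```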

    Going Bunkers: The Joint Route Selection and Refueling Problem

    Managing shipping vessel profitability is a central problem in marine transportation. We consider two commonly used types of vessels—“liners” (ships whose routes are fixed in advance) and “trampers” (ships for which future route components are selected based on available shipping jobs)—and formulate a vessel profit maximization problem as a stochastic dynamic program. For liner vessels, profit maximization reduces to the problem of minimizing refueling costs over a given route subject to random fuel prices and limited vessel fuel capacity. Under mild assumptions about the stochastic dynamics of fuel prices at different ports, we characterize the structural properties of the optimal liner refueling policies. For trampers, the vessel profit maximization combines refueling decisions and route selection, which adds a combinatorial aspect to the problem. We characterize the optimal policy in two special cases: when prices are constant through time and identical across ports, and when prices are constant through time but differ across ports. The structure of the optimal policy in these special cases yields insights into the complexity of the problem and also guides the construction of heuristics for the general problem setting.
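
    For the liner case, the sketch below solves a simplified version of the refueling problem by dynamic programming, assuming deterministic port prices and integer fuel units (the paper allows stochastic prices); the route, prices, and tank capacity are made up for the example.

```python
from functools import lru_cache

def min_refueling_cost(prices, leg_consumption, capacity, start_fuel=0):
    """Deterministic-price sketch of the liner refueling problem: choose how much
    fuel to buy at each port of a fixed route so that every leg can be sailed,
    minimizing total cost. State = (port index, fuel on board), with fuel
    discretized to integer units; port i is followed by leg i."""
    n = len(leg_consumption)
    assert len(prices) == n

    @lru_cache(maxsize=None)
    def cost_to_go(port, fuel):
        if port == n:
            return 0.0
        best = float("inf")
        for purchase in range(capacity - fuel + 1):      # purchases within tank capacity
            after = fuel + purchase - leg_consumption[port]
            if after < 0:
                continue                                 # cannot sail the next leg
            best = min(best, prices[port] * purchase + cost_to_go(port + 1, after))
        return best

    return cost_to_go(0, start_fuel)

if __name__ == "__main__":
    # Hypothetical 4-port route: buy more where fuel is cheap, within tank limits.
    print(min_refueling_cost(prices=[5, 9, 4, 10],
                             leg_consumption=[6, 6, 6, 6],
                             capacity=10))
```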

    Testing the Validity of a Demand Model: An Operations Perspective

    The fields of statistics and econometrics have developed powerful methods for testing the validity (specification) of a model based on its fit to underlying data. Unlike statisticians, managers are typically more interested in the performance of a decision than in the statistical validity of the underlying model. We propose a framework and a statistical test that incorporate decision performance into a measure of statistical validity. Under general conditions on the objective function, the asymptotic behavior of our test admits a sharp and simple characterization. We develop our approach in a revenue management setting and apply the test to a data set used to optimize prices for consumer loans. We show that traditional model-based goodness-of-fit tests may consistently reject simple parametric models of consumer response (e.g., the ubiquitous logit model), while at the same time these models may “pass” the proposed performance-based test. Such situations arise when decisions derived from a postulated (and possibly incorrect) model generate results that cannot be statistically distinguished from the best achievable performance, i.e., the performance attainable when demand relationships are fully known.
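
    A toy numerical illustration of the performance-based idea, under the strong added assumption that historical prices were randomized so counterfactual revenues can be estimated directly: bootstrap the gap between the revenue at a model-recommended price and the best candidate price. This is not the paper's test statistic, just a conceptual sketch with invented acceptance probabilities.

```python
import numpy as np

def performance_gap_test(prices, accepted, candidate_prices, model_price,
                         n_boot=1000, seed=0):
    """Bootstrap the gap between the best achievable revenue per offer and the
    revenue at the model-recommended price. A small, statistically insignificant
    gap means the model's *decision* performs as well as the best achievable,
    even if the model itself fits the data poorly."""
    rng = np.random.default_rng(seed)

    def revenue_at(p, price_col, accept_col):
        at_p = price_col == p
        return p * accept_col[at_p].mean() if at_p.any() else 0.0

    gaps = np.empty(n_boot)
    n = len(prices)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)                      # bootstrap resample
        pr, ac = prices[idx], accepted[idx]
        best = max(revenue_at(p, pr, ac) for p in candidate_prices)
        gaps[i] = best - revenue_at(model_price, pr, ac)
    return gaps.mean(), np.quantile(gaps, [0.025, 0.975])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    candidate_prices = np.array([6.0, 8.0, 10.0, 12.0])
    true_accept = {6.0: 0.60, 8.0: 0.48, 10.0: 0.35, 12.0: 0.20}   # unknown truth
    prices = rng.choice(candidate_prices, size=5000)               # randomized offers
    accepted = rng.random(5000) < np.vectorize(true_accept.get)(prices)
    # Suppose a (possibly misspecified) response model recommends charging 10.0:
    gap, ci = performance_gap_test(prices, accepted, candidate_prices, model_price=10.0)
    print(f"revenue gap vs. best price: {gap:.3f}, 95% bootstrap CI {ci}")
```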

    Contextual Inverse Optimization: Offline and Online Learning

    We study the problems of offline and online contextual optimization with feedback information, where instead of observing the loss, we observe, after the fact, the optimal action an oracle with full knowledge of the objective function would have taken. We aim to minimize regret, defined as the difference between our losses and those incurred by an all-knowing oracle. In the offline setting, the decision-maker has information from past periods available and needs to make one decision, while in the online setting, the decision-maker optimizes decisions dynamically over time based on a new set of feasible actions and contextual functions in each period. For the offline setting, we characterize the optimal minimax policy, establishing the performance that can be achieved as a function of the underlying geometry of the information induced by the data. In the online setting, we leverage this geometric characterization to optimize the cumulative regret. We develop an algorithm that yields the first regret bound for this problem that is logarithmic in the time horizon.
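
    A small sketch of the online feedback structure, using a discretized set of candidate cost vectors for a linear objective (an assumption of this example, not the paper's algorithm): keep the candidates under which each observed oracle action is optimal, and play a minimax-regret action against the surviving candidates. The dimension, candidate grid, and feasible sets are illustrative.

```python
import numpy as np

def consistent(candidates, feasible_actions, oracle_action, tol=1e-9):
    """Keep candidate cost vectors under which the oracle's observed action is
    (near-)optimal over this period's feasible set."""
    values = candidates @ feasible_actions.T               # (n_candidates, n_actions)
    oracle_values = candidates @ oracle_action
    return candidates[oracle_values <= values.min(axis=1) + tol]

def minimax_regret_action(candidates, feasible_actions):
    """Pick the action whose worst-case regret over the surviving candidates is smallest."""
    values = candidates @ feasible_actions.T
    regret = values - values.min(axis=1, keepdims=True)    # per-candidate regret of each action
    return feasible_actions[np.argmin(regret.max(axis=0))]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    true_cost = np.array([0.8, -0.6])                      # unknown to the learner
    # Hypothetical candidate costs: directions on the unit circle.
    angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    candidates = np.c_[np.cos(angles), np.sin(angles)]

    total_regret = 0.0
    for t in range(50):                                    # online protocol
        feasible = rng.normal(size=(8, 2))                 # new feasible actions each period
        action = minimax_regret_action(candidates, feasible)
        oracle = feasible[np.argmin(feasible @ true_cost)] # after-the-fact feedback
        total_regret += true_cost @ (action - oracle)
        remaining = consistent(candidates, feasible, oracle)
        if len(remaining):                                 # guard against an empty candidate set
            candidates = remaining
    print(f"cumulative regret after 50 periods: {total_regret:.3f}, "
          f"candidates left: {len(candidates)}")
```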