
    Observation bias: The impact of demand censoring on newsvendor level and adjustment behavior

    In an experimental newsvendor setting we investigate three phenomena: level behavior (the decision-maker's average ordering tendency); adjustment behavior (the tendency to adjust period-to-period order quantities); and observation bias (the tendency to let the degree of demand feedback influence order quantities). We find that the portion of mismatch cost due to adjustment behavior exceeds the portion of mismatch cost due to level behavior in three out of four conditions. Observation bias is studied through censored demand feedback, a situation which arguably represents the majority of newsvendor settings. When demands are uncensored, subjects tend to order below the normative quantity when facing high margin and above the normative quantity when facing low margin, but in neither case beyond mean demand (a.k.a. the pull-to-center effect). Censoring in general leads to lower quantities, magnifying the below-normative level behavior when facing high margin but partially counterbalancing the above-normative level behavior when facing low margin, violating the pull-to-center effect in both cases.
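
    As a point of reference for the level behavior described above, the sketch below computes the normative newsvendor quantity for a high-margin and a low-margin condition under normal demand; the demand parameters and costs are illustrative assumptions, not values from the study.

    ```python
    # Minimal sketch: the normative order quantity that level behavior is measured
    # against. Demand distribution and costs are assumptions, not the study's values.
    from scipy.stats import norm

    mean, std = 100, 30  # assumed normally distributed demand
    for label, cu, co in [("high margin", 3.0, 1.0), ("low margin", 1.0, 3.0)]:
        beta = cu / (cu + co)  # critical fractile: underage vs. overage trade-off
        q_star = norm.ppf(beta, loc=mean, scale=std)
        print(f"{label}: critical fractile {beta:.2f}, normative q* = {q_star:.1f}")
    ```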

    Making the Newsvendor Smart – Order Quantity Optimization with ANNs for a Bakery Chain

    Accurate demand forecasting is particularly crucial for products with a short shelf life, such as bakery products. Over- and underestimation of customer demand affects not only the profit margins of bakeries but is also responsible for 600,000 metric tons of food waste every year in Germany. To address this problem, we develop an IT artifact based on artificial neural networks which automates the manual order process and is capable of reducing costs as well as food waste. To test and evaluate our artifact, we cooperated with an SME bakery chain that runs 40 points of sale (POS) in southern Germany. After algorithm-based reconstruction and cleaning of the censored sales data, we compare two different data-driven newsvendor approaches for this inventory problem. We show that both models are able to significantly improve the forecast quality (cost savings of up to 30%) compared to human planners.
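
    The sketch below illustrates the general idea of a data-driven newsvendor trained with an artificial neural network, minimizing the asymmetric underage/overage cost directly; it is not the authors' artifact, and the costs, features, and data are placeholders.

    ```python
    # Minimal sketch of a data-driven newsvendor network (not the authors' artifact):
    # a small ANN maps day-level features to an order quantity and is trained on the
    # asymmetric newsvendor cost. Costs, features, and data below are assumptions.
    import torch
    import torch.nn as nn

    cu, co = 2.0, 1.0  # assumed underage and overage cost per unit

    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    def newsvendor_loss(order, demand):
        # cost = cu * unmet demand + co * leftover stock
        return (cu * torch.clamp(demand - order, min=0)
                + co * torch.clamp(order - demand, min=0)).mean()

    # x: e.g. [weekday, month, promotion flag, lagged sales]; y: realized demand
    x = torch.randn(256, 4)            # placeholder features
    y = 50 + 10 * torch.randn(256, 1)  # placeholder demand
    for _ in range(200):
        opt.zero_grad()
        loss = newsvendor_loss(model(x), y)
        loss.backward()
        opt.step()
    ```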

    How Big Should Your Data Really Be? Data-Driven Newsvendor and the Transient of Learning

    We study the classical newsvendor problem in which the decision-maker must trade off underage and overage costs. In contrast to the typical setting, we assume that the decision-maker does not know the underlying distribution driving uncertainty but only has access to historical data. In turn, the key questions are how to map existing data to a decision and what type of performance to expect as a function of the data size. We analyze the classical setting with access to past samples drawn from the distribution (e.g., past demand), focusing not only on asymptotic performance but also on what we call the transient of learning, i.e., performance for arbitrary data sizes. We evaluate the performance of any algorithm through its worst-case relative expected regret, compared to an oracle with knowledge of the distribution. We provide the first finite-sample exact analysis of the classical Sample Average Approximation (SAA) algorithm for this class of problems across all data sizes. This allows us to uncover novel fundamental insights on the value of data: it reveals that tens of samples are sufficient to perform very efficiently, but also that more data can lead to worse out-of-sample performance for SAA. We then focus on the general class of mappings from data to decisions without any restriction on the set of policies, derive an optimal algorithm, and characterize its associated performance. This leads to significant improvements for limited data sizes and allows us to quantify exactly the value of historical information.
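
    For reference, the SAA decision analyzed here orders an empirical quantile of the observed demand at the critical ratio cu/(cu+co); a minimal sketch, with assumed cost values and simulated demand history:

    ```python
    # Sketch of the Sample Average Approximation (SAA) decision: order the empirical
    # cu/(cu+co)-quantile of past demand samples. Cost values are assumptions.
    import numpy as np

    def saa_order(demand_samples, cu=2.0, co=1.0):
        critical_ratio = cu / (cu + co)
        # 'inverted_cdf' returns an order statistic of the sample, as SAA requires
        return np.quantile(demand_samples, critical_ratio, method="inverted_cdf")

    samples = np.random.default_rng(0).exponential(scale=100, size=20)  # e.g. past demand
    print(saa_order(samples))
    ```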

    Estimating the demand parameters for single period problem, Markov-modulated Poisson demand, large lot size, and unobserved lost sales

    We consider a single-period, single-item problem in which demand is a Markov-modulated Poisson process with hidden states, unknown intensities, and a continuous batch-size distribution. The number of customers and the lot size are assumed to be large enough. Estimators of the demand mean and standard deviation under unobservable lost sales in the steady state are considered. The procedures are based on two censored samples: the observed selling durations and the demands over the period. Numerical results are given.
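
    As a generic illustration of working with censored sales data of this kind (not the paper's Markov-modulated estimator), the sketch below scales up sold-out periods by the observed selling duration before averaging; all numbers are assumptions.

    ```python
    # Generic illustration only: when a period sells out, just the lot size Q and the
    # sell-out time are observed. Under a roughly constant demand rate within the
    # period, one crude correction scales Q by period_length / sell_out_time.
    import numpy as np

    def estimate_mean_demand(sales, sell_out_times, lot_size, period_length=1.0):
        sales = np.asarray(sales, dtype=float)
        times = np.asarray(sell_out_times, dtype=float)
        censored = sales >= lot_size  # stock ran out: true demand unobserved
        adjusted = sales.copy()
        adjusted[censored] = lot_size * period_length / times[censored]
        return adjusted.mean()

    # three uncensored periods, then two sell-outs (lot size 100) at 60% and 80% of the period
    print(estimate_mean_demand([80, 95, 70, 100, 100], [1.0, 1.0, 1.0, 0.6, 0.8], 100))
    ```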

    Exploration vs. Exploitation in the Information Filtering Problem

    We consider information filtering, in which we face a stream of items too voluminous to process by hand (e.g., scientific articles, blog posts, emails) and must rely on a computer system to automatically filter out irrelevant items. Such systems face the exploration vs. exploitation tradeoff, in which it may be beneficial to present an item despite a low probability of relevance, just to learn about future items with similar content. We present a Bayesian sequential decision-making model of this problem, show how it may be solved to optimality using a decomposition into a collection of two-armed bandit problems, and show structural results for the optimal policy. We show that the resulting method is especially useful when facing the cold-start problem, i.e., when filtering items for new users without a long history of past interactions. We then present an application of this information filtering method to a historical dataset from the arXiv.org repository of scientific articles.
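
    To illustrate the exploration vs. exploitation trade-off in this setting, the sketch below uses Thompson sampling over a Beta-Bernoulli relevance model per topic; this is a standard heuristic shown for illustration, not the paper's optimal decomposition into two-armed bandit problems, and the topics, prior, and threshold are assumptions.

    ```python
    # Illustrative Thompson-sampling filter (not the paper's optimal policy):
    # each topic keeps a Beta posterior over its probability of relevance, and an
    # item is forwarded when a sampled relevance probability exceeds a threshold.
    import random

    class TopicFilter:
        def __init__(self, topics):
            # Beta(1, 1) prior on the probability that an item from a topic is relevant
            self.params = {t: [1.0, 1.0] for t in topics}

        def forward_item(self, topic, threshold=0.5):
            a, b = self.params[topic]
            return random.betavariate(a, b) > threshold  # sampled belief drives the decision

        def record_feedback(self, topic, relevant):
            self.params[topic][0 if relevant else 1] += 1

    f = TopicFilter(["stat.ML", "math.OC"])  # hypothetical arXiv topics
    if f.forward_item("stat.ML"):
        f.record_feedback("stat.ML", relevant=True)
    ```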

    Evaluating alternative frequentist inferential approaches for optimal order quantities in the newsvendor model under exponential demand

    Three estimation policies for the optimal order quantity of the classical newsvendor model under exponential demand are evaluated in the current paper. Under the first estimation policy, the estimator is obtained by replacing, in the theoretical formula for the optimal order quantity, the parameter of the exponential distribution with its maximum likelihood estimator. The estimator of the second estimation policy is derived in such a way as to ensure that the requested critical fractile is attained. For the third estimation policy, the estimator is obtained by maximizing the a-priori expected profit with respect to a constant included in the form of the estimator. Three statistical measures have been chosen to perform the evaluation: the actual critical fractile attained by each estimator, the mean square error, and the range of deviation of estimates from the optimal order quantity, with the range chosen so that the probability of falling within it is the same for the three estimation policies. The behavior of the three statistical measures is explored under different combinations of sample sizes and critical fractiles. With small sample sizes, no estimation policy predominates over the others: the estimator which attains the actual critical fractile closest to the requested one also has the largest mean square error and the largest range of deviation of estimates from the optimal order quantity. By contrast, with samples of more than 40 observations, the choice is restricted to the estimators of the first and third estimation policies. To facilitate this choice, we provide, at different sample sizes, the values of the critical fractile that determine which estimation policy should eventually be applied.
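
    A worked sketch of the first estimation policy: under exponential demand with mean theta, the optimal quantity at critical fractile beta is q* = -theta * ln(1 - beta), and the plug-in estimator replaces theta with its maximum likelihood estimate, the sample mean. The demand sample below is simulated purely for illustration.

    ```python
    # Plug-in (first policy) order quantity under exponential demand: replace the
    # exponential mean in q* = -theta * ln(1 - beta) with the sample mean (its MLE).
    # The simulated sample and fractile below are illustrative assumptions.
    import numpy as np

    def plug_in_order_quantity(demand_sample, beta):
        theta_hat = np.mean(demand_sample)  # MLE of the exponential mean
        return -theta_hat * np.log(1.0 - beta)

    sample = np.random.default_rng(1).exponential(scale=50, size=15)
    print(plug_in_order_quantity(sample, beta=0.9))
    ```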