
    Confidence Intervals for Data-Driven Inventory Policies with Demand Censoring

    We revisit the classical dynamic inventory management problem of Scarf (1959b) from the perspective of a decision-maker who has n historical selling seasons of data and must make ordering decisions for the upcoming season. We develop a nonparametric estimation procedure for the (S, s) policy that is consistent, then characterize the finite-sample properties of the estimated (S, s) levels by deriving their asymptotic confidence intervals. We also consider the setting in which at least some of the past selling seasons of data are censored due to the absence of backlogging, and show that the intuitive procedure of first correcting for censoring in the demand data yields inconsistent estimates. We then show how to correctly use the censored data to obtain consistent estimates and derive asymptotic confidence intervals for this policy using Stein's method. We further show that the confidence intervals can be used to effectively bound the difference between the expected total cost of an estimated policy and that of the optimal policy. We validate our results with extensive computations on simulated data. Our results extend to the repeated newsvendor problem and the base-stock policy problem under appropriate parameter choices.
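To make the data-driven setting concrete, here is a minimal sketch (in Python, with names of our own choosing, not the paper's code) of the simplest related construction: an empirical newsvendor order quantity with a distribution-free confidence interval on the critical-ratio quantile. The paper's actual procedure estimates the full (S, s) policy and handles censoring, which this sketch does not.

```python
import numpy as np
from statistics import NormalDist

def newsvendor_quantile_ci(demand, b, h, alpha=0.05):
    """Empirical newsvendor order quantity with a distribution-free
    asymptotic confidence interval built from order statistics.
    Illustrative sketch only: the paper targets the (S, s) policy
    and additionally corrects for demand censoring."""
    demand = np.sort(np.asarray(demand, dtype=float))
    n = len(demand)
    q = b / (b + h)                              # critical ratio
    order = demand[int(np.ceil(q * n)) - 1]      # empirical q-quantile
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * np.sqrt(q * (1 - q) * n)          # half-width in ranks
    lo = demand[max(int(np.floor(q * n - half)), 0)]
    hi = demand[min(int(np.ceil(q * n + half)), n - 1)]
    return order, (lo, hi)
```

The interval bounds the quantile by bracketing order statistics, the standard normal approximation to the binomial rank counts; as n grows the bracket tightens around the true critical-ratio quantile.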

    The Big Data Newsvendor: Practical Insights from Machine Learning

    We investigate the data-driven newsvendor problem when one has n observations of p features related to the demand as well as historical demand data. Rather than a two-step process of first estimating a demand distribution then optimizing for the optimal order quantity, we propose solving the "Big Data" newsvendor problem via single-step machine learning algorithms. Specifically, we propose algorithms based on the Empirical Risk Minimization (ERM) principle, with and without regularization, and an algorithm based on Kernel-weights Optimization (KO). The ERM approaches, equivalent to high-dimensional quantile regression, can be solved by convex optimization problems, and the KO approach by a sorting algorithm. We analytically justify the use of features by showing that their omission yields inconsistent decisions. We then derive finite-sample performance bounds on the out-of-sample costs of the feature-based algorithms, which quantify the effects of dimensionality and cost parameters. Our bounds, based on algorithmic stability theory, generalize known analyses for the newsvendor problem without feature information. Finally, we apply the feature-based algorithms to nurse staffing in a hospital emergency room using a data set from a large UK teaching hospital and find that (i) the best ERM and KO algorithms beat the best practice benchmark by 23% and 24%, respectively, in out-of-sample cost, and (ii) the best KO algorithm is faster than the best ERM algorithm by three orders of magnitude and the best practice benchmark by two orders of magnitude.
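The ERM idea above can be sketched in a few lines: fit a linear ordering rule by directly minimizing the empirical newsvendor cost, which is the pinball (quantile) loss at the critical ratio. This is a hedged illustration with hypothetical names, using plain subgradient descent; the paper solves the same objective as a convex program and also covers regularized and kernel-weights (KO) variants.

```python
import numpy as np

def erm_newsvendor(X, d, b, h, lr=0.05, epochs=500):
    """Fit a linear decision q(x) = w.x + c by Empirical Risk
    Minimization of the newsvendor cost b*(d-q)_+ + h*(q-d)_+,
    i.e. the pinball loss at ratio b/(b+h). Minimal sketch, not
    the paper's implementation."""
    X = np.asarray(X, dtype=float)
    d = np.asarray(d, dtype=float)
    n, p = X.shape
    w, c = np.zeros(p), d.mean()
    for _ in range(epochs):
        q = X @ w + c
        # subgradient of the empirical cost with respect to q
        g = np.where(d > q, -b, h)
        w -= lr * (X.T @ g) / n
        c -= lr * g.mean()
    return w, c
```

With b = h the fitted intercept tracks the empirical median of demand, which is the familiar equal-cost newsvendor solution.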

    Dynamic Procurement of New Products with Covariate Information: The Residual Tree Method

    Problem definition: We study the practice-motivated problem of dynamically procuring a new, short life-cycle product under demand uncertainty. The firm does not know the demand for the new product but has data on similar products sold in the past, including demand histories and covariate information such as product characteristics. Academic/practical relevance: The dynamic procurement problem has long attracted academic and practitioner interest, and we solve it in an innovative data-driven way with proven theoretical guarantees. This work is also the first to leverage the power of covariate data in solving this problem. Methodology: We propose a new, combined forecasting and optimization algorithm called the Residual Tree method, and analyze its performance via epi-convergence theory and computations. Our method generalizes the classical Scenario Tree method by using covariates to link historical data on similar products to construct demand forecasts for the new product. Results: We prove, under fairly mild conditions, that the Residual Tree method is asymptotically optimal as the size of the data set grows. We also numerically validate the method for problem instances derived using data from the global fashion retailer Zara. We find that ignoring covariate information leads to systematic bias in the optimal solution, translating to a 6–15% increase in the total cost for the problem instances under study. We also find that solutions based on trees using just 2–3 branches per node, which is common in the existing literature, are inadequate, resulting in 30–66% higher total costs compared with our best solution. Managerial implications: The Residual Tree is a new and generalizable approach that uses past data on similar products to manage new product inventories. We also quantify the value of covariate information and of granular demand modeling.
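The core residual construction can be sketched as follows, with illustrative names of our own: regress historical demands on product covariates, then attach the empirical residuals to the new product's point forecast to obtain equally likely demand scenarios. The full method organizes such scenarios into a multi-stage tree; this single-stage sketch shows only the forecasting step.

```python
import numpy as np

def residual_scenarios(X_old, d_old, x_new):
    """One-stage sketch of the residual idea: OLS-regress past
    demands on covariates, keep the empirical residuals, and add
    them to the new product's point forecast. Each historical
    product contributes one equally likely scenario."""
    X_old = np.atleast_2d(np.asarray(X_old, dtype=float))
    d_old = np.asarray(d_old, dtype=float)
    X = np.column_stack([np.ones(len(d_old)), X_old])   # add intercept
    beta, *_ = np.linalg.lstsq(X, d_old, rcond=None)    # OLS fit
    residuals = d_old - X @ beta
    x = np.concatenate([[1.0], np.atleast_1d(np.asarray(x_new, dtype=float))])
    return x @ beta + residuals       # one scenario per past product
```

If the covariates explained demand perfectly, all residuals would vanish and every scenario would collapse to the point forecast; in practice the residual spread carries the demand uncertainty into the downstream procurement optimization.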

    Stoner gap in the superconducting ferromagnet UGe2

    We report the temperature (T) dependence of ferromagnetic Bragg peak intensities and dc magnetization of the superconducting ferromagnet UGe2 under pressure (P). We have found that the low-T behavior of the uniform magnetization can be explained by a conventional Stoner model. A functional analysis of the data produces the following results: the ferromagnetic state below a critical pressure can be understood as the perfectly polarized state, in which heavy quasiparticles occupy only majority spin bands. A Stoner gap Δ(P) decreases monotonically with increasing pressure and increases linearly with magnetic field. We show that the present analysis based on the Stoner model is justified by a consistency check, i.e., comparison of the density of states at the Fermi energy deduced from the analysis with the observed electronic specific heat coefficients. We also discuss the influence of the ferromagnetism on the superconductivity.
    Comment: 5 pages, 4 figures. To be published in Phys. Rev.

    Machine Learning and Portfolio Optimization

    The portfolio optimization model has limited impact in practice due to estimation issues when applied with real data. To address this, we adapt two machine learning methods, regularization and cross-validation, for portfolio optimization. First, we introduce performance-based regularization (PBR), where the idea is to constrain the sample variances of the estimated portfolio risk and return, which steers the solution towards one associated with less estimation error in the performance. We consider PBR for both mean-variance and mean-CVaR problems. For the mean-variance problem, PBR introduces a quartic polynomial constraint, for which we make two convex approximations: one based on rank-1 approximation and another based on a convex quadratic approximation. The rank-1 approximation PBR adds a bias to the optimal allocation, and the convex quadratic approximation PBR shrinks the sample covariance matrix. For the mean-CVaR problem, the PBR model is a combinatorial optimization problem, but we prove its convex relaxation, a QCQP, is essentially tight. We show that the PBR models can be cast as robust optimization problems with novel uncertainty sets, and establish asymptotic optimality of both Sample Average Approximation (SAA) and PBR solutions and the corresponding efficient frontiers. To calibrate the right-hand sides of the PBR constraints, we develop new, performance-based k-fold cross-validation algorithms. Using these algorithms, we carry out an extensive empirical investigation of PBR against SAA, as well as L1 and L2 regularizations and the equally-weighted portfolio. We find that PBR dominates all other benchmarks for two of the three Fama-French data sets.
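For orientation, here is a minimal sketch of one of the benchmark methods the abstract mentions, L2-regularized mean-variance weights, written with illustrative names of our own and not the paper's PBR formulation. Shrinking the sample covariance toward the identity before inverting is exactly the effect the abstract attributes to the convex quadratic approximation of PBR.

```python
import numpy as np

def l2_regularized_mean_variance(returns, lam):
    """L2-regularized sample mean-variance weights (a benchmark,
    not PBR itself): shrink the sample covariance toward the
    identity before inverting, then normalize to full investment.
    The level lam would be chosen by k-fold cross-validation."""
    returns = np.asarray(returns, dtype=float)
    mu = returns.mean(axis=0)                 # sample mean returns
    sigma = np.cov(returns, rowvar=False)     # sample covariance
    w = np.linalg.solve(sigma + lam * np.eye(len(mu)), mu)
    return w / w.sum()                        # weights sum to one
```

As lam grows the covariance estimate matters less and the weights tend toward being proportional to the sample means, trading estimation error for bias in the spirit of the regularization-versus-SAA comparison in the abstract.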

    Properties of potential eco-friendly gas replacements for particle detectors in high-energy physics

    Gas detectors for elementary particles require F-based gases for optimal performance. Recent regulations demand that the use of environmentally unfriendly F-based gases be limited or banned. This work studies the properties of potential eco-friendly replacement gases by computing the physical and chemical parameters relevant for use as detector media, and suggests candidates to be considered for experimental investigation.

    Properties of potential eco-friendly gas replacements for particle detectors in high-energy physics

    Modern gas detectors for the detection of particles require F-based gases for optimal performance. Recent regulations demand that the use of environmentally unfriendly F-based gases be limited or banned. This review studies the properties of potential eco-friendly candidate replacement gases.
    Comment: 38 pages, 9 figures, 8 tables. To be submitted to Journal of Instrumentatio

    Candidate eco-friendly gas mixtures for MPGDs

    Modern gas detectors for the detection of particles require F-based gases for optimal performance. Recent regulations demand that the use of environmentally unfriendly F-based gases be limited or banned. This review studies the properties of potential eco-friendly candidate replacement gases.

    Information loss in local dissipation environments

    We investigate the sensitivity of entanglement to the parameters of thermal and squeezed reservoirs, with attention to entanglement decay and so-called entanglement sudden death (ESD), for a system of two qubit pairs. The dynamics of information is investigated by means of the information disturbance and the exchange information. We show that for a squeezed reservoir, both the entanglement and the information can be kept alive for a long time. Sudden death of information is seen in the case of a thermal reservoir.

    Measurements of branching fractions for inclusive K̄0/K0 and K*(892)± decays of neutral and charged D mesons

    Using a data sample of about 33 pb⁻¹ collected at and around 3.773 GeV with the BES-II detector at the BEPC collider, we have studied inclusive K̄0/K0 and K*(892)± decays of D0 and D+ mesons. The branching fractions for the inclusive K̄0/K0 and K*(892)− decays are measured to be BF(D0 → K̄0/K0 X) = (47.6 ± 4.8 ± 3.0)%, BF(D+ → K̄0/K0 X) = (60.5 ± 5.5 ± 3.3)%, BF(D0 → K*− X) = (15.3 ± 8.3 ± 1.9)%, and BF(D+ → K*− X) = (5.7 ± 5.2 ± 0.7)%. The upper limits of the branching fractions for the inclusive K*(892)+ decays are set at BF(D0 → K*+ X) < 3.6% and BF(D+ → K*+ X) < 20.3% at the 90% confidence level.