
    Confidence Intervals for Data-Driven Inventory Policies with Demand Censoring

    We revisit the classical dynamic inventory management problem of Scarf (1959b) from the perspective of a decision-maker who has n historical selling seasons of data and must make ordering decisions for the upcoming season. We develop a consistent nonparametric estimation procedure for the (S, s) policy and characterize the finite-sample properties of the estimated (S, s) levels by deriving their asymptotic confidence intervals. We also consider the setting in which some of the past selling seasons of data are censored because of the absence of backlogging, and show that the intuitive procedure of first correcting the demand data for censoring yields inconsistent estimates. We then show how to use the censored data correctly to obtain consistent estimates, and derive asymptotic confidence intervals for this policy using Stein's method. We further show that the confidence intervals can be used to bound the difference between the expected total cost of an estimated policy and that of the optimal policy. We validate our results with extensive computations on simulated data. Our results extend to the repeated newsvendor problem and the base-stock policy problem through appropriate parameter choices.
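    As a simplified illustration of the repeated-newsvendor special case only (not the paper's censored-data procedure or its Stein's-method intervals), the order quantity can be estimated as the empirical demand quantile at the critical fractile, with a normal-approximation confidence interval for the sample quantile; the function name, cost parameters, and simulated data below are hypothetical.

```python
import numpy as np
from scipy import stats

def newsvendor_quantile_ci(demand, underage_cost, overage_cost, alpha=0.05):
    """Estimate the order quantity as the empirical quantile of demand at the
    critical fractile, with an asymptotic normal-approximation confidence
    interval for the sample quantile (assumes i.i.d., fully observed demand)."""
    demand = np.sort(np.asarray(demand, dtype=float))
    n = len(demand)
    tau = underage_cost / (underage_cost + overage_cost)   # critical fractile
    q_hat = np.quantile(demand, tau)

    # Asymptotic variance of the sample quantile: tau*(1-tau) / (n * f(q)^2),
    # with the demand density f estimated by a Gaussian kernel density estimate.
    f_hat = stats.gaussian_kde(demand)(q_hat)[0]
    se = np.sqrt(tau * (1.0 - tau) / n) / f_hat
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    return q_hat, (q_hat - z * se, q_hat + z * se)

# Hypothetical usage with simulated demand from n = 200 past selling seasons.
rng = np.random.default_rng(0)
demand = rng.gamma(shape=5.0, scale=20.0, size=200)
q, (lo, hi) = newsvendor_quantile_ci(demand, underage_cost=4.0, overage_cost=1.0)
print(f"estimated order quantity {q:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```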

    The Big Data Newsvendor: Practical Insights from Machine Learning

    We investigate the data-driven newsvendor problem when one has n observations of p features related to the demand as well as historical demand data. Rather than the two-step process of first estimating a demand distribution and then optimizing the order quantity, we propose solving the “Big Data” newsvendor problem via single-step machine learning algorithms. Specifically, we propose algorithms based on the Empirical Risk Minimization (ERM) principle, with and without regularization, and an algorithm based on Kernel-weights Optimization (KO). The ERM approaches, equivalent to high-dimensional quantile regression, can be solved as convex optimization problems, and the KO approach by a sorting algorithm. We analytically justify the use of features by showing that their omission yields inconsistent decisions. We then derive finite-sample performance bounds on the out-of-sample costs of the feature-based algorithms, which quantify the effects of dimensionality and cost parameters. Our bounds, based on algorithmic stability theory, generalize known analyses for the newsvendor problem without feature information. Finally, we apply the feature-based algorithms to nurse staffing in a hospital emergency room using a data set from a large UK teaching hospital and find that (i) the best ERM and KO algorithms beat the best-practice benchmark by 23% and 24%, respectively, in out-of-sample cost, and (ii) the best KO algorithm is faster than the best ERM algorithm by three orders of magnitude and than the best-practice benchmark by two orders of magnitude.
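    Because the unregularized ERM formulation is equivalent to quantile regression at the critical fractile, a minimal feature-based sketch can be fit with an off-the-shelf quantile regressor; the data, feature values, and the use of scikit-learn's QuantileRegressor below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

# Feature-based ERM newsvendor sketch: minimizing the empirical newsvendor cost
# over linear decision rules is equivalent to quantile regression at the
# critical fractile b / (b + h). Data and parameter values are hypothetical.
b, h = 4.0, 1.0                       # underage (b) and overage (h) unit costs
tau = b / (b + h)                     # critical fractile

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))         # n = 500 observations of p = 3 features
demand = 50 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=8, size=500)

# alpha=0 gives the unregularized ERM variant; alpha > 0 adds L1 regularization.
model = QuantileRegressor(quantile=tau, alpha=0.0, solver="highs")
model.fit(X, demand)

x_new = np.array([[0.5, -1.0, 0.2]])  # features observed before ordering
print("feature-based order quantity:", model.predict(x_new)[0])
```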

    Data taking strategy for the phase study in $\psi^{\prime} \to K^+K^-$

    The study of the relative phase between strong and electromagnetic amplitudes is of great importance for understanding the dynamics of charmonium decays. Information about the phase can be obtained model-independently by fitting scan data for certain decay channels, one of which is $\psi^{\prime} \to K^{+}K^{-}$. To determine the optimal data-taking strategy for a scan experiment measuring the phase in $\psi^{\prime} \to K^{+}K^{-}$, the minimization process is analyzed from a theoretical point of view. The result indicates that, for a one-parameter fit, a single data-taking point in the vicinity of the resonance peak is sufficient to achieve the optimal precision. Numerical results are obtained by fitting simulated scan data. Beyond the relative phase between the strong and electromagnetic amplitudes, the method is extended to the fits of other resonance parameters, such as the mass and the total decay width of $\psi^{\prime}$.
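    A rough illustration of such a fit (a toy amplitude model, not the parametrization used in the paper, and ignoring effects such as beam energy spread): a continuum amplitude interferes with a Breit-Wigner resonance amplitude, and the relative phase is extracted from simulated scan points near the peak.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy scan-fit sketch: continuum amplitude a_c interfering with a Breit-Wigner
# resonance amplitude a_r * exp(i*phi) * BW(W); phi is the only fitted parameter.
M, GAMMA = 3.686, 0.000294            # approximate psi' mass and width in GeV

def cross_section(w, phi, a_c=1.0, a_r=40.0):
    bw = M * GAMMA / (w**2 - M**2 + 1j * M * GAMMA)
    return np.abs(a_c + a_r * np.exp(1j * phi) * bw) ** 2

# Simulate a few scan points in the vicinity of the resonance peak with 2% noise.
rng = np.random.default_rng(2)
w_scan = np.linspace(M - 0.001, M + 0.001, 5)
y_obs = cross_section(w_scan, np.pi / 2) * (1 + 0.02 * rng.normal(size=w_scan.size))

# One-parameter fit of the relative phase from the simulated scan data.
phi_fit, phi_cov = curve_fit(cross_section, w_scan, y_obs, p0=[1.0])
print(f"fitted phase: {phi_fit[0]:.3f} +/- {np.sqrt(phi_cov[0, 0]):.3f} rad")
```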

    Dynamic Procurement of New Products with Covariate Information: The Residual Tree Method

    Problem definition: We study the practice-motivated problem of dynamically procuring a new, short life-cycle product under demand uncertainty. The firm does not know the demand for the new product but has data on similar products sold in the past, including demand histories and covariate information such as product characteristics. Academic/practical relevance: The dynamic procurement problem has long attracted academic and practitioner interest, and we solve it in an innovative data-driven way with proven theoretical guarantees. This work is also the first to leverage the power of covariate data in solving this problem. Methodology: We propose a new, combined forecasting and optimization algorithm called the Residual Tree method, and analyze its performance via epi-convergence theory and computations. Our method generalizes the classical Scenario Tree method by using covariates to link historical data on similar products and thereby construct demand forecasts for the new product. Results: We prove, under fairly mild conditions, that the Residual Tree method is asymptotically optimal as the size of the data set grows. We also numerically validate the method on problem instances derived from data from the global fashion retailer Zara. We find that ignoring covariate information leads to systematic bias in the optimal solution, translating to a 6–15% increase in total cost for the problem instances under study. We also find that solutions based on trees with just 2–3 branches per node, which is common in the existing literature, are inadequate, resulting in 30–66% higher total costs compared with our best solution. Managerial implications: The Residual Tree is a new and generalizable approach that uses past data on similar products to manage new product inventories. We also quantify the value of covariate information and of granular demand modeling.
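    A minimal single-stage sketch of the idea (hypothetical data and names; the actual method builds a multi-stage scenario tree): regress historical demands on product covariates, collect the residuals, form demand scenarios for the new product as the covariate-based point forecast plus the residuals, and optimize procurement against those scenarios.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Historical products: covariates (e.g., price, style score) and realized demand.
X_hist = rng.normal(size=(60, 2))
d_hist = 100 + 30 * X_hist[:, 0] - 10 * X_hist[:, 1] + rng.normal(scale=15, size=60)

# Step 1: regress demand on covariates and collect the residuals.
reg = LinearRegression().fit(X_hist, d_hist)
residuals = d_hist - reg.predict(X_hist)

# Step 2: demand scenarios for the new product = point forecast + residuals.
x_new = np.array([[0.8, -0.3]])                       # new product's covariates
scenarios = reg.predict(x_new)[0] + residuals         # equally weighted scenarios

# Step 3: choose the procurement quantity that minimizes expected scenario cost.
c_under, c_over = 5.0, 2.0
def expected_cost(q):
    return np.mean(c_under * np.maximum(scenarios - q, 0.0)
                   + c_over * np.maximum(q - scenarios, 0.0))

q_grid = np.linspace(scenarios.min(), scenarios.max(), 200)
q_star = q_grid[np.argmin([expected_cost(q) for q in q_grid])]
print("procurement quantity:", round(float(q_star), 1))
```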

    Distinguishing mixed quantum states: Minimum-error discrimination versus optimum unambiguous discrimination

    We consider two different optimized measurement strategies for the discrimination of nonorthogonal quantum states. The first is conclusive discrimination with a minimum probability of inferring an erroneous result, and the second is unambiguous, i.e., error-free, discrimination with a minimum probability of obtaining an inconclusive outcome, in which the measurement fails to give a definite answer. For distinguishing between two mixed quantum states, we investigate the relation between the minimum error probability achievable in conclusive discrimination and the minimum failure probability that can be reached in unambiguous discrimination of the same two states. The latter turns out to be at least twice as large as the former for any two given states. As an example, we treat the case in which the state of the quantum system is known to be, with arbitrary prior probability, either a given pure state or a uniform statistical mixture of any number of mutually orthogonal states. For this case we derive an analytical result for the minimum probability of error and perform a quantitative comparison with the minimum failure probability.
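    For reference, the minimum error probability for conclusive discrimination of two mixed states with priors p and 1-p is given by the Helstrom bound, P_E = (1/2)(1 - ||p rho_1 - (1-p) rho_2||_1). The short sketch below evaluates it numerically for an example in the spirit of the one discussed above (the state names and dimensions are illustrative).

```python
import numpy as np

def helstrom_error(rho1, rho2, p1=0.5):
    """Minimum error probability for conclusive discrimination of two mixed
    states with priors p1 and 1 - p1: P_E = (1 - ||p1*rho1 - (1-p1)*rho2||_1) / 2."""
    delta = p1 * rho1 - (1 - p1) * rho2
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh(delta)))  # Hermitian matrix
    return 0.5 * (1.0 - trace_norm)

# Example: a pure qubit state versus the uniform (maximally mixed) statistical
# mixture of two mutually orthogonal states, with equal priors.
psi = np.array([1.0, 0.0])
rho_pure = np.outer(psi, psi.conj())
rho_mixed = np.eye(2) / 2.0
print("minimum error probability:", helstrom_error(rho_pure, rho_mixed))  # 0.25
```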

    Stoner gap in the superconducting ferromagnet UGe2

    We report the temperature ($T$) dependence of ferromagnetic Bragg peak intensities and dc magnetization of the superconducting ferromagnet UGe2 under pressure ($P$). We have found that the low-$T$ behavior of the uniform magnetization can be explained by a conventional Stoner model. A functional analysis of the data produces the following results: the ferromagnetic state below a critical pressure can be understood as a perfectly polarized state, in which heavy quasiparticles occupy only majority-spin bands. The Stoner gap $\Delta(P)$ decreases monotonically with increasing pressure and increases linearly with magnetic field. We show that the present analysis based on the Stoner model is justified by a consistency check, i.e., a comparison of the density of states at the Fermi energy deduced from the analysis with the observed electronic specific heat coefficients. We also discuss the influence of the ferromagnetism on the superconductivity.
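    As a rough sketch of this kind of functional analysis (the exact fitting form and units used in the paper are not reproduced here), low-temperature magnetization of a fully polarized Stoner ferromagnet is commonly fit to M(T) = M(0)[1 - A T^{3/2} exp(-Δ/k_B T)], and Δ can then be extracted by nonlinear least squares; the data and parameter values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed low-T Stoner form for a fully polarized ferromagnet; Delta is
# expressed as a temperature (Delta/k_B, in kelvin). Simulated, hypothetical data.
def stoner_magnetization(T, M0, A, Delta):
    return M0 * (1.0 - A * T**1.5 * np.exp(-Delta / T))

rng = np.random.default_rng(4)
T = np.linspace(2.0, 20.0, 40)
M_obs = (stoner_magnetization(T, M0=1.4, A=2e-3, Delta=50.0)
         + rng.normal(scale=0.001, size=T.size))

# Three-parameter fit; the Stoner gap is read off from the fitted Delta.
popt, pcov = curve_fit(stoner_magnetization, T, M_obs, p0=[1.0, 1e-3, 30.0])
M0_fit, A_fit, Delta_fit = popt
print(f"fitted Stoner gap Delta/k_B = {Delta_fit:.1f} K")
```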

    Optimal quantum detectors for unambiguous detection of mixed states

    We consider the problem of designing an optimal quantum detector that distinguishes unambiguously between a collection of mixed quantum states. Using arguments of duality in vector space optimization, we derive necessary and sufficient conditions for an optimal measurement that maximizes the probability of correct detection. We show that the previously derived optimal measurements for certain special cases satisfy these optimality conditions. We then consider state sets with strong symmetry properties and show that the optimal measurement operators for distinguishing between these states share the same symmetries and can be computed very efficiently by solving a reduced-size semidefinite program.
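    The semidefinite-programming structure can be made concrete with a generic (unreduced) sketch: maximize the conclusive-detection probability over measurement operators that are positive semidefinite, leave a valid inconclusive element, and satisfy the no-error conditions. The cvxpy formulation and example states below are illustrative assumptions, not the paper's reduced symmetric program.

```python
import numpy as np
import cvxpy as cp

def unambiguous_sdp(rhos, priors):
    """Generic SDP sketch for unambiguous discrimination of mixed states:
    maximize sum_i p_i Tr(Pi_i rho_i) subject to Pi_i >= 0,
    I - sum_i Pi_i >= 0 (inconclusive element), and Tr(Pi_i rho_j) = 0 for i != j."""
    d = rhos[0].shape[0]
    Pis = [cp.Variable((d, d), hermitian=True) for _ in rhos]
    constraints = [P >> 0 for P in Pis]
    constraints.append(np.eye(d) - sum(Pis) >> 0)
    for i, P in enumerate(Pis):
        for j, rho in enumerate(rhos):
            if i != j:
                constraints.append(cp.trace(P @ rho) == 0)   # no-error condition
    objective = cp.Maximize(cp.real(sum(p * cp.trace(P @ rho)
                                        for p, P, rho in zip(priors, Pis, rhos))))
    problem = cp.Problem(objective, constraints)
    problem.solve()
    return problem.value   # maximum probability of a conclusive (correct) result

# Two hypothetical rank-deficient qutrit states with partially overlapping supports.
rho1 = np.diag([0.5, 0.5, 0.0])
rho2 = np.diag([0.0, 0.5, 0.5])
print("conclusive probability:", round(unambiguous_sdp([rho1, rho2], [0.5, 0.5]), 4))
```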

    Machine Learning and Portfolio Optimization

    The portfolio optimization model has limited impact in practice because of estimation issues that arise when it is applied to real data. To address this, we adapt two machine learning methods, regularization and cross-validation, for portfolio optimization. First, we introduce performance-based regularization (PBR), whose idea is to constrain the sample variances of the estimated portfolio risk and return, which steers the solution towards one associated with less estimation error in performance. We consider PBR for both mean-variance and mean-CVaR problems. For the mean-variance problem, PBR introduces a quartic polynomial constraint, for which we make two convex approximations: one based on a rank-1 approximation and another based on a convex quadratic approximation. The rank-1 approximation PBR adds a bias to the optimal allocation, and the convex quadratic approximation PBR shrinks the sample covariance matrix. For the mean-CVaR problem, the PBR model is a combinatorial optimization problem, but we prove that its convex relaxation, a quadratically constrained quadratic program (QCQP), is essentially tight. We show that the PBR models can be cast as robust optimization problems with novel uncertainty sets, and we establish the asymptotic optimality of both the Sample Average Approximation (SAA) and PBR solutions and of the corresponding efficient frontiers. To calibrate the right-hand sides of the PBR constraints, we develop new, performance-based k-fold cross-validation algorithms. Using these algorithms, we carry out an extensive empirical investigation of PBR against SAA, as well as L1 and L2 regularizations and the equally weighted portfolio. We find that PBR dominates all other benchmarks for two out of three Fama–French data sets.
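    As a simplified stand-in for PBR (the sketch below uses plain L2 regularization, which the paper treats as a benchmark rather than as PBR itself), a regularized mean-variance problem can be solved with cvxpy and the regularization weight chosen by k-fold cross-validation on an out-of-fold performance measure; the data, Sharpe-ratio criterion, and parameter grid are hypothetical.

```python
import numpy as np
import cvxpy as cp
from sklearn.model_selection import KFold

def solve_mean_variance(returns, lam, risk_aversion=1.0):
    """L2-regularized mean-variance portfolio (a simple benchmark, not PBR)."""
    mu = returns.mean(axis=0)
    Sigma = np.cov(returns, rowvar=False) + 1e-8 * np.eye(returns.shape[1])  # jitter
    w = cp.Variable(returns.shape[1])
    objective = cp.Maximize(mu @ w
                            - risk_aversion * cp.quad_form(w, Sigma)
                            - lam * cp.sum_squares(w))
    cp.Problem(objective, [cp.sum(w) == 1]).solve()
    return w.value

def cross_validate_lambda(returns, lams, k=5):
    """Pick the regularization weight by k-fold CV on out-of-fold Sharpe ratio."""
    best_lam, best_score = None, -np.inf
    for lam in lams:
        scores = []
        for train_idx, test_idx in KFold(n_splits=k).split(returns):
            w = solve_mean_variance(returns[train_idx], lam)
            oos = returns[test_idx] @ w            # out-of-fold portfolio returns
            scores.append(oos.mean() / oos.std())
        if np.mean(scores) > best_score:
            best_lam, best_score = lam, np.mean(scores)
    return best_lam

rng = np.random.default_rng(5)
R = rng.normal(0.001, 0.02, size=(500, 10))        # 500 days, 10 assets
lam_star = cross_validate_lambda(R, lams=[0.0, 0.01, 0.1, 1.0])
print("selected lambda:", lam_star)
print("weights:", np.round(solve_mean_variance(R, lam_star), 3))
```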