29,944 research outputs found

    Validating Sample Average Approximation Solutions with Negatively Dependent Batches

    Sample-average approximations (SAA) are a practical means of finding approximate solutions of stochastic programming problems involving an extremely large (or infinite) number of scenarios. SAA can also be used to find estimates of a lower bound on the optimal objective value of the true problem which, when coupled with an upper bound, provides confidence intervals for the true optimal objective value and valuable information about the quality of the approximate solutions. Specifically, the lower bound can be estimated by solving multiple SAA problems (each obtained using a particular sampling method) and averaging the obtained objective values. State-of-the-art methods for lower-bound estimation generate batches of scenarios for the SAA problems independently. In this paper, we describe sampling methods that produce negatively dependent batches, thus reducing the variance of the sample-averaged lower bound estimator and increasing its usefulness in defining a confidence interval for the optimal objective value. We provide conditions under which the new sampling methods can reduce the variance of the lower bound estimator, and present computational results to verify that our scheme can reduce the variance significantly, by comparison with the traditional Latin hypercube approach.
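
    As a concrete illustration of the lower-bound procedure, here is a minimal sketch that draws each scenario batch by plain Latin hypercube sampling (the traditional baseline the paper compares against), solves a toy newsvendor-style SAA for each batch, and averages the optimal values. The demand model and cost figures are illustrative assumptions, not the paper's experiments.

```python
# Sketch: SAA lower-bound estimation with Latin hypercube batches.
# E[min of an SAA problem] <= true optimum (for minimization), so
# averaging per-batch optima estimates a lower bound.
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, rng):
    """One batch of n stratified uniforms on [0, 1)."""
    return (rng.permutation(n) + rng.random(n)) / n

def saa_optimal_value(demand, order_cost=1.0, shortage_cost=4.0):
    """Toy newsvendor SAA; the optimum sits at a sampled demand,
    so scanning the sample finds it exactly."""
    costs = [np.mean(order_cost * q + shortage_cost * np.maximum(demand - q, 0))
             for q in np.sort(demand)]
    return min(costs)

B, N = 30, 100  # number of batches, scenarios per batch
values = [saa_optimal_value(100 * latin_hypercube(N, rng)) for _ in range(B)]

lb = np.mean(values)                      # lower-bound point estimate
se = np.std(values, ddof=1) / np.sqrt(B)  # standard error across batches
print(f"lower bound on optimal cost: {lb:.2f} +/- {1.96 * se:.2f}")
```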

    Multistage Stochastic Portfolio Optimisation in Deregulated Electricity Markets Using Linear Decision Rules

    The deregulation of electricity markets increases the financial risk faced by retailers who procure electric energy on the spot market to meet their customers’ electricity demand. To hedge against this exposure, retailers often hold a portfolio of electricity derivative contracts. In this paper, we propose a multistage stochastic mean-variance optimisation model for the management of such a portfolio. To reduce computational complexity, we perform two approximations: stage-aggregation and linear decision rules (LDR). The LDR approach consists of restricting the set of decision rules to those affine in the history of the random parameters. When applied to mean-variance optimisation models, it leads to convex quadratic programs. Since their size typically grows only polynomially with the number of periods, they can be solved efficiently. Our numerical experiments illustrate the value of adaptivity inherent in the LDR method and its potential for enabling scalability to problems with many periods.
    Keywords: OR in energy, electricity portfolio management, stochastic programming, risk management, linear decision rules
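
    To make the LDR restriction concrete, the sketch below fits a procurement rule that is affine in the observed price history over sampled scenario paths, which turns the sampled mean-variance problem into a convex quadratic program (solved here with cvxpy). The price model, demand, and risk weight are illustrative assumptions, not the paper's market model.

```python
# Sketch: a linear decision rule for multistage procurement under
# price uncertainty, fitted as a convex QP over sampled paths.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
S, T = 200, 3                                         # sampled paths, stages
xi = rng.lognormal(mean=0.0, sigma=0.2, size=(S, T))  # spot prices
demand = 10.0                                         # per-stage energy need

# Affine rule: stage-t purchase = a0[t] + sum_{s<=t} A[t, s] * xi_s
a0 = cp.Variable(T)
A = cp.Variable((T, T))
X = cp.vstack([xi[:, : t + 1] @ A[t, : t + 1] + a0[t] for t in range(T)]).T

cost = cp.sum(cp.multiply(xi, X), axis=1)             # per-path cost
mean_cost = cp.sum(cost) / S
lam = 0.1                                             # risk-aversion weight
objective = cp.Minimize(mean_cost + lam * cp.sum_squares(cost - mean_cost) / S)
constraints = [cp.sum(X, axis=1) >= T * demand, X >= 0]
prob = cp.Problem(objective, constraints)
prob.solve()
print("mean-variance objective:", prob.value)
```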

    Curriculum Guidelines for Undergraduate Programs in Data Science

    The Park City Math Institute (PCMI) 2016 Summer Undergraduate Faculty Program met for the purpose of composing guidelines for undergraduate programs in Data Science. The group consisted of 25 undergraduate faculty from a variety of institutions in the U.S., primarily from the disciplines of mathematics, statistics, and computer science. These guidelines are meant to provide some structure for institutions planning for or revising a major in Data Science.

    Estimation of multi-state life table functions and their variability from complex survey data using the SPACE Program

    The multistate life table (MSLT) model is an important demographic method to document life cycle processes. In this study, we present the SPACE (Stochastic Population Analysis for Complex Events) program to estimate MSLT functions and their sampling variability. It has several advantages over other programs, including the use of microsimulation and the bootstrap method to estimate the sampling variability. Simulation enables researchers to analyze a broader array of statistics than the deterministic approach, and may be especially advantageous in investigating distributions of MSLT functions. The bootstrap method takes sample design into account to correct the potential bias in variance estimates.
    Keywords: bootstrap, health expectancy, multi-state life table, population aging
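
    The bootstrap step can be sketched as follows: resample respondents with replacement, re-estimate the transition model, and re-run the microsimulation each time, so the spread of the replicated statistic estimates its sampling variability. The two-state model, horizon, and synthetic data below are illustrative assumptions, not the SPACE program itself.

```python
# Sketch: bootstrap variance of a multistate life table quantity
# (expected years "healthy") via microsimulation of a toy 2-state model.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic survey: each respondent's state this year and next year,
# 0 = healthy, 1 = disabled
n = 500
p_true = np.array([[0.90, 0.10],
                   [0.30, 0.70]])
starts = rng.integers(0, 2, n)
ends = np.array([rng.choice(2, p=p_true[s]) for s in starts])

def healthy_years(starts, ends, horizon=20, reps=2000):
    """Estimate the transition matrix, then microsimulate a cohort."""
    P = np.zeros((2, 2))
    for s in range(2):
        mask = starts == s
        P[s, 1] = ends[mask].mean() if mask.any() else 0.5
        P[s, 0] = 1.0 - P[s, 1]
    state = np.zeros(reps, dtype=int)   # cohort starts healthy
    years = np.zeros(reps)
    for _ in range(horizon):
        years += state == 0
        state = (rng.random(reps) < P[state, 1]).astype(int)
    return years.mean()

point = healthy_years(starts, ends)

boot = []
for _ in range(200):
    i = rng.integers(0, n, n)           # resample respondents
    boot.append(healthy_years(starts[i], ends[i]))
print(f"healthy years: {point:.2f}, bootstrap SE: {np.std(boot, ddof=1):.3f}")
```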

    Chance-Constrained Outage Scheduling using a Machine Learning Proxy

    Outage scheduling aims at defining, over a horizon of several months to years, when different components needing maintenance should be taken out of operation. Its objective is to minimize operation-cost expectation while satisfying reliability-related constraints. We propose a distributed scenario-based chance-constrained optimization formulation for this problem. To tackle tractability issues arising in large networks, we use machine learning to build a proxy for predicting outcomes of power system operation processes in this context. On the IEEE-RTS79 and IEEE-RTS96 networks, our solution obtains cheaper and more reliable plans than other candidates.
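
    A minimal sketch of the proxy idea, assuming a generic classifier trained offline to predict whether system operation succeeds: a candidate outage plan is accepted when the proxy predicts feasibility in at least a fraction alpha of the sampled scenarios. The features, model, and reliability level are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: screening outage plans against a scenario-based chance
# constraint with a learned stand-in for the expensive operation model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Offline phase: label (plan, scenario) pairs by a stand-in "simulator"
X_train = rng.random((5000, 8))                    # plan + scenario features
y_train = (X_train.sum(axis=1) > 4.0).astype(int)  # 1 = operation feasible
proxy = RandomForestClassifier(n_estimators=100, random_state=0)
proxy.fit(X_train, y_train)

def satisfies_chance_constraint(plan, scenarios, alpha=0.95):
    """Scenario approximation of P(operation feasible) >= alpha,
    evaluated with the proxy instead of the full simulation."""
    features = np.hstack([np.tile(plan, (len(scenarios), 1)), scenarios])
    return proxy.predict(features).mean() >= alpha

plan = rng.random(4)                               # candidate outage plan
scenarios = rng.random((1000, 4))                  # sampled uncertainties
print("plan acceptable:", satisfies_chance_constraint(plan, scenarios))
```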

    Model-based machine learning to identify clinical relevance in a high-resolution simulation of sepsis and trauma

    Introduction: Sepsis is a devastating, costly, and complicated disease. It represents the summation of varied host immune responses in a clinical and physiological diagnosis. Despite extensive research, there is no current mediator-directed therapy, nor a biomarker panel able to categorize disease severity or reliably predict outcome. Although still distant from direct clinical translation, dynamic computational and mathematical models of acute systemic inflammation and sepsis are being developed. Although computationally intensive to run and calibrate, agent-based models (ABMs) are one type of model well suited for this. New analytical methods to efficiently extract knowledge from ABMs are needed. Specifically, machine-learning techniques are a promising option to augment the model development process such that parameterization and calibration are performed intelligently and efficiently. Methods: We used the Keras framework to train an Artificial Neural Network (ANN) for the purpose of identifying critical biological tipping points at which an in silico patient would heal naturally or require intervention in the Innate Immune Response Agent-Based Model (IIRABM). This ANN determines simulated patient “survival” from cytokine state, based on the patient's overall resilience and the pathogenicity of any active infections, defined by microbial invasiveness, toxigenesis, and environmental toxicity. These tipping points were gathered from previously generated datasets of simulated sweeps of the four IIRABM initializing parameters. Results: Using mean squared error as our loss function, we report an accuracy of greater than 85% with inclusion of 20% of the training set. This accuracy was independently validated on withheld runs. We note that some error is inherent to this process, as the determination of the tipping points converges monotonically to the true value only as the number of stochastic replicates used to determine each point grows. Conclusion: Our method of regression of these critical points represents an alternative to traditional parameter-sweeping or sensitivity analysis techniques. Essentially, the ANN computes the boundaries of the clinically relevant space as a function of the model’s parameterization, eliminating the need for a brute-force exploration of model parameter space. In doing so, we demonstrate the successful development of this ANN, which allows for an efficient exploration of model parameter space.
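
    A minimal Keras sketch of this kind of regressor: four inputs for the IIRABM initializing parameters, an MSE loss as in the abstract, training on 20% of the data and validation on the withheld rest. The synthetic target and layer sizes are illustrative assumptions, not the study's architecture.

```python
# Sketch: small Keras regressor from the four initializing parameters
# to a tipping-point value, trained with mean squared error.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(4)
X = rng.random((10_000, 4))   # invasiveness, toxigenesis, toxicity, resilience
y = X @ [0.5, -0.3, 0.8, 0.2] + 0.05 * rng.standard_normal(10_000)  # stand-in

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),    # predicted tipping point
])
model.compile(optimizer="adam", loss="mse")

# Train on 20% of the runs, validate on the withheld 80%
model.fit(X[:2000], y[:2000], epochs=20, batch_size=64,
          validation_data=(X[2000:], y[2000:]), verbose=0)
print("validation MSE:", model.evaluate(X[2000:], y[2000:], verbose=0))
```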

    Evaluating Pro-poor Transfers When Targeting is Weak: The Albanian Ndihma Ekonomike Program Revisited

    The Albanian Ndihma Ekonomike is one of the first poverty reduction programs launched in transitional economies. Its record has been judged positively during the recession period of the 1990s and negatively during the more recent growth phase. This paper reconsiders the program using a regression-adjusted matching estimator first suggested by Heckman et al. (1997, 1998) and exploiting discontinuities in program design and targeting failures. We find the program to have a weak targeting capacity and a negative and significant impact on welfare. We also find that recent changes introduced to the program have not improved its performance. An analysis of the distributional impact of treatment based on stochastic dominance theory suggests that our results are robust.
    Keywords: Social assistance, Poverty, Impact Evaluation, Albania
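
    As a stylized sketch of the estimator family (a bias-adjusted nearest-neighbor variant, not necessarily the exact Heckman et al. specification): each participant is matched to a control on covariates, and an outcome regression fitted on controls corrects for residual covariate differences. The data and coefficients below are synthetic, with a negative program effect built in to mirror the paper's finding.

```python
# Sketch: regression-adjusted matching estimate of the program's
# impact on welfare (average treatment effect on the treated).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
n = 2000
X = rng.random((n, 3))                   # household covariates
treated = rng.random(n) < 0.3            # weak targeting: near-random entry
y = X @ [1.0, 2.0, -1.0] - 0.5 * treated + rng.standard_normal(n)

# Outcome regression fitted on controls only (the adjustment model)
reg = LinearRegression().fit(X[~treated], y[~treated])

# Match each participant to the nearest control on covariates
nn = NearestNeighbors(n_neighbors=1).fit(X[~treated])
_, idx = nn.kneighbors(X[treated])
y0 = y[~treated][idx.ravel()]
x0 = X[~treated][idx.ravel()]

# Adjust matched outcomes for remaining covariate differences
adjust = reg.predict(X[treated]) - reg.predict(x0)
att = np.mean(y[treated] - (y0 + adjust))
print(f"estimated impact on welfare (ATT): {att:.3f}")  # ~ -0.5 by design
```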