
    Multivariate bias reduction in capacity expansion planning

    The optimization of capacities in large-scale power systems is a stochastic problem, because the need for storage and connections (i.e. exchange capacities) varies considerably from one week to another (e.g. power generation is subject to the vagaries of wind) and from one winter to another (e.g. water inflows due to snow melting). It is usually tackled through sample average approximation, i.e. assuming that the system which is optimal on average over the last 40 years (corrected for climate change) is also approximately optimal in general. However, in many cases the data are high-dimensional; the sample complexity, i.e. the amount of data necessary for a relevant optimization of capacities, increases linearly with the number of parameters, and such data are scarcely available at the relevant scale. This leads to an underestimation of capacities. We suggest the use of bias correction in capacity estimation. The present paper investigates the importance of the bias phenomenon and the efficiency of bias correction tools (jackknife, bootstrap, possibly combined with penalized cross-validation), including new ones (dimension reduction tools, margin method).
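    As a minimal, hedged illustration of the jackknife bias correction mentioned above (a generic sketch, not the authors' exact procedure; the Gumbel sample and the 95% quantile statistic are assumptions made only for the example):

        import numpy as np

        def jackknife_bias_corrected(data, estimator):
            """Jackknife bias correction for a scalar statistic."""
            n = len(data)
            theta_full = estimator(data)
            # Leave-one-out estimates of the same statistic
            theta_loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
            # Jackknife bias estimate and the corrected estimator
            bias = (n - 1) * (theta_loo.mean() - theta_full)
            return theta_full - bias

        # Example: correcting the downward bias of a plug-in 95% quantile estimate
        rng = np.random.default_rng(0)
        sample = rng.gumbel(loc=10.0, scale=2.0, size=40)  # 40 hypothetical winters
        print(jackknife_bias_corrected(sample, lambda x: np.quantile(x, 0.95)))

    The same wrapper applies to any scalar capacity estimator; a bootstrap variant would replace the leave-one-out loop with resampling.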

    Modelling network travel time reliability under stochastic demand

    A technique is proposed for estimating the probability distribution of total network travel time, in the light of normal day-to-day variations in the travel demand matrix over a road traffic network. A solution method is proposed, based on a single run of a standard traffic assignment model, which operates in two stages. In stage one, moments of the total travel time distribution are computed by an analytic method, based on the multivariate moments of the link flow vector. In stage two, a flexible family of density functions is fitted to these moments. It is discussed how the resulting distribution may in practice be used to characterise unreliability. Illustrative numerical tests are reported on a simple network, where the method is seen to provide a means for identifying sensitive or vulnerable links, and for examining the impact on network reliability of changes to link capacities. Computational considerations for large networks, and directions for further research, are discussed
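    A small sketch of the moment-matching idea in stage two, assuming stage one has already delivered the mean and variance of total travel time; the lognormal family and the numerical values are illustrative assumptions, not the flexible density family used in the paper:

        import numpy as np
        from scipy import stats

        def fit_lognormal_from_moments(mean, var):
            """Moment-match a lognormal distribution to a given mean and variance."""
            sigma2 = np.log(1.0 + var / mean**2)
            mu = np.log(mean) - 0.5 * sigma2
            return stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

        # Hypothetical stage-one output: moments of total network travel time
        mean_T, var_T = 1250.0, 160.0**2   # e.g. vehicle-hours; assumed values

        dist = fit_lognormal_from_moments(mean_T, var_T)
        # One possible unreliability measure: P(total travel time > 120% of its mean)
        print(dist.sf(1.2 * mean_T))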

    Factors associated with prolonged length of stay following cardiac surgery in a major referral hospital in Oman: a retrospective observational study

    Two objectives were set for this study. The first was to identify factors influencing prolonged postoperative length of stay (LOS) following cardiac surgery. The second was to devise a predictive model for prolonged LOS in the cardiac intensive care unit (CICU) based on preoperative factors available at admission, and to compare it against two existing cardiac stratification systems. This was a retrospective observational study of all adult patients who underwent cardiac surgery at a major tertiary referral hospital in Oman between 2009 and 2013. 30.5% of the patients had prolonged LOS (≥11 days) after surgery, while 17% experienced prolonged ICU LOS (≥5 days). Factors identified as prolonging CICU LOS were non-elective surgery, current congestive heart failure (CHF), renal failure, combined coronary artery bypass graft (CABG) and valve surgery, and other non-isolated valve or CABG surgery. Patients were divided into three groups based on their scores; the probabilities of prolonged CICU LOS were 11%, 26% and 28% for groups 1, 2 and 3, respectively. The predictive model had an area under the curve of 0.75. Factors associated with prolonged overall postoperative LOS included body mass index, the type of surgery, cardiopulmonary bypass machine use, packed red blood cell use, non-elective surgery and the number of complications; the latter was the most important determinant of postoperative LOS. Patient management can be tailored to individual patients based on their treatments and personal attributes to optimise resource allocation. Moreover, a simple predictive scoring system to enable identification of patients at risk of prolonged CICU stay can be developed using data that are routinely collected by most hospitals.
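    For readers who want a concrete picture of such a preoperative risk model, a generic sketch follows; the synthetic predictors, labels and the logistic-regression choice are assumptions for illustration only, not the study's data or final model:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # Hypothetical preoperative predictors (columns) and prolonged-CICU-LOS labels
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 5))   # e.g. age, BMI, CHF, renal failure, surgery type
        y = (X @ np.array([0.8, 0.4, 0.6, 0.5, 0.3]) + rng.normal(size=500)) > 1.0

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = LogisticRegression().fit(X_train, y_train)
        # Discrimination of the risk score, comparable in spirit to the reported AUC
        print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))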

    Prognostic variables and scores identifying the last year of life in COPD: a systematic review protocol

    Introduction: People living with advanced chronic obstructive pulmonary disease (COPD) suffer from significant morbidity, reduced quality of life and high mortality, and are likely to benefit from many aspects of a palliative care approach. Prognostic estimates are a meaningful part of decision-making, and better evidence for such estimates would facilitate advance care planning. We aim to provide quality evidence on known prognostic variables and scores which predict a prognosis in COPD of <12 months for use in the community. Methods and analysis: We will conduct a systematic review of randomised or quasi-randomised controlled trials, prospective and retrospective longitudinal cohort and case–control studies on prognostic variables, multivariate scores or models for COPD. The search will cover the period up to April 2016. Study selection will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, with data extraction using fields from the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS) checklist for multivariate models, and study quality will be assessed using a modified version of the Quality In Prognosis Studies (QUIPS) tool. Ethics and dissemination: The results will be disseminated through peer-reviewed publications and national and international conference presentations.

    Can Micro Health Insurance Reduce Poverty? Evidence from Bangladesh

    This paper examines the impact of micro health insurance on poverty reduction in rural areas of Bangladesh. The research is based on household level primary data collected from the operating areas of the Grameen Bank during 2006. A number of outcome measures relating to poverty status are considered; these include household income, stability of household income via food sufficiency and ownership of non-land assets, and also the probability of being above or below the poverty line. The results show that micro health insurance has a positive association with all of these indicators, and this is statistically significant and quantitatively important for food sufficiency

    Performance of Double k-class Estimators for Coefficients in Linear Regression Models with Non Spherical Disturbances under Asymmetric Losses

    The risk of the family of feasible generalized double k-class estimators under the LINEX loss function is derived in a linear regression model. The disturbances are assumed to be non-spherical, and their variance-covariance matrix is unknown.
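    For reference, one common parameterization of the LINEX (linear-exponential) loss for an estimation error is shown below; the constants a and b are generic asymmetry and scale parameters, not notation taken from the paper:

        % LINEX loss for the estimation error \Delta = \hat{\theta} - \theta
        L(\Delta) = b\left(e^{a\Delta} - a\Delta - 1\right), \qquad b > 0,\ a \neq 0

    For a > 0 the loss grows roughly exponentially for overestimation but only linearly for underestimation, which is what makes the criterion asymmetric.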

    Chance-Constrained Outage Scheduling using a Machine Learning Proxy

    Outage scheduling aims at defining, over a horizon of several months to years, when different components needing maintenance should be taken out of operation. Its objective is to minimize operation-cost expectation while satisfying reliability-related constraints. We propose a distributed scenario-based chance-constrained optimization formulation for this problem. To tackle tractability issues arising in large networks, we use machine learning to build a proxy for predicting outcomes of power system operation processes in this context. On the IEEE-RTS79 and IEEE-RTS96 networks, our solution obtains cheaper and more reliable plans than other candidates
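    A minimal sketch of the scenario-based chance constraint idea, with an ML proxy standing in for the expensive operation simulation; the function names, the linear stand-in proxy and the 5% violation budget are assumptions for illustration, not the paper's formulation:

        import numpy as np

        def chance_constraint_satisfied(plan, scenarios, proxy, limit, epsilon=0.05):
            """Scenario approximation of a chance constraint.

            plan:      candidate outage schedule (any representation the proxy accepts)
            scenarios: iterable of sampled operating conditions
            proxy:     fast surrogate (e.g. a trained regressor) returning a
                       reliability indicator for (plan, scenario)
            limit:     maximum acceptable value of the indicator
            """
            violations = sum(proxy(plan, s) > limit for s in scenarios)
            return violations / len(scenarios) <= epsilon

        # Toy usage with a hypothetical linear proxy
        rng = np.random.default_rng(2)
        scenarios = rng.normal(size=(200, 3))           # sampled load/outage conditions
        proxy = lambda plan, s: float(np.dot(plan, s))  # stand-in for the ML proxy
        print(chance_constraint_satisfied(np.array([0.1, 0.2, 0.3]),
                                          scenarios, proxy, limit=1.0))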

    Modelling Inflation in EU Accession Countries: The Case of the Czech Republic, Hungary and Poland

    Inflation in Central and East European countries varied considerably over the transition phase, and econometric relationships between prices, money, wages and exchange rates are said to have been unstable during this period. In order to shed some light on the issue, this paper analyses some empirical models of the inflation process in the three earliest east European transition economies: the Czech Republic, Hungary and Poland. Since the end of the 1980s these economies have experienced high rates of inflation, although significant disinflation measures were introduced during the mid-nineties to enhance these countries’ chances of joining the EU, and they succeeded in getting inflation under control without high costs in terms of lost output. Given this, the determinants of inflation need to be empirically analysed not only in order to understand the disinflation measures, but also to assess the possible effects of future pressure on prices. Price stabilisation is an essential complement to the success of transition. Policies to contain inflation are necessary for transition economies to grow and firms to restructure. In the present paper, we first look at inflation within the context of multivariate cointegration, where domestic and foreign price determinants are initially assessed in separate blocks (each single-theory based) in order to obtain a number of long-term attractors. We then formulate consumer and producer inflation equations from more general VEqCMs for each country. The importance of theory-based imbalances (from previous cointegration experiments) in explaining inflation can be assessed at this stage. Our most significant empirical findings seem to substantiate the idea that many, if not all, theoretical determinants of inflation are of importance in the countries in question: the exchange rate and the output gap would appear to be of particular importance in explaining the phenomenon.
    Keywords: inflation modelling, transition economies, European Union enlargement
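    A small sketch of estimating such a vector equilibrium-correction model with statsmodels; the simulated series, lag order and deterministic specification are placeholders rather than the paper's data or preferred specification (assuming statsmodels' VECM API):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.vector_ar.vecm import VECM

        # Hypothetical monthly series (in logs): consumer prices, money, wages, exchange rate.
        # In practice these would be the transition-economy series discussed above.
        rng = np.random.default_rng(3)
        data = pd.DataFrame(
            np.cumsum(rng.normal(size=(120, 4)), axis=0),
            columns=["log_cpi", "log_m2", "log_wage", "log_fx"],
        )

        # Vector equilibrium-correction model with one cointegrating relation
        res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="co").fit()
        print(res.beta)   # long-run (cointegrating) coefficients: the "attractors"
        print(res.alpha)  # loading coefficients in the short-run inflation equations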

    Importance sampling for stochastic programming

    Stochastic programming models are large-scale optimization problems that are used to facilitate decision-making under uncertainty. Optimization algorithms for such problems need to evaluate the expected future costs of current decisions, often referred to as the recourse function. In practice, this calculation is computationally difficult as it requires the evaluation of a multidimensional integral whose integrand is an optimization problem. The recourse function therefore has to be estimated using techniques such as scenario trees or Monte Carlo methods, both of which require numerous function evaluations to produce accurate results for large-scale problems with multiple periods and high-dimensional uncertainty.
    In this thesis, we introduce an importance sampling framework for stochastic programming that can produce accurate estimates of the recourse function using a small number of samples. Previous approaches to importance sampling in stochastic programming were limited to problems where the uncertainty was modelled using discrete random variables and the recourse function was additively separable in the uncertain dimensions. Our framework avoids these restrictions by pairing Markov chain Monte Carlo methods with kernel density estimation algorithms to build a non-parametric importance sampling distribution, which can then be used to produce a low-variance estimate of the recourse function. We demonstrate the increased accuracy and efficiency of our approach using variants of well-known multistage stochastic programming problems. Our numerical results show that our framework produces more accurate estimates of the optimal value of stochastic programming models, especially for problems with moderate-to-high-variance distributions or rare-event distributions. For example, in some applications we found that if the random variables are drawn from a rare-event distribution, our proposed algorithm achieves a fourfold reduction in the mean squared error and variance relative to existing methods (e.g. SDDP with crude Monte Carlo or SDDP with quasi-Monte Carlo sampling) for the same number of samples. When the random variables are drawn from a high-variance distribution, our proposed algorithm reduces the variance by a factor of two on average compared with other methods, for approximately the same mean squared error and a fixed number of samples.
    We use our proposed algorithm to solve a capacity expansion planning problem in the electric power industry. The model includes the unit commitment problem and maintenance scheduling, and allows investors to make optimal decisions on the capacity and the type of generators to build in order to minimize the capital cost and operating cost over a long period of time. Our model computes the optimal schedule for each generator while meeting demand and respecting each generator's engineering constraints. We use an aggregation method to group generators with similar features in order to reduce the problem size. The numerical experiments show that by clustering generators of the same technology and similar size together and applying the SDDP algorithm with our proposed sampling framework to this simplified formulation, we can solve the problem in roughly one quarter of the time required by conventional algorithms on the original problem. The speed-up is achieved without a significant reduction in the quality of the solution.
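    A simplified one-dimensional sketch of the importance sampling construction described above: draws assumed to come from an MCMC run targeting the cost-weighted density are turned into a kernel density estimate, which then serves as the sampling distribution. The recourse stand-in, the nominal normal distribution and the faked MCMC draws are illustrative assumptions, not the thesis' SDDP integration:

        import numpy as np
        from scipy import stats

        def recourse(x):
            """Stand-in for the recourse function Q(x); expensive in a real model."""
            return np.maximum(x - 3.0, 0.0) ** 2          # rare-event-like cost

        p = stats.norm(0.0, 1.0)                          # nominal distribution of the uncertainty

        # Suppose MCMC has produced draws concentrated where cost * density is large
        # (faked here); a Gaussian KDE turns them into a sampling density q.
        mcmc_draws = np.random.default_rng(4).normal(3.5, 0.7, size=500)
        q = stats.gaussian_kde(mcmc_draws)

        # Importance sampling estimate of E_p[Q(X)] with likelihood ratios p/q
        x = q.resample(2000)[0]
        weights = p.pdf(x) / q.pdf(x)
        print(np.mean(recourse(x) * weights))

    Sampling where the integrand actually matters and reweighting by p/q is what lets a few thousand samples stand in for the far larger number of crude Monte Carlo draws a rare-event expectation would otherwise require.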