
    Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints

    This paper presents a stochastic model predictive control approach for nonlinear systems subject to time-invariant probabilistic uncertainties in model parameters and initial conditions. The stochastic optimal control problem entails a cost function in terms of expected values and higher moments of the states, and chance constraints that ensure probabilistic constraint satisfaction. The generalized polynomial chaos framework is used to propagate the time-invariant stochastic uncertainties through the nonlinear system dynamics, and to efficiently sample from the probability densities of the states to approximate the satisfaction probability of the chance constraints. To increase computational efficiency by avoiding excessive sampling, a statistical analysis is proposed to systematically determine a priori the least conservative constraint tightening required at a given sample size to guarantee a desired feasibility probability of the sample-approximated chance-constrained optimization problem. In addition, a method is presented for sample-based approximation of the analytic gradients of the chance constraints, which significantly increases the optimization efficiency. The proposed stochastic nonlinear model predictive control approach is applicable to a broad class of nonlinear systems, with the sufficient condition that each term be analytic with respect to the states and separable with respect to the inputs, states, and parameters. The closed-loop performance of the proposed approach is evaluated using the Williams-Otto reactor with seven states and ten uncertain parameters and initial conditions. The results demonstrate the efficiency of the approach for real-time stochastic model predictive control and its capability to systematically account for probabilistic uncertainties, in contrast to nonlinear model predictive control approaches. (Comment: Submitted to Journal of Process Control.)
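The sample-approximation idea can be illustrated in miniature. The sketch below checks a scalar chance constraint Pr[x + θ ≤ 1] ≥ 1 − ε by sample averaging, and adds a generic Hoeffding-style tightening of the empirical level. The constraint, the Gaussian uncertainty, and the bound are illustrative assumptions, not the paper's polynomial-chaos machinery or its tailored tightening analysis.

```python
import math
import random

def satisfies_chance_constraint(x, theta_samples, eps):
    """Sample-average check of Pr[x + theta <= 1] >= 1 - eps
    for an illustrative scalar constraint (not the paper's system)."""
    violations = sum(1 for th in theta_samples if x + th > 1.0)
    return violations / len(theta_samples) <= eps

def tightened_level(eps, n_samples, beta):
    """Hoeffding-style tightening: test the empirical violation rate against
    this stricter level so the true chance constraint holds with probability
    at least 1 - beta (a generic bound, not the paper's analysis)."""
    return eps - math.sqrt(math.log(1.0 / beta) / (2.0 * n_samples))

rng = random.Random(0)
thetas = [rng.gauss(0.0, 0.1) for _ in range(5000)]
# With theta ~ N(0, 0.1), x = 0.8 violates only when theta > 0.2 (two sigmas),
# so the constraint holds at eps = 0.1, while x = 0.95 fails it.
```

Testing against `tightened_level(eps, len(thetas), beta)` instead of `eps` trades a little conservatism for a probabilistic feasibility guarantee, which is exactly the trade-off the paper's statistical analysis optimizes.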

    Stochastic graph partitioning: quadratic versus SOCP formulations

    We consider a variant of the graph partitioning problem involving knapsack constraints with Gaussian random coefficients. In this new variant, under this assumption on the probability distribution, the problem can traditionally be formulated as a binary SOCP whose continuous relaxation is convex. In this paper, we reformulate the problem as a binary quadratically constrained program whose continuous relaxation is not necessarily convex. We propose several linearization techniques for the latter: the classical linearization proposed by Fortet (Trabajos de Estadistica 11(2):111–118, 1960) and the linearization proposed by Sherali and Smith (Optim Lett 1(1):33–47, 2007). In addition to the basic implementation of the latter, we propose an improvement which includes, in the computation, constraints coming from the SOCP formulation. Numerical results show that the improved Sherali–Smith linearization largely outperforms the binary SOCP program and the classical linearization when investigated in a branch-and-bound approach.
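Fortet's classical linearization replaces each product x_i·x_j of binary variables by an auxiliary variable y_ij constrained by y_ij ≤ x_i, y_ij ≤ x_j, y_ij ≥ x_i + x_j − 1, and y_ij ≥ 0. A minimal sketch (the enumeration helper is ours, for illustration only) verifying that these four constraints force y_ij = x_i·x_j on binary inputs:

```python
def fortet_feasible_y(xi, xj):
    """All y in {0, 1} satisfying Fortet's linearization constraints
    for a given binary pair (xi, xj)."""
    return [y for y in (0, 1)
            if y <= xi and y <= xj and y >= xi + xj - 1]

# For every binary pair, the only feasible y is the product xi * xj,
# so each quadratic term can be replaced exactly by a linear variable.
```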

    OPERATIONAL RELIABILITY AND RISK EVALUATION FRAMEWORKS FOR SUSTAINABLE ELECTRIC POWER SYSTEMS

    Driven by a confluence of multiple environmental, social, technical, and economic factors, traditional electric power systems are undergoing a momentous transition toward sustainable electric power systems. One of the important facets of this transformation is the inclusion of a high penetration of variable renewable energy sources, chief among them wind power. The new source of uncertainty that stems from imperfect wind power forecasts, coupled with the traditional uncertainties in electric power systems, such as unplanned component outages, introduces new challenges for power system operators. In particular, the short-term or operational reliability of sustainable electric power systems could be at increased risk, as limited remedial resources are available to the operators to handle uncertainties and outages during system operation. Furthermore, as sustainable electric power systems and natural gas networks become increasingly coupled, the impacts of outages in one network can quickly propagate into the other, thereby reducing the operational reliability of integrated electric power-gas networks (IEPGNs). In light of the above discussion, a successful transition to sustainable electric power systems necessitates a new set of tools to assist the power system operators in making risk-informed decisions amid multiple sources of uncertainty. Such tools should be able to realistically evaluate the hour- and day-ahead operational reliability and risk indices of sustainable electric power systems in a computationally efficient manner while giving full attention to the uncertainties of wind power and IEPGNs. To this end, the research is conducted on five related topics. First, a simulation-based framework is proposed to evaluate the operational reliability indices of generating systems using the fixed-effort generalized splitting approach. Simulations show improvement in computational performance when compared to the traditional Monte Carlo simulation (MCS).
Second, a hybrid analytical-simulation framework is proposed for the short-term risk assessment of wind-integrated power systems. The area risk method, an analytical technique, is combined with importance sampling (IS)-based MCS to integrate the proposed reliability models of wind speed and calculate the risk indices with a low computational burden. Case studies validate the efficacy of the proposed framework. Third, the IS-based MCS framework is extended to include the proposed data-driven probabilistic models of wind power, avoiding the drawbacks of wind-speed models. Fourth, a comprehensive framework for the operational reliability evaluation of IEPGNs is developed. This framework includes new reliability models for natural gas pipelines and for natural gas-fired generators with dual-fuel capability. Simulations show the importance of considering the coupling between the two networks when evaluating operational reliability indices. Finally, a new chance-constrained optimization model is proposed that incorporates operational reliability constraints while determining the optimal operational schedule for microgrids. Case studies show the tradeoff between reliability and operating costs when scheduling the microgrids.
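The importance-sampling flavour of Monte Carlo mentioned above can be sketched on a toy generation-adequacy problem: unit outage probabilities are inflated in the sampling distribution and corrected by likelihood ratios, so rare loss-of-load events are hit far more often than under crude sampling. The five-unit system, outage rates, load level, and biasing factor below are hypothetical, not from the dissertation.

```python
import random

def lolp_importance_sampling(caps, p_out, load, n_samples, boost=3.0, seed=0):
    """Estimate loss-of-load probability (LOLP) by Monte Carlo simulation with
    importance sampling: outage probabilities are inflated by `boost` in the
    sampling law and each sample is reweighted by its likelihood ratio."""
    rng = random.Random(seed)
    q = [min(p * boost, 0.5) for p in p_out]  # biased outage probabilities
    total = 0.0
    for _ in range(n_samples):
        weight, available = 1.0, 0.0
        for cap, p, qi in zip(caps, p_out, q):
            if rng.random() < qi:             # unit on outage (biased draw)
                weight *= p / qi
            else:                             # unit available
                weight *= (1.0 - p) / (1.0 - qi)
                available += cap
        if available < load:                  # loss-of-load event
            total += weight
    return total / n_samples

# Hypothetical system: five 100 MW units, 2% outage rate, 250 MW load.
# Loss of load requires >= 3 simultaneous outages (analytic LOLP ~ 7.8e-5).
est = lolp_importance_sampling([100.0] * 5, [0.02] * 5, 250.0, 20000)
```

The estimator is unbiased for the true LOLP; the biasing only reduces its variance for the rare event of interest.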

    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    We present a tutorial on Bayesian optimization, a method for finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to obtain a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments (active user modelling with preferences, and hierarchical reinforcement learning), and a discussion of the pros and cons of Bayesian optimization based on our experiences.
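The prior-to-posterior-to-acquisition loop described above can be sketched end to end with a zero-mean Gaussian-process posterior (RBF kernel) and the expected-improvement acquisition maximized over a grid. The kernel length scale, the one-dimensional test function, and the grid are our illustrative choices, not anything from the tutorial itself.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, ls=0.25):
    """Squared-exponential kernel; the length scale is an assumption."""
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def posterior(xs, ys, x):
    """Zero-mean GP posterior mean and standard deviation at x."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (1e-8 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k = [rbf(xi, x) for xi in xs]
    w = solve(K, k)  # w = K^{-1} k, so mu = k^T K^{-1} y, var = k(x,x) - k^T K^{-1} k
    mu = sum(wi * yi for wi, yi in zip(w, ys))
    var = max(rbf(x, x) - sum(wi * ki for wi, ki in zip(w, k)), 1e-12)
    return mu, math.sqrt(var)

def expected_improvement(mu, sigma, best):
    """Closed-form EI for maximization: E[max(0, f(x) - best)]."""
    z = (mu - best) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mu - best) * Phi + sigma * phi

def bayes_opt(f, n_iters=10):
    """Maximize f on [0, 1]: refit the GP, evaluate the grid point of highest EI."""
    xs = [0.1, 0.5, 0.9]
    ys = [f(x) for x in xs]
    grid = [i / 200.0 for i in range(201)]
    for _ in range(n_iters):
        best = max(ys)
        x_next = max(grid,
                     key=lambda x: expected_improvement(*posterior(xs, ys, x), best))
        xs.append(x_next)
        ys.append(f(x_next))
    return xs[ys.index(max(ys))], max(ys)

# Illustrative objective with maximum ~1.580 at x ~ 0.635.
x_star, y_star = bayes_opt(lambda x: math.sin(3.0 * x) + x)
```

EI is exactly the exploration/exploitation compromise the abstract describes: the (mu - best) term rewards exploitation near good observations, while the sigma term rewards exploring uncertain regions.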

    The stochastic vehicle routing problem : a literature review, part I : models

    Building on the work of Gendreau et al. (Eur J Oper Res 88(1):3–12, 1996), we review the past 20 years of scientific literature on stochastic vehicle routing problems. The numerous variants of the problem that have been studied in the literature are described and categorized. Keywords: vehicle routing (VRP), stochastic programming, SVRP.

    Towards Thompson Sampling for Complex Bayesian Reasoning

    Papers III, IV, and VI are not available as part of the dissertation due to copyright. Thompson Sampling (TS) is a state-of-the-art algorithm for bandit problems set in a Bayesian framework. Both the theoretical foundation and the empirical efficiency of TS are well explored for plain bandit problems. However, the Bayesian underpinning of TS means that TS could potentially be applied to other, more complex problems beyond the bandit problem, if suitable Bayesian structures can be found. The objective of this thesis is the development and analysis of TS-based schemes for more complex optimization problems, founded on Bayesian reasoning. We address several complex optimization problems where the previous state of the art relies on a relatively myopic perspective: stochastic searching on the line, the Goore game, the knapsack problem, travel time estimation, and equipartitioning. Instead of employing Bayesian reasoning to obtain a solution, existing approaches rely on carefully engineered rules. In all brevity, we recast each of these optimization problems in a Bayesian framework and introduce dedicated TS-based solution schemes. For all of the addressed problems, the results show that, besides being more effective, the TS-based approaches we introduce are also capable of solving more adverse versions of the problems, such as dealing with stochastic liars.
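The plain-bandit setting the thesis builds on can be sketched with Beta-Bernoulli Thompson Sampling: sample a mean estimate from each arm's posterior, play the argmax, and update the played arm's posterior. The arm means and horizon below are illustrative assumptions.

```python
import random

def thompson_sampling(true_means, n_rounds, seed=0):
    """Beta-Bernoulli Thompson Sampling on a k-armed Bernoulli bandit."""
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # Beta(1, 1) priors: successes + 1
    beta = [1] * k   # failures + 1
    pulls = [0] * k
    for _ in range(n_rounds):
        # Draw one posterior sample per arm and play the arm with the largest draw.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# Illustrative three-armed bandit; TS concentrates its pulls on the 0.7 arm.
pulls = thompson_sampling([0.3, 0.5, 0.7], 2000)
```

The same recipe (posterior sampling followed by a greedy choice against the sample) is what the thesis generalizes beyond bandits, once a suitable Bayesian structure for each problem is in place.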

    New Routing Problems with possibly correlated travel times

    In the operational research literature, Vehicle Routing Problems (VRP) were and still are the subject of countless studies. Within the scope of combinatorial optimization, this thesis analyses some variants of the VRP with both deterministic and uncertain travel times. The deterministic problem under study is a drayage problem with characteristics concerning service types and requirements seldom investigated all together. Two formulations are proposed to model this problem: a node-arc formulation and a Set Partitioning formulation. Concerning solution methods, two heuristics and a branch-and-price approach are presented. The part dealing with uncertain and correlated travel times addresses two classes of VRP with time windows, using either single or joint chance constraints depending on whether missing a customer's time window makes the entire route infeasible or not. A comparison between deterministic and stochastic methods shows that the latter represent a profitable investment for guaranteeing the feasibility of the solution on realistic instances.

    Community detection and stochastic block models: recent developments

    The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and generally provides fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial, and weak recovery (a.k.a. detection). The main results discussed are the phase transition for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters, and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest to achieve these limits, in particular two-round algorithms via graph splitting, semidefinite programming, linearized belief propagation, and classical and nonbacktracking spectral methods. A few open problems are also discussed.
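A tiny end-to-end illustration of the model: sample a two-block SBM and recover the planted partition with a simple spectral method (power iteration on the modularity matrix), a simplified relative of the classical spectral approaches the note surveys. Block sizes and edge probabilities are illustrative and deliberately far above the recovery thresholds.

```python
import random

def sample_sbm(n_per_block, p_in, p_out, seed=0):
    """Adjacency matrix of a two-block SBM; nodes 0..n_per_block-1 form block 0."""
    rng = random.Random(seed)
    n = 2 * n_per_block
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            same = (i < n_per_block) == (j < n_per_block)
            if rng.random() < (p_in if same else p_out):
                A[i][j] = A[j][i] = 1
    return A

def spectral_partition(A, n_iters=200, seed=1):
    """Two communities from the sign of the leading eigenvector of the
    modularity matrix B = A - d d^T / 2m, computed by power iteration."""
    rng = random.Random(seed)
    n = len(A)
    d = [sum(row) for row in A]
    two_m = float(sum(d))
    v = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    for _ in range(n_iters):
        dv = sum(d[j] * v[j] for j in range(n)) / two_m
        v = [sum(A[i][j] * v[j] for j in range(n)) - d[i] * dv for i in range(n)]
        norm = max(abs(x) for x in v) or 1.0
        v = [x / norm for x in v]
    return [1 if x >= 0.0 else 0 for x in v]

# Dense, well-separated blocks: the partition should match the planted
# communities up to a global label flip.
A = sample_sbm(30, 0.8, 0.05)
labels = spectral_partition(A)
```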

    On partitioning multivariate self-affine time series

    Given a multivariate time series, possibly of high dimension, with unknown and time-varying joint distribution, it is of interest to be able to completely partition the time series into disjoint, contiguous subseries, each of which has different distributional or pattern attributes from the preceding and succeeding subseries. An additional feature of many time series is that they display self-affinity, so that subseries at one time scale are similar to subseries at another after application of an affine transformation. Such qualities are observed in time series from many disciplines, including biology, medicine, economics, finance, and computer science. This paper formulates the relevant combinatorial optimization problem, under limited assumptions, as a biobjective problem, and a specialized evolutionary algorithm is presented which finds optimal self-affine time series partitionings with a minimum of choice parameters. The algorithm not only finds partitionings for all possible numbers of partitions given data constraints, but also for self-affinities between these partitionings and some fine-grained partitioning. The resulting set of Pareto-efficient solution sets provides a rich representation of the self-affine properties of a multivariate time series at different locations and time scales.