
    Markov decision processes with uncertain parameters

    Markov decision processes model stochastic uncertainty in systems and allow one to construct strategies that optimize the behaviour of a system with respect to some reward function. However, the parameters of this uncertainty, that is, the probabilities inside a Markov decision model, are derived from empirical or expert knowledge and are themselves subject to uncertainties such as measurement errors or limited expertise. This work considers second-order uncertainty models for Markov decision processes and derives theoretical and practical results. Among other models, it considers two main forms of uncertainty. One form is a set of discrete scenarios with a prior probability distribution, with the task of maximizing the expected reward under that distribution. The other form is a continuous uncertainty set of scenarios, with the task of computing a policy that optimizes the reward in the optimistic and pessimistic cases. The work provides two kinds of results. First, we establish complexity-theoretic hardness results for the considered optimization problems. Second, we design heuristics for some of the problems and evaluate them empirically. In the first class of results, we show that additional model uncertainty makes the optimization problems harder to solve, since it adds an additional party with its own optimization goals. In the second class of results, we show that even though the discussed problems are hard to solve in theory, we can devise efficient heuristics that solve them well enough for practical applications.
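
    To make the two uncertainty models above concrete, the following minimal Python sketch is illustrative only: the states, rewards, scenarios, and prior are invented; planning in the prior-weighted mixture of scenarios is a crude surrogate for the exact expected-reward problem; and the pessimistic loop is plain worst-case value iteration rather than any algorithm from the paper.

    import numpy as np

    # Toy data (illustrative only): S states, A actions, K transition scenarios.
    S, A, K = 4, 2, 3
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(S), size=(K, S, A))   # P[k, s, a, s']: transition probabilities per scenario
    R = rng.uniform(0.0, 1.0, size=(S, A))          # reward shared across scenarios
    prior = np.array([0.5, 0.3, 0.2])               # prior over the discrete scenarios
    gamma = 0.9

    # First model (rough surrogate): plan in the prior-weighted mixture of the scenarios.
    P_mix = np.einsum('k,ksat->sat', prior, P)
    V = np.zeros(S)
    for _ in range(500):
        Q = R + gamma * np.einsum('sat,t->sa', P_mix, V)
        V = Q.max(axis=1)
    policy_expected = Q.argmax(axis=1)

    # Second model: pessimistic value iteration, taking the worst scenario in every backup.
    V = np.zeros(S)
    for _ in range(500):
        Q_k = R[None] + gamma * np.einsum('ksat,t->ksa', P, V)
        V = Q_k.min(axis=0).max(axis=1)
    policy_pessimistic = Q_k.min(axis=0).argmax(axis=1)

    print(policy_expected, policy_pessimistic)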

    Optimal Inspection and Maintenance Planning for Deteriorating Structural Components through Dynamic Bayesian Networks and Markov Decision Processes

    Civil and maritime engineering systems, from bridges to offshore platforms and wind turbines, must be efficiently managed because they are exposed to deterioration mechanisms, such as fatigue or corrosion, throughout their operational life. Identifying optimal inspection and maintenance policies demands the solution of a complex sequential decision-making problem under uncertainty, with the main objective of efficiently controlling the risk associated with structural failures. Addressing this complexity, risk-based inspection planning methodologies, often supported by dynamic Bayesian networks, evaluate a set of pre-defined heuristic decision rules to reasonably simplify the decision problem. However, the resulting policies may be compromised by the limited space considered in the definition of the decision rules. Avoiding this limitation, Partially Observable Markov Decision Processes (POMDPs) provide a principled mathematical methodology for stochastic optimal control under uncertain action outcomes and observations, in which the optimal actions are prescribed as a function of the entire, dynamically updated, state probability distribution. In this paper, we combine dynamic Bayesian networks with POMDPs in a joint framework for optimal inspection and maintenance planning, and we provide the formulation for developing both infinite and finite horizon POMDPs in a structural reliability context. The proposed methodology is implemented and tested for the case of a structural component subject to fatigue deterioration, demonstrating the capability of state-of-the-art point-based POMDP solvers for solving the underlying planning optimization problem. Within the numerical experiments, POMDP and heuristic-based policies are thoroughly compared, and the results show that POMDP-based policies achieve substantially lower costs than their heuristic counterparts, even for traditional problem settings.
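
    The policies discussed above are prescribed as a function of the dynamically updated state probability distribution (the belief). A minimal, purely illustrative Python sketch of that belief update for a discretized deterioration model follows; the deterioration and inspection-likelihood matrices are invented, and the planning itself (done in the paper with point-based POMDP solvers) is not shown.

    import numpy as np

    # Hypothetical discretized deterioration states: intact, minor, major, failed.
    T = np.array([[0.90, 0.08, 0.02, 0.00],   # T[s, s']: one-step deterioration model
                  [0.00, 0.85, 0.12, 0.03],
                  [0.00, 0.00, 0.80, 0.20],
                  [0.00, 0.00, 0.00, 1.00]])
    # O[s, z]: likelihood of inspection outcome z (0 = no detection, 1 = detection) in state s.
    O = np.array([[0.95, 0.05],
                  [0.70, 0.30],
                  [0.30, 0.70],
                  [0.05, 0.95]])

    def belief_update(b, z):
        """One POMDP filtering step: propagate the deterioration model,
        then condition on the inspection outcome z via Bayes' rule."""
        b_pred = b @ T                 # predicted state distribution for the next step
        b_post = b_pred * O[:, z]      # weight by the observation likelihood
        return b_post / b_post.sum()   # normalize

    b = np.array([1.0, 0.0, 0.0, 0.0])    # start from an intact component
    for z in [0, 0, 1]:                   # a hypothetical sequence of inspection outcomes
        b = belief_update(b, z)
        print(b)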

    Risk-Averse and Ambiguity-Averse Markov Decision Processes

    Ph.D. (Doctor of Philosophy)

    Multistage decisions and risk in Markov decision processes: towards effective approximate dynamic programming architectures

    The scientific domain of this thesis is optimization under uncertainty for discrete event stochastic systems. In particular, the thesis focuses on the practical implementation of the Dynamic Programming (DP) methodology for discrete event stochastic systems. Unfortunately, DP in its crude form suffers from three severe computational obstacles that make its implementation for such systems an impossible task. This thesis addresses these obstacles by developing and applying practical Approximate Dynamic Programming (ADP) techniques. Specifically, two ADP techniques were developed. The first is inspired by the Reinforcement Learning (RL) literature and is termed Real Time Approximate Dynamic Programming (RTADP). The RTADP algorithm is meant for active learning while operating the stochastic system: as the agent constantly interacts with the uncertain environment, it accumulates experience that enables it to react more optimally in future similar situations. The second is an off-line ADP procedure. These ADP techniques are demonstrated on a variety of discrete event stochastic systems, such as: i) a three-stage queuing manufacturing network with recycle, ii) a supply chain of the light aromatics of a typical refinery, iii) several stochastic shortest path instances with a single starting and terminal state, and iv) a general project portfolio management problem. Moreover, this work addresses, in a systematic way, the issue of multistage risk within the DP framework by exploring the use of intra-period and inter-period risk-sensitive utility functions. In this thesis we propose a special structure for an intra-period utility and compare the derived policies in several multistage instances. Ph.D. Committee Chair: Jay H. Lee; Committee Members: Martha Grover, Matthew J. Realff, Shabbir Ahmed, Stylianos Kavadias.
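
    The abstract only names the RTADP idea of learning while operating the system; its actual algorithm is not reproduced here. As a generic illustration of updating a value estimate in real time from interaction, the sketch below runs tabular Q-learning on an invented stochastic shortest path; all states, costs, and parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 5-state stochastic shortest path: from state s, action 0 moves forward
    # with probability 0.8 at cost 2.0, action 1 moves forward with probability 0.5 at cost 1.0.
    N = 5                      # state N-1 is terminal
    costs = {0: 2.0, 1: 1.0}

    def step(s, a):
        p = 0.8 if a == 0 else 0.5
        s_next = min(s + 1, N - 1) if rng.random() < p else s
        return s_next, costs[a]

    Q = np.zeros((N, 2))
    alpha, eps = 0.1, 0.2
    for episode in range(2000):
        s = 0
        while s != N - 1:
            a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmin())
            s_next, c = step(s, a)
            target = c + (0.0 if s_next == N - 1 else Q[s_next].min())
            Q[s, a] += alpha * (target - Q[s, a])   # real-time update from the latest interaction
            s = s_next

    print(Q.min(axis=1))   # estimated cost-to-go from each state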