
    Continuous-observation partially observable semi-Markov decision processes for machine maintenance

    Partially observable semi-Markov decision processes (POSMDPs) provide a rich framework for planning under both state-transition uncertainty and observation uncertainty. In this paper, we widen the literature on POSMDPs by studying discrete-state, discrete-action, yet continuous-observation POSMDPs. We prove that the resultant α-vector set is continuous and accordingly propose a point-based value iteration algorithm. This paper also bridges the gap between POSMDPs and machine maintenance by incorporating various types of maintenance actions, such as actions changing the machine state, actions changing the degradation rate, and the temporally extended action "do nothing". Both finite and infinite planning horizons are considered, and a solution methodology is given for each. We illustrate the maintenance decision process on a real industrial problem and demonstrate that the developed framework can be readily applied to relevant maintenance problems.
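The point-based backup at the heart of this family of algorithms can be illustrated on a plain discrete-observation POMDP (a simplification: the paper's contribution is the continuous-observation, semi-Markov case, which this sketch does not capture). All matrices and names below are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def pbvi_backup(beliefs, alphas, T, O, R, gamma):
    """One point-based value iteration backup for a discrete POMDP.

    T[a][s, s']: transition probabilities, O[a][s', o]: observation
    probabilities, R[a][s]: immediate rewards, alphas: current alpha-vectors.
    Returns one new alpha-vector per belief point.
    """
    new_alphas = []
    n_actions, n_obs = len(T), O[0].shape[1]
    for b in beliefs:
        best_val, best_vec = -np.inf, None
        for a in range(n_actions):
            vec = R[a].astype(float).copy()
            for o in range(n_obs):
                # Back-project every alpha-vector through (a, o) and keep
                # the one with the highest value at this belief point.
                proj = [T[a] @ (O[a][:, o] * al) for al in alphas]
                vec = vec + gamma * max(proj, key=lambda g: b @ g)
            if b @ vec > best_val:
                best_val, best_vec = b @ vec, vec
        new_alphas.append(best_vec)
    return new_alphas
```

Repeating this backup over a fixed set of belief points yields a piecewise-linear lower bound on the value function.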

    Review of Markov models for maintenance optimization in the context of offshore wind

    The offshore environment poses a number of challenges to wind farm operators. Harsher climatic conditions typically result in lower reliability, while challenges in accessibility make maintenance difficult. One of the ways to improve availability is to optimize Operation and Maintenance (O&M) actions such as scheduled, corrective and proactive maintenance. Many authors have attempted to model or optimize O&M through the use of Markov models. Two examples of Markov models, Hidden Markov Models (HMMs) and Partially Observable Markov Decision Processes (POMDPs), are investigated in this paper. In general, Markov models are a powerful statistical tool which has been successfully applied for component diagnostics, prognostics and maintenance optimization across a range of industries. This paper discusses the suitability of these models to the offshore wind industry. Existing models which have been created for the wind industry are critically reviewed and discussed. As there is little evidence of widespread application of these models, this paper aims to highlight the key factors required for successful application of Markov models to practical problems. From this, the paper identifies the theoretical and practical gaps that must be resolved in order to gain broad acceptance of Markov models to support O&M decision making in the offshore wind industry.
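The HMM side of this survey rests on forward filtering: inferring the hidden condition of a component from a sequence of noisy condition-monitoring readings. A minimal sketch, assuming a discrete degradation state and discrete observation symbols (the matrices below are made-up illustrations, not fitted wind-turbine parameters):

```python
import numpy as np

def hmm_filter(pi, A, B, obs):
    """Forward filtering: P(state_t | obs_1..t) for a discrete HMM.

    pi: initial state distribution, A[s, s']: transition matrix,
    B[s, o]: observation likelihoods, obs: sequence of observation indices.
    """
    belief = pi * B[:, obs[0]]
    belief /= belief.sum()
    for o in obs[1:]:
        belief = (A.T @ belief) * B[:, o]   # predict, then correct
        belief /= belief.sum()              # renormalize to a distribution
    return belief
```

In a diagnostics setting the returned belief would feed a maintenance decision, e.g. triggering an inspection once the probability of the degraded state exceeds a threshold.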

    Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey

    Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formulation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes must make optimized decisions from a set of available strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool for developing adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs.
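The standard solution method for the MDPs surveyed here is value iteration. As a toy instance of the power-optimization problems mentioned above, consider a node with two battery states (low, high) and two actions (sleep, transmit); all numbers below are illustrative, not drawn from the survey.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a finite MDP by value iteration.

    P[a][s, s']: transition matrix per action, R[a][s]: reward per action
    and state. Returns the optimal value function and greedy policy.
    """
    V = np.zeros(P[0].shape[0])
    while True:
        # Q-values: immediate reward plus discounted expected future value.
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

With sleep recharging the battery and transmit paying off only at high charge, the optimal policy alternates sleep and transmit, matching the duty-cycling protocols the survey discusses.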

    Integrated optimization of maintenance interventions and spare part selection for a partially observable multi-component system

    Advanced technical systems are typically composed of multiple critical components whose failure causes a system failure. Often, it is not technically or economically possible to install sensors dedicated to each component, which means that the exact condition of each component cannot be monitored, but a system-level failure or defect can be observed. The service provider then needs to implement a condition-based maintenance policy that is based on partial information on the system's condition. Furthermore, when the service provider decides to service the system, (s)he also needs to decide which spare part(s) to bring along in order to avoid emergency shipments and part returns. We model this problem as an infinite-horizon partially observable Markov decision process. In a set of numerical experiments, we first compare the optimal policy with preventive and corrective maintenance policies: the optimal policy leads on average to a 28% and 15% cost decrease, respectively. Second, we investigate the value of having full information, i.e., sensors dedicated to each component: this leads on average to a 13% cost decrease compared to the case with partial information. Interestingly, having full information is more valuable for cheaper, less reliable components than for more expensive, more reliable components.
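The spare-part side of this problem can be caricatured with a myopic rule that is much simpler than the paper's POMDP policy: given a belief over which component is defective, bring part j whenever the expected emergency-shipment cost avoided exceeds the cost of carrying it. Function name, costs, and the rule itself are illustrative assumptions.

```python
def parts_to_bring(belief, c_carry, c_emergency):
    """Myopic spare-part selection for one maintenance visit.

    belief[j]: probability that component j is the defective one.
    c_carry[j]: cost of bringing part j along; c_emergency[j]: cost of an
    emergency shipment if part j is needed but was not brought.
    """
    return [j for j, p in enumerate(belief)
            if p * c_emergency[j] > c_carry[j]]
```

The full POMDP couples this choice with the timing of the intervention, which is what the paper optimizes jointly.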

    POMDPs in Continuous Time and Discrete Spaces

    Many processes, such as discrete event systems in engineering or population dynamics in biology, evolve in discrete space and continuous time. We consider the problem of optimal decision making in such discrete state and action space systems under partial observability. This places our work at the intersection of optimal filtering and optimal control. At the current state of research, a mathematical description of simultaneous decision making and filtering in continuous time with finite countable state and action spaces is still missing. In this paper, we give a mathematical description of a continuous-time POMDP. By leveraging optimal filtering theory we derive an HJB-type equation that characterizes the optimal solution. Using techniques from deep learning we approximately solve the resulting partial integro-differential equation. We present (i) an approach solving the decision problem offline by learning an approximation of the value function and (ii) an online algorithm which provides a solution in belief space using deep reinforcement learning. We show the applicability on a set of toy examples which pave the way for future methods providing solutions for high-dimensional problems.
    Comment: published at the Conference on Neural Information Processing Systems (NeurIPS) 202
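The filtering half of this setting can be approximated in discrete time: propagate the belief over the continuous-time Markov chain with a first-order step of its generator, then apply a Bayes correction from the noisy observation. This Euler-style sketch is an assumption-laden stand-in for the paper's exact continuous-time filter; generator, step size, and likelihoods below are toy choices.

```python
import numpy as np

def ctmc_belief_filter(b0, Q, obs_lik, dt, n_steps):
    """Approximate belief filtering for a CTMC observed through noise.

    Q: generator matrix (rows sum to 0); obs_lik[k, s]: likelihood of the
    k-th observation in hidden state s. Uses I + Q*dt as a first-order
    approximation of the transition matrix expm(Q*dt).
    """
    b = np.asarray(b0, dtype=float)
    P_dt = np.eye(len(b)) + Q * dt
    for k in range(n_steps):
        b = P_dt.T @ b          # predict over one small time step
        b = b * obs_lik[k]      # correct with the observation likelihood
        b = b / b.sum()
    return b
```

In the paper, this belief process is the state on which the HJB-type equation and the deep-learning approximations operate.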

    Optimal Replacement Strategies for Wind Energy Systems

    Motivated by rising energy prices, global climate change, escalating demand for electricity and global energy supply uncertainties, the U.S. government has established an ambitious goal of generating 80% of its electricity supply from clean, renewable sources by 2035. Wind energy is poised to play a prominent role in achieving this goal, as it is estimated that 20% of the total domestic electricity supply can be reliably generated by land-based and offshore wind turbines by 2030. However, the cost of producing wind energy remains a significant barrier, with operating and maintenance (O&M) costs contributing 20% to 47.5% of the total cost of energy. Given the urgent need for clean, renewable energy sources, and the widespread appeal of wind energy as a viable alternative, it is imperative to develop effective techniques to reduce the O&M costs of wind energy. This dissertation presents a framework within which real-time, condition-based data can be exploited to optimally time the replacement of critical wind turbine components. First, hybrid analytical-statistical tools are developed to estimate the current health of the component and approximate the expected time at which it will fail by observing a surrogate signal of degradation. The signal is assumed to evolve as a switching diffusion process, and its parameters are estimated via a novel Markov chain Monte Carlo procedure. Next, the problem of optimally replacing a critical component that resides in a partially-observable environment is addressed. Two models are formulated using a partially-observed Markov decision process (POMDP) framework. The first model ignores the cost of turbine downtime, while the second includes this cost explicitly. For both models, it is shown that a threshold replacement policy is optimal with respect to the cumulative level of component degradation. A third model is presented that considers cases in which the environment is partially observed and degradation measurements are uncertain. A threshold policy is shown to be optimal for a special case of this model. Several numerical examples illustrate the main results and the value of including environmental observations in the wind energy setting.
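The structural result above (a threshold replacement policy) is easy to picture in simulation: let a surrogate degradation signal accumulate as a noisy random walk and replace at the first crossing of a threshold. The drift/noise random walk below is a stand-in for the dissertation's switching diffusion, and the threshold is an arbitrary illustration rather than a computed optimum.

```python
import random

def first_replacement_time(threshold, drift, noise, seed=0, max_t=10_000):
    """Simulate a degradation signal and apply the threshold rule.

    The signal starts at 0 and gains `drift` plus Gaussian noise with
    standard deviation `noise` each period; the component is replaced at
    the first period its cumulative level reaches `threshold`.
    """
    rng = random.Random(seed)
    level, t = 0.0, 0
    while level < threshold and t < max_t:
        level += drift + rng.gauss(0.0, noise)
        t += 1
    return t, level
```

Sweeping the threshold against per-period operating cost and replacement cost would reproduce, in miniature, the trade-off the POMDP models optimize.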