Predictive maintenance for the heated hold-up tank
We present a numerical method to compute an optimal maintenance date for the
test case of the heated hold-up tank. The system consists of a tank containing
a fluid whose level is controlled by three components: two inlet pumps and one
outlet valve. A thermal power source heats up the fluid. The failure rates of
the components depends on the temperature, the position of the three components
monitors the liquid level in the tank and the liquid level determines the
temperature. Therefore, this system can be modeled by a hybrid process where
the discrete (components) and continuous (level, temperature) parts interact in
a closed loop. We model the system by a piecewise deterministic Markov process,
propose and implement a numerical method to compute the optimal maintenance
date to repair the components before the total failure of the system.Comment: arXiv admin note: text overlap with arXiv:1101.174
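The closed-loop PDMP structure described above can be conveyed in a small simulation. The following is only an illustrative sketch, not the paper's model: the dynamics, the `failure_rate` function, and every constant (inflow and outflow rates, heating and cooling terms) are assumptions made up for the example.

```python
import random

# Hypothetical sketch of the heated hold-up tank as a piecewise deterministic
# Markov process: between failures the level h and temperature T evolve
# deterministically; component failures occur at a temperature-dependent rate.
# All dynamics and constants below are illustrative assumptions.

def failure_rate(T, base=0.001):
    """Illustrative rate: hotter fluid wears components out faster."""
    return base * (1.0 + 0.05 * max(T - 20.0, 0.0))

def simulate(horizon=100.0, dt=0.1, seed=0):
    rng = random.Random(seed)
    h, T = 5.0, 20.0            # liquid level and temperature
    pumps = [True, True]        # two inlet pumps
    valve = True                # one outlet valve
    t = 0.0
    while t < horizon:
        inflow = 0.5 * sum(pumps)
        outflow = 0.7 if valve else 0.0
        h = max(h + (inflow - outflow) * dt, 0.1)
        T += (1.0 / h - 0.01 * (T - 20.0)) * dt   # heating source vs. cooling
        lam = failure_rate(T)
        for i in range(2):                        # pumps may fail
            if pumps[i] and rng.random() < lam * dt:
                pumps[i] = False
        if valve and rng.random() < lam * dt:     # valve may fail
            valve = False
        if not any(pumps) and not valve:          # total system failure
            return t
        t += dt
    return horizon

t_fail = simulate()
```

An optimal maintenance date would be chosen before the (random) total-failure time that such trajectories exhibit; here the sketch only produces one trajectory.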
Adaptive control in rollforward recovery for extreme scale multigrid
With the increasing number of compute components, failures in future
exascale computer systems are expected to become more frequent. This motivates
the study of novel resilience techniques. Here, we extend a recently proposed
algorithm-based recovery method for multigrid iterations by introducing an
adaptive control. After a fault, the healthy part of the system continues the
iterative solution process, while the solution in the faulty domain is
re-constructed by an asynchronous on-line recovery. The computations in both
the faulty and healthy subdomains must be carefully coordinated; in
particular, both under- and over-solving must be avoided. Both of these waste
computational resources and will therefore increase the overall
time-to-solution. To control the local recovery and guarantee an optimal
re-coupling, we introduce a stopping criterion based on a mathematical error
estimator. It involves hierarchical weighted sums of residuals within the
context of uniformly refined meshes and is well suited to parallel
high-performance computing. The re-coupling process is steered by
local contributions of the error estimator. We propose and compare two criteria
which differ in their weights. Failure scenarios when solving up to
unknowns on more than 245,766 parallel processes are reported on a
state-of-the-art petascale supercomputer, demonstrating the robustness of the
method.
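The re-coupling logic above can be sketched in a few lines. This is a hedged toy version, not the paper's estimator: the assumption that each asynchronous recovery step reduces the local estimator contribution by a fixed factor, and the particular weights, are invented for illustration. It does show how a weighted stopping criterion trades extra local iterations (over-solving) against premature re-coupling (under-solving).

```python
# Toy model of adaptive re-coupling: the faulty subdomain iterates locally
# until its weighted error-estimator contribution no longer dominates the
# healthy part's. The fixed per-step reduction factor is an assumption.

def local_recovery(eta_faulty, eta_healthy, weight=1.0,
                   reduction=0.5, max_it=50):
    """Iterate until weight * eta_faulty <= eta_healthy (stopping criterion).
    Returns the number of local recovery steps and the final estimator."""
    it = 0
    while weight * eta_faulty > eta_healthy and it < max_it:
        eta_faulty *= reduction   # one asynchronous local recovery step
        it += 1
    return it, eta_faulty

# Two criteria that differ only in their weights, as in the comparison above:
it_a, _ = local_recovery(1.0, 0.01, weight=1.0)
it_b, _ = local_recovery(1.0, 0.01, weight=4.0)
```

A larger weight demands a smaller local error before re-coupling, so it costs more local iterations but reduces the risk of re-coupling too early.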
Different Approaches on Stochastic Reachability as an Optimal Stopping Problem
Reachability analysis is at the core of model checking of timed systems. For
stochastic hybrid systems, this safety verification method has little support,
mainly because of the complexity and difficulty of the associated mathematical
problems. In this
paper, we develop two main directions of studying stochastic reachability as an optimal
stopping problem. The first approach studies the hypotheses for the dynamic programming
corresponding to the optimal stopping problem for stochastic hybrid systems.
In the second approach, we investigate the reachability problem considering approximations
of stochastic hybrid systems. The main difficulty arises when we have to prove the
convergence of the value functions of the approximating processes to the value function
of the initial process. An original proof is provided.
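The optimal stopping formulation can be illustrated on a finite-state approximation. The sketch below is an assumption-laden toy, not the paper's construction: on a three-state chain, stopping in state `s` yields the reachability payoff `g(s)`, and value iteration computes `V(s) = max(g(s), sum_t P[s][t] V(t))`, the probability-like value of reaching the target set before absorption elsewhere.

```python
# Value iteration for an optimal stopping problem on a finite Markov chain.
# The transition matrix and payoff are illustrative: state 1 is the reach
# set (payoff 1, absorbing), state 2 an absorbing "miss" state (payoff 0).

def optimal_stopping(P, g, iters=200):
    """Iterate V <- max(g, P V) componentwise until (approximate) fixpoint."""
    V = list(g)
    for _ in range(iters):
        V = [max(g[s], sum(P[s][t] * V[t] for t in range(len(g))))
             for s in range(len(g))]
    return V

P = [[0.6, 0.2, 0.2],    # from state 0: stay, reach, or miss
     [0.0, 1.0, 0.0],    # reach set is absorbing
     [0.0, 0.0, 1.0]]    # miss state is absorbing
g = [0.0, 1.0, 0.0]      # stopping payoff: 1 only on the reach set
V = optimal_stopping(P, g)
```

For this chain the fixpoint satisfies `V[0] = 0.6*V[0] + 0.2`, i.e. `V[0] = 0.5`, which value iteration recovers.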
The Stochastic Reach-Avoid Problem and Set Characterization for Diffusions
In this article we approach a class of stochastic reachability problems with
state constraints from an optimal control perspective. Preceding approaches to
solving these reachability problems are either confined to the deterministic
setting or address almost-sure stochastic requirements. In contrast, we propose
a methodology to tackle problems with less stringent requirements than almost
sure. To this end, we first establish a connection between two distinct
stochastic reach-avoid problems and three classes of stochastic optimal control
problems involving discontinuous payoff functions. Subsequently, we focus on
solutions of one of the classes of stochastic optimal control problems---the
exit-time problem, which solves both of the reach-avoid problems mentioned
above. We then derive a weak version of a dynamic programming principle (DPP)
for the corresponding value function; in this direction our contribution
compared to the existing literature is to develop techniques that admit
discontinuous payoff functions. Moreover, based on our DPP, we provide an
alternative characterization of the value function as a solution of a partial
differential equation in the sense of discontinuous viscosity solutions, along
with boundary conditions both in Dirichlet and viscosity senses. Theoretical
justifications are also discussed to pave the way for deployment of
off-the-shelf PDE solvers for numerical computations. Finally, we validate the
performance of the proposed framework on the stochastic Zermelo navigation
problem.
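Before turning to PDE solvers, the reach-avoid value function is easy to see on a discretized caricature. The example below is an assumption: a symmetric random walk on `{0, ..., N}` stands in for the diffusion, with state `N` the target set and state `0` the avoid set, both absorbing. Dynamic programming sweeps then converge to the probability of reaching the target before the avoid set.

```python
# Reach-avoid value function for a symmetric random walk on {0, ..., N}:
# V(s) = probability of hitting N (target) before 0 (avoid set).
# Gauss-Seidel-style in-place sweeps of the DP fixed-point equation.

def reach_avoid_value(N=10, p=0.5, sweeps=5000):
    V = [0.0] * (N + 1)
    V[N] = 1.0                      # payoff 1 on the target set
    for _ in range(sweeps):
        for s in range(1, N):       # 0 and N are absorbing boundaries
            V[s] = p * V[s + 1] + (1 - p) * V[s - 1]
    return V

V = reach_avoid_value()
```

For the symmetric walk the exact value is `V[s] = s / N`, so the midpoint value converges to 0.5; this mirrors how the exit-time problem's value function encodes reach-avoid probabilities.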
Numerical method for impulse control of Piecewise Deterministic Markov Processes
This paper presents a numerical method to calculate the value function for a
general discounted impulse control problem for piecewise deterministic Markov
processes. Our approach is based on a quantization technique for the underlying
Markov chain defined by the post jump location and inter-arrival time.
Convergence results are obtained and more importantly we are able to give a
convergence rate of the algorithm. The paper is illustrated by a numerical
example.
Comment: This work was supported by the ARPEGE program of the French National
Research Agency (ANR), project "FAUTOCOES", number ANR-09-SEGI-00
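The quantization idea underlying the method can be sketched with a simple Lloyd-type iteration: replace the continuous law of the post-jump locations by finitely many representative points. This is a hedged illustration only; the paper's quantization technique for the embedded Markov chain is more elaborate, and the sample data and grid size below are invented.

```python
import random

# Toy quantization of a one-dimensional distribution (standing in for the
# law of post-jump locations) by k points, via a Lloyd/k-means iteration.

def quantize(samples, k=4, iters=30, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(samples, k)
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for x in samples:                       # nearest-center assignment
            j = min(range(k), key=lambda i: abs(x - centers[i]))
            cells[j].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(cells)]  # recenter on cell means
    return sorted(centers)

rng = random.Random(1)
samples = [rng.gauss(0.0, 1.0) for _ in range(500)]
grid = quantize(samples)
```

Value iteration is then carried out on the finite `grid` instead of the continuous state space, which is what makes the convergence-rate analysis of the algorithm tractable.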