Parameter-Independent Strategies for pMDPs via POMDPs
Markov Decision Processes (MDPs) are a popular class of models suitable for
solving control decision problems in probabilistic reactive systems. We
consider parametric MDPs (pMDPs) that include parameters in some of the
transition probabilities to account for stochastic uncertainties of the
environment such as noise or input disturbances.
We study pMDPs with reachability objectives where the parameter values are
unknown and impossible to measure directly during execution, but there is a
probability distribution known over the parameter values. We study for the
first time computing parameter-independent strategies that are expectation
optimal, i.e., optimize the expected reachability probability under the
probability distribution over the parameters. We present an encoding of our
problem to partially observable MDPs (POMDPs), i.e., a reduction of our problem
to computing optimal strategies in POMDPs.
We evaluate our method experimentally on several benchmarks: a motivating
(repeated) learner model; a series of benchmarks of varying configurations of a
robot moving on a grid; and a consensus protocol.
Comment: Extended version of a QEST 2018 paper.
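The expectation-optimal objective above can be illustrated with a brute-force sketch on a toy pMDP (all states, actions, and probabilities below are invented for illustration; the paper's actual method reduces the problem to POMDP strategy synthesis rather than enumeration). A parameter-independent strategy is fixed up front and scored by its expected reachability probability under the known prior over parameter values:

```python
# Toy pMDP (all names and numbers here are invented for illustration).
# The parameter p is unknown at run time but has a known finite prior.
PRIOR = {0.3: 0.5, 0.7: 0.5}          # parameter value -> its probability

def step(state, action, p):
    """Successor distribution; only state 0 depends on the parameter."""
    if state == 0:
        return {2: p, 1: 1 - p} if action == "a" else {1: 1.0}
    if state == 1:
        return {2: 0.5, 3: 0.5}       # may fail into sink state 3
    return {state: 1.0}               # 2 (target) and 3 (sink) are absorbing

def reach_prob(strategy, p, iters=50):
    """P(reach target 2 from state 0) under a fixed memoryless strategy."""
    v = {0: 0.0, 1: 0.0, 2: 1.0, 3: 0.0}
    for _ in range(iters):
        for s in (0, 1):
            v[s] = sum(pr * v[t] for t, pr in step(s, strategy[s], p).items())
    return v[0]

def expected_reach(strategy):
    """Expected reachability, averaged over the prior on parameter values."""
    return sum(w * reach_prob(strategy, p) for p, w in PRIOR.items())

# Expectation-optimal *parameter-independent* strategy, by enumeration.
best = max(({0: a, 1: "a"} for a in "ab"), key=expected_reach)
```

In the POMDP view of the same problem, the parameter value becomes an unobservable component of the state, so parameter-independent strategies correspond exactly to observation-based POMDP strategies.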
Reactive Petri Nets for Workflow Modeling
Petri nets are widely used for modeling and analyzing workflows.
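The basic token-game semantics underlying such workflow models can be sketched in a few lines (this is a generic, assumed example of a marked net, not a construction from the paper): a transition is enabled when all its input places hold enough tokens, and firing it consumes and produces tokens accordingly.

```python
# Minimal workflow-net sketch (illustrative places and transitions).
marking = {"start": 1, "ready": 0, "done": 0}

TRANSITIONS = {
    "prepare": ({"start": 1}, {"ready": 1}),   # (consumed, produced) tokens
    "finish":  ({"ready": 1}, {"done": 1}),
}

def enabled(t, m):
    """A transition is enabled iff every input place has enough tokens."""
    pre, _ = TRANSITIONS[t]
    return all(m[p] >= n for p, n in pre.items())

def fire(t, m):
    """Fire an enabled transition, returning the new marking."""
    assert enabled(t, m), f"{t} is not enabled"
    pre, post = TRANSITIONS[t]
    m = dict(m)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] += n
    return m

m = fire("prepare", marking)
m = fire("finish", m)
```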
Change Mining in Adaptive Process Management Systems
The widespread adoption of process-aware information systems has resulted in a wealth of computerized information about real-world processes. This data can be utilized for process performance analysis as well as for process improvement. In this context process mining offers promising perspectives. So far, existing mining techniques have been applied to operational processes, i.e., knowledge is extracted from execution logs (process discovery), or execution logs are compared with some a-priori process model (conformance checking). However, execution logs only constitute one kind of data gathered during process enactment. In particular, adaptive processes provide additional information about process changes (e.g., ad-hoc changes of single process instances) which can be used to enable organizational learning. In this paper we present an approach for mining change logs in adaptive process management systems. The change process discovered through process mining provides an aggregated overview of all changes that happened so far. This, in turn, can serve as a basis for all kinds of process improvement actions, e.g., it may trigger process redesign or better control mechanisms.
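A first aggregation step in this spirit can be sketched as follows (the change-operation names and logs are invented for illustration, and this directly-follows count is only one standard discovery ingredient, not the paper's full technique): each adapted process instance contributes a sequence of change operations, and counting which change type directly follows which yields an aggregated view of the change behaviour.

```python
from collections import Counter

# Hypothetical change logs: one sequence of change operations per
# adapted process instance (names are illustrative, not from the paper).
change_logs = [
    ["insert_task", "move_task", "delete_task"],
    ["insert_task", "delete_task"],
    ["insert_task", "move_task", "move_task"],
]

# Aggregate a "change process": count how often one change type
# directly follows another across all instances.
follows = Counter(
    (a, b)
    for log in change_logs
    for a, b in zip(log, log[1:])
)
```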
Equilibria-based Probabilistic Model Checking for Concurrent Stochastic Games
Probabilistic model checking for stochastic games enables formal verification
of systems that comprise competing or collaborating entities operating in a
stochastic environment. Despite good progress in the area, existing approaches
focus on zero-sum goals and cannot reason about scenarios where entities are
endowed with different objectives. In this paper, we propose probabilistic
model checking techniques for concurrent stochastic games based on Nash
equilibria. We extend the temporal logic rPATL (probabilistic alternating-time
temporal logic with rewards) to allow reasoning about players with distinct
quantitative goals, which capture either the probability of an event occurring
or a reward measure. We present algorithms to synthesise strategies that are
subgame perfect social welfare optimal Nash equilibria, i.e., where there is no
incentive for any players to unilaterally change their strategy in any state of
the game, whilst the combined probabilities or rewards are maximised. We
implement our techniques in the PRISM-games tool and apply them to several case
studies, including network protocols and robot navigation, showing the benefits
compared to existing approaches.
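The notion of a social welfare optimal Nash equilibrium can be illustrated on a one-shot normal-form game (a stag hunt with invented payoffs; the paper itself works with subgame-perfect equilibria of concurrent *stochastic* games, which this sketch does not capture): among all profiles from which no player gains by deviating unilaterally, pick the one maximising the sum of payoffs.

```python
from itertools import product

# One-shot two-player stag hunt (illustrative payoffs).
# payoffs[(a1, a2)] = (utility of player 1, utility of player 2)
payoffs = {
    ("s", "s"): (4, 4),   # both hunt the stag
    ("s", "h"): (0, 3),
    ("h", "s"): (3, 0),
    ("h", "h"): (2, 2),   # both settle for hares
}
ACTIONS = ("s", "h")

def is_nash(profile):
    """No player can gain by unilaterally changing their action."""
    a1, a2 = profile
    u1, u2 = payoffs[profile]
    return (all(payoffs[(x, a2)][0] <= u1 for x in ACTIONS)
            and all(payoffs[(a1, y)][1] <= u2 for y in ACTIONS))

nash = [p for p in product(ACTIONS, ACTIONS) if is_nash(p)]
# Social welfare optimal NE: maximise the *sum* of payoffs among equilibria.
swne = max(nash, key=lambda p: sum(payoffs[p]))
```

The game has two equilibria, (s, s) and (h, h); social welfare optimality breaks the tie in favour of the profile with the larger combined payoff.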
Automated verification of concurrent stochastic games
We present automatic verification techniques for concurrent
stochastic multi-player games (CSGs) with rewards. To express properties
of such models, we adapt the temporal logic rPATL (probabilistic
alternating-time temporal logic with rewards), originally introduced for
the simpler model of turn-based games, which enables quantitative reasoning
about the ability of coalitions of players to achieve goals related to
the probability of an event or reward measures. We propose and implement
a modelling approach and model checking algorithms for property
verification and strategy synthesis of CSGs, as an extension of PRISM-games.
We evaluate the performance, scalability and applicability of our
techniques on case studies from domains such as security, networks and
finance, showing that we can analyse systems with probabilistic, cooperative
and competitive behaviour between concurrent components, including
many scenarios that cannot be analysed with turn-based models.
LNCS
Discrete-time Markov Chains (MCs) and Markov Decision Processes (MDPs) are two standard formalisms in system analysis. Their main associated quantitative objectives are hitting probabilities, discounted sum, and mean payoff. Although there are many techniques for computing these objectives in general MCs/MDPs, they have not been thoroughly studied in terms of parameterized algorithms, particularly when treewidth is used as the parameter. This is in sharp contrast to qualitative objectives for MCs, MDPs and graph games, for which treewidth-based algorithms yield significant complexity improvements. In this work, we show that treewidth can also be used to obtain faster algorithms for the quantitative problems. For an MC with n states and m transitions, we show that each of the classical quantitative objectives can be computed in O((n+m)⋅t²) time, given a tree decomposition of the MC with width t. Our results also imply a bound of O(κ⋅(n+m)⋅t²) for each objective on MDPs, where κ is the number of strategy-iteration refinements required for the given input and objective. Finally, we experimentally evaluate our new algorithms on low-treewidth MCs and MDPs obtained from the DaCapo benchmark suite. Our experiments show that on low-treewidth MCs and MDPs, our algorithms outperform existing well-established methods by one or more orders of magnitude.
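The hitting-probability objective itself can be sketched on a small chain (a gambler's-ruin example, invented for illustration; this is the classical baseline computation, not the treewidth-based speed-up the paper contributes): the hitting probability of a target is the fixed point of a linear system, approximated here by iteration.

```python
# Hitting probability of a target state in a small Markov chain.
# P[s] = list of (successor, probability); states 0 and 4 are absorbing.
P = {
    0: [(0, 1.0)],
    1: [(0, 0.5), (2, 0.5)],
    2: [(1, 0.5), (3, 0.5)],
    3: [(2, 0.5), (4, 0.5)],
    4: [(4, 1.0)],
}
TARGET = 4

# Iteratively solve h(s) = sum_t P(s, t) * h(t), with h(TARGET) = 1
# and h = 0 at the losing absorbing state.
h = {s: 1.0 if s == TARGET else 0.0 for s in P}
for _ in range(10_000):
    for s in P:
        if s not in (0, TARGET):
            h[s] = sum(pr * h[t] for t, pr in P[s])
```

For this symmetric chain the exact answer from state i is i/4, which the iteration recovers to numerical precision.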
Effects of the COVID-19 lockdowns on the management of coral restoration projects
Coral restoration initiatives are gaining significant momentum in a global effort to enhance the recovery of degraded coral reefs. However, the implementation and upkeep of coral nurseries are particularly demanding, so that unforeseen breaks in maintenance operations might jeopardize well-established projects. In the last 2 years, the COVID-19 pandemic has resulted in a temporary yet prolonged abandonment of several coral gardening infrastructures worldwide, including remote localities. Here we provide a first assessment of the potential impacts of monitoring and maintenance breakdown in a suite of coral restoration projects (based on floating rope nurseries) in Colombia, Seychelles, and Maldives. Our study comprises nine nurseries from six locations, hosting a total of 3,554 fragments belonging to three coral genera, that were left unsupervised for a period spanning from 29 to 61 weeks. Floating nursery structures experienced various levels of damage, and total fragment survival ranged from 40 to 95% among projects, with Pocillopora showing the highest survival rate at all locations where it was present. Overall, our study shows that, under certain conditions, abandoned coral nurseries can remain functional for several months without suffering critical failure from biofouling and hydrodynamic stress. Still, even where gardening infrastructures were only marginally affected, the unavoidable interruptions in data collection have slowed down ongoing project progress, diminishing previous investments and reducing future funding opportunities. These results highlight the need to increase the resilience and self-sufficiency of coral restoration projects, so that the next global lockdown will not further shrink the increasing efforts to prevent coral reefs from disappearing.
Accelerated Model Checking of Parametric Markov Chains
Parametric Markov chains occur quite naturally in various applications: they
can be used for a conservative analysis of probabilistic systems (no matter how
the parameter is chosen, the system works to specification); they can be used
to find optimal settings for a parameter; they can be used to visualise the
influence of system parameters; and they can be used to make it easy to adjust
the analysis for the case that parameters change. Unfortunately, these
advancements come at a cost: parametric model checking is---or rather
was---often slow. To make the analysis of parametric Markov models scale, we
need three ingredients: clever algorithms, the right data structure, and good
engineering. Clever algorithms are often the main (or sole) selling point; this
paper instead focuses on the latter two ingredients of efficient model
checking. Consequently, our easiest claim to fame is the speed-up we have often
realised when comparing to the state of the art.
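The conservative-analysis use case mentioned above can be sketched on a toy parametric chain (the model and its probabilities are invented for illustration; real tools compute the rational function by state elimination over arbitrary models rather than by hand): from s0 we reach the goal with probability p, fail with probability (1-p)·0.4, and retry with probability (1-p)·0.6, so eliminating the self-loop gives reachability as a rational function of p that can then be checked over the whole parameter range.

```python
# Toy parametric Markov chain (illustrative, not from the paper).
FAIL_GIVEN_MISS = 0.4   # probability of failing, given the goal was missed

def reach_symbolic(p):
    """Closed-form reachability: eliminate the self-loop on s0.

    p / (1 - (1-p)*0.6) = p / (p + (1-p)*0.4)
    """
    return p / (1 - (1 - p) * (1 - FAIL_GIVEN_MISS))

def reach_numeric(p, iters=10_000):
    """Sanity check: plain value iteration on the instantiated chain."""
    v = 0.0
    for _ in range(iters):
        v = p + (1 - p) * (1 - FAIL_GIVEN_MISS) * v
    return v

# Conservative analysis: the worst-case reachability over a parameter grid
# (no matter how p is chosen in [0.01, 0.99], the system achieves at least this).
worst = min(reach_symbolic(p / 100) for p in range(1, 100))
```

Once the rational function is in hand, re-checking the model for a new parameter value is a cheap evaluation rather than a fresh model-checking run.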
Value Iteration for Simple Stochastic Games: Stopping Criterion and Learning Algorithm
Simple stochastic games can be solved by value iteration (VI), which yields a
sequence of under-approximations of the value of the game. This sequence is
guaranteed to converge to the value only in the limit. Since no stopping
criterion is known, this technique does not provide any guarantees on its
results. We provide the first stopping criterion for VI on simple stochastic
games. It is achieved by additionally computing a convergent sequence of
over-approximations of the value, relying on an analysis of the game graph.
Consequently, VI becomes an anytime algorithm returning the approximation of
the value and the current error bound. As another consequence, we can provide a
simulation-based asynchronous VI algorithm, which yields the same guarantees,
but without necessarily exploring the whole game graph.
Comment: CAV 2018.
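The anytime flavour of such bounded value iteration can be sketched on a tiny simple stochastic game (the model and numbers are invented; crucially, the paper's contribution is the graph analysis that makes the upper bound sound for arbitrary games with end components, which this small loop-free example does not need): iterate a lower bound from 0 and an upper bound from 1 until their gap is below a chosen epsilon.

```python
# Interval ("bounded") value iteration on a tiny simple stochastic game.
# SUCC maps a player state to its actions, each a list of (successor, prob).
SUCC = {
    "max": [[("min", 1.0)], [("win", 0.5), ("lose", 0.5)]],
    "min": [[("win", 0.7), ("lose", 0.3)], [("max", 0.2), ("lose", 0.8)]],
}
VALUES = {"win": 1.0, "lose": 0.0}    # absorbing states with known value

def bellman(v):
    """One synchronous Bellman update: max player maximises, min minimises."""
    new = dict(v)
    new["max"] = max(sum(p * v[t] for t, p in act) for act in SUCC["max"])
    new["min"] = min(sum(p * v[t] for t, p in act) for act in SUCC["min"])
    return new

def bounded_vi(eps=1e-6):
    """Iterate lower and upper bounds until their gap is below eps."""
    lo = {"max": 0.0, "min": 0.0, **VALUES}
    hi = {"max": 1.0, "min": 1.0, **VALUES}
    while max(hi[s] - lo[s] for s in ("max", "min")) > eps:
        lo, hi = bellman(lo), bellman(hi)
    return lo, hi
```

At any point during the loop, [lo, hi] brackets the true value of the game, which is exactly what turns value iteration into an anytime algorithm with an error bound.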