Diagnosis, synthesis and analysis of probabilistic models
This dissertation considers three important aspects of model checking Markov models: diagnosis (generating counterexamples), synthesis (providing valid parameter values) and analysis (verifying linear real-time properties). The three aspects are relatively independent, while all contribute to developing new theory and algorithms in the research field of probabilistic model checking.
We start by introducing a formal definition of counterexamples in the setting of probabilistic model checking. We transform the problem of finding informative counterexamples into shortest path problems, and explore and provide a framework for generating such counterexamples. We then investigate a more compact representation of counterexamples by regular expressions; heuristic-based algorithms are applied to obtain short regular expression counterexamples. At the end of this part, we extend the definition and the counterexample generation algorithms to various combinations of probabilistic models and logics.
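The reduction of counterexample generation to shortest path problems can be illustrated with a minimal sketch (names and data structures are hypothetical, not taken from the dissertation): in a discrete-time Markov chain, the most probable path into a target set is the path minimizing the sum of -log transition probabilities, so Dijkstra's algorithm finds it.

```python
import heapq
import math

def most_probable_path(P, init, targets):
    """Find the single most probable path from `init` to a target state
    in a discrete-time Markov chain by running Dijkstra on edge weights
    -log(p): maximizing a product of probabilities is the same as
    minimizing the sum of their negative logarithms.

    P: dict mapping state -> list of (successor, probability) pairs.
    Returns (probability, path) or (0.0, None) if no target is reachable.
    """
    dist = {init: 0.0}
    prev = {init: None}
    heap = [(0.0, init)]
    while heap:
        d, s = heapq.heappop(heap)
        if d > dist.get(s, math.inf):
            continue  # stale heap entry
        if s in targets:
            path = []
            while s is not None:  # reconstruct the path backwards
                path.append(s)
                s = prev[s]
            return math.exp(-d), path[::-1]
        for t, p in P.get(s, []):
            nd = d - math.log(p)
            if nd < dist.get(t, math.inf):
                dist[t] = nd
                prev[t] = s
                heapq.heappush(heap, (nd, t))
    return 0.0, None

# Toy chain: s0 -> s1 with 0.6, s0 -> bad with 0.4, s1 -> bad with 1.0.
P = {"s0": [("s1", 0.6), ("bad", 0.4)], "s1": [("bad", 1.0)]}
prob, path = most_probable_path(P, "s0", {"bad"})
```

Here the indirect path s0, s1, bad carries probability 0.6 and beats the direct 0.4 edge; a full counterexample would collect enough such paths to exceed the probability bound of the violated property.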
We move on to the problem of synthesizing parameter values for parametric continuous-time Markov chains (pCTMCs) with respect to time-bounded reachability specifications. The rates in the pCTMCs are expressed by polynomials over the reals with parameters, and the main question is to find all the parameter values (forming a synthesis region) for which the specification is satisfied. We first present a symbolic approach in which intersection points are computed by solving polynomial equations and then connected to approximate the synthesis region. An alternative non-symbolic approach based on interval arithmetic, in which the pCTMCs are instantiated, is also investigated. The error bound and time complexity as well as some experimental results are provided, followed by a detailed comparison of the two approaches.
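The instantiation idea behind the non-symbolic approach can be sketched with a toy model (everything here is a hypothetical illustration: the dissertation's method uses interval arithmetic with certified error bounds, not plain grid sampling). For a two-state pCTMC with a single transition whose rate is a polynomial r(x) in the parameter, the time-bounded reachability probability is 1 - exp(-r(x)·T), and the synthesis region is the set of sampled x meeting the threshold.

```python
import math

def synthesize_region(rate_poly, T, p_min, param_range, n=1000):
    """Grid-based sketch of non-symbolic synthesis: instantiate the
    parametric model at sampled parameter values and keep those where
    the time-bounded reachability probability meets the threshold.

    The 'model' is a hypothetical two-state pCTMC s -> goal with rate
    rate_poly(x), so Pr(reach goal within T) = 1 - exp(-rate_poly(x)*T).
    Returns the sampled parameter values satisfying the specification.
    """
    lo, hi = param_range
    region = []
    for i in range(n + 1):
        x = lo + (hi - lo) * i / n
        prob = 1.0 - math.exp(-rate_poly(x) * T)
        if prob >= p_min:
            region.append(x)
    return region

# Rate polynomial r(x) = x^2 + 0.1; ask for probability >= 0.5 by T = 1.
region = synthesize_region(lambda x: x * x + 0.1, T=1.0,
                           p_min=0.5, param_range=(0.0, 2.0))
```

The analytic boundary here is x = sqrt(ln 2 - 0.1) ≈ 0.770, so the returned region is roughly [0.772, 2.0] at this grid resolution; interval arithmetic replaces the point samples with parameter boxes and bounds the probability over each whole box.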
In the last part, we focus on verifying CTMCs against linear real-time properties specified by deterministic timed automata (DTAs). The model checking problem aims at computing the probability of the set of paths in a CTMC C that are accepted by a DTA A, denoted Paths^C(A). We consider DTAs with reachability (finite, DTA♦) and Muller (infinite, DTAω) acceptance conditions, respectively. It is shown that Paths^C(A) is measurable, and that computing its probability for DTA♦ can be reduced to computing the reachability probability in a piecewise deterministic Markov process (PDP). The reachability probability is characterized as the least solution of a system of integral equations and is shown to be approximable by solving a system of PDEs. Furthermore, we show that the special case of single-clock DTA♦ can be simplified to solving a system of linear equations. We also deal with DTAω specifications, where the problem is proven to be reducible to the reachability problem as in the DTA♦ case.
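The "system of linear equations" flavour of the single-clock case can be sketched on the simplest instance of such a system (a hypothetical toy, not the dissertation's actual construction, which goes through the region graph of the DTA): unbounded reachability probabilities in a finite Markov chain satisfy x_s = Σ_t P(s,t)·x_t with x_s = 1 at targets, a linear system solvable by Gaussian elimination.

```python
def reachability_probs(P, states, targets):
    """Solve the linear system for unbounded reachability probabilities:
    x_s = sum_t P[s][t] * x_t for non-target s, and x_s = 1 for targets.
    Plain Gaussian elimination with partial pivoting; P maps each state
    to a dict of successor -> probability.
    """
    n = len(states)
    idx = {s: i for i, s in enumerate(states)}
    # Assemble (I - P')x = b, where rows of target states are fixed to 1.
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for s in states:
        i = idx[s]
        A[i][i] = 1.0
        if s in targets:
            b[i] = 1.0
        else:
            for t, p in P.get(s, {}).items():
                A[i][idx[t]] -= p
    # Forward elimination with partial pivoting.
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    # Back substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return dict(zip(states, x))

# s0 reaches goal with 0.3, falls into sink with 0.2, self-loops with 0.5.
P = {"s0": {"goal": 0.3, "sink": 0.2, "s0": 0.5}}
probs = reachability_probs(P, ["s0", "goal", "sink"], {"goal"})
```

Solving x_s0 = 0.3 + 0.5·x_s0 gives x_s0 = 0.6; in the single-clock reduction the unknowns range over product states of the CTMC and the DTA's clock regions instead of plain CTMC states.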
Causality and Temporal Dependencies in the Design of Fault Management Systems
Reasoning about causes and effects naturally arises in the engineering of
safety-critical systems. A classical example is Fault Tree Analysis, a
deductive technique used for system safety assessment, whereby an undesired
state is reduced to the set of its immediate causes. The design of fault
management systems also requires reasoning on causality relationships. In
particular, a fail-operational system needs to ensure timely detection and
identification of faults, i.e. recognize the occurrence of run-time faults
through their observable effects on the system. Even more complex scenarios
arise when multiple faults are involved and may interact in subtle ways.
In this work, we propose a formal approach to fault management for complex
systems. We first introduce the notions of fault tree and minimal cut sets. We
then present a formal framework for the specification and analysis of
diagnosability, and for the design of fault detection and identification (FDI)
components. Finally, we review recent advances in fault propagation analysis,
based on the Timed Failure Propagation Graphs (TFPG) formalism.

Comment: In Proceedings CREST 2017, arXiv:1710.0277
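The notion of minimal cut sets mentioned above admits a compact sketch (the encoding and function names are illustrative assumptions, not the paper's tooling): a fault tree is a term of AND/OR gates over basic events, and its minimal cut sets can be computed bottom-up, taking unions across AND gates and set-union across OR gates, then discarding non-minimal sets.

```python
from itertools import product

def cut_sets(node):
    """Bottom-up minimal cut set computation for a fault tree.
    A node is either a basic event (a string) or a gate given as
    ('AND'|'OR', [children]). Returns a set of frozensets of basic
    events; each frozenset is a minimal cut set for the node's event.
    """
    if isinstance(node, str):
        return {frozenset([node])}
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":
        # Any child's cut set already causes the event.
        sets = set().union(*child_sets)
    else:
        # AND: one cut set from every child must occur together.
        sets = {frozenset().union(*combo) for combo in product(*child_sets)}
    # Minimize: drop any cut set that strictly contains another.
    return {s for s in sets if not any(t < s for t in sets)}

# Top event occurs if both A and B fail, or if C fails alone.
tree = ("OR", [("AND", ["A", "B"]), "C"])
mcs = cut_sets(tree)  # {frozenset({'A', 'B'}), frozenset({'C'})}
```

The minimization step matters: for a tree like OR(A, AND(A, B)) the candidate {A, B} is discarded because {A} alone already causes the top event.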
Probabilistic Dynamic Logic of Phenomena and Cognition
The purpose of this paper is to develop further the main concepts of Phenomena Dynamic Logic (P-DL) and Cognitive Dynamic Logic (C-DL), presented in the previous paper. The specific character of these logics lies in matching the vagueness or fuzziness of similarity measures to the uncertainty of models. These logics are based on the following fundamental notions: the generality relation, the uncertainty relation, the simplicity relation, the similarity maximization problem with empirical content, and the enhancement (learning) operator. We develop these notions in terms of logic and probability, and develop a Probabilistic Dynamic Logic of Phenomena and Cognition (P-DL-PC) that relates to the scope of probabilistic models of the brain. In our research, the effectiveness of the suggested formalization is demonstrated by approximating the expert model of breast cancer diagnostic decisions. The P-DL-PC logic was previously successfully applied to solving many practical tasks and also to modelling some cognitive processes.

Comment: 6 pages, WCCI 2010 IEEE World Congress on Computational Intelligence, July 18-23, 2010, CCIB, Barcelona, Spain, IJCNN, IEEE Catalog Number: CFP1OUS-DVD, ISBN: 978-1-4244-6917-8, pp. 3361-336
Memory-Efficient Topic Modeling
As one of the simplest probabilistic topic modeling techniques, latent Dirichlet allocation (LDA) has found many important applications in text mining, computer vision and computational biology. Recent training algorithms for LDA can be interpreted within a unified message passing framework. However, message passing requires storing previous messages, using an amount of memory that increases linearly with the number of documents or the number of topics. The high memory usage is therefore often a major problem for topic modeling of massive corpora containing a large number of topics. To reduce the space complexity, we propose a novel algorithm for training LDA that does not store previous messages: tiny belief propagation (TBP). The basic idea of TBP is to relate the message passing algorithms to non-negative matrix factorization (NMF) algorithms, which absorb the message updates into the message passing process and thus avoid storing previous messages. Experimental results on four large data sets confirm that TBP performs comparably to or even better than current state-of-the-art training algorithms for LDA, but with much lower memory consumption. TBP can do topic modeling when massive corpora cannot fit in the computer memory, for example, extracting thematic topics from 7 GB PUBMED corpora on a common desktop computer with 2 GB memory.

Comment: 20 pages, 7 figure
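The space-saving idea behind TBP, folding message updates into a factorization so that only the factor matrices are kept, can be illustrated with a generic multiplicative-update NMF (this is a sketch of the LDA-NMF connection only; TBP's actual update rules differ in detail, and the data below is invented).

```python
import numpy as np

def nmf_topics(X, k, iters=200, seed=0):
    """Multiplicative-update NMF X ≈ W H on a word-by-document count
    matrix. W (words x topics) and H (topics x docs) play roles
    analogous to LDA's topic-word and document-topic factors. Only W
    and H live in memory, never per-edge messages, which is the memory
    saving the message-passing-as-NMF view buys.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        # Lee-Seung multiplicative updates for the Frobenius objective.
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy corpus with two obvious topics: words 0-1 occur in docs 0-1,
# words 2-3 occur in docs 2-3.
X = np.array([[2, 2, 0, 0],
              [2, 2, 0, 0],
              [0, 0, 3, 3],
              [0, 0, 3, 3]], dtype=float)
W, H = nmf_topics(X, k=2)
err = np.linalg.norm(X - W @ H)
```

Because X has nonnegative rank 2, the two recovered topic columns of W separate the two word blocks and the reconstruction error drops close to zero; on real corpora W and H are the only large objects held in memory, regardless of how many messages a belief-propagation view would nominally pass.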