On Frequency LTL in Probabilistic Systems
We study frequency linear-time temporal logic (fLTL), which extends
linear-time temporal logic (LTL) with a path operator expressing that, on
a path, a certain formula holds with at least a given frequency p, thus relaxing
the semantics of the usual G operator of LTL. Such a logic is particularly useful
in probabilistic systems, where some undesirable events, such as random failures,
may occur and are acceptable if they are rare enough.
Frequency-related extensions of LTL have been studied previously by several
authors; mostly, the logic is equipped with extended "until" and
"globally" operators, which leads to undecidability of most interesting problems.
For the variant we study, we are able to establish fundamental decidability
results. We show that for Markov chains, the problem of computing the
probability with which a given fLTL formula holds has the same complexity as
the analogous problem for LTL. We also show that for Markov decision processes
the problem becomes more delicate; however, when the frequency bound is
restricted to 1 and negations are not allowed outside any operator, we can
compute the maximum probability of satisfying the fLTL formula, again with the
same time complexity as for ordinary LTL formulas.
Comment: A paper presented at CONCUR 2015, with appendix
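The frequency-globally semantics described above can be illustrated on a finite prefix of a path. The sketch below is ours, not the paper's (the names `frequency` and `holds_G_freq` are illustrative, and the paper's actual semantics takes a limit over an infinite path rather than an empirical average over a prefix):

```python
from fractions import Fraction

def frequency(satisfied):
    """Empirical frequency with which a subformula holds on a finite prefix.

    satisfied[i] tells whether the subformula holds at position i of the path.
    """
    return Fraction(sum(satisfied), len(satisfied))

def holds_G_freq(satisfied, p):
    """Finite-prefix approximation of the frequency-globally operator G^{>=p}:
    true iff the subformula holds with frequency at least p on the prefix.
    (On infinite paths, the semantics uses a limit of such frequencies.)
    """
    return frequency(satisfied) >= p

# A run where a failure (False) occurs once every five steps still
# satisfies G^{>=4/5} "no failure":
run = [True, True, True, True, False] * 4
print(holds_G_freq(run, Fraction(4, 5)))  # True
```

This captures the abstract's motivating point: rare failures are tolerated, whereas the ordinary G operator would reject the run outright.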
Conditional Value-at-Risk for Reachability and Mean Payoff in Markov Decision Processes
We present the conditional value-at-risk (CVaR) in the context of Markov
chains and Markov decision processes with reachability and mean-payoff
objectives. CVaR quantifies risk by means of the expectation of the worst
p-quantile. As such it can be used to design risk-averse systems. We consider
not only CVaR constraints, but also introduce their conjunction with
expectation constraints and quantile constraints (value-at-risk, VaR). We
derive lower and upper bounds on the computational complexity of the respective
decision problems and characterize the structure of the strategies in terms of
memory and randomization.
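The CVaR notion used above can be sketched for a finite, uniformly distributed sample of payoffs. The function and its names are our illustrative assumptions (the paper works with the payoff distributions induced by strategies in MDPs, not with samples):

```python
def var_cvar(payoffs, p):
    """Value-at-risk and conditional value-at-risk at level p for a finite
    uniform sample of payoffs.

    VaR_p  = the p-quantile: the worst value v such that the payoff falls
             below v with probability at most p.
    CVaR_p = the expectation of the worst p-fraction of outcomes.
    """
    xs = sorted(payoffs)
    k = max(1, int(p * len(xs)))  # size of the worst p-tail
    tail = xs[:k]
    return tail[-1], sum(tail) / len(tail)

# 10 equally likely payoffs; the worst 20% of outcomes are {0, 2}:
var, cvar = var_cvar([0, 2, 5, 5, 6, 6, 7, 8, 9, 10], 0.2)
print(var, cvar)  # 2 1.0
```

Because CVaR averages over the whole tail rather than reading off a single quantile, it is sensitive to how bad the worst outcomes are, which is what makes it suitable for designing risk-averse systems.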
Trading Performance for Stability in Markov Decision Processes
We study the complexity of central controller synthesis problems for
finite-state Markov decision processes, where the objective is to optimize both
the expected mean-payoff performance of the system and its stability.
We argue that the basic theoretical notion of expressing the stability in
terms of the variance of the mean-payoff (called global variance in our paper)
is not always sufficient, since it ignores possible instabilities on individual
runs. For this reason we propose alternative definitions of stability, which we
call local and hybrid variance, and which express how rewards on each run
deviate from the run's own mean-payoff and from the expected mean-payoff,
respectively.
We show that a strategy ensuring both the expected mean-payoff and the
variance below given bounds requires randomization and memory, under all the
above semantics of variance. We then look at the problem of determining whether
there is such a strategy. For the global variance, we show that the problem
is in PSPACE, and that the answer can be approximated in pseudo-polynomial
time. For the hybrid variance, the analogous decision problem is in NP, and a
polynomial-time approximating algorithm also exists. For local variance, we
show that the decision problem is in NP. Since the overall performance can be
traded for stability (and vice versa), we also present algorithms for
approximating the associated Pareto curve in all three cases.
Finally, we study a special case of the decision problems, where we require a
given expected mean-payoff together with zero variance. Here we show that the
problems can all be solved in polynomial time.
Comment: Extended version of a paper presented at LICS 2013
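The three variance notions can be sketched for a finite set of equally likely finite runs. This is our illustrative approximation (the paper's definitions use limit averages over infinite runs):

```python
def mean(xs):
    return sum(xs) / len(xs)

def stability_variances(runs):
    """Global, local, and hybrid variance for equally likely finite runs,
    where each run is a sequence of rewards."""
    mps = [mean(r) for r in runs]  # mean payoff of each run
    emp = mean(mps)                # expected mean payoff
    # Global variance: variance of the mean payoff across runs.
    global_var = mean([(mp - emp) ** 2 for mp in mps])
    # Local variance: on-run deviations, each run measured against its
    # own mean payoff.
    local_var = mean([mean([(x - mp) ** 2 for x in r])
                      for r, mp in zip(runs, mps)])
    # Hybrid variance: on-run deviations measured against the expected
    # mean payoff.
    hybrid_var = mean([mean([(x - emp) ** 2 for x in r]) for r in runs])
    return global_var, local_var, hybrid_var

# Two runs with the same mean payoff 1 but different on-run behaviour:
g, l, h = stability_variances([[1, 1, 1, 1], [0, 2, 0, 2]])
print(g, l, h)  # 0.0 0.5 0.5
```

The example shows the point made in the abstract: the global variance is 0 (every run has the same mean payoff), yet the oscillating run is locally unstable, which only the local and hybrid notions detect.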
Game Characterization of Probabilistic Bisimilarity, and Applications to Pushdown Automata
We study the bisimilarity problem for probabilistic pushdown automata (pPDA)
and subclasses thereof. Our definition of pPDA allows both probabilistic and
non-deterministic branching, generalising the classical notion of pushdown
automata (without epsilon-transitions). We first show a general
characterization of probabilistic bisimilarity in terms of two-player games,
which naturally reduces checking bisimilarity of probabilistic labelled
transition systems to checking bisimilarity of standard (non-deterministic)
labelled transition systems. This reduction can be easily implemented in the
framework of pPDA, allowing us to use known results for standard
(non-probabilistic) PDA and their subclasses. A direct use of the reduction
incurs an exponential increase of complexity, which does not matter in deriving
decidability of bisimilarity for pPDA due to the non-elementary complexity of
the problem. In the cases of probabilistic one-counter automata (pOCA), of
probabilistic visibly pushdown automata (pvPDA), and of probabilistic basic
process algebras (i.e., single-state pPDA) we show that an implicit use of the
reduction can avoid the complexity increase; we thus get PSPACE, EXPTIME, and
2-EXPTIME upper bounds, respectively, like for the respective non-probabilistic
versions. The bisimilarity problems for OCA and vPDA are known to have matching
lower bounds (thus being PSPACE-complete and EXPTIME-complete, respectively);
we show that these lower bounds also hold for fully probabilistic versions that
do not use non-determinism.
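For intuition about the notion being decided, probabilistic bisimilarity can be computed directly on a *finite*, fully probabilistic labelled transition system by partition refinement. The sketch below is ours and only illustrates the equivalence; the paper's contribution concerns infinite-state pPDA, where no such direct algorithm applies:

```python
def prob_bisim_partition(labels, trans):
    """Coarsest probabilistic bisimulation on a finite, fully probabilistic
    labelled transition system.

    labels[s]   : observable label of state s
    trans[s][a] : dict mapping successors to probabilities, per action a
    Returns a map from states to block ids: two states share an id iff they
    have the same label and, for every action, send the same probability
    mass into each block.
    """
    states = sorted(labels)
    # Initial partition: group states by label.
    ids = {lab: i for i, lab in enumerate(sorted(set(labels.values())))}
    block = {s: ids[labels[s]] for s in states}
    while True:
        sigs = {}
        for s in states:
            sig = [block[s]]
            for a in sorted(trans.get(s, {})):
                mass = {}
                for t, pr in trans[s][a].items():
                    mass[block[t]] = mass.get(block[t], 0.0) + pr
                sig.append((a, tuple(sorted(mass.items()))))
            sigs[s] = tuple(sig)
        renum = {sig: i for i, sig in enumerate(sorted(set(sigs.values())))}
        new_block = {s: renum[sigs[s]] for s in states}
        # Refinement only ever splits blocks; stop once no block splits.
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block
        block = new_block

# u and v loop on themselves and are bisimilar; w can move to a
# differently labelled state and is not:
labels = {"u": "x", "v": "x", "w": "x", "z": "y"}
trans = {"u": {"a": {"u": 1.0}},
         "v": {"a": {"v": 1.0}},
         "w": {"a": {"w": 0.5, "z": 0.5}},
         "z": {}}
part = prob_bisim_partition(labels, trans)
print(part["u"] == part["v"], part["u"] == part["w"])  # True False
```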
Markov Decision Processes with Multiple Long-run Average Objectives
We study Markov decision processes (MDPs) with multiple limit-average (or
mean-payoff) functions. We consider two different objectives, namely,
expectation and satisfaction objectives. Given an MDP with k limit-average
functions, in the expectation objective the goal is to maximize the expected
limit-average value, and in the satisfaction objective the goal is to maximize
the probability of runs such that the limit-average value stays above a given
vector. We show that under the expectation objective, in contrast to the case
of one limit-average function, both randomization and memory are necessary for
strategies even for epsilon-approximation, and that finite-memory randomized
strategies are sufficient for achieving Pareto optimal values. Under the
satisfaction objective, in contrast to the case of one limit-average function,
infinite memory is necessary for strategies achieving a specific value (i.e.
randomized finite-memory strategies are not sufficient), whereas memoryless
randomized strategies are sufficient for epsilon-approximation, for all
epsilon>0. We further prove that the decision problems for both expectation and
satisfaction objectives can be solved in polynomial time and the trade-off
curve (Pareto curve) can be epsilon-approximated in time polynomial in the size
of the MDP and 1/epsilon, and exponential in the number of limit-average
functions, for all epsilon>0. Our analysis also reveals flaws in previous work
for MDPs with multiple mean-payoff functions under the expectation objective,
corrects the flaws, and allows us to obtain improved results.
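The limit-average value vector that both objectives refer to can be sketched for an ultimately periodic run, where it reduces to the average over the cycle. This is an illustration of the definition under that assumption, not of the paper's algorithms:

```python
def limit_average(prefix, cycle):
    """Limit-average (mean-payoff) vector of the ultimately periodic run
    prefix . cycle^omega for k reward functions. The finite prefix does not
    affect the limit; only the average over the cycle matters. Both
    arguments are lists of k-dimensional reward vectors.
    """
    k = len(cycle[0])
    return tuple(sum(step[i] for step in cycle) / len(cycle)
                 for i in range(k))

# A run alternating between a state yielding reward (1, 0) and one
# yielding (0, 1) achieves the limit-average vector (0.5, 0.5) -- a point
# that no run staying forever in a single one of those states achieves,
# hinting at why memory/randomization is needed for Pareto optimal values.
print(limit_average([(5, 5)], [(1, 0), (0, 1)]))  # (0.5, 0.5)
```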
Solvency Markov Decision Processes with Interest
Solvency games, introduced by Berger et al., provide an abstract framework for modelling the decisions of a risk-averse investor whose goal is to avoid ever going broke. We study a new variant of this model where, in addition to the stochastic environment and fixed increments and decrements to the investor's wealth, we introduce
interest, which is earned or paid on the current level of savings or debt, respectively.
We study problems related to the minimum initial wealth sufficient to avoid bankruptcy (i.e. steady decrease of the wealth) with probability at least p. We present an exponential time algorithm which approximates this minimum initial wealth, and show that a polynomial time approximation is not possible unless P=NP.
For the qualitative case, i.e. p=1, we show that the problem of whether a given number is larger than or equal to the minimum initial wealth belongs to NP ∩ coNP, and show that a polynomial time algorithm would yield a polynomial time algorithm for mean-payoff games, whose existence is a longstanding open problem. We also identify some classes of solvency MDPs for which this problem is in P. In all of the above cases the algorithms also yield the corresponding bankruptcy-avoiding strategies.
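The wealth dynamics with interest can be sketched along a single trajectory. The update order (interest applied to the current wealth, then the fixed increment or decrement) and the bankruptcy threshold used below are assumptions of this sketch, not taken from the paper:

```python
def step_wealth(wealth, delta, interest_rate):
    """One step of the sketched dynamics: interest is earned on savings
    (wealth >= 0) or paid on debt (wealth < 0), then a fixed increment or
    decrement `delta` is applied."""
    return wealth * (1 + interest_rate) + delta

def is_bankrupt_trajectory(initial, deltas, interest_rate, floor):
    """Follow a finite sequence of increments/decrements and report whether
    the wealth ever falls below the threshold `floor`."""
    w = initial
    for d in deltas:
        w = step_wealth(w, d, interest_rate)
        if w < floor:
            return True
    return False

# With 10% interest, debt compounds: starting at -10 and merely breaking
# even on increments (+1/-1 alternating) still drives the wealth below -20.
print(is_bankrupt_trajectory(-10, [1, -1] * 40, 0.10, -20))  # True
```

This illustrates why interest changes the model qualitatively: debt grows on its own, so the minimum safe initial wealth depends on the interest rate and not only on the increments.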
Construction and characterization of a genomic BAC library for the Mus m. musculus mouse subspecies (PWD/Ph inbred strain)
BACKGROUND: The genome of classical laboratory strains of mice is an artificial mosaic of genomes originating from several mouse subspecies, with predominant representation (>90%) of the Mus m. domesticus component. Mice of another subspecies, the East European/Asian Mus m. musculus, can interbreed with the classical laboratory strains to generate hybrids with unprecedented phenotypic and genotypic variations. To study these variations in depth we prepared the first genomic large-insert BAC library from an inbred strain derived purely from the Mus m. musculus subspecies. The library will be used to seek and characterize genomic sequences controlling specific monogenic and polygenic complex traits, including modifiers of dominant and recessive mutations.
RESULTS: A representative mouse genomic BAC library was derived from a female mouse of the PWD/Ph inbred strain of the Mus m. musculus subspecies. The library consists of 144,768 primary clones, of which 97% contain an insert of 120 kb average size. The library represents an equivalent of 6.7× the mouse haploid genome, as estimated from the total number of clones carrying genomic DNA inserts and from the average insert size. The clones were arrayed in duplicate onto eight high-density membranes that were screened with seven single-copy gene probes. The individual probes identified four to eleven positive clones, corresponding to 6.9-fold coverage of the mouse genome. Eighty-seven BAC ends of PWD/Ph clones were sequenced, edited, and aligned with the mouse C57BL/6J (B6) genome. Seventy-three BAC ends displayed unique hits on the B6 genome, and their alignment revealed 0.92 single nucleotide polymorphisms (SNPs) per 100 bp. Insertions and deletions represented 0.3% of the BAC end sequences.
CONCLUSION: Analysis of the novel genomic library for the PWD/Ph inbred strain demonstrated coverage of almost seven mouse genome equivalents and a capability to recover clones for specific regions of the PWD/Ph genome.
The single nucleotide polymorphism rate between the strains PWD/Ph and C57BL/6J was 0.92/100 bp, a value significantly higher than between classical laboratory strains. The library will serve as a resource for dissecting the phenotypic and genotypic variations between mice of the Mus m. musculus subspecies and classical laboratory mouse strains.
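The 6.7× genome-coverage figure follows directly from the abstract's numbers. The mouse haploid genome size used below (~2.5 Gb) is an assumption of this check, not stated in the abstract:

```python
# Reproducing the library's genome-coverage estimate from the figures above.
clones = 144_768          # primary clones in the library
insert_fraction = 0.97    # fraction of clones carrying an insert
insert_size = 120_000     # average insert size in bp
genome_size = 2.5e9       # approximate mouse haploid genome in bp (assumed)

coverage = clones * insert_fraction * insert_size / genome_size
print(round(coverage, 1))  # 6.7
```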
Segmental Trisomy of Mouse Chromosome 17: Introducing an Alternative Model of Down's Syndrome
All of the mouse models of human trisomy 21 syndrome that have been studied so far are based on segmental trisomies, encompassing, to a varying extent, distal
chromosome 16. Their comparison with one or more unrelated and non-overlapping
segmental trisomies may help to distinguish the effects of specific triplicated genes
from the phenotypes caused by less specific developmental instability mechanisms. In
this paper, the Ts43H segmental trisomy of mouse chromosome 17 is presented as such
an alternative model. The trisomy stretches over 32.5 Mb of proximal chromosome
17 and includes 486 genes. The triplicated interval carries seven blocks of synteny
with five human chromosomes. The block syntenic to human chromosome 21 contains
20 genes.
Efficient computation of exact solutions for quantitative model checking
Quantitative model checkers for Markov Decision Processes typically use
finite-precision arithmetic. If all the coefficients in the process are
rational numbers, then the model checking results are rational, and so they can
be computed exactly. However, exact techniques are generally too expensive or
limited in scalability. In this paper we propose a method for obtaining exact
results starting from an approximated solution in finite-precision arithmetic.
The input of the method is a description of a scheduler, which can be obtained
by a model checker using finite precision. Given a scheduler, we show how to
obtain a corresponding basis in a linear-programming problem, in such a way
that the basis is optimal whenever the scheduler attains the worst-case
probability. This correspondence is already known for discounted MDPs; we show
how to apply it in the undiscounted case, provided that some preprocessing is
done. Using the correspondence, the linear-programming problem can be solved in
exact arithmetic starting from the basis obtained. As a consequence, the method
finds the worst-case probability even if the scheduler provided by the model
checker was not optimal. In our experiments, the calculation of exact solutions
from a candidate scheduler is significantly faster than the calculation using
the simplex method under exact arithmetic starting from a default basis.
Comment: In Proceedings QAPL 2012, arXiv:1207.055
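The core idea of extracting exact values once a scheduler is fixed can be sketched in rational arithmetic: under a (memoryless) scheduler, an MDP collapses to a Markov chain, and reachability probabilities solve a linear system exactly. The sketch below is ours; it assumes every state reaches a target or a losing state with positive probability (the kind of preprocessing the abstract alludes to), so the system is nonsingular:

```python
from fractions import Fraction

def exact_reachability(chain, targets, losing=frozenset()):
    """Exact reachability probabilities in the Markov chain induced by a
    scheduler. `chain[s]` is a list of (probability, successor) pairs.
    Solves x_s = sum_t P(s,t) * x_t with x = 1 on targets and x = 0 on
    losing states, by Gaussian elimination over Fractions.
    """
    states = [s for s in chain if s not in targets and s not in losing]
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    # Build (I - P) x = b over the remaining states.
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(0)] * n
    for s in states:
        i = idx[s]
        A[i][i] = Fraction(1)
        for pr, t in chain[s]:
            pr = Fraction(pr)
            if t in targets:
                b[i] += pr
            elif t in idx:
                A[i][idx[t]] -= pr
            # transitions into losing states contribute 0
    # Gaussian elimination; exact arithmetic, so pivoting only needs to
    # avoid zero pivots.
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                for c in range(col, n):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
    sol = {s: b[idx[s]] / A[idx[s]][idx[s]] for s in states}
    sol.update({t: Fraction(1) for t in targets})
    sol.update({t: Fraction(0) for t in losing})
    return sol

# A small induced chain with a goal and a losing sink:
chain = {
    "s1": [(Fraction(1, 4), "goal"), (Fraction(1, 4), "dead"),
           (Fraction(1, 2), "s2")],
    "s2": [(Fraction(1, 2), "goal"), (Fraction(1, 2), "dead")],
}
probs = exact_reachability(chain, targets={"goal"}, losing={"dead"})
print(probs["s1"])  # 1/2
```

Exact arithmetic here plays the same role as in the abstract's method: the result is a rational number, free of the rounding error of a finite-precision model checker.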