Coalgebraic Behavioral Metrics
We study different behavioral metrics, such as those arising from both
branching and linear-time semantics, in a coalgebraic setting. Given a
coalgebra for a functor, we define a framework for deriving pseudometrics on
the state space which measure the behavioral distance of states.
A crucial step is the lifting of the functor to a
functor on the category of pseudometric spaces.
We present two different approaches which can be viewed as generalizations of
the Kantorovich and Wasserstein pseudometrics for probability measures. We show
that the pseudometrics provided by the two approaches coincide on several
natural examples, but in general they differ.
If the functor has a final coalgebra, every lifting yields in a
canonical way a behavioral distance which is usually branching-time, i.e., it
generalizes bisimilarity. In order to model linear-time metrics (generalizing
trace equivalences), we show sufficient conditions for lifting distributive
laws and monads. These results enable us to employ the generalized powerset
construction.
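As a concrete point of reference for the metrics these liftings generalize, the following sketch computes the classical 1-Wasserstein (Kantorovich) distance between finite probability distributions on the real line, where the two notions coincide. The function name and the example distributions are illustrative, not taken from the paper.

```python
def wasserstein_1d(p, q):
    """1-Wasserstein distance between two finite distributions on R.

    p, q: dicts mapping support points (floats) to probability masses.
    On the real line, W1 equals the integral of |CDF_p - CDF_q|.
    """
    points = sorted(set(p) | set(q))
    dist = 0.0
    cdf_p = cdf_q = 0.0
    for x, x_next in zip(points, points[1:]):
        cdf_p += p.get(x, 0.0)
        cdf_q += q.get(x, 0.0)
        # |CDF difference| accumulated over the gap between support points
        dist += abs(cdf_p - cdf_q) * (x_next - x)
    return dist

# A fair coin on {0, 1} versus a point mass at 1: distance 0.5
print(wasserstein_1d({0.0: 0.5, 1.0: 0.5}, {1.0: 1.0}))
```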
Towards Dynamic Dependable Systems through Evidence-Based Continuous Certification
Future cyber-physical systems are expected to be dynamic, evolving while already being deployed. Frequent updates of software components are likely to become the norm even for safety-critical systems. In this setting, a full re-certification before each software update might delay important updates that fix previous bugs or address security or safety issues. Here we propose a vision addressing this challenge, namely the evidence-based continuous supervision and certification of software variants in the field. The idea is to run both old and new variants of component software inside the same system, together with a supervising instance that monitors their behavior. Updated variants are phased into operation after sufficient evidence for correct behavior has been collected. The variants are required to explicate their decisions in a logical language, enabling the supervisor to reason about these decisions and to identify inconsistencies. To resolve contradictory information, the supervisor can run a component analysis to identify potentially faulty components on the basis of previously observed behavior, and can trigger micro-experiments which plan and execute system behavior specifically aimed at reducing uncertainty. We spell out our overall vision and provide a first formalization of the different components and their interplay. In order to enable efficient supervisor reasoning as well as automatic verification of supervisor properties, we introduce SupERLog, a logic specifically designed to this end.
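A minimal sketch of the phase-in idea described above, with hypothetical class and parameter names (the paper's actual formalization and the SupERLog logic are not reproduced here): a supervisor runs an old and a new variant side by side and promotes the new one after enough consistent observations.

```python
class Supervisor:
    """Toy supervisor: phases in a new variant on accumulated evidence."""

    def __init__(self, old_variant, new_variant, evidence_threshold=3):
        self.old = old_variant
        self.new = new_variant
        self.threshold = evidence_threshold
        self.consistent_runs = 0
        self.active = old_variant  # the variant whose output is used

    def step(self, observation):
        decision_old = self.old(observation)
        decision_new = self.new(observation)
        if decision_old == decision_new:
            self.consistent_runs += 1
        else:
            # Inconsistency: reset the evidence counter. A real supervisor
            # would reason over the variants' logical explications here.
            self.consistent_runs = 0
        if self.consistent_runs >= self.threshold:
            self.active = self.new  # phase the update into operation
        return self.active(observation)

sup = Supervisor(old_variant=lambda x: x >= 0, new_variant=lambda x: x >= 0)
for obs in [1, 2, 3]:
    sup.step(obs)
print(sup.active is sup.new)  # True: three consistent steps were observed
```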
Performance analysis of large-scale resource-bound computer systems
We present an analysis framework for performance evaluation of large-scale resource-bound
(LSRB) computer systems. LSRB systems are those whose resources are continually
in demand to serve resource users, who appear in large populations and cause
high contention. In these systems, the delivery of quality service is crucial, even in
the event of resource failure. Therefore, various techniques have been developed for
evaluating their performance. In this thesis, we focus on the technique of quantitative
modelling, where, in order to study a system, a model of it is first constructed and the
system’s behaviour is then analysed via the model.
A number of high level formalisms have been developed to aid the task of model
construction. We focus on PEPA, a stochastic process algebra that supports compositionality
and enables us to build complex LSRB models with ease. Despite this advantage,
the task of analysing LSRB models still poses unresolved challenges.
LSRB models give rise to very large state spaces. This issue, known as the state
space explosion problem, renders the techniques based on discrete state representation,
such as numerical Markovian analysis, computationally expensive. Moreover,
simulation techniques, such as Gillespie’s stochastic simulation algorithm, are also
computationally demanding, as numerous trajectories need to be collected.
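For illustration, a minimal version of Gillespie's stochastic simulation algorithm for a single birth-death population (the rates and horizon are invented): each run yields one noisy sample path, which is why many trajectories must be collected.

```python
import random

def gillespie(lam=2.0, mu=1.0, t_end=10.0, seed=42):
    """One SSA trajectory of a birth-death process: arrivals at rate lam,
    each individual served at rate mu."""
    rng = random.Random(seed)
    t, n = 0.0, 0                    # time, current population
    trajectory = [(t, n)]
    while t < t_end:
        rates = [lam, mu * n]        # birth rate, total death rate
        total = sum(rates)
        t += rng.expovariate(total)  # exponential waiting time to next event
        if t >= t_end:
            break
        # choose the reaction with probability proportional to its rate
        if rng.random() * total < lam:
            n += 1
        else:
            n -= 1
        trajectory.append((t, n))
    return trajectory

traj = gillespie()
print(len(traj), traj[-1])
```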
Furthermore, as we show in our first contribution, the techniques based on the
mean-field theory or fluid flow approximation are not readily applicable to this case.
In LSRB models, resources are not assumed to be present in large populations and
models exhibit highly noisy, stochastic behaviour. Thus, the deterministic mean-field
behaviour might not faithfully capture the system’s randomness and is
potentially too crude to reveal important aspects of its behaviour. In this case, the
modeller is unable to obtain important performance indicators, such as the reliability
measures of the system. Considering these limitations, we contribute the following
analytical methods particularly tailored to LSRB models.
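The fluid-flow view discussed above can be sketched as a deterministic mean-field ODE. This toy example (with invented rates) integrates dN/dt = lam - mu*N for a birth-death population by Euler's method, producing a single smooth trajectory that carries no information about randomness.

```python
def fluid_trajectory(lam=2.0, mu=1.0, n0=0.0, t_end=10.0, dt=0.001):
    """Euler integration of the mean-field ODE dN/dt = lam - mu*N."""
    n, traj = n0, [n0]
    for _ in range(int(t_end / dt)):
        n += (lam - mu * n) * dt   # one Euler step
        traj.append(n)
    return traj

traj = fluid_trajectory()
# The deterministic trajectory settles near the fixed point lam/mu = 2
print(abs(traj[-1] - 2.0) < 0.01)
```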
First, we present an aggregation method. The aggregated model captures the evolution
of only the system’s resources and allows us to efficiently derive a probability
distribution over the configurations they experience. This distribution faithfully
captures the stochastic behaviour of the resources. The aggregation can be
applied to all LSRB models that satisfy an aggregation condition which can be
checked quickly and syntactically. We present an algorithm to generate the aggregated
model from the original model when this condition is satisfied.
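A sketch of the kind of aggregation described above, cast as ordinary lumping of a CTMC generator under a partition into resource configurations. It assumes a lumpability condition holds (the role the thesis's syntactic condition plays); the chain and the partition map are invented for illustration.

```python
def lump(Q, partition):
    """Aggregate a generator matrix Q (dict-of-dicts of rates) under a
    partition map from states to blocks (resource configurations).

    Assumes lumpability: every state in a block has the same total rate
    into each other block, so one representative per block suffices.
    """
    blocks = sorted(set(partition.values()))
    agg = {b: {c: 0.0 for c in blocks} for b in blocks}
    seen = set()
    for s, row in Q.items():
        b = partition[s]
        if b in seen:
            continue                    # one representative per block
        seen.add(b)
        for t, rate in row.items():
            agg[b][partition[t]] += rate
    return agg

# Two states per configuration; 'up' states fail at rate 1.0, 'down'
# states recover at rate 0.5
Q = {"u1": {"d1": 1.0}, "u2": {"d2": 1.0},
     "d1": {"u1": 0.5}, "d2": {"u2": 0.5}}
part = {"u1": "up", "u2": "up", "d1": "down", "d2": "down"}
print(lump(Q, part))
```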
Second, we present a procedure to efficiently detect time-scale near-complete decomposability
(TSND). The method of TSND allows us to analyse LSRB models at
a reduced cost, by dividing their state spaces into loosely coupled blocks. However,
one important input is a partition of the transitions defined in the model, categorising
them as slow or fast. Forming this partition by analysing the model’s
complete state space is costly. Our procedure derives the partition efficiently, relying
on a theorem stating that our aggregation preserves the original model’s partition;
it can therefore be obtained by an efficient reachability analysis on the aggregated state
space. We also propose a clustering algorithm to implement this reachability analysis.
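The slow/fast split of transition rates can be illustrated with a tiny one-dimensional two-means clustering, standing in for the thesis's clustering algorithm (whose actual details are not reproduced here); the rates are invented.

```python
def split_slow_fast(rates, iters=50):
    """Partition a list of transition rates into slow and fast groups
    by two-means clustering on the rate values."""
    c_slow, c_fast = min(rates), max(rates)   # initial centroids
    slow, fast = list(rates), []
    for _ in range(iters):
        slow = [r for r in rates if abs(r - c_slow) <= abs(r - c_fast)]
        fast = [r for r in rates if abs(r - c_slow) > abs(r - c_fast)]
        c_slow = sum(slow) / len(slow)
        if fast:
            c_fast = sum(fast) / len(fast)
    return slow, fast

# Rates spanning two well-separated time scales
slow, fast = split_slow_fast([0.01, 0.02, 0.05, 90.0, 120.0])
print(slow, fast)
```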
Third, we present the method of conditional moments (MCM) to be used on LSRB
models. Using our aggregation, a probability distribution is formed over the configurations
of a model’s resources. The MCM outputs the time evolution of the conditional
moments of the marginal distribution over resource users given the configurations of
resources. Essentially, for each such configuration, we derive measures such as the
conditional expectation and conditional variance of the user dynamics. This
method has a high degree of faithfulness and allows us to capture the impact of the
randomness of the behaviour of resources on the users.
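As an illustration of the quantities the MCM produces, the following sketch computes the conditional mean and variance of the user count given each resource configuration; the joint distribution is invented for the example.

```python
def conditional_moments(joint):
    """Conditional mean and variance of the user count per configuration.

    joint: dict mapping (config, n_users) -> probability.
    Returns a dict mapping config -> (conditional mean, conditional variance).
    """
    moments = {}
    for c in {cfg for cfg, _ in joint}:
        mass = sum(p for (cc, _), p in joint.items() if cc == c)
        mean = sum(n * p for (cc, n), p in joint.items() if cc == c) / mass
        var = sum((n - mean) ** 2 * p
                  for (cc, n), p in joint.items() if cc == c) / mass
        moments[c] = (mean, var)
    return moments

joint = {("up", 0): 0.1, ("up", 2): 0.3, ("down", 4): 0.6}
m = conditional_moments(joint)
print(m["down"])  # a point mass at 4 users: mean 4.0, variance 0.0
```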
Finally, we demonstrate the advantages of the proposed methods in the context of a
case study concerning the performance evaluation of a two-tier wireless network
based on the femto-cell macro-cell architecture.