An Aggregation Technique for Large-Scale PEPA Models with Non-Uniform Populations
Performance analysis based on modelling consists of two major steps: model
construction and model analysis. Formal modelling techniques significantly aid
model construction but can complicate model analysis. In particular, here we
consider the analysis of large-scale systems which consist of one or more
entities replicated many times to form large populations. The replication of
entities in such models can cause their state spaces to grow exponentially to
the extent that their exact stochastic analysis becomes computationally
expensive or even infeasible.
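The exponential growth mentioned above can be made concrete with a toy calculation (the model and numbers are illustrative, not taken from the paper): for N replicas of an entity with k local states, tracking each replica individually yields k^N states, while a population-counting representation needs only as many states as there are multisets of size N over k symbols.

```python
from math import comb

def naive_states(n_copies: int, local_states: int) -> int:
    # Each replica's local state is tracked individually: k^N states.
    return local_states ** n_copies

def counting_states(n_copies: int, local_states: int) -> int:
    # Only the population level in each local state is tracked:
    # multisets of size n_copies over local_states symbols.
    return comb(n_copies + local_states - 1, local_states - 1)

# 100 identical clients, each with 3 local states:
print(naive_states(100, 3))     # 3**100, astronomically large
print(counting_states(100, 3))  # 5151
```

The counting representation is itself the starting point for aggregation techniques such as the one proposed in this paper.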
In this paper, we propose a new approximate aggregation algorithm for a class
of large-scale PEPA models. For a given model, the method quickly checks if it
satisfies a syntactic condition, indicating that the model may be solved
approximately with high accuracy. If so, an aggregated CTMC is generated
directly from the model description. This CTMC can be used for efficient
derivation of an approximate marginal probability distribution over some of the
model's populations. In the context of a large-scale client-server system, we
demonstrate the usefulness of our method.
Fluid aggregations for Markovian process algebra
Quantitative analysis by means of discrete-state stochastic processes is hindered by the well-known phenomenon of state-space explosion, whereby the size of the state space may grow exponentially with the number of objects in the model. When the stochastic process underlies a Markovian process algebra model, this problem may be alleviated by suitable notions of behavioural equivalence that induce lumping on the underlying continuous-time Markov chain, establishing an exact relation between a potentially much smaller aggregated chain and the original one. However, in the modelling of massively distributed computer systems, even aggregated chains may still be too large for efficient numerical analysis. Recently this problem has been addressed by fluid techniques, where the Markov chain is approximated by a system of ordinary differential equations (ODEs) whose size does not depend on the number of objects in the model. The technique has primarily been applied to massively replicated sequential processes with small local state spaces. This thesis devises two different approaches that broaden the scope of applicability of efficient fluid approximations. Fluid lumpability applies where objects are composites of simple objects, and aggregates the potentially massive, naively constructed ODE system into one whose size is independent of the number of composites in the model. Analogously to quasi- and near-lumpability, we introduce approximate fluid lumpability, which covers ODE systems that can be aggregated after a small perturbation of the parameters. The technique of spatial aggregation, instead, applies to models whose objects perform a random walk on a two-dimensional lattice. Specifically, it is shown that the underlying ODE system, whose size is proportional to the number of regions, converges to a system of partial differential equations of constant size as the number of regions goes to infinity.
This allows for an efficient analysis of large-scale mobile models in continuous space, such as ad hoc networks and multi-agent systems.
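As a rough illustration of the fluid idea (a minimal sketch with made-up rates, not one of the constructions developed in the thesis), the mean populations of a two-state object replicated N times can be tracked by just two ODEs, integrated here by Euler's method; the ODE system stays the same size however large N becomes.

```python
def fluid_trajectory(n_objects, a, b, t_end=50.0, dt=0.001):
    """Euler integration of the two-variable fluid ODE
        dx1/dt = -a*x1 + b*x2,   dx2/dt = a*x1 - b*x2,
    tracking the mean populations of a two-state object; the system
    has two equations regardless of n_objects."""
    x1, x2 = float(n_objects), 0.0
    t = 0.0
    while t < t_end:
        d1 = -a * x1 + b * x2
        x1, x2 = x1 + dt * d1, x2 - dt * d1   # simultaneous update
        t += dt
    return x1, x2

x1, x2 = fluid_trajectory(10_000, a=1.0, b=3.0)
# equilibrium: x1 -> b/(a+b) * N = 7500, with x1 + x2 conserved
```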
On approximating the stochastic behaviour of Markovian process algebra models
Markov chains offer a rigorous mathematical framework to describe systems that exhibit
stochastic behaviour, as they are supported by a plethora of methodologies to
analyse their properties. Stochastic process algebras are high-level formalisms, where
systems are represented as collections of interacting components. This compositional
approach to modelling allows us to describe complex Markov chains using a compact
high-level specification.
There is an increasing need to investigate the properties of complex systems, not
only in the field of computer science, but also in computational biology. Exploring
the stochastic properties of large Markov chains is demanding in terms of computational
resources. Approximating these properties can be an effective way
to deal with the complexity of large models. In this thesis, we investigate methodologies
to approximate the stochastic behaviour of Markovian process algebra models.
The discussion revolves around two main topics: approximate state-space aggregation
and stochastic simulation. Although these topics are different in nature, they are both
motivated by the need to efficiently handle complex systems.
Approximate Markov chain aggregation constitutes the formulation of a smaller
Markov chain that approximates the behaviour of the original model. The principal
hypothesis is that states that can be characterised as equivalent can be adequately represented
as a single state. We discuss different notions of approximate state equivalence,
and how each of these can be used as a criterion to partition the state-space
accordingly. Nevertheless, approximate aggregation methods typically require an explicit
representation of the transition matrix, a fact that renders them impractical for
large models. We propose a compositional approach to aggregation, as a means to
efficiently approximate complex Markov models that are defined in a process algebra
specification, PEPA in particular.
Regarding our contributions to Markov chain simulation, we propose an accelerated
method that can be characterised as almost exact, in the sense that it can be
arbitrarily precise. We discuss how it is possible to sample from the trajectory space
rather than the transition space. This approach requires fewer random samples than a
typical simulation algorithm. Most importantly, our approach does not rely on particular
assumptions about the model's properties, in contrast to otherwise more
efficient approaches.
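For context, the baseline such accelerated methods compete with is a Gillespie-style simulation, which draws fresh random samples at every transition. A minimal sketch for a birth-death CTMC with assumed rates (not the thesis's trajectory-space sampler):

```python
import random

def ssa_trajectory(rates, x0, t_end, rng=random.Random(42)):
    """Baseline Gillespie-style simulation of a birth-death CTMC:
    birth at rate lam, death at rate mu * x.  Random samples are
    drawn per transition -- the cost that sampling whole
    trajectories aims to reduce."""
    lam, mu = rates
    t, x = 0.0, x0
    while t < t_end:
        total = lam + mu * x
        t += rng.expovariate(total)       # time to next transition
        if t >= t_end:
            break
        if rng.random() < lam / total:    # choose which transition fires
            x += 1
        else:
            x -= 1
    return x

final = ssa_trajectory((2.0, 1.0), x0=0, t_end=100.0)
```

Collecting many statistically useful trajectories this way is exactly the computational burden the abstract refers to.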
Performance analysis of large-scale resource-bound computer systems
We present an analysis framework for performance evaluation of large-scale resource-bound
(LSRB) computer systems. LSRB systems are those whose resources are continually
in demand to serve resource users, who appear in large populations and cause
high contention. In these systems, the delivery of quality service is crucial, even in
the event of resource failure. Therefore, various techniques have been developed for
evaluating their performance. In this thesis, we focus on the technique of quantitative
modelling, where in order to study a system, first its model is constructed and then the
system’s behaviour is analysed via the model.
A number of high level formalisms have been developed to aid the task of model
construction. We focus on PEPA, a stochastic process algebra that supports compositionality
and enables us to easily build complex LSRB models. In spite of this advantage,
however, the task of analysing LSRB models still poses unresolved challenges.
LSRB models give rise to very large state spaces. This issue, known as the state
space explosion problem, renders the techniques based on discrete state representation,
such as numerical Markovian analysis, computationally expensive. Moreover,
simulation techniques, such as Gillespie’s stochastic simulation algorithm, are also
computationally demanding, as numerous trajectories need to be collected.
Furthermore, as we show in our first contribution, the techniques based on the
mean-field theory or fluid flow approximation are not readily applicable to this case.
In LSRB models, resources are not assumed to be present in large populations and
models exhibit highly noisy, stochastic behaviour. Thus, the mean-field deterministic
approximation might not faithfully capture the system's randomness and is
potentially too crude to show important aspects of its behaviour. In this case, the
modeller is unable to obtain important performance indicators, such as the reliability
measures of the system. Considering these limitations, we contribute the following
analytical methods particularly tailored to LSRB models.
First, we present an aggregation method. The aggregated model captures the evolution
of only the system’s resources and allows us to efficiently derive a probability
distribution over the configurations they experience. This distribution is fully
faithful for studying the stochastic behaviour of resources. The aggregation can be
applied to all LSRB models that satisfy a syntactic aggregation condition, which can
be checked quickly. We present an algorithm to generate the aggregated
model from the original model when this condition is satisfied.
Second, we present a procedure to efficiently detect time-scale near-complete decomposability
(TSND). The method of TSND allows us to analyse LSRB models at
a reduced cost, by dividing their state spaces into loosely coupled blocks. However,
one important input is a partition of the transitions defined in the model, categorising
them as slow or fast. Forming the necessary partition by analysing the model's
complete state space is costly. Our process derives this partition efficiently, by relying
on a theorem stating that our aggregation preserves the original model’s partition and
therefore, it can be derived by an efficient reachability analysis on the aggregated state
space. We also propose a clustering algorithm to implement this reachability analysis.
Third, we present the method of conditional moments (MCM) to be used on LSRB
models. Using our aggregation, a probability distribution is formed over the configurations
of a model’s resources. The MCM outputs the time evolution of the conditional
moments of the marginal distribution over resource users given the configurations of
resources. Essentially, for each such configuration, we derive measures such as the conditional
expectation and conditional variance of the user dynamics. This
method has a high degree of faithfulness and allows us to capture the impact of the
randomness of the behaviour of resources on the users.
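To make the idea of conditional moments concrete (using a toy joint distribution with hypothetical configuration names, not output of the actual method, which derives the moments' time evolution from the aggregated model), the conditional mean and variance of the user population given each resource configuration can be computed as follows:

```python
def conditional_moments(joint):
    """Given a joint distribution {(resource_config, n_users): prob},
    return, per configuration, its total probability mass and the
    conditional mean and variance of the user population."""
    by_config = {}
    for (cfg, users), p in joint.items():
        by_config.setdefault(cfg, []).append((users, p))
    out = {}
    for cfg, pairs in by_config.items():
        mass = sum(p for _, p in pairs)
        mean = sum(u * p for u, p in pairs) / mass
        var = sum((u - mean) ** 2 * p for u, p in pairs) / mass
        out[cfg] = (mass, mean, var)
    return out

joint = {("all_up", 10): 0.4, ("all_up", 20): 0.4, ("one_down", 30): 0.2}
moments = conditional_moments(joint)
# moments["all_up"] is approximately (0.8, 15.0, 25.0)
```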
Finally, we demonstrate the advantages of the proposed methods in the context of a
case study concerning the performance evaluation of a two-tier wireless network
based on the femto-cell/macro-cell architecture.
Performance modelling of fairness in IEEE 802.11 wireless LAN protocols
PhD Thesis
Wireless communication has become a key technology in the modern world, allowing network
services to be delivered in almost any environment, without the need for potentially expensive
and invasive fixed cable solutions. However, the level of performance experienced by wireless
devices varies tremendously with location and time. Understanding the factors which can cause
variability of service is therefore of clear practical and theoretical interest.
In this thesis we explore the performance of the IEEE 802.11 family of wireless protocols,
which have become the de facto standard for Wireless Local Area Networks (WLANs). The
specific performance issue which is investigated is the unfairness which can arise due to the
spatial position of nodes in the network. In this work we characterise unfairness in terms of the
difference in performance (e.g. throughput) experienced by different pairs of communicating
nodes within a network. Models are presented using the Markovian process algebra PEPA which
depict different scenarios with three of the main protocols, IEEE 802.11b, IEEE 802.11g and
IEEE 802.11n. The analysis shows that performance is affected by the presence of other nodes
(including the well-known hidden node case), by the data rate, and by the size of the frames
being transmitted.
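A minimal sketch of the unfairness measure described above, as the largest throughput gap between communicating pairs (the node names and figures are hypothetical, not taken from the thesis's models):

```python
def pairwise_unfairness(throughput):
    """Unfairness as the largest difference in throughput between
    any two communicating pairs in the network."""
    values = list(throughput.values())
    return max(values) - min(values)

# e.g. a hidden-node pair achieving far less than unobstructed pairs (Mbit/s)
tp = {("A", "AP"): 5.8, ("B", "AP"): 5.6, ("H", "AP"): 1.2}
gap = pairwise_unfairness(tp)   # approximately 4.6
```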
The models and analysis in this thesis collectively provide not only an insight
into fairness in IEEE 802.11 networks, but also a significant use case in modelling
network protocols using PEPA. PEPA and other stochastic process algebras are extremely powerful
tools for efficiently specifying models which might be very complex to study using conventional
simulation approaches. Furthermore the tool support for PEPA facilitates the rapid solution of
models to derive key metrics which enable the modeller to gain an understanding of the network
behaviour across a wide range of operating conditions.
From the results we can see that short frames promote a greater fairness due to the more
frequent spaces between frames allowing other senders to transmit. An interesting consequence
of these findings is the observation that varying frame length can play a role in addressing
topological unfairness, which leads to the analysis of a novel model of IEEE 802.11g with
variable frame lengths. While varying frame lengths might not always be practically possible, as
frames need to be long enough for collisions to be detected, IEEE 802.11n supports a number of
mechanisms for frame aggregation, where successive frames may be sent in series with little
or no delay between them. We therefore present a novel model of IEEE 802.11n with frame
aggregation to explore how this approach affects fairness and, potentially, can be used to address
unfairness by allowing affected nodes to transmit longer frame bursts.
Kurdistan Region Government of Iraq (KRG) sponsor
Scalable analysis of stochastic process algebra models
The performance modelling of large-scale systems using discrete-state approaches is
fundamentally hampered by the well-known problem of state-space explosion, which
causes exponential growth of the reachable state space as a function of the number
of the components which constitute the model. Because they are mapped onto
continuous-time Markov chains (CTMCs), models described in the stochastic process
algebra PEPA are no exception. This thesis presents a deterministic continuous-state
semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying
mathematics for the performance evaluation. This is suitable for models consisting
of large numbers of replicated components, as the ODE problem size is insensitive
to the actual population levels of the system under study. Furthermore, the ODE is
given an interpretation as the fluid limit of a properly defined CTMC model when the
initial population levels go to infinity. This framework allows the use of existing results
which give error bounds to assess the quality of the differential approximation. The
computation of performance indices such as throughput, utilisation, and average response
time are interpreted deterministically as functions of the ODE solution and are
related to corresponding reward structures in the Markovian setting.
The differential interpretation of PEPA provides a framework that is conceptually
analogous to established approximation methods in queueing networks based on mean-value
analysis, as both approaches aim at reducing the computational cost of the analysis
by providing estimates for the expected values of the performance metrics of interest.
The relationship between these two techniques is examined in more detail in
a comparison between PEPA and the Layered Queueing Network (LQN) model. General
patterns of translation of LQN elements into corresponding PEPA components are
applied to a substantial case study of a distributed computer system. This model is
analysed using stochastic simulation to gauge the soundness of the translation. Furthermore,
it is subjected to a series of numerical tests to compare execution runtimes
and accuracy of the PEPA differential analysis against the LQN mean-value approximation
method.
Finally, this thesis discusses the major elements concerning the development of a
software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment
for PEPA, including modules for static analysis, explicit state-space exploration,
numerical solution of the steady-state equilibrium of the Markov chain, stochastic
simulation, the differential analysis approach herein presented, and a graphical
framework for model editing and visualisation of performance evaluation results.
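As a simple illustration of a deterministically interpreted performance index (a sketch under the common PEPA-style fluid assumption that a shared action proceeds at a rate bounded by the scarcer population; the rate and population values are made up, not drawn from the thesis's case study):

```python
def fluid_throughput(r, x_clients, x_servers):
    """Deterministic throughput estimate for a cooperating action
    under a fluid interpretation: the action fires at rate r,
    limited by the smaller of the two interacting populations."""
    return r * min(x_clients, x_servers)

# with ODE solution values x_clients(t) = 120.0, x_servers(t) = 20.0
tput = fluid_throughput(r=0.5, x_clients=120.0, x_servers=20.0)  # 10.0
```

Utilisation and average response time are obtained analogously, as functions of the ODE solution at the time points of interest.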
Performance modelling of applications in a smart environment
PhD Thesis
In today’s world, advanced computing technology is widely used to improve
our living conditions and facilitate people’s daily activities. Smart environment
technology, encompassing various smart devices and intelligent systems, is now being
researched to provide an advanced, intelligent way of living in an easy and comfortable environment.
This thesis aims to investigate several technologies related to the design of a
smart environment. It also explores different modelling approaches,
including formal methods and discrete event simulation.
The core contents of the thesis include performance evaluation of scheduling
policies and capacity planning strategies. The main contribution is in developing a
modelling approach for smart hospital environments. This thesis also provides
valuable experience in the formal modelling and the simulation of large scale
systems.
The chief findings are that the dynamic scheduling policy proves to be the most
efficient scheduling approach, and that a particular capacity scheme is verified
as optimal for achieving high work efficiency under the constraint of
limited human resources.
The main methods used for the performance modelling are Performance Evaluation
Process Algebra (PEPA) and discrete event simulation. A great deal of modelling
tasks was completed with these methods. For the analysis, we adopt both numerical
analysis based on PEPA models and statistical measurements in the simulation
- …