Behavioural-based modelling and analysis of Navigation Patterns across Information Networks
Navigation behaviour can be considered one of the most crucial aspects of user behaviour in an electronic commerce environment, and it is a very good indicator of a user's interests during browsing or purchasing. Revealing user navigation patterns is very helpful for increasing sales, turning browsers into buyers, keeping customers' attention and loyalty, and adjusting and improving the interface in order to boost the user experience and interaction with the system. In this regard, this research has identified the most common user navigation patterns across information networks, illustrated through the example of an electronic bookstore. A behavioural-based model that provides profound knowledge about the processes of navigation is proposed, specifically examined for different types of users, automatically identified and clustered into two clusters according to their navigational behaviour. The developed model is based on stochastic modelling using the concept of Generalized Stochastic Petri Nets, whose solution relies on a Continuous Time Markov Chain. As a result, several performance measures are calculated, such as the expected time spent in a transient tangible marking, the cumulative sojourn time spent in a transient tangible marking, and the total number of visits to a transient tangible marking
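The performance measures named above come from the Continuous Time Markov Chain underlying the GSPN. As an illustrative sketch (the generator matrix below is invented for the example, not taken from the paper), the expected cumulative sojourn times in the transient states of an absorbing CTMC can be read off the fundamental matrix:

```python
import numpy as np

# Generator matrix of a small absorbing CTMC: states 0-2 transient, state 3
# absorbing. The rates are illustrative placeholders.
Q = np.array([
    [-3.0,  2.0,  1.0,  0.0],
    [ 0.0, -4.0,  3.0,  1.0],
    [ 0.0,  0.0, -2.0,  2.0],
    [ 0.0,  0.0,  0.0,  0.0],
])

T = Q[:3, :3]                # transient-to-transient block of the generator
M = np.linalg.inv(-T)        # fundamental matrix: M[i, j] = expected cumulative
                             # sojourn time in state j when starting in state i
total_time_to_absorption = M.sum(axis=1)  # expected time until absorption

print(M)
print(total_time_to_absorption)  # from state 0: 1/3 + 1/6 + 5/12 = 11/12
```

The same matrix also yields expected numbers of visits once the embedded jump chain is taken into account, which is how the measures listed in the abstract are typically obtained.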
Modelling epistasis in genetic disease using Petri nets, evolutionary computation and frequent itemset mining
Petri nets are useful for mathematically modelling disease-causing genetic epistasis. A Petri net model of an interaction has the potential to lead to biological insight into the cause of a genetic disease. However, defining a Petri net by hand for a particular interaction is extremely difficult because of the sheer complexity of the problem and degrees of freedom inherent in a Petri net's architecture.
We propose therefore a novel method, based on evolutionary computation and data mining, for automatically constructing Petri net models of non-linear gene interactions. The method comprises two main steps. Firstly, an initial partial Petri net is set up with several repeated sub-nets that model individual genes and a set of constraints, comprising relevant common sense and biological knowledge, is also defined. These constraints characterise the class of Petri nets that are desired. Secondly, this initial Petri net structure and the constraints are used as the input to a genetic algorithm. The genetic algorithm searches for a Petri net architecture that is both a superset of the initial net, and also conforms to all of the given constraints. The genetic algorithm evaluation function that we employ gives equal weighting to both the accuracy of the net and also its parsimony.
We demonstrate our method using an epistatic model related to the presence of digital ulcers in systemic sclerosis patients that was recently reported in the literature. Our results show that although individual 'perfect' Petri nets can frequently be discovered for this interaction, the true value of this approach lies in generating many different perfect nets, and applying data mining techniques to them in order to elucidate common and statistically significant patterns of interaction
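The genetic algorithm's evaluation function gives equal weighting to accuracy and parsimony. A minimal sketch of such a fitness function follows; the function name, arguments and the arc-count parsimony measure are our assumptions, not the authors' exact formulation:

```python
# Hypothetical GA fitness: equal weighting of model accuracy and parsimony.
# Accuracy is in [0, 1]; parsimony rewards nets with fewer arcs.
def fitness(net_accuracy, num_arcs, max_arcs):
    """Score in [0, 1] combining accuracy and parsimony with equal weight."""
    parsimony = 1.0 - num_arcs / max_arcs   # fewer arcs -> higher score
    return 0.5 * net_accuracy + 0.5 * parsimony

# A "perfect" (accuracy 1.0) but fairly compact net scores close to 1.
print(fitness(1.0, 20, 100))   # 0.9
```

In a scheme like this, two nets that both classify perfectly are still separated by size, which is what drives the search toward the many distinct parsimonious perfect nets the paper mines for common patterns.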
Process algebra for performance evaluation
This paper surveys the theoretical developments in the field of stochastic process algebras, process algebras where action occurrences may be subject to a delay that is determined by a random variable. A huge class of resource-sharing systems, such as large-scale computers, client–server architectures and networks, can accurately be described using such stochastic specification formalisms. The main emphasis of this paper is the treatment of operational semantics, notions of equivalence, and (sound and complete) axiomatisations of these equivalences for different types of Markovian process algebras, where delays are governed by exponential distributions. Starting from a simple actionless algebra for describing time-homogeneous continuous-time Markov chains, we consider the integration of actions and random delays both as a single entity (as in known Markovian process algebras like TIPP, PEPA and EMPA) and as separate entities (as in the timed process algebras timed CSP and TCCS). In total we consider four related calculi and investigate their relationship to existing Markovian process algebras. We also briefly indicate how one can profit from the separation of time and actions when incorporating more general, non-Markovian distributions
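In the Markovian setting surveyed here, the choice between two exponentially delayed actions is a race with a closed form: an action with rate r_a wins against one with rate r_b with probability r_a / (r_a + r_b). A quick simulation sketch (rates chosen arbitrarily for illustration) confirms this:

```python
import random

# Race two exponentially distributed delays; estimate how often action a wins.
# Analytically, P(a wins) = rate_a / (rate_a + rate_b).
def race(rate_a, rate_b, trials=100_000, seed=1):
    rng = random.Random(seed)
    wins_a = 0
    for _ in range(trials):
        if rng.expovariate(rate_a) < rng.expovariate(rate_b):
            wins_a += 1
    return wins_a / trials

print(race(2.0, 1.0))   # close to 2/3
```

This memoryless race is exactly what makes the operational semantics of Markovian process algebras tractable: the winning delay is again exponential, with rate r_a + r_b.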
A synthesis of logic and bio-inspired techniques in the design of dependable systems
Much of the development of model-based design and dependability analysis in the design of dependable systems, including software intensive systems, can be attributed to the application of advances in formal logic and its application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm that effectively combines these two techniques, schematically founded on the two pillars of formal logic and biology, from the early stages of, and throughout, the design lifecycle. Such a design paradigm would apply these techniques synergistically and systematically to enable optimal refinement of new designs which can be driven effectively by dependability requirements. The paper sketches such a model-centric paradigm for the design of dependable systems, presented in the scope of the HiP-HOPS tool and technique, that brings these technologies together to realise their combined potential benefits. The paper begins by identifying current challenges in model-based safety assessment and then overviews the use of meta-heuristics at various stages of the design lifecycle covering topics that span from allocation of dependability requirements, through dependability analysis, to multi-objective optimisation of system architectures and maintenance schedules
Model-driven development of data intensive applications over cloud resources
The proliferation of sensors over the last years has generated large amounts of raw data, forming data streams that need to be processed. In many cases, cloud resources are used for such processing, exploiting their flexibility, but these sensor streaming applications often need to support operational and control actions with real-time and low-latency requirements that go beyond the cost-effective and flexible solutions supported by existing cloud frameworks, such as Apache Kafka, Apache Spark Streaming, or Map-Reduce Streams. In this paper, we describe a model-driven and stepwise-refinement methodological approach for streaming applications executed over clouds. The central role is assigned to a set of Petri Net models for specifying functional and non-functional requirements. They support model reuse, and a way to combine formal analysis, simulation, and approximate computation of minimal and maximal boundaries of non-functional requirements when the problem is either mathematically or computationally intractable. We show how our proposal can assist developers in their design and implementation decisions from a performance perspective. Our methodology supports performance analysis across all stages of the engineering process: we can (i) analyse how an application can be mapped onto cloud resources, and (ii) obtain key performance indicators, including throughput or economic cost, so that developers are assisted in their development tasks and in their decision making. In order to illustrate our approach, we make use of the pipelined wavefront array
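One of the simplest key performance indicators mentioned above, the throughput of a pipelined streaming application, is bounded by its slowest stage. A back-of-the-envelope sketch (the stage names, service times and node prices are illustrative assumptions, not figures from the paper):

```python
# Steady-state throughput of a pipeline is limited by its bottleneck stage.
stage_service_time = {          # seconds of work per item, per stage
    "ingest": 0.002,
    "transform": 0.005,
    "sink": 0.001,
}

bottleneck = max(stage_service_time, key=stage_service_time.get)
throughput = 1.0 / stage_service_time[bottleneck]   # items per second

# A matching economic-cost KPI: hypothetical 3 cloud nodes at $0.10/hour each.
cost_per_hour = 3 * 0.10

print(bottleneck, throughput, cost_per_hour)   # transform 200.0 0.30...
```

Formal Petri-net analysis refines this crude bound by accounting for batching, contention and variability, which is where the minimal/maximal boundaries discussed in the abstract come in.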
A systems biology approach to multi-scale modelling and analysis of planar cell polarity in drosophila melanogaster wing
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Systems biology aims to describe and understand biology at a global scale, where biological systems function as a result of complex mechanisms that happen at several scales. Modelling and simulation are computational tools that are invaluable for the description, understanding and prediction of these mechanisms in a quantitative and integrative way. Thus multi-scale methods that couple the design, simulation and analysis of models spanning several spatial and temporal scales are becoming a new emerging focus of systems biology. This thesis uses an exemplar, Planar cell polarity (PCP) signalling, to illustrate a generic approach to modelling biological systems at different spatial scales, using the new concept of Hierarchically Coloured Petri Nets (HCPN). PCP signalling refers to the coordinated polarisation of cells within the plane of various epithelial tissues to generate sub-cellular asymmetry along an axis orthogonal to their apical-basal axes. This polarisation is required for many developmental events in both vertebrates and non-vertebrates. Defects in PCP in vertebrates are responsible for developmental abnormalities in multiple tissues, including the neural tube, the kidney and the inner ear. In the Drosophila wing, PCP is seen in the parallel orientation of hairs that protrude from each of the approximately 30,000 epithelial cells to robustly point toward the wing tip. This work applies HCPN to model a tissue comprising multiple cells hexagonally packed in a honeycomb formation in order to describe the phenomenon of Planar Cell Polarity (PCP) in the Drosophila wing. HCPN facilitate the construction of mathematically tractable, compact and parameterised large-scale models. Different levels of abstraction that can be used to simplify such a complex system are first illustrated. The PCP system is first represented at an abstract level without modelling details of the cell.
Each cell is then sub-divided into seven virtual compartments, with adjacent cells being coupled via the formation of intercellular complexes. A more detailed model is later developed, describing the intra- and inter-cellular signalling mechanisms involved in PCP signalling. The initial model is for a wild-type organism; a family of related models is then constructed, permitting different hypotheses to be explored regarding the mechanisms underlying PCP. Among them, the largest model consists of 800 cells, which when unfolded yields 164,000 places (each of which is described by an ordinary differential equation). This thesis illustrates the power and validity of the approach by showing how the models can be easily adapted to describe well-documented genetic mutations in the Drosophila wing. The proposed approach includes clustering and model checking over time series of primary and secondary data, which can be employed to analyse and check multi-scale models such as the PCP models. The HCPN models support the interpretation of biological observations reported in the literature and are able to make sensible predictions. As HCPN model multi-scale systems in a compact, parameterised and scalable way, this modelling approach can be applied to other large-scale or multi-scale systems. This study was funded by Brunel University
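The unfolding of each place into an ordinary differential equation can be illustrated on the smallest possible example: a single mass-action transition A -> B, integrated with a forward-Euler step. The rate constant and step size below are illustrative, not values from the thesis:

```python
import math

# Continuous Petri net semantics in miniature: one transition A -> B with
# mass-action rate k gives the ODEs dA/dt = -k*A, dB/dt = +k*A.
def simulate(a0=1.0, b0=0.0, k=0.5, dt=0.001, t_end=10.0):
    a, b = a0, b0
    for _ in range(int(t_end / dt)):
        flow = k * a * dt        # amount moved through the transition in dt
        a, b = a - flow, b + flow
    return a, b

a, b = simulate()
print(a, b)   # a decays toward a0 * exp(-k * t_end); a + b stays constant
```

A model with 164,000 places is conceptually the same system of coupled equations, which is why the compact coloured representation, rather than the unfolded net, is what makes construction and parameterisation tractable.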
Study of decentralised decision models in distributed environments
Many of today's complex systems require effective decision making within uncertain distributed environments. The central theme of the thesis is the systematic analysis of representations of decision-making organisations. The basic concept of stochastic learning automata provides a framework for modelling decision making in complex systems. Models of interactive decision making are discussed, which result from interconnecting decision makers in both synchronous and sequential configurations. The concepts and viewpoints of learning theory and game theory are used to explain the behaviour of these structures. This work is then extended by presenting a quantitative framework based on Petri Net theory. This formalism provides a powerful means for capturing the information flow in the decision-making process and demonstrating the explicit interactions between decision makers. Additionally, it is also used for the description and analysis of systems that are characterised by concurrent, asynchronous, distributed, parallel and/or stochastic activities. The thesis discusses the limitations of each modelling framework and proposes an extension to the existing methodologies by presenting a new class of Petri Nets. This extension has resulted in a novel structure which has the additional feature of an embedded stochastic learning automaton. An application of this approach to a realistic decision problem demonstrates the impact that the use of an artificial intelligence technique embedded within Petri Nets can have on the performance of decision models
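The classic linear reward-inaction (L_R-I) update rule behind stochastic learning automata can be sketched in a few lines; the two-action setting and learning rate below are illustrative choices, not the thesis's specific scheme:

```python
# Linear reward-inaction (L_R-I) update for a stochastic learning automaton:
# on reward, probability mass moves toward the chosen action; on penalty,
# the action probabilities are left unchanged.
def update(probs, chosen, rewarded, a=0.1):
    if not rewarded:
        return probs                       # "inaction" on penalty
    return [p + a * (1.0 - p) if i == chosen else p * (1.0 - a)
            for i, p in enumerate(probs)]

probs = [0.5, 0.5]
probs = update(probs, chosen=0, rewarded=True)
print(probs)   # [0.55, 0.45] -- probabilities still sum to 1
```

Embedding such an automaton at a Petri net's conflict (a place with several enabled output transitions) lets the net's routing probabilities adapt to feedback from the environment, which is the kind of coupling the thesis proposes.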
Quantitative analysis of distributed systems
PhD Thesis. Computing Science addresses the security of real-life systems by using various security-oriented technologies (e.g., access control solutions and resource allocation strategies). These security technologies significantly increase the operational costs of the organizations in which systems are deployed, due to the highly dynamic, mobile and resource-constrained environments. As a result, the problem of designing user-friendly, secure and highly efficient information systems in such complex environments has become a major challenge for developers.
In this thesis, firstly, new formal models are proposed to analyse the secure information flow in cloud computing systems. Then, the opacity of workflows in cloud computing systems is investigated, a threat model is built for cloud computing systems, and the information leakage in such systems is analysed. This study can help cloud service providers and cloud subscribers to analyse the risks they take with the security of their assets and to make security-related decisions.
Secondly, a procedure is established to quantitatively evaluate the costs and benefits of implementing information security technologies. In this study, a formal system model for data resources in a dynamic environment is proposed, which focuses on the location of different classes of data resources as well as the users. Using such a model, the concurrent and probabilistic behaviour of the system can be analysed. Furthermore, efficient solutions are provided for the implementation of an information security system based on queueing theory and stochastic Petri nets. This part of the research can help information security officers to make well-judged information security investment decisions
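For the queueing-theoretic side of such a cost-benefit analysis, the standard M/M/1 formulas already give a feel for the trade-off between utilisation and response time; the arrival and service rates below are invented for illustration:

```python
# M/M/1 estimates for a single security checkpoint (e.g., an access-control
# service). Rates are illustrative assumptions.
lam, mu = 8.0, 10.0      # arrival rate and service rate (requests/second)

rho = lam / mu           # server utilisation; must be < 1 for stability
L = rho / (1 - rho)      # mean number of requests in the system
W = 1 / (mu - lam)       # mean response time; Little's law gives L = lam * W

print(rho, L, W)         # 0.8 4.0 0.5
```

Tightening security typically lowers mu (each request takes longer to vet), and formulas like these make the resulting latency cost explicit so it can be weighed against the security benefit.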
A formalism for describing and simulating systems with interacting components.
This thesis addresses the problem of descriptive complexity presented by systems involving a high number of interacting components. It investigates the evaluation measure of performability and its application to such systems. A new description and simulation language, ICE, and its application to performability modelling are presented. ICE (Interacting ComponEnts) is based upon an earlier description language which was first proposed for defining reliability problems. ICE is declarative in style and has a limited number of keywords. The ethos in the development of the language has been to provide an intuitive formalism with a powerful descriptive space. The full syntax of the language is presented with discussion as to its philosophy. The implementation of a discrete event simulator using an ICE interface is described, with use being made of examples to illustrate the functionality of the code and the semantics of the language. Random numbers are used to provide the required stochastic behaviour within the simulator. The behaviour of an industry-standard generator within the simulator and different methods of number allocation are shown. A new generator is proposed that is a development of a fast hardware shift register generator and is demonstrated to possess good statistical properties and operational speed. For the purpose of providing a rigorous description of the language and clarification of its semantics, a computational model is developed using the formalism of extended coloured Petri nets. This model also gives an indication of the language's descriptive power relative to that of a recognised and well-developed technique. Some recognised temporal and structural problems of system event modelling are identified, and ICE solutions are given. The growing research area of ATM communication networks is introduced and a sophisticated top-down model of an ATM switch presented. This model is simulated and interesting results are given.
A generic ICE framework for performability modelling is developed and demonstrated. This is considered as a positive contribution to the general field of performability research
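The family of fast shift-register generators mentioned above is exemplified by Marsaglia-style xorshift generators. The sketch below uses the well-known (13, 17, 5) shift triple purely as an illustration; it is not the thesis's actual generator:

```python
# A 32-bit xorshift generator: three shift-and-XOR steps per output, each an
# invertible linear map on the 32-bit state, so a non-zero seed never reaches
# zero and the sequence has a long period.
def xorshift32(state):
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

s = 2463534242          # any non-zero 32-bit seed
for _ in range(3):
    s = xorshift32(s)
    print(s)
```

Generators of this shape are attractive for simulators exactly as the thesis argues: a handful of shifts and XORs per draw, with statistical quality that can then be checked empirically against an industry-standard generator.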