Quantitative analysis of distributed systems
PhD Thesis
Computing Science addresses the security of real-life systems by using
various security-oriented technologies (e.g., access control solutions
and resource allocation strategies). These security technologies
significantly increase the operational costs of the organizations in
which the systems are deployed, due to the highly dynamic, mobile and
resource-constrained environments in which they operate. As a result,
designing user-friendly, secure and highly efficient information systems
in such complex environments has become a major challenge for
developers.
In this thesis, new formal models are first proposed to analyse secure
information flow in cloud computing systems. Then, the opacity of
workflows in cloud computing systems is investigated, a threat
model is built for cloud computing systems, and the information leakage
in such systems is analysed. This study can help cloud service
providers and cloud subscribers to analyse the risks they take with
the security of their assets and to make security-related decisions.
Secondly, a procedure is established to quantitatively evaluate the
costs and benefits of implementing information security technologies.
In this study, a formal system model for data resources in a dynamic
environment is proposed, which focuses on the location of different
classes of data resources as well as the users. Using such a model, the
concurrent and probabilistic behaviour of the system can be analysed.
Furthermore, efficient solutions are provided for the implementation of
information security systems based on queueing theory and stochastic
Petri nets. This part of the research can help information security officers
to make well-judged information security investment decisions.
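As a rough illustration of the kind of cost/benefit reasoning that queueing theory supports, the sketch below computes the steady-state delay of an M/M/1 queue and a combined capacity-plus-delay cost for two candidate service rates. The model, cost weights and rates are hypothetical placeholders, not figures or methods from the thesis.

```python
# Illustrative sketch: a security control modelled as an M/M/1 queue.
# All rates and cost weights below are hypothetical, for demonstration only.

def mm1_metrics(arrival_rate, service_rate):
    """Return utilisation, mean number in system and mean sojourn time
    for a stable M/M/1 queue (requires arrival_rate < service_rate)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate          # utilisation
    l = rho / (1.0 - rho)                      # mean number of requests in system
    w = 1.0 / (service_rate - arrival_rate)    # mean time a request spends in system
    return rho, l, w

def total_cost(arrival_rate, service_rate, cost_per_capacity, cost_per_wait):
    """Operating cost of the control plus the delay cost imposed on users."""
    _, _, w = mm1_metrics(arrival_rate, service_rate)
    return cost_per_capacity * service_rate + cost_per_wait * arrival_rate * w

# Compare two candidate service capacities for the same workload.
for mu in (12.0, 20.0):
    print(mu, round(total_cost(arrival_rate=10.0, service_rate=mu,
                               cost_per_capacity=1.0, cost_per_wait=5.0), 3))
```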
MultiVeStA: Statistical Model Checking for Discrete Event Simulators
The modeling, analysis and performance evaluation of large-scale systems are difficult tasks. Due to the size and complexity of the systems considered, an approach typically followed by engineers consists in performing simulations of system models to obtain statistical estimations of quantitative properties. Similarly, a technique used by computer scientists working on quantitative analysis is Statistical Model Checking (SMC), where rigorous mathematical languages (typically logics) are used to express system properties of interest. Such properties can then be automatically estimated by tools performing simulations of the model at hand. These property specification languages, often not popular among engineers, provide a formal, compact and elegant way to express system properties without needing to hard-code them in the model definition. This paper presents MultiVeStA, a statistical analysis tool which can be easily integrated with existing discrete event simulators, enriching them with efficient distributed statistical analysis and SMC capabilities.
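For intuition about the statistical estimation that SMC tools automate, the following sketch runs independent simulations until the 95% confidence interval of the estimated property value is narrow enough. The simulate_once stub and the stopping rule are illustrative assumptions, not MultiVeStA's actual interface.

```python
# Minimal sketch of the estimation loop behind SMC-style tools:
# simulate until the confidence interval is below a target half-width.
import math
import random

def simulate_once():
    # Placeholder simulator: returns 1 if the property held on this run, else 0.
    return 1 if random.random() < 0.3 else 0

def estimate(target_halfwidth=0.01, z=1.96, min_runs=30):
    n, total, total_sq = 0, 0.0, 0.0
    while True:
        x = simulate_once()
        n += 1
        total += x
        total_sq += x * x
        if n >= min_runs:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            halfwidth = z * math.sqrt(var / n)
            if halfwidth <= target_halfwidth:
                return mean, halfwidth, n

print(estimate())  # (estimated probability, CI half-width, number of runs)
```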
Specifying and analysing reputation systems with coordination languages
Reputation systems are nowadays widely used to support decision making in networked systems. Parties in such systems rate each other and use the shared ratings to compute reputation scores that drive their interactions. The existence of reputation systems with remarkable differences calls for formal approaches to their analysis. We present a verification methodology for reputation systems that is based on the coordination language Klaim and related analysis tools. First, we define a parametric Klaim specification of a reputation system that can be instantiated with different reputation models. Then, we consider a stochastic specification obtained by assigning actions random (exponentially distributed) durations. The resulting specification enables quantitative analysis of properties of the considered system. The feasibility and effectiveness of our proposal are demonstrated by reporting on the analysis of two reputation models.
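As a concrete example of one reputation model that such a parametric specification could be instantiated with, the sketch below computes beta-model scores from shared positive/negative ratings. The model choice, names and numbers are illustrative, not taken from the paper.

```python
# Hedged sketch of the beta reputation model: the score is the expected value
# of a Beta(positive+1, negative+1) posterior, i.e. the probability that the
# next interaction with the party is good.
def beta_reputation(positive, negative):
    return (positive + 1) / (positive + negative + 2)

# Parties aggregate shared ratings and interact with the highest-scored peer.
ratings = {"p1": (8, 2), "p2": (3, 0), "p3": (20, 15)}  # (good, bad) counts
scores = {p: beta_reputation(g, b) for p, (g, b) in ratings.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```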
A framework for the local information dynamics of distributed computation in complex systems
The nature of distributed computation has often been described in terms of
the component operations of universal computation: information storage,
transfer and modification. We review the first complete framework that
quantifies each of these individual information dynamics on a local scale
within a system, and describes the manner in which they interact to create
non-trivial computation where "the whole is greater than the sum of the parts".
We describe the application of the framework to cellular automata, a simple yet
powerful model of distributed computation. This is an important application,
because the framework is the first to provide quantitative evidence for several
important conjectures about distributed computation in cellular automata: that
blinkers embody information storage, particles are information transfer agents,
and particle collisions are information modification events. The framework is
also shown to contrast the computations conducted by several well-known
cellular automata, highlighting the importance of information coherence in
complex computation. The results reviewed here provide important quantitative
insights into the fundamental nature of distributed computation and the
dynamics of complex systems, as well as impetus for the framework to be applied
to the analysis and design of other systems.
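A minimal sketch of one measure in this family, assuming a plug-in frequency estimator: local active information storage for a single binary time series, which scores how much a cell's past predicts its next state at each time step. This is an illustrative re-implementation, not the authors' framework or toolkit, and the history length and example series are arbitrary.

```python
# Local active information storage a(n) = log2 p(past, next) / (p(past) p(next)),
# estimated with plug-in frequency counts over one binary time series.
import math
from collections import Counter

def local_active_info_storage(series, k=2):
    pairs = [(tuple(series[n - k:n]), series[n]) for n in range(k, len(series))]
    joint = Counter(pairs)                 # counts of (past history, next value)
    past = Counter(p for p, _ in pairs)    # counts of past histories
    nxt = Counter(x for _, x in pairs)     # counts of next values
    total = len(pairs)
    return [math.log2((joint[(p, x)] / total) /
                      ((past[p] / total) * (nxt[x] / total)))
            for p, x in pairs]

series = [0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]  # e.g. a "blinker"-like cell
print([round(a, 3) for a in local_active_info_storage(series)])
```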
MOLNs: A cloud platform for interactive, reproducible and scalable spatial stochastic computational experiments in systems biology using PyURDME
Computational experiments using spatial stochastic simulations have led to
important new biological insights, but they require specialized tools, a
complex software stack, as well as large and scalable compute and data analysis
resources due to the large computational cost associated with Monte Carlo
computational workflows. The complexity of setting up and managing a
large-scale distributed computation environment to support productive and
reproducible modeling can be prohibitive for practitioners in systems biology.
This results in a barrier to the adoption of spatial stochastic simulation
tools, effectively limiting the type of biological questions addressed by
quantitative modeling. In this paper, we present PyURDME, a new, user-friendly
spatial modeling and simulation package, and MOLNs, a cloud computing appliance
for distributed simulation of stochastic reaction-diffusion models. MOLNs is
based on IPython and provides an interactive programming platform for
development of sharable and reproducible distributed parallel computational
experiments.
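For readers unfamiliar with the Monte Carlo kernel underlying such workflows, the sketch below implements a plain (well-mixed) Gillespie stochastic simulation of a birth-death process. It is a generic illustration of the algorithm family that spatial tools extend to reaction-diffusion on meshes, not PyURDME's API or any model from the paper.

```python
# Generic Gillespie (SSA) sketch for a birth-death process; rates are arbitrary.
import random

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0):
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a_birth = k_birth                 # propensity of 0 -> X
        a_death = k_death * x             # propensity of X -> 0
        a_total = a_birth + a_death
        t += random.expovariate(a_total)  # time to next reaction event
        if random.random() * a_total < a_birth:
            x += 1                        # birth reaction fired
        else:
            x -= 1                        # death reaction fired
        trajectory.append((t, x))
    return trajectory

print(ssa_birth_death()[-1])  # final (time, copy number); mean level is k_birth / k_death
```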
A HOLISTIC APPROACH FOR SECURITY REQUIREMENT SPECIFICATION FOR LOW-COST, DISTRIBUTED UBIQUITOUS SYSTEMS
The class of low-cost, distributed ubiquitous systems represents a computing mode in which a system consists of small, inexpensive networked processing devices, distributed at all scales throughout business activities and everyday life. The unique features of such a class of ubiquitous systems make their security analysis different from that of centralized computing paradigms. This paper presents a holistic approach to security requirement analysis for low-cost, distributed ubiquitous systems. Rigorous security analysis needs both quantitative and qualitative approaches to produce a holistic view and robust data regarding the security features that a system must have in order to meet users’ security expectations. Our framework can assist system administrators in specifying key security properties for a low-cost, distributed ubiquitous system and in defining the specific security requirements for such a system. We apply Bayesian networks and stochastic process algebra to incorporate probabilistic analysis into the framework.
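A hedged sketch of how a small Bayesian network can attach probabilities to a security property: the network structure, variable names and probabilities below are hypothetical, chosen only to show the marginalisation step, and are not the framework's actual model.

```python
# Tiny Bayesian network: breach depends on weak authentication and unpatched devices.
from itertools import product

p_weak_auth = 0.2   # P(weak authentication)
p_unpatched = 0.3   # P(device unpatched)
p_breach = {(True, True): 0.9, (True, False): 0.5,
            (False, True): 0.4, (False, False): 0.05}  # P(breach | parents)

def prob_breach():
    """Marginalise over the parents to get P(breach)."""
    total = 0.0
    for wa, up in product((True, False), repeat=2):
        pw = p_weak_auth if wa else 1 - p_weak_auth
        pu = p_unpatched if up else 1 - p_unpatched
        total += pw * pu * p_breach[(wa, up)]
    return total

print(round(prob_breach(), 4))  # marginal probability that the asset is breached
```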
Comparing conventional and distributed approaches to simulation in complex supply-chain health systems
Decision making in modern supply chains can be extremely daunting due to their complex nature. Discrete-event simulation is a technique that can support decision making by providing what-if analysis and evaluation of quantitative data. However, modelling supply chain systems can result in very large and complicated models that can take a long time to run even with today's powerful desktop computers. Distributed simulation has been suggested as a possible solution to this problem, as it enables the use of multiple computers to run a model. To investigate this claim, this paper presents experiences in implementing a simulation model with a 'conventional' approach and with a distributed approach. This study takes place in a healthcare setting, the supply chain of blood from donor to recipient. The study compares conventional and distributed execution times of a supply chain model simulated in the simulation package Simul8. The results show that the execution time of the conventional approach increases almost linearly with the size of the system and with the simulation run period. The execution time of the distributed approach, however, grows more favourably with system size and run period, and the approach appears to offer a practical alternative. On this basis, the paper concludes that distributed simulation can be successfully applied in certain situations.
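One simple way to see the trade-off between sequential and parallel execution is to farm independent replications of a model across processes, as in the sketch below. This is not the paper's approach, which distributes a single Simul8 model run across machines, but it illustrates the kind of execution-time comparison the study performs; the workload function is a placeholder.

```python
# Compare sequential vs. process-parallel execution of independent replications.
import time
from multiprocessing import Pool

def run_replication(seed):
    # Placeholder for one simulation replication: a CPU-bound busy loop.
    total = 0
    for i in range(2_000_000):
        total += (i * seed) % 7
    return total

if __name__ == "__main__":
    seeds = list(range(8))

    start = time.perf_counter()
    sequential = [run_replication(s) for s in seeds]
    t_seq = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:
        parallel = pool.map(run_replication, seeds)
    t_par = time.perf_counter() - start

    print(f"sequential {t_seq:.2f}s, parallel {t_par:.2f}s")
```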