Service Composition for Collective Adaptive Systems
Collective adaptive systems are large-scale resource-sharing systems which adapt to the demands of their users by redistributing resources to balance load or provide alternative services where the current provision is perceived to be insufficient. Smart transport systems are a primary example, where real-time location tracking records the location and availability of assets such as cycles for hire, or fleet vehicles such as buses, trains and trams. We consider the problem of an informed user optimising their journey using a composition of services offered by different service providers.
ASCENS: Engineering Autonomic Service-Component Ensembles
Today’s developers often face the demanding task of developing software for ensembles: systems with massive numbers of nodes, operating in open and non-deterministic environments with complex interactions, and the need to dynamically adapt to new requirements, technologies or environmental conditions without redeployment and without interruption of the system’s functionality. Conventional development approaches and languages do not provide adequate support for the problems posed by this challenge. The goal of the ASCENS project is to develop a coherent, integrated set of methods and tools to build software for ensembles. To this end we research foundational issues that arise during the development of these kinds of systems, and we build mathematical models that address them. Based on these theories we design a family of languages for engineering ensembles, formal methods that can handle the size, complexity and adaptivity required by ensembles, and software-development methods that provide guidance for developers. In this paper we provide an overview of several research areas of ASCENS: the SOTA approach to ensemble engineering and the underlying formal model called GEM, formal notions of adaptation and awareness, the SCEL language, quantitative analysis of ensembles, and finally software-engineering methods for ensembles.
Formal lumping of polynomial differential equations through approximate equivalences
It is well known that exact notions of model abstraction and reduction for dynamical systems may not be robust enough in practice because they are highly sensitive to the specific choice of parameters. In this paper we consider this problem for nonlinear ordinary differential equations (ODEs) with polynomial derivatives. We introduce a model reduction technique based on approximate differential equivalence, i.e., a partition of the set of ODE variables that performs an aggregation when the variables are governed by nearby derivatives. We develop algorithms to (i) compute the largest approximate differential equivalence; (ii) construct an approximately reduced model from the original one via an appropriate perturbation of the coefficients of the polynomials; and (iii) provide a formal certificate on the quality of the approximation as an error bound, computed as an over-approximation of the reachable set of the reduced model. Finally, we apply approximate differential equivalences to case studies on electric circuits, biological models, and polymerization reaction networks.
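As a minimal sketch of the idea (not the paper's algorithm), consider two hypothetical ODE variables x1, x2 with nearby linear drift coefficients: perturbing both coefficients to their mean makes the pair exactly lumpable into y = x1 + x2, and the gap between the original and reduced trajectories stays small.

```python
def euler(deriv, x0, dt, steps):
    """Forward Euler integration; returns the trajectory as a list of states."""
    x = list(x0)
    traj = [list(x)]
    for _ in range(steps):
        x = [xi + dt * di for xi, di in zip(x, deriv(x))]
        traj.append(list(x))
    return traj

# Two hypothetical variables with nearby drifts: dx1/dt = a1*x1, dx2/dt = a2*x2
a1, a2 = -2.0, -2.01
orig = euler(lambda x: [a1 * x[0], a2 * x[1]], [1.0, 1.0], 0.001, 1000)

# Approximate differential equivalence (sketch): perturb both coefficients to
# their mean a_bar, after which {x1, x2} lump exactly into y = x1 + x2 with
# dy/dt = a_bar * y.
a_bar = (a1 + a2) / 2.0
red = euler(lambda y: [a_bar * y[0]], [2.0], 0.001, 1000)

# Empirical gap between the original sum and the reduced variable; the paper
# instead certifies a formal error bound via reachability analysis.
err = max(abs(o[0] + o[1] - r[0]) for o, r in zip(orig, red))
print(err)
```

Here the perturbation of the coefficients is the step that turns an approximate equivalence into an exact lumping of the perturbed system; the paper bounds the cost of that perturbation formally.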
D-SPACE4Cloud: A Design Tool for Big Data Applications
Recent years have seen a steep rise in data generation worldwide, with the development and widespread adoption of several software projects targeting the Big Data paradigm. Many companies currently engage in Big Data analytics as part of their core business activities; nonetheless, there are no tools and techniques to support the design of the underlying hardware configuration backing such systems. In particular, the focus of this report is on Cloud-deployed clusters, which represent a cost-effective alternative to on-premises installations. We propose a novel tool implementing a battery of optimization and prediction techniques, integrated so as to efficiently assess several alternative resource configurations and determine the minimum-cost cluster deployment satisfying QoS constraints. Further, an experimental campaign conducted on real systems shows the validity and relevance of the proposed method.
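A toy sketch of the kind of design-space exploration involved, under assumed names (`min_cost_deployment`, `predict_rt`) and a purely illustrative response-time model rather than the tool's actual predictors:

```python
def min_cost_deployment(candidates, predict_rt, deadline):
    """Scan candidate cluster configurations and return the cheapest one whose
    predicted response time meets the QoS deadline (exhaustive toy version;
    the actual tool combines prediction models with optimization heuristics)."""
    feasible = [c for c in candidates if predict_rt(c) <= deadline]
    return min(feasible, key=lambda c: c["cost_per_hour"]) if feasible else None

# Hypothetical VM configurations and a toy response-time model ~ work / nodes
candidates = [{"nodes": n, "cost_per_hour": 0.5 * n} for n in range(2, 11)]
predict_rt = lambda c: 120.0 / c["nodes"]  # seconds, purely illustrative
best = min_cost_deployment(candidates, predict_rt, deadline=20.0)
print(best)
```

The point is the shape of the problem: a feasibility filter from a performance predictor, then cost minimisation over the surviving configurations.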
Lumpability for Uncertain Continuous-Time Markov Chains
The assumption of perfect knowledge of rate parameters in continuous-time Markov chains (CTMCs) is undermined when confronted with reality, where they may be uncertain due to lack of information or measurement noise. In this paper we consider uncertain CTMCs, where rates are assumed to vary non-deterministically with time within bounded continuous intervals. This leads to a semantics which associates each state with the reachable set of its probability under all possible choices of the uncertain rates. We develop a notion of lumpability which identifies a partition of states where each block preserves the reachable set of the sum of its probabilities, essentially lifting the well-known CTMC ordinary lumpability to the uncertain setting. We pursue this analogy with two further contributions: a logical characterization of uncertain CTMC lumping in terms of continuous stochastic logic, and a polynomial time and space algorithm for the minimization of uncertain CTMCs by partition refinement, using the CTMC lumping algorithm as an inner step. As a case study, we show that the minimizations of a substantial number of CTMC models reported in the literature are robust with respect to uncertainties around their original, fixed rate values.
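The fixed-rate ordinary lumpability condition that the paper lifts to the uncertain setting can be sketched as follows (a classical check on a concrete generator matrix, not the paper's interval-valued algorithm):

```python
def lump_rates(Q, block_from, block_to):
    """Aggregate rate from each state in block_from into block_to."""
    return [sum(Q[s][t] for t in block_to) for s in block_from]

def is_ordinary_lumping(Q, partition, tol=1e-9):
    """CTMC ordinary lumpability (fixed rates): every state in a block must
    have the same total rate into every other block."""
    for B in partition:
        for C in partition:
            if B is C:
                continue
            r = lump_rates(Q, B, C)
            if max(r) - min(r) > tol:
                return False
    return True

# 3-state generator matrix; states 1 and 2 behave identically towards state 0
Q = [[-2.0,  1.0,  1.0],
     [ 3.0, -3.0,  0.0],
     [ 3.0,  0.0, -3.0]]
print(is_ordinary_lumping(Q, [[0], [1, 2]]))  # {1, 2} lumps into one block
print(is_ordinary_lumping(Q, [[0, 1], [2]]))  # 0 and 1 differ towards {2}
```

In the uncertain setting each entry of Q becomes an interval, and the paper's contribution is showing when this blockwise condition still preserves the reachable set of the summed probabilities.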
Analysis of Petri Net Models through Stochastic Differential Equations
It is well known, mainly because of the work of Kurtz, that density-dependent Markov chains can be approximated by sets of ordinary differential equations (ODEs) when their indexing parameter grows very large. This approximation cannot capture the stochastic nature of the process and, consequently, can provide an erroneous view of the behavior of the Markov chain if the indexing parameter is not sufficiently high. Important phenomena that cannot be revealed include non-negligible variance and bi-modal population distributions. A less-known approximation, also proposed by Kurtz, applies stochastic differential equations (SDEs) and provides information about the stochastic nature of the process. In this paper we apply and extend this diffusion approximation to study stochastic Petri nets. We identify a class of nets whose underlying stochastic process is a density-dependent Markov chain, whose indexing parameter is a multiplicative constant identifying the population level expressed by the initial marking, and we provide means to automatically construct the associated set of SDEs. Since the diffusion approximation of Kurtz considers the process only up to the time when it first exits an open interval, we extend the approximation with a machinery that mimics the behavior of the Markov chain at the boundary, which allows the approach to be applied to a wider set of problems. The resulting process is of the jump-diffusion type. We illustrate by examples that the jump-diffusion approximation, which extends to bounded domains, can be much more informative than the ODE-based one, as it can provide accurate quantity distributions even when they are multi-modal and even for relatively small population levels. Moreover, we show that the method is faster than simulating the original Markov chain.
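A hedged illustration of the diffusion approximation on a hypothetical density-dependent birth-death chain (births at rate λN, deaths at rate μx), not the paper's Petri-net construction and without its boundary machinery: the Euler-Maruyama scheme retains the O(1/√N) noise term that the fluid ODE discards.

```python
import math, random

random.seed(1)

LAM, MU, N = 1.0, 1.0, 200  # hypothetical rates and population scale

def sde_sample(z0, dt, steps):
    """Euler-Maruyama for the diffusion approximation of the density z = x/N:
    dz = (LAM - MU*z) dt + sqrt((LAM + MU*z) / N) dW."""
    z = z0
    for _ in range(steps):
        noise = math.sqrt(max(LAM + MU * z, 0.0) / N) * math.sqrt(dt)
        z += (LAM - MU * z) * dt + noise * random.gauss(0.0, 1.0)
    return z

def fluid(z0, dt, steps):
    """The ODE (fluid) limit simply drops the noise term."""
    z = z0
    for _ in range(steps):
        z += (LAM - MU * z) * dt
    return z

samples = [sde_sample(0.0, 0.01, 500) for _ in range(200)]
sde_mean = sum(samples) / len(samples)
ode_z = fluid(0.0, 0.01, 500)
# The SDE mean tracks the ODE, but the samples additionally expose the
# variance around it, which the ODE cannot represent at all.
print(sde_mean, ode_z)
```

For this linear drift the SDE mean coincides with the ODE trajectory, so the added value of the diffusion is entirely in the distributional information around it; the paper's jump-diffusion extension is what keeps such samples meaningful when the state hits a boundary.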
Process algebra modelling styles for biomolecular processes
We investigate how biomolecular processes are modelled in process algebras, focussing on chemical reactions. We consider various modelling styles and how design decisions made in the definition of the process algebra affect how a modelling style can be applied. Our goal is to highlight the often implicit choices that modellers make in choosing a formalism, and to illustrate, through examples, how this can affect expressiveness as well as the type and complexity of the analysis that can be performed.
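One of the styles such comparisons usually cover, the population-based view of chemical reactions, can be sketched with Gillespie's direct method for a hypothetical reaction A + B → C (an illustration of the underlying stochastic semantics, not tied to any particular process algebra):

```python
import random

random.seed(7)

def gillespie(counts, reactions, t_end):
    """Gillespie's direct method over molecule counts (population-based style).
    Each reaction is (rate constant, reactant list, count changes)."""
    counts = dict(counts)
    t = 0.0
    while True:
        props = []
        for rate, reactants, change in reactions:
            a = rate
            for sp in reactants:
                a *= counts[sp]  # mass-action propensity
            props.append((a, change))
        total = sum(a for a, _ in props)
        if total == 0.0:
            break  # no reaction can fire any more
        t += random.expovariate(total)
        if t > t_end:
            break
        r, acc = random.uniform(0.0, total), 0.0
        for a, change in props:
            acc += a
            if r <= acc:
                for sp, d in change.items():
                    counts[sp] += d
                break
    return counts

# Hypothetical reaction A + B -> C with rate constant 0.01
final = gillespie({"A": 100, "B": 100, "C": 0},
                  [(0.01, ["A", "B"], {"A": -1, "B": -1, "C": +1})], 10.0)
print(final)  # A + C stays 100: the reaction conserves this sum
```

An individual-based process-algebra encoding would instead represent each molecule as a separate process; which of the two is expressible hinges exactly on the design decisions the paper examines.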
Performance Degradation and Cost Impact Evaluation of Privacy Preserving Mechanisms in Big Data Systems
Big Data is an emerging area and concerns managing datasets whose size is beyond the ability of commonly used software tools to capture, process, and analyze in a timely way. The Big Data software market is growing at a 32% compound annual rate, almost four times faster than the whole ICT market, and the quantity of data to be analyzed is expected to double every two years.
Security and privacy are becoming very urgent Big Data aspects that need to be tackled. Indeed, users share more and more personal data and user-generated content through their mobile devices and computers with social networks and cloud services, losing control of their data and content, with a serious impact on their own privacy. Privacy in particular has been the subject of serious debate recently, and many governments require data providers and companies to protect users’ sensitive data. To mitigate these problems, many solutions have been developed to provide data privacy but, unfortunately, they introduce some computational overhead when data is processed.
The goal of this paper is to quantitatively evaluate the performance and cost impact of multiple privacy protection mechanisms. A real industry case study concerning tax fraud detection has been considered, and many experiments have been performed to analyze the performance degradation and the additional cost (required to provide a given service level) of running applications in a cloud system.