Statistical analysis of chemical computational systems with MULTIVESTA and ALCHEMIST
The chemical-oriented approach is an emerging paradigm for programming the behaviour of densely distributed and context-aware devices (e.g. in ecosystems of displays tailored to crowd steering, or to profile-based coordinated visualisation). The evolution of such systems is typically hard to predict, making the availability of techniques and tools supporting prior-to-deployment analysis of paramount importance. Exact analysis techniques do not scale well as system complexity grows; as a consequence, approximate techniques based on simulation have assumed a relevant role. This work presents a new simulation-based distributed tool for the statistical analysis of such systems, obtained by chaining two existing tools: MultiVeStA and Alchemist. The former is a recently proposed lightweight tool that enriches existing discrete event simulators with distributed statistical analysis capabilities, while the latter is an efficient simulator for chemical-oriented computational systems. The tool is validated against a crowd steering scenario, and insights on performance are provided by discussing how it scales when the analysis tasks are distributed over a multi-core architecture.
MultiVeStA: Statistical Model Checking for Discrete Event Simulators
The modeling, analysis and performance evaluation of large-scale systems are difficult tasks. Due to the size and complexity of the systems considered, an approach typically followed by engineers consists in simulating system models to obtain statistical estimations of quantitative properties. Similarly, computer scientists working on quantitative analysis use Statistical Model Checking (SMC), where rigorous mathematical languages (typically logics) are used to express the system properties of interest. Such properties can then be automatically estimated by tools performing simulations of the model at hand. These property specification languages, often unpopular among engineers, provide a formal, compact and elegant way to express system properties without hard-coding them in the model definition. This paper presents MultiVeStA, a statistical analysis tool which can be easily integrated with existing discrete event simulators, enriching them with efficient distributed statistical analysis and SMC capabilities.
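The core technique described above, estimating a quantity by running independent simulations until the confidence interval is tight enough, can be sketched as follows. This is a hypothetical illustration of the general statistical-estimation loop, not MultiVeStA's actual API; all names are made up for the example.

```java
import java.util.Random;
import java.util.function.Supplier;

// Hypothetical sketch of simulation-based statistical estimation: keep
// running independent simulations until the confidence interval around
// the estimated mean is narrower than a chosen threshold.
public class SmcSketch {
    public static double estimate(Supplier<Double> oneSimulation,
                                  double z,      // normal quantile, e.g. 1.96 for a 95% CI
                                  double delta,  // target CI half-width
                                  int batch) {   // simulations between checks
        double sum = 0, sumSq = 0;
        long n = 0;
        double halfWidth;
        do {
            for (int i = 0; i < batch; i++) {
                double x = oneSimulation.get();
                sum += x;
                sumSq += x * x;
                n++;
            }
            double mean = sum / n;
            double var = (sumSq - n * mean * mean) / (n - 1); // sample variance
            halfWidth = z * Math.sqrt(Math.max(var, 0) / n);
        } while (halfWidth > delta);
        return sum / n;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Estimate the probability that a simulated run "succeeds" (true rate: 0.7).
        double p = estimate(() -> rng.nextDouble() < 0.7 ? 1.0 : 0.0, 1.96, 0.01, 1000);
        System.out.println(p);
    }
}
```

A tool such as MultiVeStA additionally distributes the simulation runs and lets the property be written in a dedicated specification language rather than hard-coded as in this sketch.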
Simulation of Large Scale Computational Ecosystems with Alchemist: A Tutorial
Many interesting systems in several disciplines can be modeled as networks of nodes that store and exchange data: pervasive systems, edge computing scenarios, and even biological and bio-inspired systems. These systems feature inherent complexity, and simulation is often the preferred (and sometimes the only) way of investigating their behavior, both in the design phase and in the verification and testing phase. In this tutorial paper, we provide a guide to the simulation of such systems by leveraging Alchemist, an existing research tool used in several works in the literature. We introduce its meta-model and its extensible architecture; we discuss reference examples of increasing complexity; and we finally show how to configure the tool to automatically execute multiple repetitions of simulations with different controlled variables, achieving reliable and reproducible results.
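As a rough illustration of that last point: Alchemist simulations are described in YAML, and a variables block of the following general shape lets the tool sweep a controlled variable (here, a random seed) across batch repetitions. This fragment is a best-effort sketch; the exact keys and their semantics should be checked against the Alchemist documentation.

```yaml
incarnation: protelis
variables:
  seed: &seed        # controlled variable: batch mode runs one simulation per value
    min: 0
    max: 9
    step: 1
    default: 0
seeds:
  scenario: *seed    # reuse the swept value for reproducible randomness
  simulation: *seed
```

Running the tool in batch mode over `seed` would then execute ten repetitions, each fully reproducible from its seed.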
A Collective Adaptive Approach to Decentralised k-Coverage in Multi-robot Systems
We focus on the online multi-object k-coverage problem (OMOkC), where mobile robots are required to sense a mobile target from k diverse points of view, coordinating themselves in a scalable and possibly decentralised way. There is active research on OMOkC, particularly in the design of decentralised algorithms for solving it. We propose a new take on the issue: rather than developing yet another algorithm in the classical way, we apply a macro-level paradigm, called aggregate computing, specifically designed to directly program the global behaviour of a whole ensemble of devices at once. To understand the potential of applying aggregate computing to OMOkC, we extend the Alchemist simulator (which supports aggregate computing natively) with a novel toolchain component supporting the simulation of mobile robots. This way, we build a software engineering toolchain comprising language and simulation tooling for addressing OMOkC. Finally, we exercise our approach and the related toolchain by introducing new algorithms for OMOkC; we show that they can be expressed concisely, reuse existing software components, and perform better than the current state of the art in terms of coverage over time and number of objects covered overall.
Cross-simulator integration: ns3 as a network simulation back-end for Alchemist
Innovative distributed systems are often studied with the aid of simulation, especially in the case of large-scale and situated systems. One of the key aspects of distributed systems is the presence of a set of nodes which must communicate with each other in order to perform their collective task. Consequently, the behaviour of the network plays a key role in determining how the distributed system will act as a whole; however, support for realistic simulation of network communication may not be available in simulators that focus on higher-level phenomena, such as the execution of a program on the nodes belonging to a distributed system. Network simulation is usually performed with dedicated simulators which, on the other hand, mostly focus on low-level aspects, such as the behaviour of the physical channels and of the network protocols. The present work aims at filling this gap between high-level distributed system simulation and low-level network simulation by creating a cross-simulator integration between Alchemist, a simulator for large-scale situated distributed systems, and ns3, a network simulator, giving Alchemist the ability to accurately simulate the network interactions between nodes. Finally, the whole system has been tested to demonstrate how different network setups can affect the execution of a program in a distributed system.
Comparative Benchmarking of Multithreading Solutions for JVM Languages: the case of the Alchemist Simulator
Re-implementing sequential algorithms with parallelization support often leads to a tangible improvement in execution time and throughput. Designing for multithreading is especially beneficial in the context of long-running computations such as discrete event simulation, where even a relatively small increase in throughput may cut simulation time by a significant cumulative margin. Discrete event simulation is a notoriously challenging domain to parallelize, owing to the nature of discrete events and the need to account for and resolve causality conflicts. A truly deterministic solution is difficult, and not always possible for the more complex simulation models and domains. However, a compromise may be reached by relaxing the determinism constraints and adopting the so-called optimistic approach to conflict resolution, sacrificing some degree of predictability and determinism for performance.
Furthermore, when assessing the goodness of a solution, a naive approach is not sufficient. The programming languages running on the JVM have quirks and properties that must be taken into consideration to produce valuable and insightful results; hence, a thorough method of testing and benchmarking is necessary.
Recent developments in the Java programming language have introduced novel ways of structuring multithreaded code, such as virtual threads, which compete with a more mature analogous implementation found in the Kotlin programming language.
This thesis project explores the optimistic parallelization of the general discrete event simulator Alchemist, building a robust benchmarking harness and testbed and comparing the results of equivalent implementations of the algorithm with traditional Java threads, the new Java virtual threads, and the consolidated Kotlin coroutines.
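The benchmarking discipline mentioned above, warming the JVM up before timing and keeping results alive so the JIT cannot eliminate the workload, can be sketched in plain Java. This is an illustration of the principle only, not the thesis's actual harness (real harnesses, e.g. JMH, do considerably more).

```java
// Minimal benchmarking-harness sketch for JVM code: time only after a warmup
// phase (so the JIT has compiled the hot path) and consume the workload's
// result (so dead-code elimination cannot remove it).
public class BenchSketch {
    public interface Workload { long run(); }

    public static double measure(Workload w, int warmup, int iterations) {
        long sink = 0;
        for (int i = 0; i < warmup; i++) sink += w.run();   // warmup: trigger JIT compilation
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) sink += w.run();
        double nanosPerOp = (System.nanoTime() - start) / (double) iterations;
        if (sink == 42) System.out.println(sink);           // keep the result alive
        return nanosPerOp;
    }

    public static void main(String[] args) {
        Workload sum = () -> { long s = 0; for (int i = 0; i < 10_000; i++) s += i; return s; };
        System.out.println(measure(sum, 1_000, 1_000) + " ns/op");
    }
}
```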
Resilient Blocks for Summarising Distributed Data
Summarising distributed data is a central routine for parallel programming,
lying at the core of widely used frameworks such as the map/reduce paradigm. In
the IoT context it is even more crucial, being a privileged means to enable
long-range interactions: in fact, summarising is needed to avoid data explosion
in each computational unit.
We introduce a new algorithm for dynamic summarising of distributed data,
weighted multi-path, improving over the state-of-the-art multi-path algorithm.
We validate the new algorithm in an archetypal scenario, taking into account
sources of volatility of many sorts and comparing it to other existing
implementations. We thus show that weighted multi-path retains adequate
accuracy even in high-variability scenarios where the other algorithms
diverge significantly from the correct values.
Comment: In Proceedings ALP4IoT 2017, arXiv:1802.0097
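The idea behind multi-path summarisation can be illustrated with a toy, synchronous sketch: each node splits its partial aggregate among all neighbours closer to the sink, so the total is preserved along multiple redundant paths instead of a single fragile spanning tree. This shows only the plain multi-path principle; the paper's weighted multi-path algorithm refines how the shares are weighted.

```java
import java.util.*;

// Toy multi-path summarisation: sum values over a network into a sink by
// splitting each node's partial aggregate among ALL neighbours with a lower
// hop-count "potential", rather than routing along one spanning tree.
public class MultiPathSketch {
    public static double collect(int sink, double[] value, List<List<Integer>> adj) {
        int n = value.length;
        // Hop-count potential field, computed by BFS from the sink.
        int[] pot = new int[n];
        Arrays.fill(pot, Integer.MAX_VALUE);
        pot[sink] = 0;
        ArrayDeque<Integer> queue = new ArrayDeque<>(List.of(sink));
        while (!queue.isEmpty()) {
            int u = queue.poll();
            for (int v : adj.get(u))
                if (pot[v] == Integer.MAX_VALUE) { pot[v] = pot[u] + 1; queue.add(v); }
        }
        // Accumulate values "downhill", farthest nodes first, splitting equally
        // among all neighbours closer to the sink: the total is preserved.
        double[] acc = value.clone();
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Integer.compare(pot[b], pot[a]));
        for (int u : order) {
            if (u == sink) continue;
            List<Integer> downhill = new ArrayList<>();
            for (int v : adj.get(u)) if (pot[v] < pot[u]) downhill.add(v);
            for (int v : downhill) acc[v] += acc[u] / downhill.size();
        }
        return acc[sink];
    }

    public static void main(String[] args) {
        // Diamond network 0-1, 0-2, 1-3, 2-3; node 0 is the sink.
        List<List<Integer>> adj = List.of(
            List.of(1, 2), List.of(0, 3), List.of(0, 3), List.of(1, 2));
        System.out.println(collect(0, new double[]{1, 1, 1, 1}, adj)); // total of all values
    }
}
```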
Engineering Resilient Collective Adaptive Systems by Self-Stabilisation
Collective adaptive systems are an emerging class of networked computational
systems, particularly suited to application domains such as smart cities,
complex sensor networks, and the Internet of Things. These systems tend to
feature large scale, heterogeneity of communication model (including
opportunistic peer-to-peer wireless interaction), and require inherent
self-adaptiveness properties to address unforeseen changes in operating
conditions. In this context, it is extremely difficult (if not seemingly
intractable) to engineer reusable pieces of distributed behaviour so as to make
them provably correct and smoothly composable.
Building on the field calculus, a computational model (and associated
toolchain) capturing the notion of aggregate network-level computation, we
address this problem with an engineering methodology coupling formal theory and
computer simulation. On the one hand, functional properties are addressed by
identifying the largest-to-date field calculus fragment generating
self-stabilising behaviour, guaranteed to eventually attain a correct and
stable final state despite any transient perturbation in state or topology, and
including highly reusable building blocks for information spreading,
aggregation, and time evolution. On the other hand, dynamical properties are
addressed by simulation, empirically evaluating the different performances that
can be obtained by switching between implementations of building blocks with
provably equivalent functional properties. Overall, our methodology sheds light
on how to identify core building blocks of collective behaviour, and how to
select implementations that improve system performance while leaving overall
system function and resiliency properties unchanged.
Comment: To appear in ACM Transactions on Modeling and Computer Simulation
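The "information spreading" building block mentioned above is typically a gradient: each node repeatedly recomputes its distance estimate purely from its neighbours' current values, so the field eventually recovers from any transient perturbation of state. A toy synchronous sketch of this self-stabilising behaviour follows; actual field calculus semantics are asynchronous and considerably richer.

```java
import java.util.Arrays;

// Toy synchronous gradient: each round, every non-source node takes the
// minimum over neighbours of (neighbour estimate + link distance). Whatever
// the initial state, the field converges to true distances from the sources.
public class GradientSketch {
    public static double[] gradient(boolean[] source, double[][] dist, int rounds) {
        int n = source.length;
        double[] g = new double[n];
        Arrays.fill(g, Double.POSITIVE_INFINITY); // arbitrary start: it self-stabilises anyway
        for (int round = 0; round < rounds; round++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                if (source[i]) { next[i] = 0; continue; }
                double best = Double.POSITIVE_INFINITY;
                for (int j = 0; j < n; j++)
                    if (j != i && dist[i][j] > 0)  // dist[i][j] > 0 means i and j are neighbours
                        best = Math.min(best, g[j] + dist[i][j]);
                next[i] = best;
            }
            g = next;
        }
        return g;
    }

    public static void main(String[] args) {
        // Line of four nodes with unit-distance links; node 0 is the source.
        double[][] d = {{0, 1, 0, 0}, {1, 0, 1, 0}, {0, 1, 0, 1}, {0, 0, 1, 0}};
        double[] g = gradient(new boolean[]{true, false, false, false}, d, 10);
        System.out.println(Arrays.toString(g)); // distances 0, 1, 2, 3
    }
}
```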
A Reinforcement Learning approach to discriminate unsafe devices in aggregate computing systems
Reinforcement learning is a machine learning approach that has been studied for many years, and interest in it has grown considerably in recent times. Its purpose is to create autonomous agents able to sense and act in their environment, learning to choose the optimal actions for achieving their goals so as to maximise a cumulative reward.
Aggregate programming is a paradigm that supports the large-scale programming of adaptive systems by focusing on the behaviour of the whole ensemble rather than that of the single devices. One promising aggregate programming approach is based on the field calculus, which allows aggregate programs to be defined as the functional composition of computational fields.
A topic of interest related to Aggregate Computing is computer security. Aggregate Computing systems are, in fact, vulnerable to security threats due to their distributed nature, situatedness and openness, which allow participant nodes to leave and join the computation at any time.
A solution combining reinforcement learning, aggregate computing and security would be an interesting and innovative approach, especially because no experiments so far have included this combination.
The goal of this thesis is to implement a Scala library for reinforcement learning that can be easily integrated with the aggregate computing context. Starting from existing work on trust computation in aggregate applications, we want to train a network, via reinforcement learning, which -- through the calculation of the gradient, a fundamental pattern of collective coordination -- is able to identify and discriminate compromised nodes.
The dissertation work focused on: 1. the development of a generic Scala library implementing the reinforcement learning approach in accordance with an aggregate computing model; 2. the development of a reinforcement learning based solution; 3. the integration of the solution with the computation of the trust gradient.
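The reinforcement learning core such a library must provide can be illustrated with a minimal tabular Q-learning loop on a toy problem. This is a generic sketch of the cumulative-reward update rule only, not the thesis's Scala library nor its trust-gradient integration.

```java
import java.util.Random;

// Tabular Q-learning on a tiny chain MDP: states 0..4, actions 0 (left) and
// 1 (right), reward 1 on reaching state 4 and 0 elsewhere. The agent learns
// Q(s,a) estimates of the expected cumulative discounted reward.
public class QLearningSketch {
    public static double[][] train(int episodes, long seed) {
        int nStates = 5, nActions = 2;
        double[][] q = new double[nStates][nActions];
        double alpha = 0.5, gamma = 0.9, epsilon = 0.1;
        Random rng = new Random(seed);
        for (int ep = 0; ep < episodes; ep++) {
            int s = 0;
            while (s != nStates - 1) {
                int a = rng.nextDouble() < epsilon ? rng.nextInt(nActions)   // explore
                        : (q[s][1] >= q[s][0] ? 1 : 0);                      // exploit
                int s2 = (a == 1) ? s + 1 : Math.max(0, s - 1);              // deterministic move
                double r = (s2 == nStates - 1) ? 1.0 : 0.0;
                double best = Math.max(q[s2][0], q[s2][1]);
                q[s][a] += alpha * (r + gamma * best - q[s][a]);             // Q-learning update
                s = s2;
            }
        }
        return q;
    }

    public static void main(String[] args) {
        double[][] q = train(500, 1);
        // After training, the greedy action in every non-terminal state is "right".
        for (int s = 0; s < 4; s++)
            System.out.println("state " + s + ": best action = " + (q[s][1] > q[s][0] ? "right" : "left"));
    }
}
```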
- …