10 research outputs found
Simulation of Large Scale Computational Ecosystems with Alchemist: A Tutorial
Many interesting systems in several disciplines can be modeled as networks of nodes that can store and exchange data: pervasive systems, edge computing scenarios, and even biological and bio-inspired systems. These systems feature inherent complexity, and often simulation is the preferred (and sometimes the only) way of investigating their behavior; this is true both in the design phase and in the verification and testing phase. In this tutorial paper, we provide a guide to the simulation of such systems by leveraging Alchemist, an existing research tool used in several works in the literature. We introduce its meta-model and its extensible architecture; we discuss reference examples of increasing complexity; and we finally show how to configure the tool to automatically execute multiple repetitions of simulations with different controlled variables, achieving reliable and reproducible results.
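The batch-execution workflow the tutorial describes (multiple repetitions over different controlled variables, each with a distinct seed) can be sketched in plain Python. This is an illustrative harness, not Alchemist's actual API: `run_simulation`, its parameters, and the metric are invented for the example.

```python
import itertools
import random

def run_simulation(node_count, seed):
    """Toy stand-in for one simulation run; returns a synthetic metric."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(node_count)) / node_count

def batch(controlled_vars, repetitions):
    """Run every combination of the controlled variables `repetitions` times,
    each with a distinct seed, and average the resulting metric."""
    results = {}
    keys = list(controlled_vars)
    for combo in itertools.product(*(controlled_vars[k] for k in keys)):
        params = dict(zip(keys, combo))
        runs = [run_simulation(params["node_count"], seed)
                for seed in range(repetitions)]
        results[combo] = sum(runs) / len(runs)
    return results

print(batch({"node_count": [10, 100]}, repetitions=5))
```

Fixing the seed per repetition is what makes the batch reproducible: rerunning the whole sweep yields bitwise-identical averages.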
Time-fluid field-based coordination
Emerging application scenarios, such as cyber-physical systems (CPSs), the Internet of Things (IoT), and edge computing, call for coordination approaches addressing openness, self-adaptation, heterogeneity, and deployment agnosticism. Field-based coordination is one such approach, promoting the idea of programming system coordination declaratively from a global perspective, in terms of functional manipulation and evolution in “space and time” of distributed data structures, called fields. More specifically, regarding time, in field-based coordination it is assumed that local activities in each device, called computational rounds, are regulated by a fixed clock, typically, a fair and unsynchronized distributed scheduler. In this work, we challenge this assumption, and propose an alternative approach where the round execution scheduling is naturally programmed along with the usual coordination specification, namely, in terms of a field of causal relations dictating what is the notion of causality (why and when a round has to be locally scheduled) and how it should change across time and space. This abstraction over the traditional view on global time allows us to express what we call “time-fluid” coordination, where causality can be finely tuned to select the event triggers to react to, so as to achieve an improved balance between performance (system reactivity) and cost (usage of computational resources). We propose an implementation in the aggregate computing framework, and evaluate via simulation on a case study.
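The shift from a fixed clock to a programmable notion of causality can be illustrated with a toy discrete-event loop in which each device's "cause" is a predicate over incoming triggers. The device names and trigger format below are invented for the example; this is not the paper's formal model.

```python
import heapq

def simulate(events, devices, horizon):
    """events: list of (time, device_id, trigger) tuples;
    devices: device_id -> causality predicate over triggers.
    A device runs a round only when its predicate accepts a trigger,
    instead of on every tick of a fixed clock."""
    rounds = {d: 0 for d in devices}
    heapq.heapify(events)
    while events:
        t, dev, trigger = heapq.heappop(events)
        if t > horizon:
            break
        if devices[dev](trigger):       # the programmable "cause"
            rounds[dev] += 1
    return rounds

# Device A reacts to every trigger; device B only to large sensor changes.
devs = {"A": lambda tr: True,
        "B": lambda tr: tr[0] == "sensor" and abs(tr[1]) > 0.5}
evs = [(0.1, "A", ("msg", 0)), (0.2, "B", ("sensor", 0.9)),
       (0.3, "B", ("sensor", 0.1)), (0.4, "A", ("sensor", 0.7))]
print(simulate(evs, devs, horizon=1.0))  # → {'A': 2, 'B': 1}
```

Device B skips the insignificant reading at t=0.3, saving a round: exactly the reactivity-versus-cost trade-off the abstract describes.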
Self-stabilising Priority-Based Multi-Leader Election and Network Partitioning
A common task in situated distributed systems is the self-organising election of leaders. These leaders can be devices or software agents appointed, for instance, to coordinate the activities of other agents or processes. In this work, we focus on the multi-leader election problem in networks of asynchronous message-passing devices, which are a common model in self-organisation approaches like aggregate computing. Specifically, we introduce a novel algorithm for space- and priority-based leader election and compare it with the state of the art. We call the algorithm Bounded Election since it leverages bounding (i.e. minimisation or maximisation) of candidacy messages to drop or promote candidate leaders and ensure stabilisation. The proposed algorithm is formally proven to be self-stabilising, allows for leader prioritisation, and performs on-the-fly network partitioning (namely, as a side effect of the leader election process, the areas regulated by the leaders are also established). Also, we experimentally compare its performance with the state of the art of leader election in aggregate computing in a variety of synthetic scenarios, showing benefits in terms of convergence time and resilience.
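The bounding idea can be sketched under simplifying assumptions (synchronous rounds, static network; this is not the paper's exact Bounded Election algorithm): each node repeatedly minimises over the candidacies heard within a hop radius, which both elects leaders by priority and partitions the network into their areas.

```python
def elect(neighbors, priority, radius, rounds=10):
    """neighbors: node -> list of adjacent nodes; priority: node -> number.
    Returns node -> elected leader id."""
    # A candidacy is (-priority, leader_id, hops): min() prefers the highest
    # priority, then the lowest id -- this minimisation is the bounding step.
    state = {n: (-priority[n], n, 0) for n in neighbors}
    for _ in range(rounds):
        new = {}
        for n in neighbors:
            candidates = [(-priority[n], n, 0)]          # own candidacy
            for m in neighbors[n]:
                p, leader, hops = state[m]
                if hops + 1 <= radius:                   # drop out-of-range candidacies
                    candidates.append((p, leader, hops + 1))
            new[n] = min(candidates)
        state = new
    return {n: leader for n, (_, leader, _) in state.items()}

line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(elect(line, {n: 1 for n in line}, radius=2))
```

On this 5-node line with equal priorities and radius 2, nodes 0–2 settle on leader 0 while nodes 3–4, out of range, elect leader 3: the partitioning emerges as a side effect, as described above.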
Internet of Things for Mental Health: Open Issues in Data Acquisition, Self-Organization, Service Level Agreement, and Identity Management
The increase of mental illness cases around the world can be described as an urgent
and serious global health threat. Around 500 million people suffer from mental disorders, among
which depression, schizophrenia, and dementia are the most prevalent. Revolutionary technological
paradigms such as the Internet of Things (IoT) provide us with new capabilities to detect, assess,
and care for patients early. This paper comprehensively surveys works done at the intersection
between IoT and mental health disorders. We evaluate multiple computational platforms, methods
and devices, as well as study results and potential open issues for the effective use of IoT systems
in mental health. We particularly elaborate on relevant open challenges in the use of existing IoT
solutions for mental health care, which can be relevant given the potential impairments in some
mental health patients such as data acquisition issues, lack of self-organization of devices and service
level agreement, and security, privacy and consent issues, among others. We aim at opening the
conversation for future research in this rather emerging area by outlining possible new paths based
on the results and conclusions of this work. Funding: Consejo Nacional de Ciencia y Tecnologia (CONACyT); Sonora Institute of Technology (ITSON) via the PROFAPI program (PROFAPI_2020_0055); Spanish Ministry of Science, Innovation and Universities (MICINN) project "Advanced Computing Architectures and Machine Learning-Based Solutions for Complex Problems in Bioinformatics, Biotechnology and Biomedicine" (RTI2018-101674-B-I0).
Addressing Collective Computations Efficiency: Towards a Platform-level Reinforcement Learning Approach
Aggregate Computing is a macro-level approach for programming collective intelligence and self-organisation in distributed systems. In this paradigm, system behaviour unfolds as a combination of a system-wide program, functionally manipulating distributed data structures called computational fields, and a distributed protocol where devices work at asynchronous rounds comprising sense-compute-interact steps. Interestingly, there exists a large amount of flexibility in how aggregate systems could actually execute while preserving the desired functionality. The ideal place for making choices about execution is the aggregate computing platform (or middleware), which can be engineered with the goal of promoting efficiency and other non-functional goals. In this work, we explore the possibility of applying Reinforcement Learning at the platform level in order to optimise aspects of a collective computation while achieving coherent functional goals. This idea is substantiated through synthetic experiments of data propagation and collection, where we show how Q-Learning could reduce the power consumption of aggregate computations.
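The platform-level idea can be caricatured with tabular Q-learning: an agent picks a round rate per step, and the reward penalises both power draw and missed reactions. The states, actions, and reward weights below are invented for illustration and are not the paper's experimental setup.

```python
import random

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Learn a round-rate policy per (invented) network state."""
    rng = random.Random(seed)
    states = ["stable", "changing"]
    actions = ["low", "high"]            # round frequency
    q = {(s, a): 0.0 for s in states for a in actions}

    def reward(state, action):
        power = 1.0 if action == "high" else 0.2           # cost of running rounds
        missed = 1.0 if state == "changing" and action == "low" else 0.0
        return -power - 5.0 * missed                       # penalise lost reactivity

    state = "stable"
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda x: q[(state, x)])
        r = reward(state, a)
        nxt = rng.choice(states)                           # environment drifts at random
        q[(state, a)] += alpha * (r + gamma * max(q[(nxt, b)] for b in actions)
                                  - q[(state, a)])
        state = nxt
    return q
```

After training, the greedy policy runs few rounds while the network is stable and many while it is changing, the kind of power-saving behaviour the abstract attributes to Q-Learning at the platform level.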
A Collective Adaptive Approach to Decentralised k-Coverage in Multi-robot Systems
We focus on the online multi-object k-coverage problem (OMOkC), where mobile robots are required to sense a mobile target from k diverse points of view, coordinating themselves in a scalable and possibly decentralised way. There is active research on OMOkC, particularly in the design of decentralised algorithms for solving it. We propose a new take on the issue: rather than classically developing new algorithms, we apply a macro-level paradigm, called aggregate computing, specifically designed to directly program the global behaviour of a whole ensemble of devices at once. To understand the potential of the application of aggregate computing to OMOkC, we extend the Alchemist simulator (supporting aggregate computing natively) with a novel toolchain component supporting the simulation of mobile robots. This way, we build a software engineering toolchain comprising language and simulation tooling for addressing OMOkC. Finally, we exercise our approach and related toolchain by introducing new algorithms for OMOkC; we show that they can be expressed concisely, reuse existing software components, and perform better than the current state of the art in terms of coverage over time and number of objects covered overall.
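As a purely illustrative, centralised baseline (not one of the paper's decentralised algorithms), a greedy assignment for k-coverage can be written in a few lines: each target claims its k nearest still-unassigned robots.

```python
import math

def assign(robots, targets, k):
    """robots, targets: id -> (x, y) position.
    Returns target id -> list of the k robots chosen to cover it."""
    free = set(robots)
    assignment = {}
    for t, tpos in targets.items():
        ranked = sorted(free, key=lambda r: math.dist(robots[r], tpos))
        chosen = ranked[:k]
        free -= set(chosen)              # a robot covers at most one target here
        assignment[t] = chosen
    return assignment

robots = {0: (0, 0), 1: (1, 0), 2: (5, 5), 3: (6, 5)}
print(assign(robots, {"t1": (0, 0), "t2": (5, 5)}, k=2))
# → {'t1': [0, 1], 't2': [2, 3]}
```

A centralised greedy pass like this needs global knowledge of all positions; the appeal of the aggregate computing approach described above is obtaining comparable coverage with only local, decentralised interactions.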
A universal client-server platform for Aggregate Computing
Aggregate computing is an emerging approach for designing complex, distributed coordination systems. It builds on the Field Calculus, a universal programming model for specifying aggregate computations through the composition of behaviours. Among the languages and frameworks developed are Protelis and Scafi, which, through different approaches, provide concrete implementations of the Field Calculus, allowing aggregate programs to be defined and executed. With several frameworks available for specifying the computations to be performed, a platform on which such programs can run became necessary. This thesis concerns the design and subsequent implementation of a system based on a client-server architecture that allows aggregate programs to be executed on a logical network of devices. Virtualised and/or real devices can be combined to form the network that executes the program. A client can also enter a "lightweight" mode, in which it no longer runs the program itself; instead, the server takes over its share of the computation and sends the client only the results. Regardless of the execution mode, the system is natively compatible with Scafi and Protelis and is open to new implementations, thus allowing aggregate programs to be executed regardless of the language or framework used to define and run them. The entire project was developed following a test-driven methodology. The system's compatibility with Scafi and Protelis was verified by testing it with sample programs provided by the authors of the frameworks themselves. All tests were continuously checked through a continuous integration process, making it easy to spot any issues during development.
Time-Fluid Field-Based Coordination through Programmable Distributed Schedulers
Emerging application scenarios, such as cyber-physical systems (CPSs), the
Internet of Things (IoT), and edge computing, call for coordination approaches
addressing openness, self-adaptation, heterogeneity, and deployment
agnosticism. Field-based coordination is one such approach, promoting the idea
of programming system coordination declaratively from a global perspective, in
terms of functional manipulation and evolution in "space and time" of
distributed data structures called fields. More specifically regarding time, in
field-based coordination (as in many other distributed approaches to
coordination) it is assumed that local activities in each device are regulated
by a fair and unsynchronised fixed clock working at the platform level. In this
work, we challenge this assumption, and propose an alternative approach where
scheduling is programmed in a natural way (along with usual field-based
coordination) in terms of causality fields, each enacting a programmable
distributed notion of a computation "cause" (why and when a field computation
has to be locally computed) and how it should change across time and space.
Starting from low-level platform triggers, such causality fields can be
organised into multiple layers, up to high-level, collectively-computed time
abstractions, to be used at the application level. This reinterpretation of
time in terms of articulated causality relations allows us to express what we
call "time-fluid" coordination, where scheduling can be finely tuned so as to
select the triggers to react to, generally allowing the system to adaptively balance
performance (system reactivity) and cost (resource usage) of computations. We
formalise the proposed scheduling framework for field-based coordination in the
context of the field calculus, discuss an implementation in the aggregate
computing framework, and finally evaluate the approach via simulation on
several case studies.
Microservice management in the Cloud and Edge
The growing number of mobile devices in recent years has increased the number of requests made to cloud backend services, as well as the amount of data produced. This has led to the adoption of new architectures in system development and to the need for new strategies to guarantee service quality. The microservice architecture, in the line of Service Oriented Architecture and Computing (SOA/SOC), enables the independent development of small services, each implementing a given functionality, with a well-defined interface accessible over the network. Services with more complex functionality result from communication among the microservices, each relying on the services of the others. This architecture allows each service to be deployed independently, with individual resource configuration (e.g. CPU, RAM), as well as scaled independently (multiple instances per service). The small size of each service also allows its deployment on heterogeneous computing architectures, such as the cloud and the edge.
The heterogeneity of the deployment locations considered, namely the cloud and the edge, makes microservice management complex, in particular the migration/replication of services. It is necessary to decide when to migrate/replicate a given service, and to which location, and then also how that migration/replication is carried out. With several microservices, possibly with dependencies among them, their management becomes more complex, as do the decisions concerning those dependencies.
The solution consists of an application prototype with automatic mechanisms for migrating and replicating microservices in the cloud and the edge, which reduces the access time to those services, resulting in better application performance. These mechanisms make it possible to deploy microservices automatically in the cloud and the edge according to certain configurable rules and metrics (e.g. latency, number of accesses). The evaluation carried out confirmed that using the cloud and the edge together to run the services reduced access times, compared with using the cloud alone.
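The rule-and-metric-driven replication decision described above can be sketched as a simple predicate. The metric names and thresholds are hypothetical, chosen only to mirror the examples given in the abstract (latency, number of accesses).

```python
def should_replicate(metrics, latency_threshold_ms=50.0, min_accesses=100):
    """metrics: observations for one microservice at one edge site.
    Replicate when latency from that site is too high AND the service
    is accessed often enough to justify the replication cost."""
    return (metrics["latency_ms"] > latency_threshold_ms
            and metrics["accesses"] >= min_accesses)

print(should_replicate({"latency_ms": 120.0, "accesses": 500}))  # → True
print(should_replicate({"latency_ms": 20.0, "accesses": 500}))   # → False
```

In a real deployment such a predicate would be evaluated periodically per service and per site, with the thresholds exposed as the configurable rules the prototype describes.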