Engineering Resilient Collective Adaptive Systems by Self-Stabilisation
Collective adaptive systems are an emerging class of networked computational systems, particularly suited to application domains such as smart cities, complex sensor networks, and the Internet of Things. These systems tend to operate at large scale, feature heterogeneous communication models (including opportunistic peer-to-peer wireless interaction), and require inherent self-adaptiveness to cope with unforeseen changes in operating conditions. In this context, it is extremely difficult (if not seemingly intractable) to engineer reusable pieces of distributed behaviour that are provably correct and smoothly composable.
Building on the field calculus, a computational model (and associated
toolchain) capturing the notion of aggregate network-level computation, we
address this problem with an engineering methodology coupling formal theory and
computer simulation. On the one hand, functional properties are addressed by
identifying the largest-to-date field calculus fragment generating
self-stabilising behaviour, guaranteed to eventually attain a correct and
stable final state despite any transient perturbation in state or topology, and
including highly reusable building blocks for information spreading,
aggregation, and time evolution. On the other hand, dynamical properties are
addressed by simulation, empirically evaluating the performance differences obtained by switching between implementations of building blocks with provably equivalent functional properties. Overall, our methodology sheds light
on how to identify core building blocks of collective behaviour, and how to
select implementations that improve system performance while leaving overall
system function and resiliency properties unchanged.

Comment: To appear in ACM Transactions on Modeling and Computer Simulation
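To make the building-block idea concrete, below is a minimal Python sketch of the canonical information-spreading block (a hop-count gradient), assuming a synchronous-round scheduler and a dictionary-based topology for readability; the actual field calculus toolchain and its asynchronous execution model are not reproduced here.

# Self-stabilising gradient: each node repeatedly takes the minimum of
# its neighbours' estimates plus one, while source nodes stay pinned at
# zero. Regardless of the (possibly corrupted) initial state, repeated
# rounds converge to the true hop-count distances on a static topology.
def gradient_round(neighbours, sources, dist):
    return {
        node: 0.0 if node in sources
        else min((dist[n] + 1.0 for n in nbrs), default=float("inf"))
        for node, nbrs in neighbours.items()
    }

# Example: a 4-node line 0-1-2-3 with node 0 as source, started from a
# deliberately perturbed state to illustrate self-stabilisation.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
dist = {0: 7.0, 1: 0.0, 2: 9.0, 3: 0.0}
for _ in range(6):
    dist = gradient_round(neighbours, {0}, dist)
print(dist)  # {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0} -- correct state regained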
Connectivity preserving network transformers
The Population Protocol model is a distributed model that concerns systems of
very weak computational entities that cannot control the way they interact. The
model of Network Constructors is a variant of Population Protocols capable of
(algorithmically) constructing abstract networks. Both models are characterized
by a fundamental inability to terminate. In this work, we investigate the
minimal strengthenings of the latter that could overcome this inability. Our
main conclusion is that initial connectivity of the communication topology, combined with the protocol's ability to transform that topology and a few other local and realistic assumptions, is sufficient to guarantee not only termination but also the maximum computational power that
one can hope for in this family of models. The technique is to transform any
initial connected topology to a less symmetric and detectable topology without
ever breaking its connectivity during the transformation. The target topology of all of our transformers is the spanning line, and we call the corresponding problem Terminating Line Transformation. We first study the case in which
there is a pre-elected unique leader and give a time-optimal protocol for
Terminating Line Transformation. We then prove that dropping the leader without
additional assumptions leads to a strong impossibility result. In an attempt to
overcome this, we equip the nodes with the ability to tell, during their
pairwise interactions, whether they have at least one neighbor in common.
Interestingly, it turns out that this local and realistic mechanism is
sufficient to make the problem solvable. In particular, we give a very
efficient protocol that solves Terminating Line Transformation when all nodes
are initially identical. The latter implies that the model computes with
termination any symmetric predicate computable by a Turing Machine of space
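As a concrete illustration of the execution model these results build on, the following Python sketch simulates the random pairwise scheduler shared by Population Protocols and Network Constructors, using the classic two-state leader-election rule as the transition function. It is not the paper's line-transformation protocol, and the loop's global termination test is exactly the knowledge that individual agents lack in these models.

import random

# Transition function: (leader, leader) -> (leader, follower);
# every other pair of states is left unchanged.
def step(u, v):
    return ("L", "F") if (u, v) == ("L", "L") else (u, v)

n = 20
states = ["L"] * n                     # all agents start as leaders
while states.count("L") > 1:           # global test, for the demo only
    i, j = random.sample(range(n), 2)  # scheduler picks an interacting pair
    states[i], states[j] = step(states[i], states[j])
print(states.count("L"))               # exactly one leader remains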
Improving Dependability of Networks with Penalty and Revocation Mechanisms
Both malicious and non-malicious faults can dismantle computer networks. Thus, mitigating faults at various layers is essential in ensuring efficient and fair network resource utilization. In this thesis we take a step in this direction and study several ways to deal with faults by means of penalty and revocation mechanisms in networks that lack a centralized coordination point, whether because of their scale or their design.
Compromised nodes can pose a serious threat to infrastructure, end-hosts, and services. Such malicious elements can undermine the availability and fairness of networked systems. To deal with such nodes, we design and analyze protocols enabling their removal from the network in a fast and secure way. We design these protocols for two different environments. In the first, we assume that there are multiple, but independent, trusted points in the network which coordinate the other nodes. In the second, we assume that all nodes play equal roles in the network and thus need to cooperate to carry out common functionality. We analyze these solutions and discuss possible deployment scenarios.
Next we turn our attention to wireless edge networks. In this context, some nodes, without being malicious, can still behave in an unfair manner. To deal with this situation, we propose several self-penalty mechanisms. We implement the proposed protocols on commodity hardware and conduct experiments in real-world environments. The analysis of data collected over several measurement rounds revealed improvements in both fairness and throughput. We corroborate these results with simulations and an analytic model. Finally, we discuss how to measure fairness in dynamic settings, where nodes can have heterogeneous resource demands.
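One standard way to quantify the fairness discussed above is Jain's fairness index, J = (sum x_i)^2 / (n * sum x_i^2), which equals 1 when all n nodes receive equal throughput and approaches 1/n as a single node dominates. Whether the thesis uses this particular metric is an assumption here; the sketch below is purely illustrative, applied per measurement window to adapt it to dynamic settings.

# Jain's fairness index over per-node throughput samples.
def jains_index(allocations):
    n = len(allocations)
    total = sum(allocations)
    squares = sum(x * x for x in allocations)
    return (total * total) / (n * squares) if squares else 1.0

# Per-node throughput samples (Mbit/s) from one measurement window:
print(jains_index([5.0, 5.1, 4.9, 5.0]))  # ~1.0: near-perfect fairness
print(jains_index([9.5, 0.2, 0.2, 0.1]))  # ~0.28: one node dominates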
A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks the question: how can a classifier learn
from a source domain and generalize to a target domain? We present a categorization of approaches, divided into what we refer to as sample-based, feature-based, and inference-based methods. Sample-based methods focus on
weighting individual observations during training based on their importance to
the target domain. Feature-based methods revolve around mapping, projecting, and representing features such that a source classifier performs well on the target domain. Inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization. Additionally, we review a number of conditions that
allow for formulating bounds on the cross-domain generalization error. Our
categorization highlights recurring ideas and raises questions important to
further research.

Comment: 20 pages, 5 figures
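The sample-based idea lends itself to a compact illustration: estimate how target-like each labelled source point is, then train an importance-weighted source classifier. The density-ratio-via-domain-discriminator estimator sketched below is only one of several weight estimators such reviews cover, and the data is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(500, 2))   # labelled source samples
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(0.7, 1.0, size=(500, 2))   # unlabelled, shifted target

# 1. Domain discriminator: source = 0, target = 1.
X_dom = np.vstack([X_src, X_tgt])
d_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
disc = LogisticRegression().fit(X_dom, d_dom)

# 2. Importance weights w(x) = P(target|x) / P(source|x) on source points.
p_tgt = disc.predict_proba(X_src)[:, 1]
weights = p_tgt / (1.0 - p_tgt)

# 3. Task classifier trained on importance-weighted source data.
clf = LogisticRegression().fit(X_src, y_src, sample_weight=weights)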