LIPIcs, Volume 251, ITCS 2023, Complete Volume
Information dissemination via random walks
Information dissemination is a fundamental task in distributed computing:
How to deliver a piece of information from a node of a network to some or all other nodes?
In the face of large and still growing modern networks, it is imperative that dissemination algorithms are decentralised and can operate under unreliable conditions.
In the past decades, randomised rumour spreading algorithms have addressed these challenges.
In these algorithms, a message is initially placed at a source node of a network, and, at regular intervals, each node contacts a randomly selected neighbour.
A message may be transmitted in one or both directions during each of these communications, depending on the exact protocol.
The main measure of performance for these algorithms is their broadcast time, which is the time until a message originating from a source node is disseminated to all nodes of the network.
Apart from being extremely simple and robust to failures, randomised rumour spreading achieves theoretically optimal broadcast time in many common network topologies.
In this thesis, we propose an agent-based information dissemination algorithm, called Visit-Exchange.
In our protocol, a number of agents perform independent random walks in the network.
An agent becomes informed when it visits a node that has the message, and from then on informs every node it visits.
Visit-Exchange shares many of the properties of randomised rumour spreading; namely, it is very simple and uses the same amount of communication per unit of time.
Moreover, the protocol can be used as a simple model of non-recoverable epidemic processes.
We investigate the broadcast time of Visit-Exchange on a variety of network topologies, and compare it to traditional rumour spreading.
On dense regular networks we show that the two types of protocols are equivalent, which means that in this setting the vast literature on randomised rumour spreading applies in our model as well.
Since many networks of interest, including real-world ones, are very sparse, we also study agent-based broadcast for sparse networks.
Our results include almost optimal or optimal bounds for sparse regular graphs, expanders, random regular graphs, balanced trees and grids.
We establish that depending on the network topology, Visit-Exchange may be either slower or faster than traditional rumour spreading.
In particular, in graphs consisting of hubs that are not well connected, broadcast using agents can be significantly faster.
Our conclusion is that a combined broadcasting protocol that simultaneously uses both traditional rumour spreading and agent-based dissemination can be fast on a larger range of topologies than each of its components separately.
Gates Cambridge Trust, St John's College Benefactors' Scholarship
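The protocol described above admits a direct simulation: agents perform independent random walks, pick up the message when they visit an informed node, and deposit it at every node they subsequently visit. The sketch below is a minimal synchronous-round implementation; the adjacency-list representation, random initial agent placement, and round cap are illustrative assumptions rather than the thesis's exact model.

```python
import random

def visit_exchange(adj, source, num_agents, max_rounds=10**4, seed=0):
    """Simulate the Visit-Exchange broadcast protocol (sketch).

    adj        : dict mapping node -> list of neighbours
    source     : node initially holding the message
    num_agents : number of random-walk agents, started at random nodes
    Returns the number of synchronous rounds until all nodes are informed,
    or None if max_rounds is exceeded.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    agents = [rng.choice(nodes) for _ in range(num_agents)]
    agent_informed = [pos == source for pos in agents]
    node_informed = {v: v == source for v in adj}
    for t in range(1, max_rounds + 1):
        for i, pos in enumerate(agents):
            nxt = rng.choice(adj[pos])  # one random-walk step
            agents[i] = nxt
            # exchange: an informed agent informs the node it visits,
            # and an informed node informs a visiting agent
            if agent_informed[i]:
                node_informed[nxt] = True
            elif node_informed[nxt]:
                agent_informed[i] = True
        if all(node_informed.values()):
            return t
    return None
```

The broadcast time of the protocol on a given topology can then be estimated by averaging the returned round count over many seeds.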
Equilibria in the Tangle
We analyse the Tangle, a DAG-valued stochastic process where new vertices are attached to the graph at Poissonian times, and the attachment locations are chosen by means of random walks on that graph. These new vertices, also thought of as "transactions", are issued independently by many players (the nodes of the network). The main application of this model is that it serves as a base for the IOTA cryptocurrency system (www.iota.org). We prove the existence of "almost symmetric" Nash equilibria for the system in which some of the players try to optimise their attachment strategies. We also present simulations showing that the "selfish" players will nevertheless cooperate with the network by choosing attachment strategies that are similar to the "recommended" one.
Comment: 33 pages, 11 figures
Topology-dependent density optima for efficient simultaneous network exploration
A random search process in a networked environment is governed by the time it takes to visit every node, termed the cover time. Often, a networked process does not proceed in isolation but competes with many instances of itself within the same environment. A key unanswered question is how to optimise this process: how many concurrent searchers can a topology support before the benefits of parallelism are outweighed by competition for space? Here, we introduce the searcher-averaged parallel cover time (APCT) to quantify these economies of scale. We show that the APCT of the networked symmetric exclusion process is optimised at a searcher density that is well predicted by the spectral gap. Furthermore, we find that non-equilibrium processes, realised through the addition of bias, can support significantly increased density optima. Our results suggest novel hybrid strategies of serial and parallel search for efficient information gathering in social interaction and biological transport networks.
This work was supported by the EPSRC Systems Biology DTC Grant No. EP/G03706X/1 (D.B.W.), a Royal Society Wolfson Research Merit Award (R.E.B.), a Leverhulme Research Fellowship (R.E.B.), the BBSRC UK Multi-Scale Biology Network Grant No. BB/M025888/1 (R.E.B. and F.G.W.), and Trinity College, Cambridge (F.G.W.).
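A searcher-averaged parallel cover time can be estimated by direct simulation of k walkers under an exclusion rule. This sketch makes several simplifying assumptions (synchronous updates, rejection of moves onto occupied nodes, and averaging the cover time over searchers as t/k), so it only approximates the exclusion process studied in the paper.

```python
import random

def apct_estimate(adj, k, trials=20, seed=0):
    """Estimate a searcher-averaged parallel cover time (APCT) by simulation.

    k walkers occupy distinct nodes; under the exclusion rule, a proposed
    move onto an occupied node is rejected. Each trial runs until every
    node has been visited by at least one walker; the per-trial cost is
    the cover time divided by k, averaged over trials.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    total = 0.0
    for _ in range(trials):
        positions = rng.sample(nodes, k)
        occupied = set(positions)
        visited = set(positions)
        t = 0
        while len(visited) < len(nodes):
            t += 1
            for i, pos in enumerate(positions):
                target = rng.choice(adj[pos])
                if target not in occupied:  # exclusion rule
                    occupied.remove(pos)
                    occupied.add(target)
                    positions[i] = target
                    visited.add(target)
        total += t / k  # cost shared across the k searchers
    return total / trials
```

Sweeping k for a fixed topology and locating the minimum of this estimate mirrors the density-optimisation question the abstract poses.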
Efficient collection of sensor data via a new accelerated random walk
Motivated by the problem of efficiently collecting data from wireless sensor networks via a mobile sink, we present an accelerated random walk on random geometric graphs (RGG). Random walks in wireless sensor networks can serve as fully local, lightweight strategies for sink motion that significantly reduce energy dissipation but introduce higher latency in the data collection process. In most cases, random walks are studied on graphs like G(n,p) and grids. Instead, we here choose the RGG model, which more accurately abstracts spatial proximity in a wireless sensor network. We first evaluate an adaptive walk (the random walk with inertia) on the RGG model; its performance proved to be poor and led us to define and experimentally evaluate a novel random walk that we call the γ-stretched random walk. Its basic idea is to favour visiting distant neighbours of the current node, towards reducing node overlap and accelerating the cover time. We also define a new performance metric called proximity cover time that, along with other metrics such as visit overlap statistics and proximity variation, we use to evaluate the performance properties and features of the various walks.
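A minimal reading of "favour visiting distant neighbours" is to weight each neighbour by its Euclidean distance from the current node raised to an exponent γ. The function below sketches one such step under that assumption; the exact weighting used by the γ-stretched walk in the paper may differ.

```python
import math
import random

def gamma_stretched_step(current, adj, pos, gamma, rng):
    """One step of a distance-biased ('gamma-stretched') random walk.

    adj[current] : list of neighbours of the current node
    pos[v]       : (x, y) coordinates of node v (an RGG embedding)
    gamma        : bias exponent; gamma = 0 recovers the simple random walk
    """
    neighbours = adj[current]
    # weight each neighbour by its Euclidean distance to the current node, ^gamma
    weights = [math.dist(pos[current], pos[v]) ** gamma for v in neighbours]
    return rng.choices(neighbours, weights=weights, k=1)[0]
```

Larger γ concentrates the choice on the farthest neighbours within radio range, which is what reduces revisit overlap between consecutive steps.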
On deterministic counting in anonymous dynamic networks
This doctoral thesis analyses the counting problem in anonymous, dynamic, interval-connected networks. Non-trivial lower bounds on the counting time in networks of constant diameter are proved, and new counting algorithms are developed. Counting is a fundamental problem of every distributed system, as it represents a basic building block to implement high-level abstractions [2,4,6]. We focus on deterministic counting algorithms, that is, we assume that no source of randomness is available to processes. We consider a dynamic system where processes do not leave the computation while an adversary continuously changes the communication graph connecting such processes. The adversary is only constrained to maintain at each round a connected topology, i.e. 1-interval connectivity G(1-IC) [3]. In such an environment, it has been shown [5] that counting cannot be solved without a leader. Therefore, we assume that all processes are anonymous except the distinguished leader.
In the thesis we will discuss bounds and algorithms for counting in the aforementioned framework. Our bounds are obtained by investigating networks where the distance between the leader and an anonymous process is persistent across rounds and is at most h; we denote such networks by G(PD)h [1]. Interestingly, we will show that counting in G(PD)2 requires Ω(log |V|) rounds even when the bandwidth is unlimited. This implies that counting in networks with constant dynamic diameter requires a number of rounds that is a function of the network size. We will also discuss other results concerning the accuracy of counting algorithms.
On the possibility side, we will show an optimal counting algorithm for G(PD)h networks and a counting algorithm for G(1-IC) networks.
Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)
The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008). ...
Scaling limits of random walks and their related parameters on critical random trees and graphs
In this thesis we study random walks in random environments, a major area in probability theory. Within this broad topic, we mainly focus on studying scaling limits of random walks on random graphs at criticality, that is, precisely when we witness the emergence of a giant component whose size is proportional to the number of vertices of the graph. Critical random graphs of interest include critical Galton-Watson trees and maximal components that belong to the Erdős-Rényi universality class.
The first part of the thesis uses analytic and geometric properties of these random graphs to establish distributional convergence of certain graph parameters, such as the blanket time. Our contribution refines previously existing results on the order of the mean blanket time. The study of this problem can be seen as a stepping stone towards the more delicate problem of establishing convergence in distribution of the rescaled cover times of the discrete-time walks in each of the applications of our main result.
Relying on powerful resistance techniques developed in recent years, another part of the thesis investigates random walks in random environments on tree-like spaces and their scaling limits in a certain regime, namely when the potential of the random walk in random environment converges. Results include novel continuum scaling limits of a biased random walk on a large critical branching random walk and of a self-reinforced discrete process on size-conditioned critical Galton-Watson trees. In both cases the diffusions, which are not on natural scale, are identified as Brownian motions on a continuum random fractal tree with its natural metric replaced by a distorted resistance metric.
Majority Problems in Distributed Systems and Clustering in Structured Graphs
This thesis focuses on the study of various algorithms for Distributed Computing and Machine Learning research areas. More precisely, the work within contains research into various communication protocols in different settings of Distributed Computing, accompanied by relevant analysis on protocol performance in time and space. These protocols are designed to operate in analogous environments using different models for communication, primarily population protocol and random walk variants. In our settings we aim to use as minimal memory as possible, achieving light weight protocols that are powerful in their capabilities and randomized as well as deterministic in nature. We also propose a novel technique of verification which enables multi-step protocols to work in synergy. These protocols generally never terminate, but converge and are difficult to disseminate results throughout the network to be used in dependent processes. With the verification technique proposed, protocols can become adaptive and stacked into a chain of dependent processes. We also provide experimental analysis of a subarea of Machine Learning, unsupervised clustering algorithms. Gaining inspiration from the agglomerative nature and techniques defined in classical hierarchical clustering as well as the Phylogenetic tree building methods, we provide a comprehensive study and evaluation of new method to agglomeratively combine `similar' data into clusters based on the general consensus of taxonomy and evaluation of clustering mechanisms