
    D.1.3 – Protocols for emergent localities

    This report presents two contributions that illustrate the potential of emergent-locality protocols in large-scale decentralized systems, in two areas of decentralized social computing: recommendation, and eventual consistency of mutable data structures. The first contribution consists of a framework supporting the development of dynamically adaptive decentralised recommendation systems. Decentralised recommenders have been proposed to deliver privacy-preserving, personalised and highly scalable on-line recommendations. Current implementations tend, however, to rely on a hard-wired similarity metric that cannot adapt. This constitutes a strong limitation in the face of evolving needs. Our framework addresses this through a decentralised form of adaptation, in which individual nodes can independently select and update their own recommendation algorithm, while still collectively contributing to the overall system's mission. Our second contribution addresses the growing demand for differentiated consistency requirements in large-scale applications. A large number of today's applications rely on Eventual Consistency, a consistency model that emphasizes liveness over safety. Designers generally adopt this consistency model uniformly throughout a distributed system because of its ability to scale as the number of users or devices grows larger, but this clashes with the need for differentiated consistency requirements. In this contribution, we address this need by introducing UPS, a novel consistency mechanism that offers differentiated eventual consistency and delivery speed by working in tandem with a two-phase epidemic broadcast protocol. We propose a closed-form analysis of our approach's delivery speed, and we evaluate our complete protocol experimentally on a simulated network of one million nodes. To measure the consistency trade-off, we formally define a novel and scalable consistency metric that operates at runtime.
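
    A minimal sketch of the adaptation idea described above, assuming hypothetical metric names and a simple moving-average quality score; this is illustrative only, not the framework's actual API.

```python
# Illustrative sketch (not the paper's implementation): a node that
# independently selects and updates its own similarity metric.
def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def jaccard(a, b):
    sa = {i for i, x in enumerate(a) if x}
    sb = {i for i, x in enumerate(b) if x}
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

class AdaptiveNode:
    """Keeps several candidate metrics and switches to the locally best one."""
    METRICS = {"cosine": cosine, "jaccard": jaccard}

    def __init__(self, profile):
        self.profile = profile
        self.current = "cosine"
        self.scores = {name: 0.0 for name in self.METRICS}

    def rank_neighbours(self, neighbours):
        metric = self.METRICS[self.current]
        return sorted(neighbours, key=lambda p: metric(self.profile, p), reverse=True)

    def feedback(self, metric_name, reward):
        # Exponential moving average of observed recommendation quality.
        self.scores[metric_name] = 0.9 * self.scores[metric_name] + 0.1 * reward

    def adapt(self):
        # Independently switch to the metric that performed best so far.
        self.current = max(self.scores, key=self.scores.get)
```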

    Distributed Computation in Dynamic Networks

    In this report we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model is intended to capture mobile networks and wireless networks, in which mobility and interference render communication unpredictable, and it allows the study of the fundamental computational power of dynamic networks. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T-interval connectivity (for T >= 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but may change arbitrarily between rounds. Algorithms for the dynamic graph model must cope with these unceasing changes. We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any computable function of their initial inputs in O(n^2) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T > 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n^2 / T) rounds using messages of size O(log n + d). We also give two lower bounds on the gossip problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network. We show an Omega(n log k) bound on gossip in 1-interval connected graphs against centralized algorithms, and an Omega(n + nk / T) bound on exchanging k pieces of information in T-interval connected graphs for a restricted class of randomized distributed algorithms. The T-interval connected dynamic graph model is a novel model which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.
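
    The dynamic graph model lends itself to a small simulation. The sketch below is a rough illustration rather than the paper's algorithm: each round an "adversary" picks a fresh connected topology (1-interval connectivity), nodes flood every token they know to their current neighbours, and we count the rounds until all tokens reach all nodes.

```python
# Illustrative simulation of token dissemination in a 1-interval connected
# dynamic graph; topology, token model and round count are assumptions.
import random

def random_connected_graph(n):
    # Random spanning path plus a few extra edges -> connected every round.
    nodes = list(range(n))
    random.shuffle(nodes)
    edges = {(nodes[i], nodes[i + 1]) for i in range(n - 1)}
    for _ in range(n // 2):
        u, v = random.sample(range(n), 2)
        edges.add((u, v))
    return edges

def disseminate(n):
    knowledge = [{i} for i in range(n)]        # node i starts with token i
    rounds = 0
    while any(len(k) < n for k in knowledge):
        edges = random_connected_graph(n)      # topology changes every round
        new = [set(k) for k in knowledge]
        for u, v in edges:
            new[u] |= knowledge[v]
            new[v] |= knowledge[u]
        knowledge, rounds = new, rounds + 1
    return rounds

print(disseminate(50))   # rounds until all 50 tokens are known everywhere
```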

    Building Regular Registers with Rational Malicious Servers and Anonymous Clients

    The paper addresses the problem of emulating a regular register in a synchronous distributed system where clients invoking read() and write() operations are anonymous, while server processes maintaining the state of the register may be compromised by rational adversaries (i.e., a server might behave as a rational malicious Byzantine process). We first model our problem as a Bayesian game between a client and a rational malicious server, where the equilibrium depends on the decisions of the malicious server (behave correctly and avoid detection by clients vs. return a wrong register value to clients at the risk of being detected and then excluded from the computation). We prove that such an equilibrium exists, and finally we design a protocol implementing the regular register that forces the rational malicious server to behave correctly.
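
    A hedged sketch of the incentive argument only, not the paper's formal game: with illustrative payoff values (the gain from lying, the detection probability and the exclusion cost are all assumptions), a rational malicious server prefers to behave correctly once detection becomes likely enough.

```python
# Assumed payoffs, for illustration: a rational malicious server cheats only
# if its expected gain exceeds the expected cost of detection and exclusion.
def server_best_response(gain_from_lying, detection_prob, exclusion_cost):
    """Return the strategy maximising the server's expected utility."""
    u_honest = 0.0                                   # baseline payoff
    u_cheat = (1 - detection_prob) * gain_from_lying \
              - detection_prob * exclusion_cost
    return "cheat" if u_cheat > u_honest else "behave correctly"

# When clients audit reads often enough, honesty is the equilibrium strategy.
print(server_best_response(gain_from_lying=1.0, detection_prob=0.6, exclusion_cost=5.0))
```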

    Distributed Hybrid Simulation of the Internet of Things and Smart Territories

    This paper deals with the use of hybrid simulation to build and compose heterogeneous simulation scenarios that can be proficiently exploited to model and represent the Internet of Things (IoT). Hybrid simulation is a methodology that combines multiple modalities of modeling/simulation: complex scenarios are decomposed into simpler ones, each simulated through a specific simulation strategy, and all of these simulation building blocks are then synchronized and coordinated. This methodology is well suited to representing IoT setups, which are usually very demanding due to the heterogeneity of possible scenarios arising from the massive deployment of an enormous number of sensors and devices. We present a use case concerned with the distributed simulation of smart territories, a novel view of decentralized geographical spaces that, thanks to the use of IoT, builds ICT services to manage resources in a way that is sustainable and not harmful to the environment. Three different simulation models are combined together, namely an adaptive agent-based parallel and distributed simulator, an OMNeT++-based discrete-event simulator and a script-language simulator based on MATLAB. Results from a performance analysis confirm the viability of using hybrid simulation to model complex IoT scenarios.
    Comment: arXiv admin note: substantial text overlap with arXiv:1605.0487
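
    As a rough illustration of the coordination idea (an assumed structure, not the paper's actual architecture), the sketch below advances heterogeneous simulation blocks in lock-step under a single coordinator, so that no block ever runs ahead of the others.

```python
# Minimal lock-step synchronisation sketch; class and block names are
# illustrative assumptions.
class SimulationBlock:
    """One building block (e.g. agent-based, network, or script simulator)."""
    def __init__(self, name):
        self.name = name
        self.local_time = 0.0

    def advance_to(self, t):
        # A real block would process its pending events up to time t here.
        self.local_time = t

class Coordinator:
    def __init__(self, blocks, step):
        self.blocks, self.step, self.now = blocks, step, 0.0

    def run(self, horizon):
        while self.now < horizon:
            self.now += self.step
            for block in self.blocks:      # synchronise all blocks each step
                block.advance_to(self.now)

Coordinator([SimulationBlock("agents"),
             SimulationBlock("network"),
             SimulationBlock("script model")], step=0.1).run(horizon=1.0)
```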

    Ordering, timeliness and reliability for publish/subscribe systems over WAN

    In the last few years, the increasing use of the Internet and the geo-political, sociological and financial changes induced by globalization are paving the way for a connected world where information is always available at the right place and the right time. As such, applications previously deployed for "closed" environments are now federating into geographically distributed systems connected through a Wide Area Network (WAN). As a consequence of this evolution, in the near future no system will be isolated: every system will be composed of interconnected systems, i.e., it will be a System of Systems (SoS). Examples of SoS are the Large-scale Complex Critical Infrastructures (LCCIs), such as power grids, transport infrastructures (airports and seaports), financial infrastructures and next-generation intelligence platforms, to cite a few. In these systems, multiple sources of information generate a high volume of events that need to be delivered to all intended destinations while respecting several Quality of Service (QoS) constraints imposed by the critical nature of LCCIs. As such, particular attention is devoted to the middleware solution used to disseminate information in the SoS. Due to the inherent scalability provided by its space, time and synchronization decoupling properties, the publish/subscribe paradigm is becoming attractive for the implementation of a middleware service for LCCIs. However, scalability is not the only requirement exhibited by SoS: several services need to control a broader set of QoS requirements, such as timeliness, ordering and reliability. Unfortunately, current middleware solutions do not address the QoS constraints required by SoS. Current publish/subscribe middleware solutions for the WAN environment offer only best-effort event dissemination, with no additional control on QoS; just a few implementations try to address some isolated QoS policy, making them unsuitable for an SoS scenario. The contribution of this thesis is a QoS layer that can be placed on top of a generic publish/subscribe middleware and that enriches its service by addressing (i) ordering, (ii) reliability and (iii) timeliness in event dissemination in SoS over WAN. Specifically, we first analyze several real case studies, highlighting their QoS requirements in terms of ordering, reliability and timeliness, and compare these requirements with both current research prototypes and commercial systems. We then fill the gap by proposing novel algorithms to address those requirements. The proposed protocols can also be combined in order to provide the QoS level required by a particular application. In this way, QoS issues do not need to be addressed at the application level, leaving applications to implement only their native functionality.
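
    As a rough illustration of the layering idea (an assumed design, not the thesis protocols), the sketch below wraps a generic best-effort publish/subscribe API with per-topic sequence numbers and a subscriber-side hold-back queue to provide ordered delivery; reliability and timeliness layers would be stacked similarly.

```python
# Illustrative ordering layer over an assumed best-effort pub/sub interface
# (publish_fn to send, deliver_fn as the application callback).
class OrderingLayer:
    def __init__(self, publish_fn, deliver_fn):
        self.publish_fn = publish_fn        # underlying best-effort pub/sub
        self.deliver_fn = deliver_fn        # application-level callback
        self.next_seq = {}                  # topic -> next sequence to assign
        self.expected = {}                  # topic -> next sequence to deliver
        self.holdback = {}                  # topic -> {seq: event}

    def publish(self, topic, event):
        seq = self.next_seq.get(topic, 0)
        self.next_seq[topic] = seq + 1
        self.publish_fn(topic, (seq, event))

    def on_receive(self, topic, msg):
        seq, event = msg
        self.holdback.setdefault(topic, {})[seq] = event
        expected = self.expected.get(topic, 0)
        # Deliver every consecutive event that is now available, in order.
        while expected in self.holdback[topic]:
            self.deliver_fn(topic, self.holdback[topic].pop(expected))
            expected += 1
        self.expected[topic] = expected
```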