23 research outputs found

    A peer-to-peer approach to large-scale information monitoring

    Get PDF
    Issued as final report. National Science Foundation (U.S.)

    The Price of Anarchy for Network Formation in an Adversary Model

    Full text link
    We study network formation with n players and link cost \alpha > 0. After the network is built, an adversary randomly deletes one link according to a certain probability distribution. The cost for player v incorporates the expected number of players to which v will become disconnected. We show the existence of equilibria and a price of stability of 1+o(1) under moderate assumptions on the adversary and n \geq 9. As the main result, we prove bounds on the price of anarchy for two special adversaries: one removes a link chosen uniformly at random, while the other removes a link that causes a maximum number of player pairs to be separated. For unilateral link formation we show a bound of O(1) on the price of anarchy for both adversaries, the constant being bounded by 10+o(1) and 8+o(1), respectively. For bilateral link formation we show O(1+\sqrt{n/\alpha}) for one adversary (if \alpha > 1/2), and \Theta(n) for the other (if \alpha > 2 is considered constant and n \geq 9). The latter is the worst that can happen for any adversary in this model (if \alpha = \Omega(1)). This points out substantial differences between unilateral and bilateral link formation.
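
    A worked form of the player cost may help; the following is our own notation, inferred from the abstract rather than quoted from the paper. Player v pays \alpha per link it builds, plus the expected damage from the adversary's deletion:

        \mathrm{cost}_v(G) = \alpha \, |B_v| + \mathbb{E}_{e \sim \mathcal{A}(G)}\big[ \mathrm{disc}_v(G - e) \big]

    where B_v is the set of links bought by v, \mathcal{A}(G) is the adversary's distribution over the links of G, and \mathrm{disc}_v(G - e) counts the players unreachable from v once e is deleted. The first adversary above makes \mathcal{A}(G) uniform over all links; the second puts all mass on a link separating a maximum number of player pairs.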

    Doing-it-All with Bounded Work and Communication

    Get PDF
    We consider the Do-All problem, where p cooperating processors need to complete t similar and independent tasks in an adversarial setting. Here we deal with a synchronous message-passing system with processors that are subject to crash failures. Efficiency of algorithms in this setting is measured in terms of work complexity (also known as total available processor steps) and communication complexity (total number of point-to-point messages). When work and communication are considered to be comparable resources, the overall efficiency is meaningfully expressed in terms of effort, defined as work + communication. We develop and analyze a constructive algorithm that has work O(t + p \log p (\sqrt{p \log p} + \sqrt{t \log t})) and a nonconstructive algorithm that has work O(t + p \log^2 p). The latter result is close to the lower bound \Omega(t + p \log p / \log\log p) on work. The effort of each of these algorithms is proportional to its work when the number of crashes is bounded above by c\,p, for some positive constant c < 1. We also present a nonconstructive algorithm that has effort O(t + p^{1.77}).
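
    To make the accounting concrete, the toy simulation below runs a naive round-based Do-All strategy and tallies the two resources the abstract measures. It is a sketch of the work/communication/effort bookkeeping only, not one of the paper's algorithms, and all names in it are ours.

        # Toy Do-All: p crash-prone processors complete t independent tasks.
        import random

        def do_all(p, t, crash_prob=0.1, seed=0):
            rng = random.Random(seed)
            alive = list(range(p))
            pending = set(range(t))
            work = msgs = 0
            while pending and alive:
                # crashes in the paper are adversarial; here they are random
                alive = [v for v in alive if rng.random() > crash_prob]
                done = set()
                for i, v in enumerate(alive):
                    # rank-based task selection, then one completion message
                    # to every other live processor (a deliberately naive rule)
                    done.add(sorted(pending)[i % len(pending)])
                    work += 1
                    msgs += len(alive) - 1
                pending -= done
            return work, msgs, work + msgs  # work, communication, effort

        print(do_all(p=8, t=50))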

    Rational Protocol Design: Cryptography Against Incentive-driven Adversaries

    Get PDF
    Existing work on “rational cryptographic protocols” treats each party (or coalition of parties) running the protocol as a selfish agent trying to maximize its utility. In this work we propose a fundamentally different approach that is better suited to modeling a protocol under attack from an external entity. Specifically, we consider a two-party game between a protocol designer and an external attacker. The goal of the attacker is to break security properties such as correctness or privacy, possibly by corrupting protocol participants; the goal of the protocol designer is to prevent the attacker from succeeding. We lay the theoretical groundwork for a study of cryptographic protocol design in this setting by providing a methodology for defining the problem within the traditional simulation paradigm. Our framework provides ways of reasoning about important cryptographic concepts (e.g., adaptive corruptions or attacks on communication resources) not handled by previous game-theoretic treatments of cryptography. We also prove composition theorems that, for the first time, provide a sound way to design rational protocols assuming “ideal communication resources” (such as broadcast or authenticated channels) and then instantiate these resources using standard cryptographic tools. Finally, we investigate the problem of secure function evaluation in our framework, where the attacker has to pay for each party it corrupts. Our results demonstrate how knowledge of the attacker’s incentives can be used to circumvent known impossibility results in this setting.
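
    A minimal sketch of the incentive calculation may clarify the setting: a rational attacker only attacks when the value of the broken property exceeds the corruption budget spent, so the designer's goal is to make every feasible attack unprofitable. All names and weights below are illustrative, not taken from the paper.

        # Toy attacker utility: payoff for broken security properties,
        # minus a per-party corruption cost (all weights illustrative).

        def attacker_utility(broke_privacy, broke_correctness,
                             corrupted, cost_per_corruption):
            payoff = 10.0 * broke_privacy + 15.0 * broke_correctness
            return payoff - cost_per_corruption * corrupted

        # A design goal: every feasible attack has utility <= 0, so a
        # utility-maximizing attacker prefers not to attack at all.
        print(attacker_utility(True, False, corrupted=3, cost_per_corruption=4.0))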

    Practical Aggregation in the Edge

    Get PDF
    Due to the increasing amounts of data produced by applications and devices, cloud infrastructures are becoming unable to process data and provide answers back to users in a timely fashion. This has led to the emergence of the edge computing paradigm, which aims at moving computations closer to end-user devices. Edge computing can be defined as performing computations outside the boundaries of cloud data centres. This, however, can be materialised across very different scenarios, considering the broad spectrum of devices that can be leveraged to perform computations in the edge. In this thesis, we focus on a concrete scenario of edge computing: multiple devices with wireless capabilities that collectively form a wireless ad hoc network to perform distributed computations. We aim at devising practical solutions for these scenarios; however, there is a lack of tools to help us achieve this goal. To address this first limitation we propose a novel framework, called Yggdrasil, that is specifically tailored to develop and execute distributed protocols over wireless ad hoc networks on commodity devices. To enable distributed computations in such networks, we focus on the particular case of distributed data aggregation. In particular, we address a harder variant of this problem, which we dub distributed continuous aggregation, where input values used for the computation of the aggregation function may change over time, and propose a novel distributed continuous aggregation protocol, called MiRAge. We have implemented and validated both Yggdrasil and MiRAge through an extensive experimental evaluation using a test-bed composed of 24 Raspberry Pis. Our results show that Yggdrasil provides adequate abstractions and tools to implement and execute distributed protocols in wireless ad hoc settings. Our evaluation also includes a practical comparative study of distributed continuous aggregation protocols, which shows that MiRAge is more robust and achieves more precise aggregation results than competing state-of-the-art alternatives.
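
    The abstract does not describe MiRAge in enough detail to reproduce it, so as a point of reference the sketch below implements the textbook baseline such protocols are measured against: pairwise-averaging gossip for distributed AVERAGE aggregation over a fixed neighbour graph. Each exchange preserves the global sum, so all estimates converge to the true average; under continuous aggregation, a node whose input changes can add the delta to its estimate and let later exchanges propagate it.

        # Pairwise-averaging gossip (a classic baseline, not MiRAge).
        import random

        def gossip_average(inputs, neighbours, rounds=500, seed=0):
            rng = random.Random(seed)
            est = dict(inputs)  # node -> local estimate of the average
            nodes = list(est)
            for _ in range(rounds):
                u = rng.choice(nodes)
                v = rng.choice(neighbours[u])
                est[u] = est[v] = (est[u] + est[v]) / 2  # preserves the sum
            return est

        # If node u's input later changes by delta, est[u] += delta keeps
        # the global sum, and hence the limit, correct.
        nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(gossip_average({0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}, nbrs))  # ~2.5 each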

    Notes on Theory of Distributed Systems

    Full text link
    Notes for the Yale course CPSC 465/565 Theory of Distributed Systems

    Networks and Distributed Operation: The Price of Anarchy in Nonatomic Routing and Network Formation

    Get PDF
    The price of anarchy is a measure of the performance loss caused by individuals operating in a distributed manner, i.e., with no or only limited central control, or with no or only limited cooperation. In the first part, a given network is used in a distributed manner, namely for multicast routing, i.e., each sender wishes to transmit a message to multiple receivers simultaneously. We prove almost tight upper and lower bounds on the price of anarchy and conduct an experimental study. In the second part, a network is formed. Each vertex of the (to-be-built) network may invest in the building of links. The formation takes place under the knowledge that an adversary will later delete exactly one link. Vertices wish to maintain good connectivity nonetheless. We prove tight upper and lower bounds on the price of anarchy for two different adversaries.
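
    For reference, with cost(s) the social cost of strategy profile s and NE the set of Nash equilibria, the price of anarchy used throughout is the standard ratio (stated here for convenience, not quoted from the thesis):

        \mathrm{PoA} = \frac{\max_{s \in NE} \mathrm{cost}(s)}{\min_{s} \mathrm{cost}(s)}

    i.e., the worst equilibrium measured against the social optimum; the price of stability replaces the max over NE with a min.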

    Towards Scalable Synchronization on Multi-Cores

    Get PDF
    The shift of commodity hardware from single- to multi-core processors in the early 2000s compelled software developers to take advantage of the available parallelism of multi-cores. Unfortunately, only a few (so-called embarrassingly parallel) applications can leverage this available parallelism in a straightforward manner. The remaining applications require that their processes coordinate their possibly interleaved executions to ensure overall correctness; in other words, they require synchronization. Synchronization is achieved by constraining or even prohibiting parallel execution. Thus, per Amdahl's law, synchronization limits software scalability. In this dissertation, we explore how to minimize the effects of synchronization on software scalability. We show that scalability of synchronization is mainly a property of the underlying hardware. This means that synchronization directly hampers the cross-platform performance portability of concurrent software. Nevertheless, we can achieve portability without sacrificing performance by creating design patterns and abstractions which implicitly leverage hardware details without exposing them to software developers. We first perform an exhaustive analysis of the performance behavior of synchronization on several modern platforms. This analysis clearly shows that the performance and scalability of synchronization are highly dependent on the characteristics of the underlying platform. We then focus on lock-based synchronization and analyze the energy/performance trade-offs of various waiting techniques. We show that the performance and the energy efficiency of locks go hand in hand on modern x86 multi-cores. This correlation is again due to the characteristics of the hardware, which does not provide practical tools for reducing the power consumption of locks without sacrificing throughput. We then propose two approaches for developing portable and scalable concurrent software, hence hiding the limitations that the underlying multi-cores impose. First, we introduce OPTIK, a new practical design pattern for designing and implementing fast and scalable concurrent data structures. We illustrate the power of our OPTIK pattern by devising five new algorithms and by optimizing four state-of-the-art algorithms for linked lists, skip lists, hash tables, and queues. Second, we introduce MCTOP, a multi-core topology abstraction which includes low-level information such as memory bandwidths. MCTOP enables developers to accurately and portably define high-level optimization policies. We illustrate several such policies through four examples, including automated backoff schemes for locks, and illustrate the performance and portability of these policies on five platforms.
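
    The dissertation's implementations target C and hardware primitives; the sketch below only conveys the core idea behind version-based patterns like OPTIK in Python, namely acquiring a lock and validating a previously read version in one step so that an optimistic traversal can detect concurrent updates. Names are ours.

        import threading

        class OptikLock:
            # A lock carrying a version number: readers record the version,
            # work optimistically without locking, then lock-and-validate.
            def __init__(self):
                self._lock = threading.Lock()
                self.version = 0

            def try_lock_version(self, seen):
                self._lock.acquire()
                if self.version != seen:
                    self._lock.release()  # a writer committed since `seen`
                    return False          # caller restarts its operation
                return True

            def unlock(self):
                self.version += 1         # commit: publish the update
                self._lock.release()

        # Usage: v = lock.version; traverse optimistically;
        # if lock.try_lock_version(v): mutate; lock.unlock(); else retry.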

    Reliable & Efficient Data Centric Storage for Data Management in Wireless Sensor Networks

    Get PDF
    Wireless Sensor Networks (WSNs) have become a mature technology aimed at performing environmental monitoring and data collection. Nonetheless, harnessing the power of a WSN presents a number of research challenges. WSN application developers have to deal both with the business logic of the application and with WSN issues, such as those related to networking (routing), storage, and transport. A middleware can cope with this emerging complexity and can provide the necessary abstractions for the definition, creation and maintenance of applications. The final goal of most WSN applications is to gather data from the environment and to transport such data to the user applications, which usually reside outside the WSN. Techniques for data collection can be based on external storage, local storage and in-network storage. External storage sends data to the sink (a centralized data collector that provides data to the users through other networks) as soon as they are collected. This paradigm implies the continuous presence of a sink in the WSN, and data can hardly be pre-processed before being sent to the sink. Moreover, these transport mechanisms create a hotspot on the sensors around the sink. Local storage stores data on a set of sensors that depends on the identity of the sensor collecting them, and implies that requests for data must be broadcast to all the sensors, since the sink can hardly know in advance the identity of the sensors that collected the data the sink is interested in. In-network storage, and in particular Data Centric Storage (DCS), stores data on a set of sensors that depends on a meta-datum describing the data. DCS is a promising paradigm for Data Management in WSNs, since it addresses the problem of scalability (DCS employs unicast communications to manage WSNs), allows in-network data preprocessing and can mitigate the emergence of hotspots. This thesis studies the use of DCS for Data Management in middleware for WSNs. Since WSNs can feature different paradigms for data routing (geographical routing and more traditional tree routing), this thesis introduces two different DCS protocols for these two kinds of WSNs. Q-NiGHT is based on geographical routing; it can manage the quantity of resources that are assigned to the storage of different meta-data, and implements load balancing for data storage over the sensors in the WSN. Z-DaSt is built on top of ZigBee networks, and exploits standard ZigBee mechanisms to harness the power of the ZigBee routing protocol and network formation mechanisms. Dependability is another issue that was subject to research work. Most current approaches employ replication as the means to ensure data availability. A possible enhancement is the use of erasure coding to improve the persistence of data while saving on memory usage on the sensors. Finally, erasure coding was also applied to gossiping algorithms, to realize efficient data management. The technique is compared to the state of the art to identify the benefits it can provide to data collection algorithms and to data availability techniques.
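
    The core of data-centric storage is easy to state: hash the meta-datum to a location and let producers and consumers both route there, so they meet without flooding the network. The sketch below shows this basic geographic-hash flavour, the starting point on which protocols like Q-NiGHT build (the load balancing and the ZigBee variant are not reproduced, and all names are ours).

        # Minimal DCS lookup in the style of a geographic hash table.
        import hashlib

        def hash_to_point(meta, width=100.0, height=100.0):
            # hash the meta-datum describing the data to a coordinate
            h = hashlib.sha256(meta.encode()).digest()
            x = int.from_bytes(h[:4], "big") / 2**32 * width
            y = int.from_bytes(h[4:8], "big") / 2**32 * height
            return x, y

        def home_sensor(meta, sensors):
            # data (and queries) for `meta` go to the sensor closest
            # to the hashed point
            px, py = hash_to_point(meta)
            return min(sensors, key=lambda s: (s[0]-px)**2 + (s[1]-py)**2)

        sensors = [(10, 10), (50, 60), (90, 20)]
        print(home_sensor("temperature:room42", sensors))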

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume