6 research outputs found

    On the Stability of Distribution Topologies in Peer-to-Peer Live Streaming Systems

    The stability of peer-to-peer live streaming systems is constantly challenged. In particular, the unreliability and vulnerability of their participants allow for failures and attacks that suddenly disable certain sets of peers. The consequences of such events are largely determined by the distribution topology, i.e., the pattern of communication between the peers.

    In this thesis, we analyze a broad range of optimization problems concerning the stability of distribution topologies, considering notions of stability against both attacks and failures.

    At first, we investigate the computational complexity and approximability of finding resource-efficient attacks. This allows us to point out limitations of an attacker's planning capabilities and demonstrates the influence of the chosen system parameters on the hardness of such attack problems.

    Then, we turn to topology formation problems. Here, a set of topology parameters is given and the task consists in finding an eligible distribution topology. In particular, it has to minimize the maximum damage achievable by attacks with arbitrary attack parameters. We identify necessary and sufficient conditions on attack-stable distribution topologies, thereby giving mathematically sound guidelines for the topology management of peer-to-peer live streaming systems.

    We find large classes of efficiently constructible topologies minimizing the system-wide packet loss under attacks. Additionally, we show that deciding this property for arbitrary topologies is coNP-complete.

    For topologies minimizing the maximum number of peers for which an attack leads to a heavy decrease in perceived streaming quality, the requirements change. Here, we show that the corresponding topology formation problem is closely related to long-standing open problems of design and coding theory.

    Finally, we study topologies minimizing the expected packet loss due to uncoordinated peer failures. We investigate properties and existence conditions of such topologies, and we determine the computational complexity of constructing them.

    Our results provide guidelines for the topology management of peer-to-peer live streaming systems and mathematically determine which stability goals can be achieved efficiently.
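    To give a concrete feel for these attack problems, here is a brute-force sketch in Python. The toy topology, the reachability-based damage measure, and the exhaustive search are illustrative assumptions, not the thesis's formal model:

```python
from itertools import combinations

# Toy distribution topology (invented): the source "s" pushes the stream
# along the edges; a peer receives the stream iff it is reachable from "s".
edges = {
    "s": ["a", "b"],
    "a": ["c", "d"],
    "b": ["d", "e"],
    "c": [], "d": ["f"], "e": ["f"], "f": [],
}
peers = [v for v in edges if v != "s"]

def receiving(removed):
    """Peers still reachable from the source after 'removed' peers fail."""
    seen, stack = set(), ["s"]
    while stack:
        v = stack.pop()
        if v in seen or v in removed:
            continue
        seen.add(v)
        stack.extend(edges[v])
    return seen - {"s"}

def max_damage(k):
    """Worst-case number of surviving peers cut off by attacking k peers."""
    return max(
        len(peers) - k - len(receiving(set(attacked)))
        for attacked in combinations(peers, k)
    )

for k in (1, 2, 3):
    print(k, max_damage(k))   # exhaustive search: exponential in k
```

    The exponential subset enumeration is exactly the kind of cost the complexity results speak to: they make precise when an attacker can, or cannot, avoid such expensive search.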

    Robustness to failures in two-layer communication networks

    A close look at many existing systems reveals their two- or multi-layer nature, where a number of coexisting networks interact and depend on each other. For instance, in the Internet, any application-level graph (such as a peer-to-peer network) is mapped on the underlying IP network that, in turn, is mapped on a mesh of optical fibers. This layered view sheds new light on the tolerance to errors and attacks of many complex systems. What is observed at a single layer does not necessarily reflect well the state of the entire system. On the contrary, a tiny, seemingly harmless disruption of one layer may destroy a substantial or essential part of another layer, thus making the whole system useless in practice.

    In this thesis we consider such two-layer systems. We model them by two graphs at two different layers, where the upper-layer (or logical) graph is mapped onto the lower-layer (physical) graph. Our main goals are the following. First, we study the robustness to failures of existing large-scale two-layer systems. This gives us valuable insights into the problem, e.g., by identifying common weak points in such systems. Fortunately, these two-layer problems can often be effectively alleviated by a careful system design. Therefore, our second major goal is to propose new designs that increase the robustness of two-layer systems.

    This thesis is organized into three main parts, where we focus on different examples and aspects of two-layer systems. In the first part, we turn our attention to existing large-scale two-layer systems, such as peer-to-peer networks, railway networks and the human brain. Our main goal is to study the vulnerability of these systems to random errors and targeted attacks. Our simulations show that (i) two-layer systems are much more vulnerable to errors and attacks than they appear from a single-layer perspective, and (ii) attacks are much more harmful than errors, especially when the logical topology is heterogeneous. These results hold across all studied systems.

    A natural next step consists in improving the failure robustness of two-layer systems. In particular, in the second part of this thesis, we consider IP/WDM optical networks, where an IP backbone network is mapped on a mesh of optical fibers. The problem lies in designing a survivable mapping, such that no single physical failure disconnects the logical topology. This is an NP-complete problem. We introduce a new concept of piecewise survivability, which makes the problem much easier in practice. This leads us to an efficient and scalable algorithm called SMART, which finds a survivable mapping much faster (often by orders of magnitude) than the other approaches proposed to date. Moreover, the formal analysis of SMART allows us to prove that a given survivable mapping does or does not exist. Finally, this approach helps us to find vulnerable areas in the system, and to effectively reinforce them, e.g., by adding new links.

    In the third part of this thesis, we shift our attention one layer higher, to the application-over-IP setting. In particular, we consider the design of Application-Level Multicast (ALM) for interactive applications, where a single source sends a delay-constrained data stream to a number of destinations. Interactive ALM should (i) respect stringent delay requirements, (ii) proactively protect the system against overlay node failures, and (iii) protect it against packet losses at the IP layer. We propose a two-layer-aware approach to this problem. First, we prove that the average packet loss rate observed at the destinations can be effectively approximated by a purely topological metric that, in turn, drops with the amount of IP-level and overlay-level path diversity available in the system. Therefore, we propose a framework that accommodates and generalizes various techniques to increase the path diversity in the system. Within this framework we optimize the structure of ALM. As a result, we reduce the effective loss rate on real Internet topologies typically by 30%-70%, compared to the state of the art.

    Finally, in addition to the three main parts of the thesis, we also present a set of results inspired by the study of ALM systems, but not directly related to the 'two-layer' paradigm (and thus moved to the Appendix). In particular, we consider the transmission of a delay-sensitive data stream from a single source to a single destination, where the data packets are protected by a Forward Error Correction (FEC) code and sent over multiple paths. We show that the performance of such a scheme can often be further improved. Our key observation is that the propagation times on the available paths often differ significantly, typically by 10-100 ms. We propose to exploit these differences by appropriate packet scheduling, which results in a two- to five-fold reduction in the effective loss rate.
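    A minimal sketch, in Python, of the survivability test at the heart of the second part: given a hypothetical mapping of logical links onto physical fiber paths, check that no single fiber cut disconnects the logical topology. This is only the feasibility check, not the SMART algorithm itself, and both topologies are invented for illustration:

```python
# Each logical link is routed over a list of physical fiber segments.
mapping = {
    ("a", "b"): [("x", "y")],
    ("b", "c"): [("y", "z")],
    ("a", "c"): [("x", "y"), ("y", "z")],   # shares fibers with both links above
}
logical_nodes = {"a", "b", "c"}

def connected(nodes, links):
    """Check connectivity of an undirected graph by depth-first search."""
    adj = {n: set() for n in nodes}
    for u, v in links:
        adj[u].add(v); adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n])
    return seen == nodes

def survivable(mapping, nodes):
    """True iff no single physical link failure disconnects the logical graph."""
    fibers = {f for path in mapping.values() for f in path}
    for failed in fibers:
        surviving = [l for l, path in mapping.items() if failed not in path]
        if not connected(nodes, surviving):
            return False
    return True

print(survivable(mapping, logical_nodes))   # False: cutting ("y", "z") kills b-c and a-c
</antml>```

    SMART avoids testing candidate mappings one by one like this; per the abstract, its piecewise-survivability decomposition is what makes the search practical at scale.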

    Live Streaming with Gossip

    Peer-to-peer (P2P) architectures have emerged as a popular paradigm to support the dynamic and scalable nature of distributed systems. This is particularly relevant today, given the tremendous increase in the intensity of information exchanged over the Internet. A P2P system is typically composed of participants that are willing to contribute resources, such as memory or bandwidth, to the execution of a collaborative task providing a benefit to all participants. File sharing is probably the most widely used collaborative task, where each participant wants to receive an individual copy of some file. Users collaborate by sending fragments of the file they have already downloaded to other participants.

    Sharing files containing multimedia content, files that typically reach the hundreds of megabytes to gigabytes, introduces a number of challenges. Given typical participant bandwidths of hundreds of kilobits per second to a couple of megabits per second, the download takes a non-negligible time, and it is unacceptable to wait for its completion before actually being able to use the file. From the point of view of the participant, getting the (entire) file as fast as possible is typically not good enough. As one example, Video on Demand (VoD) is a scenario where a participant would like to start previewing the multimedia content (the stream), offered by a source, even though only a fraction of it has been received, and then continue the viewing while the rest of the content is being received. Following the same line of reasoning, new applications have emerged that rely on live streaming: the source does not own a file that it wants to share with others, but shares content as soon as it is produced. In other words, the content to distribute is live, not pre-recorded and stored. Typical examples include the broadcasting of live sports events, conferences or interviews.

    The gossip paradigm is a type of data dissemination that relies on random communication between participants in a P2P system, sharing similarities with the epidemic dissemination of diseases. An epidemic starts to spread when the source randomly chooses a set of communication partners, of size fanout, and infects them, i.e., it shares a rumor with them. This set of participants, in turn, randomly picks fanout communication partners each and infects them, i.e., shares the same rumor with them. This paradigm has many advantages, including fast propagation of rumors, a probabilistic guarantee that each rumor reaches all participants, high resilience to churn (i.e., participants that join and leave), and high scalability. Gossip therefore constitutes a candidate of choice for live streaming in large-scale systems.

    These advantages, however, come at a price. While disseminating data, gossip creates many duplicates of the same rumor, and participants usually receive multiple copies of it. While this is obviously a feature when it comes to guaranteeing good dissemination of the rumor when churn is high, it is a clear disadvantage when spreading large amounts of multimedia data (i.e., ordered and time-critical) to participants with limited resources, namely upload bandwidth in the case of high-bandwidth content dissemination.

    This thesis therefore investigates whether and how the gossip paradigm can be used as a highly efficient communication system for live streaming under the following specific scenarios: (i) where participants can only contribute limited resources, (ii) where these limited resources are heterogeneously distributed among nodes, and (iii) where only a fraction of participants contribute their fair share of work while others are freeriding. To meet these challenges, this thesis proposes (i) gossip++, a gossip-based protocol especially tailored for live streaming that separates the dissemination of metadata, i.e., the location of the data, from the dissemination of the data itself; by first spreading the location of the content to interested participants, the protocol avoids wasting bandwidth on sending and receiving duplicates of the payload; (ii) HEAP, a fanout adaptation mechanism that enables gossip to adapt participants' contribution to their resources while still preserving its reliability; and (iii) LiFT, a protocol to secure high-bandwidth gossip-based dissemination protocols against freeriders.
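    As a rough illustration of the duplicate problem motivating gossip++, here is a minimal push-gossip simulation in Python. The peer count, fanout, and number of rounds are arbitrary illustrative choices, and uniform sampling over all peers is a simplification of real gossip-based peer sampling:

```python
import random

def push_gossip(n_peers, fanout, rounds, seed=0):
    """Simulate push gossip of one rumor; count total pushes,
    including duplicates received by already-infected peers."""
    random.seed(seed)
    others = lambda p: [q for q in range(n_peers) if q != p]
    infected, frontier, pushes = {0}, {0}, 0   # peer 0 is the source
    for _ in range(rounds):
        new = set()
        for p in frontier:                      # each newly infected peer
            for q in random.sample(others(p), fanout):
                pushes += 1                     # every push costs bandwidth
                if q not in infected:
                    new.add(q)                  # first copy: useful
        infected |= new
        frontier = new                          # infect-and-stop variant
    return len(infected), pushes

reached, pushes = push_gossip(n_peers=1000, fanout=5, rounds=8)
print(f"{reached}/1000 peers reached with {pushes} pushes")
```

    The gap between the number of peers reached and the total number of pushes is the duplicated traffic; gossip++ keeps gossip's reliability by duplicating only small metadata messages and delivering each payload chunk once.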

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI)-based automation technologies are planned to enhance crew safety through a reduced need for EVA (extravehicular activity), increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Understanding Churn in Decentralized Peer-to-Peer Networks

    This dissertation presents a novel modeling framework for understanding the dynamics of peer-to-peer (P2P) networks under churn (i.e., random user arrival/departure) and for designing systems more resilient against node failure. The proposed models are applicable to general distributed systems under a variety of conditions on graph construction and user lifetimes. The foundation of this work is a new churn model that describes user arrival and departure as a superposition of many periodic (renewal) processes. It not only allows general (non-exponential) user lifetime distributions, but also captures the heterogeneous behavior of peers. We utilize this model to analyze link dynamics and the ability of the system to stay connected under churn. Our results offer exact computation of user-isolation and graph-partitioning probabilities for any monotone lifetime distribution, including heavy-tailed cases found in real systems. We also propose an age-proportional random-walk algorithm for creating links in unstructured P2P networks that achieves zero isolation probability as system size becomes infinite. We additionally obtain many insightful results on the transient distribution of in-degree, the edge arrival process, system size, and the lifetimes of live users as simple functions of the aggregate lifetime distribution.

    The second half of this work studies churn in structured P2P networks, which are usually built upon distributed hash tables (DHTs). Users in DHTs maintain two types of neighbor sets: routing tables and successor/leaf sets. The former determine link lifetimes and the routing performance of the system, while the latter are built to ensure DHT consistency and connectivity. Our first result in this area proves that the robustness of DHTs is mainly determined by the zone size of selected neighbors, which leads us to propose a min-zone algorithm that significantly reduces link churn in DHTs. Our second result uses the Chen-Stein method to understand concurrent failures among the strongly dependent successor sets of many DHTs, and finds an optimal stabilization strategy for keeping Chord connected under churn.
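    A toy Monte Carlo sketch of the user-isolation question, written in Python. It assumes i.i.d. Pareto lifetimes, simultaneous joins, and no neighbor replacement; all of these are simplifications that the dissertation's renewal-process model does not make, so the numbers are only illustrative:

```python
import random

def isolation_probability(k, alpha, trials=100_000, seed=1):
    """Estimate the probability that all k neighbors of a user fail
    before the user does, under i.i.d. Pareto(alpha) lifetimes.
    Toy model: everyone joins at once and dead neighbors are never
    replaced, unlike the renewal churn model analyzed in the thesis."""
    random.seed(seed)
    isolated = 0
    for _ in range(trials):
        user_life = random.paretovariate(alpha)
        if all(random.paretovariate(alpha) < user_life for _ in range(k)):
            isolated += 1
    return isolated / trials

for k in (1, 2, 4, 8):
    print(k, isolation_probability(k, alpha=1.5))   # heavy-tailed lifetimes
```

    Even this crude model shows the isolation probability falling quickly as the neighbor-set size k grows; the dissertation's analysis makes such probabilities exact for any monotone lifetime distribution and accounts for neighbor replacement.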

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume