9,229 research outputs found

    Building global and scalable systems with atomic multicast

    The rise of worldwide Internet-scale services demands large distributed systems. Indeed, when handling several million users, it is common to operate thousands of servers spread across the globe. Here, replication plays a central role, as it contributes to improving the user experience by hiding failures and providing acceptable latency. In this thesis, we claim that atomic multicast, with strong and well-defined properties, is the appropriate abstraction for efficiently designing and implementing globally scalable distributed systems. Internet-scale services rely on data partitioning and replication to provide scalable performance and high availability. Moreover, to reduce user-perceived response times and to tolerate disasters (i.e., the failure of a whole datacenter), services are increasingly becoming geographically distributed. Data partitioning and replication, combined with local and geographical distribution, introduce daunting challenges, including the need to carefully order requests among replicas and partitions. One way to tackle this problem is to use group communication primitives that encapsulate ordering requirements. While replication is a common technique for designing such reliable distributed systems, to cope with the requirements of modern cloud-based "always-on" applications, replication protocols must additionally allow for throughput scalability and dynamic reconfiguration, that is, on-demand replacement or provisioning of system resources. We propose a dynamic atomic multicast protocol that fulfills these requirements. It allows resources to be dynamically added to and removed from an online replicated state machine, and allows crashed processes to be recovered. Major efforts have been spent in recent years on improving the performance, scalability, and reliability of distributed systems. To hide the complexity of designing distributed applications, many proposals provide efficient high-level communication abstractions. Since implementing a production-ready system based on this abstraction is still a major task, we further propose to expose our protocol to developers in the form of distributed data structures. B-trees, for example, are commonly used in many kinds of applications, including database indexes and file systems. Providing a distributed, fault-tolerant, and scalable data structure would help developers integrate it into their applications in a distribution-transparent manner. This work describes how to build reliable and scalable distributed systems based on atomic multicast and demonstrates their capabilities with an implementation of a distributed ordered map that supports dynamic re-partitioning and fast recovery. To substantiate our claim, we ported an existing SQL database atop our distributed lock-free data structure.
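    The core idea behind the abstract can be sketched in miniature: atomic multicast delivers messages to all replicas of the destination groups in a single total order, so replicas that apply operations deterministically stay consistent, which is exactly what a replicated ordered map needs. The sketch below is a single-process toy with illustrative names (`Sequencer`, `Replica`), not the thesis's protocol:

```python
# Toy illustration of atomic multicast driving a replicated ordered map.
# A single sequencer stands in for the multicast layer; all class and
# method names here are illustrative assumptions, not from the thesis.
import bisect

class Sequencer:
    """Stand-in for atomic multicast: assigns one global delivery order."""
    def __init__(self):
        self.seq = 0
        self.groups = {}                # group id -> list of replicas

    def join(self, group, replica):
        self.groups.setdefault(group, []).append(replica)

    def multicast(self, groups, msg):
        self.seq += 1                   # one total order across all groups
        for g in groups:
            for r in self.groups.get(g, []):
                r.deliver(self.seq, msg)

class Replica:
    """One replica of a partition of the distributed ordered map."""
    def __init__(self):
        self.keys, self.store = [], {}

    def deliver(self, seq, msg):
        op, key, *val = msg             # apply deterministically, in order
        if op == "put":
            if key not in self.store:
                bisect.insort(self.keys, key)
            self.store[key] = val[0]
        elif op == "del" and key in self.store:
            self.keys.remove(key)
            del self.store[key]

    def range(self, lo, hi):
        i = bisect.bisect_left(self.keys, lo)
        j = bisect.bisect_right(self.keys, hi)
        return [(k, self.store[k]) for k in self.keys[i:j]]

seq = Sequencer()
r1, r2 = Replica(), Replica()           # two replicas of partition "p0"
seq.join("p0", r1); seq.join("p0", r2)
seq.multicast(["p0"], ("put", "b", 2))
seq.multicast(["p0"], ("put", "a", 1))
assert r1.range("a", "z") == r2.range("a", "z") == [("a", 1), ("b", 2)]
```

    Because every replica sees the same sequence of operations, range queries return identical results on either replica; a real protocol replaces the sequencer with a fault-tolerant ordering service and adds the reconfiguration and recovery the thesis describes.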

    Reliable multicast transport by satellite: a hybrid satellite/terrestrial solution with erasure codes

    Geostationary satellites are an efficient way to provide a large-scale multipoint communication service. In the context of reliable multicast communications, a new hybrid satellite/terrestrial approach is proposed. It aims to reduce the overall communication cost by using satellite broadcasting only when enough receivers are present, and terrestrial transmissions otherwise. This approach has been statistically evaluated for a particular cost function and shows promising results. Since the hybrid approach relies on Forward Error Correction, several practical aspects of MDS codes and LDPC codes are then investigated in order to select a code.
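    The erasure-coding principle the abstract relies on can be shown with the simplest possible MDS code: a single XOR parity packet over k data packets lets a receiver repair any one loss. This is a minimal sketch of the idea, not the MDS or LDPC codes the paper evaluates:

```python
# Minimal erasure-code illustration: one XOR parity symbol over k data
# packets (a trivial (k+1, k) MDS code) recovers any single lost packet.

def xor_parity(packets):
    """Compute the bytewise XOR of equal-length packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """received: the data packets, with exactly one entry set to None."""
    missing = received.index(None)
    repaired = parity
    for i, p in enumerate(received):
        if i != missing:                 # XOR out every surviving packet
            repaired = bytes(a ^ b for a, b in zip(repaired, p))
    out = list(received)
    out[missing] = repaired
    return out

data = [b"abcd", b"efgh", b"ijkl"]
par = xor_parity(data)
lossy = [data[0], None, data[2]]         # packet 1 lost in transit
assert recover(lossy, par) == data
```

    Real MDS codes such as Reed-Solomon generalise this to repairing any m losses from m repair symbols, at higher decoding cost; LDPC codes trade a small reception overhead for much cheaper decoding, which is the selection question the paper investigates.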

    Quality of Service over Specific Link Layers: state of the art report

    The Integrated Services concept is proposed as an enhancement to the current Internet architecture, providing a better Quality of Service (QoS) than the traditional Best-Effort service. The features of Integrated Services are explained in this report. To support Integrated Services, certain requirements are posed on the underlying link layer. These requirements are studied by the Integrated Services over Specific Link Layers (ISSLL) IETF working group, and the status of this ongoing research is reported in this document. More specifically, the solutions for providing Integrated Services over ATM, IEEE 802 LAN technologies, and low-bitrate links are evaluated in detail. The ISSLL working group has not yet studied the requirements posed on the underlying link layer when that link layer is wireless. Therefore, this state-of-the-art report is extended with an identification of the requirements posed on the underlying wireless link to provide differentiated Quality of Service.
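    Integrated Services characterises a flow's traffic with a token-bucket specification (a sustained rate r and a bucket depth b); packets that fit within the accumulated tokens are in-profile and receive the reserved service. A minimal conformance-check sketch, with illustrative parameter values:

```python
# Token-bucket conformance check, as used by the Integrated Services
# TSpec: rate in bytes/s, depth (maximum burst) in bytes. Parameter
# values below are illustrative assumptions.

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0   # bucket starts full

    def conforms(self, now, pkt_bytes):
        # refill tokens for elapsed time, capped at the bucket depth
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True        # in-profile: gets the reserved QoS
        return False           # out-of-profile: best-effort or dropped

tb = TokenBucket(rate=1000, depth=1500)  # 1 kB/s sustained, one MTU burst
assert tb.conforms(0.0, 1500)            # initial burst fits the full bucket
assert not tb.conforms(0.5, 1000)        # only 500 tokens accrued by t=0.5
assert tb.conforms(1.5, 1000)            # 1500 tokens by t=1.5 (capped)
```

    Mapping this check onto a specific link layer (ATM VCs, 802.1p priorities, or a fragmented low-bitrate link) is precisely the per-technology work the ISSLL group surveys.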

    BitTorrent Sync: Network Investigation Methodology

    The volume of personal information and data that most Internet users amass is ever increasing, and the fast pace of the modern world leaves most requiring instant access to their files. Millions of these users turn to cloud-based file synchronisation services, such as Dropbox, Microsoft Skydrive, Apple iCloud and Google Drive, to enable "always-on" access to their most up-to-date data from any computer or mobile device with an Internet connection. The prevalence of recent media articles covering various invasion-of-privacy issues and data protection breaches has caused many to review how they secure their personal information online. To provide an alternative to cloud-based file backup and synchronisation, BitTorrent Inc. released a cloudless file backup and synchronisation service, named BitTorrent Sync, to alpha testers in April 2013. BitTorrent Sync's popularity rose dramatically throughout 2013, reaching over two million active users by the end of the year. This paper outlines a number of scenarios where a network investigation of the service may prove invaluable as part of a digital forensic investigation. An investigation methodology is proposed, outlining the steps involved in retrieving digital evidence from the network, and the results of a proof-of-concept investigation are presented.
    Comment: 9th International Conference on Availability, Reliability and Security (ARES 2014)

    Cellular-Broadcast Service Convergence through Caching for CoMP Cloud RANs

    Cellular and broadcast services have traditionally been treated independently due to their different market requirements, resulting in different business models and orthogonal frequency allocations. However, with the advent of cheap memory and smart caching, this traditional paradigm can converge into a single system that provides both services efficiently. This paper focuses on multimedia delivery through an integrated network, including both a cellular (also known as unicast or broadband) and a broadcast last mile operating over shared spectrum. The subscribers of the network are equipped with a cache which can effectively create zero perceived latency for multimedia delivery, assuming that the content has been proactively and intelligently cached. The main objective of this work is to establish analytically the optimal content popularity threshold, based on an intuitive cost function. In other words, the aim is to derive which content should be broadcast and which should be unicast. To facilitate this, Cooperative Multi-Point (CoMP) joint processing algorithms are employed for the unicast and broadcast PHY transmissions. To implement this in practice, the integrated network controller is assumed to have access to traffic statistics in terms of content popularity. Simulation results are provided to assess the gain in terms of total spectral efficiency. A conventional system, where the two networks operate independently, is used as a benchmark.
    Comment: Submitted to IEEE PIMRC 201
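    The broadcast/unicast split described above can be illustrated with a toy cost model (our own assumption, not the paper's exact cost function): delivering item i by unicast costs c_u per requesting subscriber, while broadcasting it costs a fixed c_b regardless of audience size. With N subscribers and request probability p_i, broadcasting wins once expected demand N * p_i exceeds c_b / c_u, which yields a popularity threshold:

```python
# Toy broadcast-vs-unicast split by content popularity. The linear cost
# model and all parameter values are illustrative assumptions, not the
# paper's analytical derivation.

def split_catalog(popularity, users, c_u=1.0, c_b=50.0):
    """Return (threshold, broadcast item ids, unicast item ids).

    Unicast cost for item i:  c_u * users * p_i  (one stream per requester)
    Broadcast cost for item i: c_b               (fixed, audience-independent)
    Broadcast is cheaper when p_i > c_b / (c_u * users).
    """
    threshold = c_b / (c_u * users)
    broadcast = [i for i, p in enumerate(popularity) if p > threshold]
    unicast = [i for i, p in enumerate(popularity) if p <= threshold]
    return threshold, broadcast, unicast

# Zipf-like popularity over a 5-item catalog, 1000 subscribers
pop = [0.5, 0.25, 0.12, 0.08, 0.05]
thr, bc, uc = split_catalog(pop, users=1000)
assert thr == 0.05
assert bc == [0, 1, 2, 3] and uc == [4]
```

    The paper's contribution is deriving this threshold analytically for its actual cost function and feeding it to the integrated network controller, which observes content popularity from traffic statistics.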