
    Using reliable multicast for caching and collaboration within the world wide web

    The World Wide Web has become an important medium for information dissemination. One model for synchronized information dissemination within the Web is webcasting, in which data are simultaneously distributed to multiple destinations. The Web's traditional unicast client/server communication model suffers, however, when applied to webcasting: approaches that require many clients to simultaneously fetch data from the origin server will likely cause server and link overload. In this paper we describe a webcast design that improves upon previous designs by leveraging the application level framing (ALF) design methodology. We build upon the Scalable Reliable Multicast (SRM) framework, which is based upon ALF, to create a custom protocol that meets webcast's scalability needs. We employ the protocol in an architecture consisting of two reusable components: a webcache component and a browser control component. We have implemented our design using a new SRM library called libsrm. We present the results of a simple performance evaluation and report on lessons learned while using libsrm.
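
    The core SRM mechanism such a webcast design builds on is receiver-driven repair with randomized suppression: a cache that misses data schedules a multicast repair request after a back-off scaled by its distance from the source, and cancels it if it overhears another member's request first. The Python sketch below illustrates only that timer-and-suppression idea; the class and function names are assumptions for illustration and do not reflect the libsrm API or the paper's webcache and browser-control components.

```python
# Minimal sketch (not the paper's libsrm API) of SRM-style loss recovery as used
# for webcast: receivers that miss a page multicast a repair request after a
# randomized back-off, and suppress their own request if they overhear another.
# All names here are illustrative assumptions.
import random

class SrmReceiver:
    def __init__(self, name, rtt_estimate):
        self.name = name
        self.rtt = rtt_estimate          # estimated distance to the data source
        self.have = set()                # page ids already cached locally

    def request_delay(self):
        # SRM-style randomized timer, proportional to the estimated RTT,
        # so nearby receivers tend to ask first and the rest stay quiet.
        return random.uniform(1.0, 2.0) * self.rtt

def recover(receivers, missing_page):
    """Pick the receiver whose randomized timer fires first; everyone else
    suppresses their duplicate request when they hear it on the group."""
    timers = {r.name: r.request_delay()
              for r in receivers if missing_page not in r.have}
    requester = min(timers, key=timers.get)
    suppressed = [n for n in timers if n != requester]
    return requester, suppressed

if __name__ == "__main__":
    group = [SrmReceiver("cache-A", 0.02), SrmReceiver("cache-B", 0.08),
             SrmReceiver("cache-C", 0.15)]
    who, quiet = recover(group, missing_page="index.html")
    print(f"{who} multicasts the repair request; {quiet} suppress theirs")
```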

    Coded caching in a multi-server system with random topology

    Cache-aided content delivery is studied in a multi-server system with P servers and K users, each user equipped with a local cache memory. In the delivery phase, each user connects randomly to any ρ out of the P servers. Thanks to the availability of multiple servers, which model small-cell base stations (SBSs), demands can be satisfied with reduced storage capacity at each server and a reduced delivery rate per server; however, this also leads to reduced multicasting opportunities compared to the single-server scenario. A joint storage and proactive caching scheme is proposed, which exploits coded storage across the servers, uncoded cache placement at the users, and coded delivery. The delivery latency is studied for both successive and parallel transmissions from the servers. It is shown that, with successive transmissions, the achievable average delivery latency is comparable to that achieved in the single-server scenario, while the gap between the two depends on ρ, the available redundancy across the servers, and can be reduced by increasing the storage capacity at the SBSs. The optimality of the proposed scheme with uncoded cache placement and MDS-coded server storage is also proved for successive transmissions.
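
    The ingredient that makes the random ρ-out-of-P server topology workable is MDS-coded storage: content is coded so that the fragments held by any ρ servers suffice to reconstruct it. The Python sketch below is a generic illustration of that property using a polynomial-evaluation (Reed-Solomon-style) code over GF(257); it is not the paper's placement or delivery scheme, and all function names are assumptions.

```python
# Illustrative sketch, not the paper's construction: an (n = P, k = rho) MDS code
# over GF(257) built from polynomial evaluation. Server i stores f(i), where f is
# the unique polynomial of degree < rho with f(1..rho) equal to the data symbols,
# so ANY rho servers the user happens to reach suffice to rebuild the content.
PRIME = 257

def lagrange_eval(points, x):
    """Evaluate the interpolating polynomial through `points` at `x`, mod PRIME."""
    total = 0
    for k, (xk, yk) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != k:
                num = num * (x - xm) % PRIME
                den = den * (xk - xm) % PRIME
        total = (total + yk * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

def mds_encode(data, num_servers):
    """data: list of rho symbols in [0, 256]. Returns one coded symbol per server."""
    points = list(zip(range(1, len(data) + 1), data))   # f(j) = data[j-1]
    return [lagrange_eval(points, x) for x in range(1, num_servers + 1)]

def mds_decode(server_ids, symbols, rho):
    """Recover the rho data symbols from any rho (server_id, stored symbol) pairs."""
    points = list(zip(server_ids, symbols))[:rho]
    return [lagrange_eval(points, j) for j in range(1, rho + 1)]

if __name__ == "__main__":
    data = [42, 7, 200]                      # rho = 3 data symbols
    coded = mds_encode(data, num_servers=6)  # P = 6 servers, one symbol each
    # The user happens to reach servers 2, 5 and 6 only, yet recovers everything:
    assert mds_decode([2, 5, 6], [coded[1], coded[4], coded[5]], rho=3) == data
```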

    Supporting distributed computation over wide area gigabit networks

    The advent of high-bandwidth fibre optic links that may be used over very large distances has led to much research and development in the field of wide area gigabit networking. One problem that needs to be addressed is how loosely coupled distributed systems may be built over these links, allowing many computers worldwide to take part in complex calculations in order to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked at the practicality of implementing a communication mechanism proposed by Craig Partridge called Late-binding Remote Procedure Calls (LbRPC). LbRPC is intended to export both code and data over the network to remote machines for evaluation, as opposed to traditional RPC mechanisms that only send parameters to pre-existing remote procedures. The ability to send code as well as data means that LbRPC requests can overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS): the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond round-trip delay equates to ever-increasing numbers of CPU cycles. For a WADCS to be efficient, programs should minimise the number of network transits they incur; by allowing the application programmer to export arbitrary code to the remote machine, this may be achieved. This research has looked at the feasibility of supporting secure exportation of arbitrary code and data in heterogeneous, loosely coupled, distributed computing environments. It has investigated techniques for making placement decisions for the code in cases where there are a large number of widely dispersed remote servers that could be used. The latter has resulted in the development of a novel prototype LbRPC using multicast IP for implicit placement and a sequenced, multi-packet saturation multicast transport protocol. These prototypes show that it is possible to export code and data to multiple remote hosts, thereby removing the need to perform complex and error-prone explicit process placement decisions.
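
    The central idea of LbRPC is that a single message carries both the code and the data it operates on, so a multi-step computation costs one wide-area round trip instead of many. The Python sketch below is a conceptual stand-in only: the remote_evaluate helper is hypothetical, the network is simulated by a local call, and none of the thesis's secure code transport, heterogeneity support, or multicast-based placement is modelled.

```python
# Conceptual sketch only (not the thesis's LbRPC protocol): a late-binding RPC
# ships the code and its data to the remote host in one message, so an N-step
# computation costs one wide-area round trip instead of N. The "remote host"
# below is just a local function; executing shipped code like this is unsafe
# outside a sandbox, which is exactly the secure-exportation problem the thesis
# investigates.

def remote_evaluate(source: str, func_name: str, args: tuple):
    """Stand-in for the remote evaluator: compile the exported code, run the
    named function on the exported data, and return only the final result."""
    namespace: dict = {}
    exec(compile(source, "<lbrpc>", "exec"), namespace)
    return namespace[func_name](*args)

# With classic RPC, each call to a pre-existing remote `square` procedure would
# cost a multi-millisecond round trip; here the whole loop travels once.
EXPORTED_CODE = """
def sum_of_squares(values):
    return sum(v * v for v in values)
"""

if __name__ == "__main__":
    result = remote_evaluate(EXPORTED_CODE, "sum_of_squares", ([1, 2, 3, 4],))
    print(result)  # -> 30, returned after a single (simulated) round trip
```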

    Traffic and Resource Management in Robust Cloud Data Center Networks

    Cloud computing is becoming the mainstream paradigm as organizations, both large and small, begin to harness its benefits. Cloud computing gained its success by giving IT exactly what it needed: the ability to grow and shrink computing resources, on the go, in a cost-effective manner, without the anguish of infrastructure design and setup. The ability to adapt computing capacity to market fluctuations is just one of the many benefits that cloud computing has to offer, and it is a key reason the paradigm is rising so rapidly: according to a Gartner report, total sales of the various cloud services will be worth 204 billion dollars worldwide in 2016. With this massive growth, the performance of the underlying infrastructure is crucial to its success and sustainability. Currently, cloud computing depends heavily on data centers for its daily business needs; in fact, it is through the virtualization of data centers that the concept of "computing as a utility" emerged. However, data center virtualization is still in its infancy, and there exists a plethora of open research issues and challenges related to it, including, but not limited to, optimized topologies and protocols, embedding design methods and online algorithms, resource provisioning and allocation, data center energy efficiency, fault-tolerance issues and fault-tolerant design, improving service availability under failure conditions, and enabling network programmability. This dissertation elaborates on and addresses key research challenges and problems related to the design and operation of efficient virtualized data centers and data center infrastructure for cloud services. In particular, we investigate the problem of scalable traffic management and traffic engineering in data center networks and present a decomposition method that solves the problem exactly with considerable runtime improvement over mathematical formulations. To maximize the network's admissibility and increase revenue, cloud providers must make efficient use of their network resources. This goal is highly correlated with the employed resource allocation and placement schemes, formally known as the virtual network embedding problem. This thesis looks at multiple facets of this problem; in particular, we study the embedding problem for services with a one-to-many communication mode, which we denote the multicast virtual network embedding problem. We then tackle the survivable virtual network embedding problem by proposing a fault-tolerant design that provides guaranteed service continuity in the event of server failure. Furthermore, we consider the embedding problem for elastic services in the event of heterogeneous node failures. Finally, in an effort to enable and support data center network programmability, we study the placement problem of softwarized network functions (e.g., load balancers, firewalls, etc.), formally known as the virtual network function assignment problem. Owing to its combinatorial complexity, we propose a novel decomposition method and numerically show that it is a hundred times faster than mathematical formulations from the recent literature.
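
    To make the last of these problems concrete, the sketch below shows the kind of input and output the virtual network function assignment problem involves: softwarized functions with CPU demands mapped onto servers with finite capacity. The first-fit-decreasing heuristic is purely illustrative and is not the dissertation's decomposition method; all names and numbers are assumptions.

```python
# Toy illustration only (not the dissertation's decomposition method) of the
# virtual network function assignment problem's shape: each VNF has a CPU
# demand and must be placed on a server with enough residual capacity. A
# first-fit-decreasing heuristic makes the problem concrete; an exact method
# such as the one proposed in the dissertation may admit requests this
# heuristic rejects.

def place_vnfs(vnf_demands, server_capacities):
    """Map each VNF (name -> CPU demand) onto a server (name -> CPU capacity).
    Returns a placement dict, or None if the heuristic cannot fit everything."""
    residual = dict(server_capacities)
    placement = {}
    # Largest demands first: the classic first-fit-decreasing bin-packing order.
    for vnf, demand in sorted(vnf_demands.items(), key=lambda kv: -kv[1]):
        for server, free in residual.items():
            if free >= demand:
                placement[vnf] = server
                residual[server] = free - demand
                break
        else:
            return None  # admission fails under this heuristic
    return placement

if __name__ == "__main__":
    vnfs = {"firewall": 4, "load-balancer": 3, "nat": 2, "ids": 5}
    servers = {"srv-1": 8, "srv-2": 7}
    print(place_vnfs(vnfs, servers))
```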