44 research outputs found

    Tangled:A Cooperative Anycast Testbed

    Get PDF
    Anycast routing is an area of study that has attracted the interest of many researchers in recent years. Most past anycast studies relied on coarse measurement data, mainly due to the lack of infrastructure on which experiments can be run and data collected at the same time. In this paper we present Tangled, an anycast test environment where researchers can run experiments and better understand the impact of their proposals on a global infrastructure connected to the Internet.

    Crux: Locality-Preserving Distributed Services

    Full text link
    Distributed systems achieve scalability by distributing load across many machines, but wide-area deployments can introduce worst-case response latencies proportional to the network's diameter. Crux is a general framework to build locality-preserving distributed systems, by transforming an existing scalable distributed algorithm A into a new locality-preserving algorithm ALP, which guarantees for any two clients u and v interacting via ALP that their interactions exhibit worst-case response latencies proportional to the network latency between u and v. Crux builds on compact-routing theory, but generalizes these techniques beyond routing applications. Crux provides weak and strong consistency flavors, and shows latency improvements for localized interactions in both cases, specifically up to several orders of magnitude for weakly-consistent Crux (from roughly 900ms to 1ms). We deployed on PlanetLab locality-preserving versions of a Memcached distributed cache, a Bamboo distributed hash table, and a Redis publish/subscribe service. Our results indicate that Crux is effective and applicable to a variety of existing distributed algorithms.
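    The nesting idea behind such a transform can be sketched as follows. This is an illustrative simplification, not the paper's construction: the cluster layout, class, and method names are assumptions. The service is instantiated once per locality cluster at several nested levels, and each client pair is served from the smallest instance that contains both, so response latency tracks the network distance between the two clients rather than the network diameter.

```python
# Hedged sketch of a locality-preserving service layout (illustrative,
# not Crux's actual construction): nested cluster levels, finest first.

class LocalityPreservingService:
    def __init__(self, levels):
        # levels: list of {cluster_id: set_of_member_nodes}, ordered from
        # the finest level (small, nearby clusters) to the coarsest (global)
        self.levels = levels

    def instance_for(self, u, v):
        """Return (level, cluster_id) of the smallest instance covering u and v."""
        for level, clusters in enumerate(self.levels):
            for cid, members in clusters.items():
                if u in members and v in members:
                    return level, cid
        raise ValueError("coarsest level should cover every node")
```

Two nearby clients resolve to a fine-grained local instance, while distant clients fall through to a coarser one.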

    An anycast based feedback aggregation scheme for efficient network transparency in cross-layer design

    Get PDF
    To ensure Quality of Service for multimedia data sessions in next generation mobile telecommunication systems, jointly-optimized cross-layer architectures were introduced recently. Such schemes usually require an adaptive media source which is able to modify the main parameters of ongoing connections by transferring control and feedback information via the network and through different protocol layers from application layer to physical layer and vice versa, according to the actual state of the path between peer nodes. This concept of transmitting cross-layer information is referred to as network transparency in the literature, meaning that the underlying infrastructure is almost invisible to all the entities involved in joint optimization due to the continuous conveyance of cross-layer feedbacks. In this paper we introduce and evaluate a possible solution for reducing the network overhead caused by this volume of information exchange. Our solution is based on the anycasting communication paradigm and creates a hierarchical data aggregation scheme allowing each entity of the multimedia transmission chain to adapt based on frequent feedbacks, and even so in a low-bandwidth manner.
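    The overhead reduction from hierarchical aggregation can be sketched as below. The report fields and combination rules are assumptions for illustration, not the paper's protocol: each collector in the tree combines its children's feedback before forwarding a single summary upstream, so the media source receives one message per round instead of one per entity.

```python
from math import prod

# Illustrative feedback-aggregation sketch (fields and combination rules
# are assumed, not taken from the paper): combine reports bottom-up.

def aggregate_feedback(node):
    """Recursively merge feedback reports over the aggregation tree.

    node: {"report": {"bandwidth_kbps": ..., "loss": ...},
           "children": [subtrees...]}  (children optional)
    """
    reports = [aggregate_feedback(child) for child in node.get("children", [])]
    reports.append(node["report"])
    return {
        # the end-to-end path can sustain only the tightest bandwidth reported
        "bandwidth_kbps": min(r["bandwidth_kbps"] for r in reports),
        # independent per-hop losses roughly compound along the path
        "loss": 1 - prod(1 - r["loss"] for r in reports),
    }
```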

    RFID in the Cloud: A Service for High-Speed Data Access in Distributed Value Chains

    Get PDF
    Radio-Frequency Identification (RFID) is emerging as an important technology for exchanging information about physical objects along distributed value chains. The influential standardization organization EPCglobal has released standards for RFID-based data exchange that follow the data-on-network paradigm. Here, the business-relevant object data is provided by network services, whereas RFID tags are only used to carry a reference number for data retrieval via the Internet. However, as we show in this paper, this paradigm can result in long response times for data access. We present experiments that explore which factors impact the response times and identify obstacles in current architectures. Based on these analyses, we designed a cloud-based service that realizes high-speed data access for data-on-network solutions. We further present simulation experiments analyzing the benefits of our cloud-based concept with regard to fast RFID-data access and reduced infrastructure cost through scale effects.

    MACS: deep reinforcement learning based SDN controller synchronization policy design

    Get PDF
    In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are implemented to balance centralised control, scalability, and reliability requirements. In such networking paradigms, controllers synchronize with each other in an attempt to maintain a logically centralised network view. Despite the presence of various design proposals for distributed SDN controller architectures, most existing works only aim at eliminating anomalies arising from the inconsistencies in different controllers' network views. However, the performance aspect of controller synchronization designs with respect to given SDN applications is generally missing. To fill this gap, we formulate the controller synchronization problem as a Markov decision process (MDP) and apply reinforcement learning techniques combined with deep neural networks (DNNs) to train a smart, scalable, and fine-grained controller synchronization policy, called the Multi-Armed Cooperative Synchronization (MACS), whose goal is to maximise the performance enhancements brought by controller synchronizations. Evaluation results confirm the DNN's exceptional ability in abstracting latent patterns in the distributed SDN environment, rendering significant superiority to the MACS-based synchronization policy, which achieves 56% and 30% performance improvements over the ONOS and greedy SDN controller synchronization heuristics, respectively.
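    The MDP framing can be sketched as below. Tabular Q-learning stands in here for the paper's deep network, and the state, reward, and constants are illustrative assumptions: the state tracks per-domain view staleness, an action picks which remote domain to synchronize, and the reward models the benefit of refreshing a stale view.

```python
import random
from collections import defaultdict

# Hedged MDP sketch (tabular Q-learning in place of MACS's DNN; state,
# reward, and constants are illustrative, not from the paper).
N_DOMAINS, GAMMA, ALPHA, EPS = 3, 0.9, 0.5, 0.1
Q = defaultdict(float)

def step(state, action):
    # syncing a domain refreshes its view; the other domains age by one tick
    nxt = tuple(0 if i == action else min(s + 1, 5) for i, s in enumerate(state))
    return nxt, state[action]  # staler views yield more benefit when synced

def choose(state):
    if random.random() < EPS:
        return random.randrange(N_DOMAINS)                     # explore
    return max(range(N_DOMAINS), key=lambda a: Q[(state, a)])  # exploit

state = (0,) * N_DOMAINS
for _ in range(2000):
    action = choose(state)
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in range(N_DOMAINS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt
```

Replacing the Q-table with a DNN over the same state encoding recovers the deep variant the paper trains.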

    Cloud Computing for Chemical Activity Prediction

    Get PDF
    This paper describes how cloud computing has been used to reduce the time taken to generate chemical activity models from years to weeks. Chemists use Quantitative Structure-Activity Relationship (QSAR) models to predict the activity of molecules. Existing Discovery Bus software builds these models automatically from datasets containing known molecular activities, using a "panel of experts" algorithm. Newly available datasets offer the prospect of generating a large number of significantly better models, but the Discovery Bus would have taken over 5 years to compute them. Fortunately, we show that the "panel of experts" algorithm is well-matched to clouds. In the paper we describe the design of a scalable, Windows Azure based infrastructure for the panel of experts pattern. We present the results of a run in which up to 100 Azure nodes were used to generate results from the new datasets in 3 weeks.
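    The reason the pattern maps well to clouds can be sketched as below. The learner names and scoring are hypothetical stand-ins, not the Discovery Bus implementation: every (dataset, expert) pair is an independent task, so the panel parallelizes cleanly, and the best-scoring model per dataset is kept.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative "panel of experts" sketch (learners and scoring are
# hypothetical); on Azure each task would run on a cloud node rather
# than a local worker thread.

def build_model(task):
    dataset, learner = task
    score = learner(dataset)          # stand-in for fit + validation scoring
    return dataset["name"], learner.__name__, score

def run_panel(datasets, learners, workers=4):
    # every (dataset, learner) pair is independent, hence embarrassingly parallel
    tasks = [(d, l) for d in datasets for l in learners]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(build_model, tasks)
    best = {}                          # best-scoring expert per dataset
    for name, learner_name, score in results:
        if name not in best or score > best[name][1]:
            best[name] = (learner_name, score)
    return best
```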