    The state of peer-to-peer network simulators

    Networking research often relies on simulation to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate and extend existing work. We look at the landscape of simulators for research in peer-to-peer (P2P) networks by conducting a survey of a combined total of over 280 papers from before and after 2007 (the year of the last survey in this area), and comment on the large quantity of research using bespoke, closed-source simulators. We propose a set of criteria that P2P simulators should meet, and poll the P2P research community for their agreement. We aim to drive the community towards performing their experiments on simulators that allow others to validate their results.
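
    As a concrete illustration of the reproducibility criterion, the sketch below shows one way a P2P simulator can make runs replayable: deriving all randomness from a single experiment seed and breaking event-time ties deterministically. This is a minimal assumed design, not a description of any surveyed simulator.

```python
import heapq
import random

class Simulator:
    def __init__(self, seed: int):
        self.rng = random.Random(seed)   # single seeded RNG: same seed, same run
        self.clock = 0.0
        self.events = []                 # min-heap of (time, seq, callback)
        self.seq = 0                     # tie-breaker keeps event order deterministic

    def schedule(self, delay: float, callback):
        heapq.heappush(self.events, (self.clock + delay, self.seq, callback))
        self.seq += 1

    def run(self, until: float):
        while self.events and self.events[0][0] <= until:
            self.clock, _, callback = heapq.heappop(self.events)
            callback()

sim = Simulator(seed=42)

def peer_join():
    print(f"peer joins at t={sim.clock:.3f}")
    sim.schedule(sim.rng.expovariate(1.0), peer_join)  # next join after a random delay

sim.schedule(0.0, peer_join)
sim.run(until=5.0)  # re-running with seed=42 reproduces the exact event trace
```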

    Internames: a name-to-name principle for the future Internet

    We propose Internames, an architectural framework in which names are used to identify all entities involved in communication: contents, users, devices, logical as well as physical points involved in the communication, and services. By not having a static binding between the name of a communication entity and its current location, we allow entities to be mobile, enable them to be reached by any of a number of basic communication primitives, enable communication to span networks with different technologies, and allow for disconnected operation. Furthermore, with the ability to communicate between names, the communication path can be dynamically bound to any of a number of end-points, and the end-points themselves can change as needed. A key benefit of our architecture is its ability to accommodate gradual migration from the current IP infrastructure to a future that may be a ubiquitous Information Centric Network. The basic building blocks of Internames are: i) a name-based Application Programming Interface; ii) a separation of identifiers (names) and locators; iii) a powerful Name Resolution Service (NRS) that dynamically maps names to locators as a function of time/location/context/service; iv) a built-in capacity for evolution, allowing a transparent migration from current networks and the ability to include current specific architectures as particular cases. To achieve this vision, shared by many other researchers, we exploit and expand on Information Centric Networking principles, extending ICN functionality beyond content retrieval, easing send-to-name and push services, and allowing names to also be used to route data on the return path. A key role in this architecture is played by the NRS, which allows for the co-existence of multiple network "realms", including current IP and non-IP networks, glued together by a name-to-name overarching communication primitive.
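
    To make the NRS role more concrete, here is a minimal sketch, under our own assumptions about name syntax and realm labels, of a name-to-locator table whose bindings are dynamic (they expire and can be re-registered on mobility) and realm-aware, so IP and non-IP realms can co-exist behind one resolution primitive. It is an illustration of the idea, not the authors' implementation.

```python
import time
from dataclasses import dataclass

@dataclass
class Binding:
    locator: str    # e.g. an IP address or an ICN route hint
    realm: str      # "ip", "icn", ... lets multiple realms coexist
    expires: float  # bindings are dynamic, so they age out

class NameResolutionService:
    def __init__(self):
        self._table: dict[str, list[Binding]] = {}

    def register(self, name: str, binding: Binding):
        # mobility: an entity re-registers whenever its locator changes
        self._table.setdefault(name, []).append(binding)

    def resolve(self, name: str, realm: str | None = None):
        now = time.time()
        for b in self._table.get(name, []):
            if b.expires > now and (realm is None or b.realm == realm):
                return b.locator
        return None  # unresolved: the caller may fall back to another realm

nrs = NameResolutionService()
nrs.register("user:alice/phone", Binding("192.0.2.7", "ip", time.time() + 60))
nrs.register("user:alice/phone", Binding("/realm-b/alice/phone", "icn", time.time() + 60))
print(nrs.resolve("user:alice/phone", realm="icn"))  # locator chosen per realm/context
```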

    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications such as machine learning and streaming systems do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer like TCP. To improve performance, they either let the sender select a subset of the data and transmit it to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network oblivious and unnecessarily transmit more data, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications on a lossy network with UDP cannot guarantee the accuracy of application computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol called the Approximate Transmission Protocol, or ATP, for datacenter approximate applications. ATP opportunistically exploits available network bandwidth as much as possible, while performing a loss-based rate control algorithm to avoid bandwidth waste and re-transmission. It also ensures fair bandwidth sharing across flows and improves accurate applications' performance by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared to UDP.
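
    The following sketch illustrates the kind of loss-based rate control the abstract refers to: the sender probes for bandwidth and backs off when observed loss exceeds an application-tolerated budget. The control law and constants are our own illustrative assumptions, not ATP's published algorithm.

```python
class LossBasedRateControl:
    def __init__(self, rate_mbps: float = 10.0, loss_budget: float = 0.02):
        self.rate = rate_mbps
        self.loss_budget = loss_budget  # fraction of packets the application tolerates losing

    def on_interval(self, sent: int, lost: int) -> float:
        """Update the sending rate from one measurement interval's loss statistics."""
        loss = lost / sent if sent else 0.0
        if loss > self.loss_budget:
            self.rate *= 0.8            # multiplicative decrease: loss exceeds the budget
        else:
            self.rate += 1.0            # additive increase: probe for spare bandwidth
        return self.rate

ctl = LossBasedRateControl()
for sent, lost in [(1000, 5), (1100, 40), (900, 10)]:
    print(f"next rate: {ctl.on_interval(sent, lost):.1f} Mbps")
```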

    TimeTrader: Exploiting Latency Tail to Save Datacenter Energy for On-line Data-Intensive Applications

    Datacenters running on-line, data-intensive applications (OLDIs) consume significant amounts of energy. However, reducing their energy is challenging due to their tight response time requirements. A key aspect of OLDIs is that each user query goes to all or many of the nodes in the cluster, so that the overall time budget is dictated by the tail of the replies' latency distribution; replies see latency variations in both the network and compute. Previous work proposes to achieve load-proportional energy by slowing down the computation at lower datacenter loads based directly on response times (i.e., at lower loads, the proposal exploits the average slack in the time budget provisioned for the peak load). In contrast, we propose TimeTrader to reduce energy by exploiting the latency slack in the sub-critical replies which arrive before the deadline (e.g., 80% of replies are 3-4x faster than the tail). This slack is present at all loads and subsumes the previous work's load-related slack. While the previous work shifts the leaves' response time distribution to consume the slack at lower loads, TimeTrader reshapes the distribution at all loads by slowing down individual sub-critical nodes without increasing missed deadlines. TimeTrader exploits slack in both the network and compute budgets. Further, TimeTrader leverages Earliest Deadline First scheduling to largely decouple critical requests from the queuing delays of sub-critical requests, which can then be slowed down without hurting critical requests. A combination of real-system measurements and at-scale simulations shows that, without adding to missed deadlines, TimeTrader saves 15-19% and 41-49% energy at 90% and 30% loading, respectively, in a datacenter with 512 nodes, whereas previous work saves 0% and 31-37%.
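
    A minimal sketch of the slack idea: a sub-critical reply expected to finish well before the query deadline can be stretched (e.g., by lowering the node's clock frequency) without adding missed deadlines. The linear service-time model and the fixed safety margin below are simplifying assumptions, not TimeTrader's actual policy.

```python
def slowdown_factor(deadline_ms: float, expected_ms: float, margin_ms: float = 1.0) -> float:
    """How much a sub-critical reply can be stretched and still meet the deadline."""
    slack = deadline_ms - expected_ms - margin_ms
    if slack <= 0:
        return 1.0  # critical reply: no slack, run at full speed
    # assumption: service time scales linearly with slowdown (simplified DVFS model)
    return (expected_ms + slack) / expected_ms

# e.g. a fast reply taking 3 ms against a 12 ms tail-driven budget
print(slowdown_factor(deadline_ms=12.0, expected_ms=3.0))  # ~3.7x stretch available
```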

    Invisible Pixels Are Dead, Long Live Invisible Pixels!

    Privacy has deteriorated in the world wide web ever since the 1990s. The tracking of browsing habits by different third parties has been at the center of this deterioration. Web cookies and so-called web beacons have been the classical ways to implement third-party tracking. Due to the introduction of more sophisticated technical tracking solutions and other fundamental transformations, the use of classical image-based web beacons might be expected to have lost its appeal. According to a sample of over thirty thousand images collected from popular websites, this paper shows that such an assumption is a fallacy: classical 1 x 1 images are still commonly used for third-party tracking in the contemporary world wide web. While it seems that ad-blockers are unable to fully block these classical image-based tracking beacons, the paper further demonstrates that even limited information can be used to accurately classify the third-party 1 x 1 images from other images. An average classification accuracy of 0.956 is reached in the empirical experiment. With these results the paper contributes to the ongoing attempts to better understand the lack of privacy in the world wide web, and the means by which the situation might eventually be improved.
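
    The classification setup can be sketched as follows: even a handful of per-image metadata features can separate tracking pixels from ordinary images. The feature set, the toy data, and the choice of scikit-learn's random forest are our own illustrative assumptions; the paper's 0.956 accuracy comes from its own corpus and feature engineering.

```python
from sklearn.ensemble import RandomForestClassifier

# assumed features: [declared width, declared height, is_third_party, query string length]
X = [
    [1, 1, 1, 120],    # hidden third-party pixel carrying a long tracking query
    [1, 1, 1, 85],
    [640, 360, 0, 0],  # ordinary first-party content image
    [120, 60, 0, 12],
]
y = [1, 1, 0, 0]       # 1 = tracking beacon, 0 = benign image (toy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[1, 1, 1, 64]]))  # likely [1]: flagged as a tracking pixel
```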

    QuLa: service selection and forwarding table population in service-centric networking using real-life topologies

    The number of services located in the network has drastically increased over the last decade, which is why more and more datacenters are located at the network edge, closer to the users. In the current Internet it is up to the client to select a destination using a resolution service (Domain Name System, Content Delivery Networks ...). In the last few years, research on Information-Centric Networking (ICN) has suggested putting this selection responsibility on the network components; routers find the closest copy of a content object using the content name as input. We extend the principle of ICN to services: service routers forward requests to service instances located in datacenters spread across the network edge. To solve this problem, we first present a service selection algorithm based on both server and network metrics. Next, we describe a method to reduce the state required in service routers while minimizing the performance loss caused by this data reduction. Simulation results based on real-life networks show that we are able to find a near-optimal load distribution with only minimal state required in the service routers.
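
    A hedged sketch of selection on combined server and network metrics, as the abstract describes: each service router scores candidate instances on both latency and load, and forwards to the lowest-scoring one. The linear scoring function, the weights, and the instance names are our own illustration, not QuLa's actual algorithm.

```python
def select_instance(instances, w_net: float = 0.5):
    """Pick the best instance from (name, rtt_ms, server_load) tuples; load is in [0, 1]."""
    def score(inst):
        name, rtt_ms, load = inst
        # weighted sum of a network metric and a server metric; lower is better
        return w_net * rtt_ms + (1.0 - w_net) * (load * 100.0)
    return min(instances, key=score)

candidates = [
    ("dc-edge-a", 4.0, 0.9),   # very close but heavily loaded
    ("dc-edge-b", 9.0, 0.2),   # nearby and lightly loaded
    ("dc-core", 60.0, 0.1),    # idle but far away
]
print(select_instance(candidates))  # ('dc-edge-b', 9.0, 0.2): best combined score
```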

    Subversion Over OpenNetInf and CCNx

    We describe experiences and insights from adapting the Subversion version control system to use the network service of two information-centric networking (ICN) prototypes: OpenNetInf and CCNx. The evaluation is done using a local collaboration scenario, common in our own project work, where a group of people meet and share documents through a Subversion repository. The measurements show a performance benefit already with two clients in some of the studied scenarios, despite being done on un-optimised research prototypes. The conclusion is that ICN is clearly beneficial also for non-mass-distribution applications. It was straightforward to adapt Subversion to fetch updated files from the repository using the ICN network service. The adaptation, however, neglected access control, which will need a different approach in ICN than an authenticated SSL tunnel. Another insight from the experiments is that care needs to be taken when implementing the heavy ICN hash and signature calculations. In the prototypes, these are done serially, but we see an opportunity for parallelisation, making use of current multi-core processors.
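
    To illustrate the parallelisation opportunity the authors point at, the sketch below hashes an object's chunks across cores instead of serially. The chunk size and the use of a process pool are assumptions for illustration, not how OpenNetInf or CCNx actually compute their digests.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 64 * 1024  # assumed chunk size

def digest(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def hash_object(data: bytes) -> list[str]:
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ProcessPoolExecutor() as pool:        # one worker per core by default
        return list(pool.map(digest, chunks))  # per-chunk digests, computed in parallel

if __name__ == "__main__":                     # guard required for process pools on spawn platforms
    print(len(hash_object(b"x" * (1 << 20))))  # 16 chunk digests for a 1 MiB object
```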