133 research outputs found

    Inferring Network Usage from Passive Measurements in ISP Networks: Bringing Visibility of the Network to Internet Operators

    The Internet has evolved with us over time; nowadays people depend on it more and more, using it for many of the simple activities of their lives. It is not uncommon to use the Internet for voice and video communications, social networking, banking, and shopping. Current trends in Internet applications such as Web 2.0, cloud computing, and the Internet of Things are bound to bring higher traffic volumes and more heterogeneous traffic. In addition, privacy concerns and network security threats have widely promoted the use of encryption in network communications. All these factors make network management an evolving environment that becomes more difficult every day. This thesis focuses on helping to keep track of some of these changes, observing the Internet from an ISP viewpoint and exploring several aspects of the visibility of a network, giving insights into what contents or services customers retrieve and how those contents are provided to them. Generally, this information is inferred by characterizing and analyzing data collected with passive traffic monitoring tools on operational networks. Analysis and characterization of passively collected traffic is challenging: Internet end-users are not controlled in the traffic they generate, and traffic in the network may be encrypted or encoded in a way that is unfeasible to decode, creating the need for reverse engineering to give the Internet operator a good picture. Despite these challenges, the thesis first presents a characterization of P2P-TV usage of a commercial, proprietary, closed application that encrypts or encodes its traffic, making it quite difficult to discern what is going on just by observing the data the protocol carries. It then presents DN-Hunter, an application that makes a large part of network traffic visible even when encryption or encoding is in use. Finally, it presents a case study applying DN-Hunter to Amazon Web Services (AWS), the most prominent cloud provider offering computing, storage, and content delivery platforms, unveiling the infrastructure, the pervasiveness of content, and AWS's traffic allocation policies. The findings reveal that most of the content residing on cloud computing and Internet storage infrastructures is served by a single Amazon datacenter located in Virginia, despite it appearing to be the worst-performing one for Italian users. This causes traffic to take long and expensive paths through the network. Since AWS offers no automatic migration or load-balancing policies among its different locations, content is exposed to outages, as observed in the datasets presented.
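    As a rough sketch of the DNS-based visibility idea behind DN-Hunter (illustrative only; the names and data structures here are hypothetical, not the tool's actual implementation), otherwise opaque flows can be labeled by remembering which domain name each client most recently resolved to each server IP:

```python
from collections import OrderedDict

class FlowLabeler:
    """Associate flows with the domain a client most recently resolved
    to the flow's server IP: the core intuition of DNS-based labeling."""

    def __init__(self, max_entries=100_000):
        # (client_ip, server_ip) -> last domain seen in a DNS answer
        self.table = OrderedDict()
        self.max_entries = max_entries

    def on_dns_response(self, client_ip, domain, answer_ips):
        # A sniffed DNS response reveals the name the client asked for.
        for ip in answer_ips:
            self.table[(client_ip, ip)] = domain
            self.table.move_to_end((client_ip, ip))
        while len(self.table) > self.max_entries:
            self.table.popitem(last=False)  # evict the oldest binding

    def label_flow(self, client_ip, server_ip):
        # A later flow to that IP inherits the resolved name, even when
        # the payload itself is encrypted.
        return self.table.get((client_ip, server_ip), "unknown")
```

    For example, after `labeler.on_dns_response("10.0.0.5", "cdn.example.com", ["93.184.216.34"])`, a subsequent TLS flow from 10.0.0.5 to 93.184.216.34 would be labeled `cdn.example.com` despite carrying encrypted payload.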

    Efficient Content Distribution With Managed Swarms

    Content distribution has become increasingly important as people have become more reliant on Internet services to provide large multimedia content. Efficiently distributing content is a complex and difficult problem: large content libraries are often distributed across many physical hosts, and each host has its own bandwidth and storage constraints. Peer-to-peer and peer-assisted download systems further complicate content distribution. By contributing their own bandwidth, end users can improve overall performance and reduce load on servers, but end users have their own motivations and incentives that are not necessarily aligned with those of content distributors. Consequently, existing content distributors either opt to serve content exclusively from hosts under their direct control, and thus neglect the large pool of resources that end users can offer, or they allow end users to contribute bandwidth at the expense of sacrificing complete control over available resources. This thesis introduces a new approach to content distribution that achieves high performance for distributing bulk content, based on managed swarms. Managed swarms efficiently allocate bandwidth from origin servers, in-network caches, and end users to achieve system-wide performance objectives. Managed swarming systems are characterized by the presence of a logically centralized coordinator that maintains a global view of the system and directs hosts toward an efficient use of bandwidth. The coordinator allocates bandwidth from each host based on empirical measurements of swarm behavior combined with a new model of swarm dynamics. The new model enables the coordinator to predict how swarms will respond to changes in bandwidth based on past measurements of their performance. In this thesis, we focus on the global objective of maximizing download bandwidth across end users in the system. To that end, we introduce two algorithms that the coordinator can use to compute efficient allocations of bandwidth for each host that result in high download speeds for clients. We have implemented a scalable coordinator that uses these algorithms to maximize system-wide aggregate bandwidth. The coordinator actively measures swarm dynamics and uses the data to calculate, for each host, a bandwidth allocation among the swarms competing for the host's bandwidth. Extensive simulations and a live deployment show that managed swarms significantly outperform centralized distribution services as well as completely decentralized peer-to-peer systems.
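    A minimal sketch of the kind of allocation loop such a coordinator might run (hypothetical interfaces under simplified assumptions; the thesis's actual algorithms and swarm model are more sophisticated):

```python
def allocate_host_bandwidth(budget_kbps, swarms, step_kbps=50):
    """Greedily hand out one host's upload budget in small increments,
    each time to the swarm whose measured response curve promises the
    largest marginal gain in aggregate download rate.

    swarms: swarm id -> callable mapping the bandwidth already granted
    to that swarm to the estimated marginal gain of one more step; a
    stand-in for the coordinator's empirical model of swarm dynamics."""
    allocation = {sid: 0 for sid in swarms}
    remaining = budget_kbps
    while remaining >= step_kbps:
        best = max(swarms, key=lambda sid: swarms[sid](allocation[sid]))
        if swarms[best](allocation[best]) <= 0:
            break  # no swarm benefits from more of this host's bandwidth
        allocation[best] += step_kbps
        remaining -= step_kbps
    return allocation
```

    A real coordinator would re-run such an allocation continuously as fresh swarm measurements arrive, which is where the predictive model of swarm dynamics comes in.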

    Cache Bandwidth Allocation for P2P File-Sharing Systems to Minimize Inter-ISP Traffic


    AngelCast: cloud-based peer-assisted live streaming using optimized multi-tree construction

    Increasingly, commercial content providers (CPs) offer streaming solutions using peer-to-peer (P2P) architectures, which promise significant scalability by leveraging clients’ upstream capacity. A major limitation of P2P live streaming is that playout rates are constrained by clients’ upstream capacities – typically much lower than downstream capacities – which limits the quality of the delivered stream. To leverage P2P architectures without sacrificing quality, CPs must commit additional resources to complement clients’ resources. In this work, we propose a cloud-based service, AngelCast, that enables CPs to complement P2P streaming. By subscribing to AngelCast, a CP is able to deploy extra resources (angels), on demand from the cloud, to maintain a desirable stream quality. Angels do not download the whole stream, nor are they in possession of it. Rather, angels only relay the minimal fraction of the stream necessary to achieve the desired quality. We provide a lower bound on the minimum angel capacity needed to maintain a desired client bit-rate, and develop a fluid-model construction to achieve it. Recognizing the limitations of the fluid-model construction, we design a practical multi-tree construction that captures the spirit of the optimal construction and avoids its limitations. We present a prototype implementation of AngelCast, along with experimental results confirming the feasibility of our service. Supported in part by NSF awards #0720604, #0735974, #0820138, #0952145, #1012798, #1430145, and #1414119.
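    The flavor of such a capacity bound can be captured with a back-of-the-envelope fluid-model calculation (a sketch under standard fluid-model assumptions; the function and its inputs are illustrative, not the paper's exact derivation):

```python
def min_angel_capacity(target_rate, source_up, client_ups):
    """Fluid-model estimate of the extra (angel) upload capacity needed
    so all N clients receive the stream at target_rate. Total supplied
    upload must cover one stream copy per client per unit time,
        N * r <= u_source + sum(u_clients) + a,
    and the source must be able to push the full stream out at least
    once, r <= u_source. All rates in the same unit, e.g. kbit/s."""
    n = len(client_ups)
    if target_rate > source_up:
        raise ValueError("source upload cannot sustain the target rate")
    deficit = n * target_rate - source_up - sum(client_ups)
    return max(0.0, deficit)

# Four clients at 1000 kbit/s with asymmetric uplinks:
# min_angel_capacity(1000, 2000, [400, 400, 300, 100]) -> 800.0
```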

    Optimizing on-demand resource deployment for peer-assisted content delivery

    Increasingly, content delivery solutions leverage client resources in exchange for services in a peer-to-peer (P2P) fashion. Such a peer-assisted service paradigm promises significant infrastructure cost reduction, but suffers from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity to clients, especially for real-time applications where content cannot be cached. In this thesis, we propose a novel architectural service model that enables the establishment of higher-fidelity services through (1) coordinating the content delivery to efficiently utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the upstream capacity of clients. We target three applications that require the delivery of real-time as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time - the time it takes to deliver the content to all clients in a group. The second application is live video streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, and especially for clients running bandwidth-intensive applications. For each of the above applications, we develop analytical models that efficiently allocate the already available resources. They also efficiently allocate additional on-demand resources to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate these techniques through simulation and/or implementation.
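    For the bulk-synchronous case, the classic fluid lower bound on distribution time gives a feel for what such models optimize (a textbook bound in the spirit of the thesis's analysis, not its exact construction; names are illustrative):

```python
def min_distribution_time(file_bits, server_up, client_ups, client_downs):
    """Fluid lower bound on the makespan of delivering a file to all
    N clients in a peer-assisted swarm:
      - the server must upload the file at least once:      F / u_s
      - the slowest client must download the whole file:    F / min(d_i)
      - aggregate upload must move N copies in total: N*F / (u_s + sum u_i)
    The distribution time can be no smaller than the largest of the three."""
    n = len(client_ups)
    return max(file_bits / server_up,
               file_bits / min(client_downs),
               n * file_bits / (server_up + sum(client_ups)))
```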

    Optimizing on-demand resource deployment for peer-assisted content delivery (PhD thesis)

    Increasingly, content delivery solutions leverage client resources in exchange for service in a peer-to-peer (P2P) fashion. Such peer-assisted service paradigms promise significant infrastructure cost reduction, but suffer from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity of these services to the clients. In this thesis, we propose a novel architectural service model that enables the establishment of higher-fidelity services through (1) coordinating the content delivery to optimally utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the uplink capacity of clients. We target three applications that require the delivery of fresh as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time -- the time it takes to deliver the content to all clients in a group. The second application is live streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, and especially for bandwidth-intensive applications. For each of the above applications, we develop mathematical models that optimally allocate the already available resources. They also optimally allocate additional on-demand resources to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate those techniques through simulation and/or implementation. (Major Advisor: Azer Bestavros.)

    SNAP : A Software-Defined & Named-Data Oriented Publish-Subscribe Framework for Emerging Wireless Application Systems

    The evolution of Cyber-Physical Systems (CPSs) has given rise to an emergent class of CPSs defined by ad-hoc wireless connectivity, mobility, and resource constraints in computation, memory, communications, and battery power. These systems are expected to fulfill essential roles in critical infrastructure sectors. Vehicular Ad-Hoc Networks (VANETs) and swarms of Unmanned Aerial Vehicles (UAV swarms) are examples of such systems. The significant utility of these systems, coupled with their economic viability, is a crucial indicator of their anticipated growth in the future. Typically, the tasks assigned to these systems have strict Quality-of-Service (QoS) requirements and require sensing, perception, and analysis of a substantial amount of data. To fulfill these QoS requirements, the system requires network connectivity, data dissemination, and data analysis methods that can operate well within a system's limitations. Traditional Internet protocols and methods for network connectivity and data dissemination are typically designed for well-engineered cyber systems and do not comprehensively support this new breed of emerging systems. The imminent growth of these CPSs presents an opportunity to develop broadly applicable methods that can meet the stated system requirements for a diverse range of systems and integrate these systems with the Internet. These methods could potentially be standardized to achieve interoperability among various systems of the future. This work presents a solution that can fulfill the communication and data dissemination requirements of a broad class of emergent CPSs. The two main contributions of this work are the Application System (APPSYS) system abstraction and a complementary communications framework called the Software-Defined NAmed-data enabled Publish-Subscribe (SNAP) communication framework. An APPSYS is a new breed of Internet application representing mobile and resource-constrained CPSs supporting data-intensive and QoS-sensitive safety-critical tasks, referred to as the APPSYS's mission. The functioning of the APPSYS is closely aligned with the needs of the mission. The standard APPSYS architecture is distributed and partitions the system into multiple clusters, where each cluster is a hierarchical sub-network. The SNAP communication framework within the APPSYS utilizes principles of Information-Centric Networking (ICN) through the publish-subscribe communication paradigm. It further extends the role of brokers within the publish-subscribe paradigm to create a distributed software-defined control plane. The SNAP framework leverages the APPSYS design characteristics to provide flexible and robust communication and dynamic, distributed control-plane decision-making that successfully allows the APPSYS to meet the communication requirements of data-oriented and QoS-sensitive missions. In this work, we present the design, implementation, and performance evaluation of an APPSYS through an exemplar UAV swarm APPSYS. We evaluate the benefits offered by the APPSYS design and the SNAP communication framework in meeting the dynamically changing requirements of a data-intensive and QoS-sensitive Coordinated Search and Tracking (CSAT) mission operating in a UAV swarm APPSYS on the battlefield. Results from the performance evaluation demonstrate that the UAV swarm APPSYS successfully monitors and mitigates network impairments impacting the mission's QoS, thereby supporting the mission's QoS requirements.
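    As a toy illustration of the named-data publish-subscribe pattern that SNAP builds on (the class and method names here are hypothetical, not the SNAP API), a broker can match hierarchical, ICN-style content names against subscribed prefixes:

```python
from collections import defaultdict

class NamedDataBroker:
    """Minimal named-data publish-subscribe broker. Subscriptions are
    hierarchical name prefixes, so a publication to
    '/swarm1/telemetry/uav3' reaches subscribers of '/swarm1/telemetry'."""

    def __init__(self):
        self.subs = defaultdict(list)  # name-prefix tuple -> callbacks

    def subscribe(self, prefix, callback):
        self.subs[tuple(prefix.strip("/").split("/"))].append(callback)

    def publish(self, name, data):
        parts = tuple(name.strip("/").split("/"))
        # deliver to every subscription whose prefix matches the name
        for i in range(1, len(parts) + 1):
            for cb in self.subs.get(parts[:i], []):
                cb(name, data)

# broker = NamedDataBroker()
# broker.subscribe("/swarm1/telemetry", lambda n, d: print(n, d))
# broker.publish("/swarm1/telemetry/uav3", b"pos=47.3,8.5")
```

    SNAP's actual brokers additionally form a distributed software-defined control plane; the sketch shows only the name-based dissemination primitive.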

    An architecture for adaptive task planning in support of IoT-based machine learning applications for disaster scenarios

    The proliferation of the Internet of Things (IoT), in conjunction with edge computing, has recently opened up possibilities for several new applications. Typical examples are Unmanned Aerial Vehicles (UAVs) deployed for rapid disaster response, photogrammetry, surveillance, and environmental monitoring. A common challenge in supporting the flourishing development of machine-learning-assisted applications across all these networked settings is the provision of a persistent service, i.e., a service capable of consistently maintaining a high level of performance in the face of possible failures. To address these service-resilience challenges, we propose APRON, an edge solution for distributed and adaptive task planning management in a network of IoT devices, e.g., drones. Exploiting Jackson's network model, our architecture applies a novel planning strategy to better support control and monitoring operations as the states of the network evolve. To demonstrate the functionalities of our architecture, we also implemented a deep-learning-based audio-recognition application using the APRON NorthBound interface to detect human voices in challenged networks. The application's logic uses Transfer Learning to improve audio classification accuracy and the runtime of UAV-based rescue operations.
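    To make the Jackson-network ingredient concrete (a generic sketch of the standard open Jackson network, not APRON's planner itself): given external arrival rates, a routing matrix, and per-node service rates, the traffic equations yield per-node load and delay estimates that a planner can act on:

```python
import numpy as np

def jackson_metrics(gamma, P, mu):
    """Open Jackson network. Solve the traffic equations
        lambda_i = gamma_i + sum_j lambda_j * P[j][i]
    for effective arrival rates, then treat each node as an independent
    M/M/1 queue (Jackson's theorem). gamma: external arrival rates,
    P: routing probabilities (row j -> next node i), mu: service rates."""
    gamma = np.asarray(gamma, float)
    mu = np.asarray(mu, float)
    P = np.asarray(P, float)
    lam = np.linalg.solve(np.eye(len(gamma)) - P.T, gamma)
    rho = lam / mu                   # per-node utilization
    if np.any(rho >= 1):
        raise ValueError("unstable: some node has utilization >= 1")
    mean_jobs = rho / (1 - rho)      # E[N_i] for an M/M/1 queue
    mean_delay = mean_jobs / lam     # Little's law (assumes lam > 0 everywhere)
    return lam, rho, mean_delay
```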

    Static Web content distribution and request routing in a P2P overlay

    The significance of collaboration over the Internet has become a cornerstone of modern computing, as the essence of information processing and content management has shifted to networked and Web-based systems. As a result, effective and reliable access to networked resources has become a critical commodity in any modern infrastructure. In order to cope with the limitations introduced by the traditional client-server networking model, most popular Web-based services employ separate Content Delivery Networks (CDNs) to distribute the server-side resource consumption. Since Web applications are often latency-critical, CDNs are additionally adopted to optimize the content delivery latencies perceived by Web clients. Because of this prevalent connection model, Web content delivery has grown into a notable industry. The rapid growth in the number of mobile devices further adds to the resources required from the originating server, as content is also accessed on the go. While the Web has become one of the most utilized sources of information and digital content, the openness of the Internet is simultaneously being reduced by organizations and governments preventing access to undesired resources. Access to information may be regulated or altered to suit political interests or organizational benefits, conflicting with the initial design principle of an unrestricted and independent information network. This thesis contributes to the development of a more efficient and open Internet by combining a feasibility study with a preliminary design of a peer-to-peer based Web content distribution and request routing mechanism. The suggested design addresses both the challenges related to the effectiveness of the current client-server networking model and the openness of information distributed over the Internet. Based on the properties of existing peer-to-peer implementations, the suggested overlay design is intended to provide low-latency access to any Web content without sacrificing end-user privacy. The overlay is additionally designed to increase the cost of censorship by forcing a successful blockade to isolate the censored network from the rest of the Internet.
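    A minimal sketch of how request routing in such an overlay could work (a generic Kademlia-style scheme under assumptions of my own; the thesis presents only a preliminary design, and this is not that design):

```python
import hashlib

def key_of(name: str) -> int:
    """Hash node addresses and content URLs into one flat 160-bit key space."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def next_hop(url: str, known_peers: list[str]) -> str:
    """Kademlia-style request routing: forward the request for url to the
    known peer whose identifier is XOR-closest to the content key.
    Repeating this hop by hop converges on the responsible peer in
    O(log N) overlay hops, with no central index to block or censor."""
    key = key_of(url)
    return min(known_peers, key=lambda p: key_of(p) ^ key)

# e.g. next_hop("http://example.org/page", ["peerA:4001", "peerB:4001"])
```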