
    Fast-response Receiver-driven Layered Multicast

    In this paper, a new layered multicast protocol, called Fast-response Receiver-driven Layered Multicast (FRLM), is proposed. The differences between our FRLM and the original RLM lie only at the receivers. Our design allows the receivers to track the available network bandwidth faster; this enables them to converge to their optimal number of subscribed layers more quickly and to respond to network congestion more promptly. An early trigger mechanism for shortening IGMP leave latency is also designed. We show that FRLM can avoid several potential problems with the original RLM that have been overlooked previously. Last but not least, FRLM is a practical scheme that can be readily implemented in today's best-effort Internet.
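    The receiver-driven idea in the abstract above — add a layer when the path looks idle, drop one on loss — can be illustrated with a minimal sketch. This is generic RLM-style logic, not the actual FRLM algorithm; the class name, thresholds, and report interval are all hypothetical.

```python
class LayeredReceiver:
    """Generic receiver-driven layer subscription logic (RLM-style sketch).

    Hypothetical parameters: the real FRLM triggers and timers are defined
    in the paper, not here.
    """
    def __init__(self, max_layers, loss_threshold=0.05, stable_reports=3):
        self.max_layers = max_layers
        self.loss_threshold = loss_threshold
        self.stable_reports = stable_reports  # loss-free reports before a join experiment
        self.layers = 1   # always keep the base layer subscribed
        self._calm = 0    # consecutive low-loss measurement intervals

    def on_loss_report(self, loss_rate):
        """Adjust the subscription after each measurement interval."""
        if loss_rate > self.loss_threshold:
            # Congestion: drop the highest enhancement layer immediately.
            self.layers = max(1, self.layers - 1)
            self._calm = 0
        else:
            self._calm += 1
            if self._calm >= self.stable_reports and self.layers < self.max_layers:
                # Join experiment: probe for spare bandwidth with one more layer.
                self.layers += 1
                self._calm = 0
        return self.layers
```

    FRLM's contribution is making this loop react faster (quicker convergence and prompter congestion response) plus shortening the IGMP leave step that a layer drop triggers.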

    Layered multicast with forward error correction (FEC) for Internet video

    In this paper, we propose RALF, a new FEC-based error control protocol for layered multicast video. RALF embodies two design principles: decoupling transport layer error control from upper layer mechanisms and decoupling error control and congestion control at the transport layer. RALF works with our previously proposed protocol RALM - a layered multicast congestion control protocol with router assistance. RALF provides tunable error control services for upper layers. It requires no additional complexities in the network beyond those for RALM. Its performance is evaluated through simulations in NS2.
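    The simplest form of the FEC idea underlying protocols like RALF is a parity packet XOR-ed over a block of data packets, letting a receiver repair one loss without retransmission. The sketch below shows that generic mechanism only; RALF's actual codes and tuning are described in the paper.

```python
def xor_parity(packets):
    """Compute one XOR parity packet over equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover_missing(received, parity):
    """Recover exactly one missing packet: XOR the parity with the survivors."""
    return xor_parity(list(received) + [parity])
```

    With one parity per block, any single loss in the block is repairable; tunable error control means choosing how much such redundancy each layer carries.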

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions that share great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first contribution is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than the part(s). The second contribution is a reference domain model for discussing and describing solutions - the Analysis Domain Model.

    IP Multicast via Satellite: A Survey

    Many of the emerging applications in the Internet, such as tele-conferencing, distance learning, distributed games, software updates, and distributed computing, would benefit from multicast services. In many of these applications, there is a need to distribute information to many sites that are widely dispersed from each other. Communication satellites are a natural technology option and are extremely well suited for carrying such services. Despite the potential of satellite multicast, there exists little support for satellite IP multicast services. Both the Internet Engineering and Internet Research Task Forces (IETF and IRTF) have been involved in a research effort to identify the design space for a general purpose reliable multicast protocol and standardize certain protocol components as "building blocks". However, for satellite multicast services, several of these components have a different design space. In this paper, we attempt to provide an overview of the design space and the ways in which the network deployment and application requirements affect the solution space. We maintain a similar taxonomy to that of the IETF efforts, and identify which key components of a general multicast protocol are affected by two of the most common satellite network deployment scenarios. We also highlight some of the issues which we think are critical in the development of next generation satellite IP multicast services.

    Data Movement Challenges and Solutions with Software Defined Networking

    With the recent rise in cloud computing, applications routinely access and interact with data on remote resources. Interaction with such remote resources for the operation of media-rich applications in mobile environments is also on the rise. As a result, the performance of the underlying network infrastructure can have a significant impact on the quality of service experienced by the user. Despite receiving significant attention from both academia and industry, computer networks still face a number of challenges. Users often report poor experiences with their devices and applications, which can frequently be attributed to network performance when downloading or uploading application data. This dissertation investigates problems that arise with data movement across computer networks and proposes novel solutions to address these issues through software defined networking (SDN). SDN is widely regarded as the paradigm of choice for next generation networks. While academia explores use cases in various contexts, industry has focused on data center and wide area networks. There is a significant range of complex and application-specific network services that can potentially benefit from SDN, but introduction and adoption of such solutions remains slow in production networks. One impeding factor is the lack of a simple yet sufficiently expressive framework applicable to all SDN services across production network domains. Without a uniform framework, SDN developers create disjoint solutions, resulting in untenable management and maintenance overhead. The SDN-based solutions developed in this dissertation make use of a common agent-based approach. The architecture facilitates application-oriented SDN design with an abstraction composed of software agents on top of the underlying network.
There are three key components modern and future networks require to deliver exceptional data transfer performance to the end user: (1) user and application mobility, (2) high throughput data transfer, and (3) efficient and scalable content distribution. Meeting these key components will not only ensure the network can provide robust and reliable end-to-end connectivity, but also that network resources will be used efficiently. First, mobility support is critical for user applications to maintain connectivity to remote, cloud-based resources. Today's network users frequently access such resources while on the go, transitioning from network to network with the expectation that their applications will continue to operate seamlessly. As users perform handovers between heterogeneous networks or between networks across administrative domains, the application becomes responsible for maintaining or establishing new connections to remote resources. Although application developers often account for such handovers, the result is oftentimes visible to the user through diminished quality of service (e.g. rebuffering in video streaming applications). Many intra-domain handover solutions exist for WiFi and cellular networks, such as mobile IP, but they are architecturally complex and have not been integrated to form a scalable, inter-domain solution. A scalable framework is proposed that leverages SDN features to implement both horizontal and vertical handovers for heterogeneous wireless networks within and across administrative domains. User devices can select an appropriate network using an on-board virtual SDN implementation that manages available network interfaces. An SDN-based counterpart operates in the network core and edge to handle user migrations as they transition from one edge attachment point to another.
The framework was developed and deployed as an extension to the Global Environment for Network Innovations (GENI) testbed; however, the framework can be deployed on any OpenFlow enabled network. Evaluation revealed that users can maintain existing application connections without breaking the sockets and requiring the application to recover. Second, high throughput data transfer is essential for user applications to acquire large remote data sets. As data sizes become increasingly large, and their locations increasingly far from the applications, the well-known impact of reduced Transmission Control Protocol (TCP) throughput over large delay-bandwidth product paths becomes more significant to these applications. While myriad solutions exist to alleviate the problem, they require specialized software and/or network stacks at both the application host and the remote data server, making it hard to scale up to a large range of applications and execution environments. This results in high throughput data transfer that is available to only a select subset of network users who have access to such specialized software. An SDN based solution called Steroid OpenFlow Service (SOS) has been proposed as a network service that transparently increases the throughput of TCP-based data transfers across large networks. SOS shifts the complexity of high performance data transfer from the end user to the network; users do not need to configure anything on the client and server machines participating in the data transfer. The SOS architecture supports seamless high performance data transfer at scale for multiple users and for high bandwidth connections. Emphasis is placed on the use of SOS as a part of a larger, richer data transfer ecosystem, complementing and compounding the efforts of existing data transfer solutions.
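The delay-bandwidth problem named above has a simple back-of-the-envelope form: a single TCP connection's throughput is capped at window / RTT, so a small window on a long path starves even a fast link. The numbers below are illustrative only, and the second helper is a rough stand-in for the parallel-connection idea that services like SOS exploit; it is not SOS's actual architecture.

```python
def tcp_throughput_ceiling(window_bytes, rtt_seconds):
    """A single TCP connection cannot exceed its window divided by the RTT."""
    return window_bytes / rtt_seconds  # bytes per second

def parallel_ceiling(n_connections, window_bytes, rtt_seconds):
    """Aggregate ceiling when the transfer is striped over n connections
    (a rough sketch of why parallelizing TCP helps on long fat paths)."""
    return n_connections * tcp_throughput_ceiling(window_bytes, rtt_seconds)

# Example: a 64 KiB window over a 100 ms cross-country path caps a single
# connection at 655,360 B/s (about 5.2 Mbit/s), regardless of link speed.
single = tcp_throughput_ceiling(64 * 1024, 0.100)
```

This is why the dissertation emphasizes moving the complexity into the network: the end host keeps one ordinary TCP socket while in-network agents handle the parallelism.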
Non-TCP-based solutions, such as Aspera, can operate seamlessly alongside an SOS deployment, while those based on TCP, such as wget, curl, and GridFTP, can leverage SOS for throughput improvement beyond what a single TCP connection can provide. Through extensive evaluation in real-world environments, the SOS architecture is proven to be flexibly deployable on a variety of network architectures, from cloud-based, to production networks, to scaled up, high performance data center environments. Evaluation showed that the SOS architecture scales linearly through the addition of SOS "agents" to the SOS deployment, providing data transfer performance improvement to multiple users simultaneously. An individual data transfer enhanced by SOS was shown to achieve nearly forty times the throughput of the same data transfer without SOS assistance. Third, efficient and scalable video content distribution is imperative as the demand for multimedia content over the Internet increases. Current state of the art solutions consist of vast content distribution networks (CDNs) where content is oftentimes hosted in duplicate at various geographically distributed locations. Although CDNs are useful for the dissemination of static content, they do not provide a clear and scalable model for the on demand production and distribution of live, streaming content. IP multicast is a popular solution to scalable video content distribution; however, it is seldom used due to deployment and operational complexity. Inspired by the distributed design of today's CDNs and the distribution trees used by IP multicast, an SDN-based framework called GENI Cinema (GC) is proposed to allow for the distribution of live video content at scale. GC allows for the efficient management and distribution of live video content at scale without the added architectural complexity and inefficiencies inherent to contemporary solutions such as IP multicast.
GC has been deployed as an experimental, nation-wide live video distribution service using the GENI network, broadcasting live and prerecorded video streams from conferences for remote attendees, from the classroom for distance education, and for live sporting events. GC clients can easily and efficiently switch back and forth between video streams, with improved switching latency over cable, satellite, and other live video providers. The real-world deployments and evaluations of the proposed solutions show how SDN can be used in novel ways to solve current data movement problems across computer networks. In addition, this dissertation is expected to provide guidance for designing, deploying, and debugging SDN-based applications across a variety of network topologies.

    Active congestion control using ABCD (available bandwidth-based congestion detection).

    With the growth of the Internet, the problem of congestion has attained the distinction of being a perennial problem. The Internet community has been trying several approaches for improved congestion control techniques. The end-to-end approach is considered to be the most robust one, and it had served quite well until recently, when researchers started to explore the information available at the intermediate node level. This approach triggered a new field called Active Networks, where intermediate nodes have a much larger role to play than that of naive nodes. This thesis proposes an active congestion control (ACC) scheme based on Available Bandwidth-based Congestion Detection (ABCD), which regulates the traffic according to network conditions. Dynamic changes in the available bandwidth can trigger re-negotiation of the flow rate. We have introduced packet size adjustment at the intermediate router in addition to rate control at the sender node, scaled according to the available bandwidth, which is estimated using three packet probes. To verify the improved scheme, we have extended Ted Faber's ACC work in the NS-2 simulator. With this simulator we verify ACC-ABCD's gains, such as a marginal improvement in average TCP throughput at each endpoint, fewer packet drops, and an improved fairness index. Our tests on NS-2 show that the ACC-ABCD technique yields better results as compared to TCP congestion control with or without cross traffic. Source: Masters Abstracts International, Volume: 43-03, page: 0870. Adviser: A. K. Aggarwal. Thesis (M.Sc.)--University of Windsor (Canada), 2004.
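    The abstract says ABCD estimates available bandwidth from three packet probes. The thesis's exact estimator is not reproduced here; the sketch below shows the classic packet-dispersion idea that such probes typically build on: back-to-back probes arrive spaced by roughly packet_size / bottleneck_rate, so the observed gaps yield a bandwidth estimate. Function name and the conservative max-gap choice are assumptions for illustration.

```python
def probe_bandwidth_estimate(packet_size_bytes, arrival_times):
    """Estimate bandwidth from the dispersion of back-to-back probe packets.

    Three probes give two inter-arrival gaps; taking the widest gap yields
    the more conservative (lower) bandwidth estimate.
    """
    gaps = [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]
    widest = max(gaps)
    return packet_size_bytes / widest  # bytes per second
```

    A router or sender can then scale its rate (or, as in ACC-ABCD, the packet size) against this estimate instead of waiting for loss to signal congestion.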

    Reliable Multicast Transport for Heterogeneous Mobile IP environment using Cross-Layer Information

    A reliable multicast transport architecture designed for heterogeneous mobile IP environments, using cross-layer information for enhanced Quality of Service (QoS) and seamless handover, is discussed. In particular, application-specific reliable multicast retransmission schemes are proposed, which aim to minimize protocol overhead while taking into account the behaviour of mobile receivers (loss of connectivity and handover) and the specific application requirements for reliable delivery (such as carousel, one-to-many download, and streaming delivery combined with recording). The proposed localized retransmission strategies are flexibly configured for tree-based multicast transport. Cross-layer interactions to enhance reliable transport and support seamless handover are discussed, considering IEEE 802.21 media independent handover mechanisms. The implementation is based on a Linux IPv6 environment. Simulations in ns2 focusing on the benefits of the proposed multicast retransmission schemes for particular application scenarios are presented.

    End-to-end security in active networks

    Active network solutions have been proposed for many of the problems caused by the increasing heterogeneity of the Internet. These systems allow nodes within the network to process data passing through in several ways. Allowing code from various sources to run on routers introduces numerous security concerns that have been addressed by research into safe languages, restricted execution environments, and other related areas. But little attention has been paid to an even more critical question: the effect on end-to-end security of active flow manipulation. This thesis first examines the threat model implicit in active networks. It develops a framework of security protocols in use at various layers of the networking stack, and their utility to multimedia transport and flow processing, and asks whether it is reasonable to give active routers access to the plaintext of these flows. After considering the various security problems introduced, such as vulnerability to attacks on intermediaries or coercion, it concludes that it is not. We then ask if active network systems can be built that maintain end-to-end security without seriously degrading the functionality they provide. We describe the design and analysis of three such protocols: a distributed packet filtering system that can be used to adjust multimedia bandwidth requirements and defend against denial-of-service attacks; an efficient composition of link- and transport-layer reliability mechanisms that increases the performance of TCP over lossy wireless links; and a distributed watermarking service that can efficiently deliver media flows marked with the identity of their recipients. In all three cases, functionality similar to that of designs that do not maintain end-to-end security is provided. Finally, we reconsider traditional end-to-end arguments in both networking and security, and show that they have continuing importance for Internet design.
Our watermarking work adds the concept of splitting trust throughout a network to that model; we suggest further applications of this idea.

    Optimization-based rate control in overlay multicast.

    Zhang Lin. Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 74-78). Abstracts in English and Chinese.
    Chapter 1 -- Introduction
        1.1 Why use economic models?
        1.2 Why Overlay?
        1.3 Our Contribution
        1.4 Thesis Organization
    Chapter 2 -- Related Works
        2.1 Overlay Multicast
        2.2 IP Multicast Congestion Control
            2.2.1 Architecture Elements of IP Multicast Congestion Control
            2.2.2 Evaluation of Multicast Video
            2.2.3 End-to-End Schemes
            2.2.4 Router-supported Schemes
            2.2.5 Conclusion
        2.3 Optimization-based Rate Control in IP Unicast and Multicast
            2.3.1 Optimization-based Rate Control for Unicast Sessions
            2.3.2 Optimization-based Rate Control for Multi-rate Multicast Sessions
    Chapter 3 -- Overlay Multicast Rate Control Algorithms
        3.1 Motivations
        3.2 Problem Statement
            3.2.1 Network Model
            3.2.2 Problem Formulation
            3.2.3 Algorithm Requirement
        3.3 Primal-based Algorithm
            3.3.1 Notations
            3.3.2 An Iterative Algorithm
            3.3.3 Convergence Analysis (Assumptions; Convergence with various step-sizes; Theorem Explanations)
        3.4 Dual-based Algorithm
            3.4.1 The Dual Problem
            3.4.2 Subgradient Algorithm
            3.4.3 Interpretation of the Prices
            3.4.4 Convergence Analysis
    Chapter 4 -- Protocol Description and Performance Evaluation
        4.1 Motivations
        4.2 Protocols
            4.2.1 Notations
            4.2.2 Protocol for primal-based algorithm
            4.2.3 Protocol for dual-based algorithm
        4.3 Performance Evaluation
            4.3.1 Simulation Setup
            4.3.2 Rate Convergence Properties
            4.3.3 Data Rate Constraint
            4.3.4 Link Measurement Overhead
            4.3.5 Communication Overhead
    Chapter 5 -- Conclusion Remarks and Future Work
    References
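    The thesis's dual-based subgradient algorithm with price interpretation belongs to the standard network-utility-maximization family. The sketch below is that textbook pattern for a single shared link, not the thesis's overlay formulation: log utilities, a link "price" updated by the subgradient of the dual (excess demand), and flows best-responding to the price. All parameter values are illustrative.

```python
def dual_subgradient_rates(capacity, num_flows, steps=500, step_size=0.01):
    """Dual-based rate control sketch for one link shared by identical flows.

    Utility U(x) = log(x), so each flow's best response to a link price p
    is x = 1/p.  The price rises when demand exceeds capacity and falls
    otherwise -- the subgradient step on the dual problem.
    """
    price = 1.0
    rates = [1.0] * num_flows
    for _ in range(steps):
        rates = [1.0 / price] * num_flows      # per-flow best response
        excess = sum(rates) - capacity         # subgradient: demand - capacity
        price = max(1e-6, price + step_size * excess)
    return rates, price
```

    At the fixed point the link is exactly filled and the price equals num_flows / capacity, which is the "interpretation of the prices": a congestion signal that allocates capacity fairly among log-utility flows.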