
    Network architecture for large-scale distributed virtual environments

    Distributed Virtual Environments (DVEs) provide 3D graphical computer-generated environments with stereo sound, supporting real-time collaboration between potentially large numbers of users distributed around the world. Early DVEs were used over local area networks (LANs). With the Internet's development into the most common platform for DVEs, these distributed applications have moved towards exploiting IP networks, which has introduced scalability challenges into the evolution of DVEs. Network bandwidth is the most limited resource of a DVE system, and to improve a DVE's scalability it is necessary to manage this resource carefully. To save network bandwidth, the different types of network traffic produced by DVEs have to be considered. DVE applications demand the exchange of data that forms different types of traffic, such as control data, video and audio, and 3D data, in order to keep the application's state consistent. The QoS requirements of both control and continuous media traffic have already been covered by existing research, but QoS for the transfer of 3D information has received little attention. 3D DVE geometry traffic is very bursty in nature and places high demands on the network for short intervals of time, owing to the large size of 3D models and the DVE application's requirement to transmit 3D data as quickly as possible. The main motivation for the work presented in this thesis is to improve the scalability of DVE applications by considering the QoS requirements of the 3D DVE geometrical data type. In this work we investigate the possibility of decreasing the network bandwidth consumed by 3D DVE traffic using the level of detail (LOD) concept and the active networking approach. The background work of the thesis surveys DVE applications and the scalability requirements of DVE systems. It also discusses active networks and the multiresolution representation and progressive transmission of 3D data. A new active networking approach to the transmission of 3D geometry data within DVE systems is proposed in this thesis. This approach enhances the currently applied peer-to-peer DVE architecture by adding, to the peer-to-peer multicast network-layer filtering of the 3D flows, application-level filtering on the active intermediate nodes. The active router keeps application-level information about the placement of users; this information is used by active routers to prune the more detailed 3D data flows (higher LODs) on the multicast tree branches that lead to distant DVE participants. The possible benefits of the proposed active approach are explored through comparison with the non-active approach using simulation-based performance modelling; the complex interactions between participants in a DVE application and the large number of analyzed variables indicate that flexible simulation is more appropriate than mathematical modelling, and building a test bed would not be feasible. Results from the evaluation demonstrate that the proposed active approach has the potential to improve the DVE's scalability, but the degree of improvement depends on the users' movement pattern. Therefore, other active networking methods to support 3D DVE geometry transmission may also be required
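
The pruning decision performed at an active router can be illustrated with a short sketch. This is a minimal illustration of distance-based LOD filtering, not the thesis implementation: the class names, the lod_range thresholds and the coordinates are invented for the example.

    from dataclasses import dataclass, field

    @dataclass
    class Branch:
        """One outgoing arc of the multicast tree at an active router."""
        name: str
        participant_positions: list  # (x, y, z) positions of DVE users reachable via this branch

    @dataclass
    class ActiveRouter:
        """Active node that filters 3D geometry flows per LOD level."""
        branches: list
        # Hypothetical policy: maximum distance at which each LOD level is still useful.
        lod_range: dict = field(default_factory=lambda: {0: float("inf"), 1: 200.0, 2: 50.0})

        def forward(self, packet_lod: int, source_pos: tuple) -> list:
            """Return the branches onto which a packet of the given LOD is forwarded."""
            selected = []
            for branch in self.branches:
                # Forward only if at least one participant behind this branch is close
                # enough to the 3D object to benefit from this level of detail.
                nearest = min(self._dist(source_pos, p) for p in branch.participant_positions)
                if nearest <= self.lod_range[packet_lod]:
                    selected.append(branch)
            return selected

        @staticmethod
        def _dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    # Example: the detailed LOD-2 flow is pruned on the branch whose users are far away.
    router = ActiveRouter(branches=[
        Branch("near_site", [(10.0, 0.0, 0.0)]),
        Branch("far_site", [(900.0, 0.0, 0.0)]),
    ])
    print([b.name for b in router.forward(packet_lod=2, source_pos=(0.0, 0.0, 0.0))])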

    Optimal Proxy Cache Allocation for Efficient Streaming Media Distribution

    In this paper, we address the problem of efficiently streaming a set of heterogeneous videos from a remote server through a proxy to multiple asynchronous clients so that they can experience playback with low startup delays. We develop a technique to analytically determine the optimal proxy prefix cache allocation to the videos that minimizes the aggregate network bandwidth cost. We integrate proxy caching with traditional server-based reactive transmission schemes such as batching, patching and stream merging to develop a set of proxy-assisted delivery schemes. We quantitatively explore the impact of the choice of transmission scheme, cache allocation policy, proxy cache size, and availability of unicast versus multicast capability, on the resultant transmission cost. Our evaluations show that even a relatively small prefix cache (10%-20% of the video repository) is sufficient to realize substantial savings in transmission cost. We find that carefully designed proxy-assisted reactive transmission schemes can produce significant cost savings even in predominantly unicast environments such as the Internet
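
As a rough illustration of the allocation problem described above, the sketch below greedily hands out prefix-cache blocks to whichever video currently offers the largest marginal drop in transmission cost. The cost function is a toy placeholder: the paper derives the scheme-specific transmission costs (for batching, patching, stream merging) analytically and determines the optimal allocation from them.

    import heapq

    def allocate_prefix_cache(videos, cache_blocks, cost):
        """Greedy sketch: hand out unit-sized prefix blocks one at a time to the video
        whose next block yields the largest drop in aggregate transmission cost.

        videos       -- list of video identifiers
        cache_blocks -- total number of prefix blocks the proxy can hold
        cost(v, k)   -- estimated transmission cost of video v when its first k blocks
                        are cached at the proxy (placeholder for a scheme-specific model)
        """
        allocation = {v: 0 for v in videos}
        # Max-heap of (negative marginal saving, video).
        heap = [(-(cost(v, 0) - cost(v, 1)), v) for v in videos]
        heapq.heapify(heap)
        for _ in range(cache_blocks):
            neg_gain, v = heapq.heappop(heap)
            if -neg_gain <= 0:
                break  # no video benefits from more cache
            allocation[v] += 1
            k = allocation[v]
            heapq.heappush(heap, (-(cost(v, k) - cost(v, k + 1)), v))
        return allocation

    # Toy cost model: popular videos gain more from each cached prefix block.
    popularity = {"a": 10.0, "b": 3.0, "c": 1.0}
    toy_cost = lambda v, k: popularity[v] * max(0, 100 - k)
    print(allocate_prefix_cache(list(popularity), cache_blocks=5, cost=toy_cost))

Greedy allocation of this kind is only guaranteed to be optimal when each video's cost is convex and decreasing in the amount of cached prefix, which is an assumption of the sketch rather than a claim taken from the paper.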

    Queue stability analysis in network coded wireless multicast

    In this dissertation, queue stability in wireless multicast networks with packet erasure channels is studied. Our focus is on optimizing packet scheduling so as to maximize throughput. Specifically, new queuing strategies consisting of several sub-queues are introduced, where all newly arrived packets are first stored in the main sub-queue on a first-come-first-served basis. Using receiver feedback, the transmitter combines packets from different sub-queues for transmission. Our objective is to maximize the input rate under queue stability constraints. Two packet scheduling and encoding algorithms have been developed. First, the optimization problem is formulated as a linear programming (LP) problem, according to which a network coding based optimal packet scheduling scheme is obtained. Second, the Lyapunov optimization model is adopted and decision variables are defined to derive a network coding based packet scheduling algorithm, which has significantly lower complexity and a smaller queue backlog compared with the LP solution. Further, an extension of the proposed algorithm is derived to meet the requirements of time-critical data transmission, where each packet expires after a predefined deadline and is then dropped from the system. To minimize the average transmission power, we further derive a scheduling policy that simultaneously minimizes both power and queue size, where the transmitter may choose to be idle to save energy. Moreover, a redundancy in the schedules is revealed by the algorithm; by detecting and removing this redundancy we further reduce the system complexity. Finally, the simulation results verify the effectiveness of our proposed algorithms compared with existing works
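
The way receiver feedback lets the transmitter combine packets from different sub-queues can be illustrated with the classic XOR example below. This is a generic index-coding sketch over assumed feedback state, not the LP-based or Lyapunov-based schedulers developed in the dissertation.

    def choose_xor_combination(wants, has):
        """Select packets to XOR into a single coded transmission.

        wants -- dict: receiver -> the packet it is still missing
        has   -- dict: receiver -> set of packets it has already decoded

        A receiver's missing packet is included only if every other targeted
        receiver already holds it, so each receiver in the returned set can
        cancel the other packets out of the XOR and decode its own.
        """
        chosen = {}
        for rx, pkt in wants.items():
            if all(pkt in has[other] for other in wants if other != rx):
                chosen[rx] = pkt
        return chosen

    # Feedback: r1 still misses p2 but holds p1; r2 misses p1 but holds p2.
    feedback_wants = {"r1": "p2", "r2": "p1"}
    feedback_has = {"r1": {"p1"}, "r2": {"p2"}}
    # A single transmission of p1 XOR p2 serves both receivers at once.
    print(choose_xor_combination(feedback_wants, feedback_has))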

    Data Movement Challenges and Solutions with Software Defined Networking

    With the recent rise in cloud computing, applications are routinely accessing and interacting with data on remote resources. Interaction with such remote resources for the operation of media-rich applications in mobile environments is also on the rise. As a result, the performance of the underlying network infrastructure can have a significant impact on the quality of service experienced by the user. Despite receiving significant attention from both academia and industry, computer networks still face a number of challenges. Users oftentimes report and complain about poor experiences with their devices and applications, which can oftentimes be attributed to network performance when downloading or uploading application data. This dissertation investigates problems that arise with data movement across computer networks and proposes novel solutions to address these issues through software defined networking (SDN). SDN is lauded as the paradigm of choice for next generation networks. While academia explores use cases in various contexts, industry has focused on data center and wide area networks. There is a significant range of complex and application-specific network services that can potentially benefit from SDN, but introduction and adoption of such solutions remains slow in production networks. One impeding factor is the lack of a simple yet expressive enough framework applicable to all SDN services across production network domains. Without a uniform framework, SDN developers create disjoint solutions, resulting in untenable management and maintenance overhead. The SDN-based solutions developed in this dissertation make use of a common agent-based approach. The architecture facilitates application-oriented SDN design with an abstraction composed of software agents on top of the underlying network. There are three key components modern and future networks require to deliver exceptional data transfer performance to the end user: (1) user and application mobility, (2) high throughput data transfer, and (3) efficient and scalable content distribution. Meeting these key components will not only ensure the network can provide robust and reliable end-to-end connectivity, but also that network resources will be used efficiently. First, mobility support is critical for user applications to maintain connectivity to remote, cloud-based resources. Today's network users are frequently accessing such resources while on the go, transitioning from network to network with the expectation that their applications will continue to operate seamlessly. As users perform handovers between heterogeneous networks or between networks across administrative domains, the application becomes responsible for maintaining or establishing new connections to remote resources. Although application developers often account for such handovers, the result is oftentimes visible to the user through diminished quality of service (e.g. rebuffering in video streaming applications). Many intra-domain handover solutions exist for handovers in WiFi and cellular networks, such as Mobile IP, but they are architecturally complex and have not been integrated to form a scalable, inter-domain solution. A scalable framework is proposed that leverages SDN features to implement both horizontal and vertical handovers for heterogeneous wireless networks within and across administrative domains. User devices can select an appropriate network using an on-board virtual SDN implementation that manages available network interfaces.
An SDN-based counterpart operates in the network core and edge to handle user migrations as they transition from one edge attachment point to another. The framework was developed and deployed as an extension to the Global Environment for Network Innovations (GENI) testbed; however, the framework can be deployed on any OpenFlow-enabled network. Evaluation revealed that users can maintain existing application connections without breaking sockets or requiring the application to recover. Second, high throughput data transfer is essential for user applications to acquire large remote data sets. As data sizes become increasingly large, often combined with their locations being far from the applications, the well-known impact of lower Transmission Control Protocol (TCP) throughput over large delay-bandwidth product paths becomes more significant to these applications. While myriad solutions exist to alleviate the problem, they require specialized software and/or network stacks at both the application host and the remote data server, making it hard to scale up to a large range of applications and execution environments. This results in high throughput data transfer that is available to only a select subset of network users who have access to such specialized software. An SDN-based solution called Steroid OpenFlow Service (SOS) has been proposed as a network service that transparently increases the throughput of TCP-based data transfers across large networks. SOS shifts the complexity of high performance data transfer from the end user to the network; users do not need to configure anything on the client and server machines participating in the data transfer. The SOS architecture supports seamless high performance data transfer at scale for multiple users and for high bandwidth connections. Emphasis is placed on the use of SOS as a part of a larger, richer data transfer ecosystem, complementing and compounding the efforts of existing data transfer solutions. Non-TCP-based solutions, such as Aspera, can operate seamlessly alongside an SOS deployment, while those based on TCP, such as wget, curl, and GridFTP, can leverage SOS for throughput improvement beyond what a single TCP connection can provide. Through extensive evaluation in real-world environments, the SOS architecture is proven to be flexibly deployable on a variety of network architectures, from cloud-based networks, to production networks, to scaled-up, high-performance data center environments. Evaluation showed that the SOS architecture scales linearly through the addition of SOS “agents” to the SOS deployment, providing data transfer performance improvement to multiple users simultaneously. An individual data transfer enhanced by SOS was shown to achieve nearly forty times the throughput of the same data transfer without SOS assistance. Third, efficient and scalable video content distribution is imperative as the demand for multimedia content over the Internet increases. Current state-of-the-art solutions consist of vast content distribution networks (CDNs) where content is oftentimes hosted in duplicate at various geographically distributed locations. Although CDNs are useful for the dissemination of static content, they do not provide a clear and scalable model for the on-demand production and distribution of live, streaming content. IP multicast is a popular solution to scalable video content distribution; however, it is seldom used due to deployment and operational complexity.
Inspired by the distributed design of today's CDNs and the distribution trees used by IP multicast, an SDN-based framework called GENI Cinema (GC) is proposed to allow for the efficient management and distribution of live video content at scale, without the added architectural complexity and inefficiencies inherent to contemporary solutions such as IP multicast. GC has been deployed as an experimental, nation-wide live video distribution service using the GENI network, broadcasting live and prerecorded video streams from conferences for remote attendees, from the classroom for distance education, and for live sporting events. GC clients can easily and efficiently switch back and forth between video streams, with lower switching latency than cable, satellite, and other live video providers. The real-world deployments and evaluation of the proposed solutions show how SDN can be used as a novel way to solve current data transfer problems across computer networks. In addition, this dissertation is expected to provide guidance for designing, deploying, and debugging SDN-based applications across a variety of network topologies
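
The single-connection TCP limitation that SOS targets can be made concrete with the well-known Mathis approximation of steady-state TCP throughput. The sketch below uses purely illustrative numbers, and the linear scaling with parallel streams is generic motivation rather than a description of SOS internals.

    from math import sqrt

    def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, n_streams=1):
        """Mathis et al. approximation of steady-state TCP throughput:
        rate ~ (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22 under common assumptions.
        Multiple parallel streams scale the aggregate roughly linearly until the
        path capacity is reached (the effect SOS-style services exploit).
        """
        c = 1.22
        per_stream = (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate))
        return n_streams * per_stream

    # Illustrative numbers: 1460-byte segments, 100 ms RTT, 0.01% loss.
    single = mathis_throughput_bps(1460, 0.100, 1e-4)
    many = mathis_throughput_bps(1460, 0.100, 1e-4, n_streams=32)
    print(f"single TCP stream  : {single / 1e6:6.1f} Mbit/s")
    print(f"32 parallel streams: {many / 1e6:6.1f} Mbit/s (capped by link capacity)")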

    Flexible Application-Layer Multicast in Heterogeneous Networks

    This work develops a set of peer-to-peer-based protocols and extensions in order to provide Internet-wide group communication. The focus is put on the question of how different access technologies can be integrated in order to address the growing traffic load problem. To this end, protocols are developed that allow autonomous adaptation to the current network situation on the one hand and the integration of WiFi domains where applicable on the other hand

    Discovery and Group Communication for Constrained Internet of Things Devices using the Constrained Application Protocol

    The ubiquitous Internet is rapidly spreading to new domains. This expansion of the Internet is comparable in scale to the spread of the Internet in the ’90s. The resulting Internet is now commonly referred to as the Internet of Things (IoT) and is expected to connect about 50 billion devices by the year 2020. This means that in just five years from the time of writing this PhD, the number of interconnected devices will exceed the number of humans sevenfold. It is further expected that the majority of these IoT devices will be resource-constrained embedded devices such as sensors and actuators. Sensors collect information about the physical world and inject this information into the virtual world. Next, processing and reasoning can occur, and decisions can be taken to act upon the physical world by injecting feedback to actuators. The integration of embedded devices into the Internet introduces new challenges, since many of the existing Internet technologies and protocols were not designed for this class of constrained devices. These devices are typically optimized for low cost and power consumption and thus have very limited power, memory, and processing resources and have long sleep periods. The networks formed by these embedded devices are also constrained and have different characteristics than those typical in today's Internet. These constrained networks have high packet loss, low throughput, frequent topology changes and small useful payload sizes. They are referred to as Low-power and Lossy Networks (LLNs). Therefore, it is in most cases infeasible to run standard Internet protocols on this class of constrained devices and/or LLNs. New or adapted protocols that take into consideration the capabilities of the constrained devices and the characteristics of LLNs are required. In the past few years, there have been many efforts to enable the extension of Internet technologies to constrained devices. Initially, most of these efforts focused on the networking layer. However, the expansion of the Internet in the ’90s was not due to the introduction of new or better networking protocols. It was a result of introducing the World Wide Web (WWW), which made it easy to integrate services and applications. One of the essential technologies underpinning the WWW was the Hypertext Transfer Protocol (HTTP). Today, HTTP has become a key protocol in the realization of scalable web services built around the Representational State Transfer (REST) paradigm. The REST architectural style enables the realization of scalable and well-performing services using uniform and simple interfaces. The availability of an embedded counterpart of HTTP and the REST architecture could boost the uptake of the IoT. Therefore, more recently, work started to allow the integration of constrained devices in the Internet at the service level. The Internet Engineering Task Force (IETF) Constrained RESTful Environments (CoRE) working group has realized the REST architecture in a suitable form for the most constrained nodes and networks. To that end, the Constrained Application Protocol (CoAP) was introduced: a specialized RESTful web transfer protocol for use with constrained networks and nodes. CoAP realizes a subset of the REST mechanisms offered by HTTP, but is optimized for Machine-to-Machine (M2M) applications. This PhD research builds upon CoAP to enable a better integration of constrained devices in the IoT and examines proposed CoAP solutions theoretically and experimentally, proposing alternatives where appropriate. 
The first part of this PhD proposes a mechanism that facilitates the deployment of sensor networks and enables the discovery, end-to-end connectivity and service usage of newly deployed sensor nodes. The proposed approach makes use of CoAP and combines it with the Domain Name System (DNS) in order to enable the use of user-friendly Fully Qualified Domain Names (FQDNs) for addressing sensor nodes. It includes the automatic discovery of sensors and sensor gateways and the translation of HTTP to CoAP, thus making the sensor resources globally discoverable and accessible from any Internet-connected client using either IPv6 addresses or DNS names, via either HTTP or CoAP. As such, the proposed approach provides a feasible and flexible solution to achieve hierarchical self-organization with a minimum of pre-configuration. By doing so we minimize costly human interventions and eliminate the need for introducing new protocols dedicated to the discovery and organization of resources. This reduces both cost and the implementation footprint on the constrained devices. The second, larger, part of this PhD focuses on using CoAP to realize communication with groups of resources. In many IoT application domains, sensors or actuators need to be addressed as groups rather than individually, since individual resources might not be sufficient or useful. A simple example is that all lights in a room should go on or off as a result of the user toggling the light switch. As not all IoT applications may need group communication, the CoRE working group did not include it in the base CoAP specification. This way the base protocol is kept as efficient and as simple as possible so that it can run on even the most constrained devices. Group communication and other features that might not be needed by all devices are standardized in a set of optional separate extensions. We first examine the proposed CoAP extension for group communication, which utilizes Internet Protocol version 6 (IPv6) multicast. We highlight its strengths and weaknesses and propose our own complementary solution that uses unicast to realize group communication. Our solution offers capabilities beyond simple group communication. For example, we provide a validation mechanism that performs several checks on the group members, to make sure that combining them is possible. We also allow the client to request that the results of the individual members be processed before they are sent to the client. For example, the client can request to obtain only the maximum value of all individual members (a rough sketch of this kind of unicast fan-out with result aggregation is given after this abstract). Another important optional extension to CoAP allows clients to continuously observe resources by registering their interest in receiving notifications from CoAP servers once there are changes to the values of the observed resources. By using this publish/subscribe mechanism the client does not need to continuously poll the resource to find out whether its value has changed. This typically leads to more efficient communication patterns that preserve valuable device and LLN resources. Unfortunately, CoAP observe does not work together with the CoAP group communication extension, since the observe extension assumes unicast communication while the group communication extension only supports multicast communication. In this PhD we propose to extend our own group communication solution to offer group observation capabilities. 
By combining group observation with group processing features, it becomes possible to notify the client only about certain changes to the observed group (e.g., the maximum value of all group members has changed). Acknowledging that both multicast and unicast have strengths and weaknesses, we propose to extend our unicast-based solution with certain multicast features. By doing so we try to combine the strengths of both approaches to obtain a better overall group communication solution that is flexible and can be tailored to the needs of the use case. Together, the proposed mechanisms represent a powerful and comprehensive solution to the challenging problem of group communication with constrained devices. We have evaluated the solutions proposed in this PhD extensively and in a variety of forms. Where possible, we have derived theoretical models and have conducted numerous simulations to validate them. We have also experimentally evaluated those solutions and compared them with other proposed solutions using a small demo box and, later on, two large-scale wireless sensor testbeds under different test conditions. The first testbed is located in a large, shielded room, which allows testing in a controlled environment. The second testbed is located inside an operational office building and thus allows testing under normal operating conditions. Those tests revealed performance issues and some other problems. We have provided some solutions and suggestions for tackling those problems. Apart from the main contributions, two other relevant outcomes of this PhD are described in the appendices. In the first appendix we review the most important IETF standardization efforts related to the IoT and show that, with the introduction of CoAP, a complete set of standard protocols has become available to cover the complete networking stack, thus making the step from the IoT to the Web of Things (WoT). Using only standard protocols makes it possible to integrate devices from various vendors into one big WoT accessible to humans and machines alike. In the second appendix, we provide an alternative solution for grouping constrained devices by using virtualization techniques. Our approach focuses on the objects, both resource-constrained and non-constrained, that need to cooperate by integrating them into a secured virtual network, named an Internet of Things Virtual Network or IoT-VN. Inside this IoT-VN, full end-to-end communication can take place through the use of protocols that take the limitations of the most resource-constrained devices into account. We describe how this concept maps to several generic use cases and, as such, can constitute a valid alternative approach for supporting selected applications
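
A unicast fan-out with result aggregation of the kind described in this abstract could look roughly like the sketch below, written against the aiocoap Python library. The member addresses, resource paths and the "maximum" aggregation policy are illustrative assumptions, and the aggregation is shown at the requesting side for brevity rather than in a separate group entity as proposed in the PhD.

    import asyncio
    import aiocoap

    # Hypothetical member list of a light-sensor group (addresses and paths are
    # purely illustrative).
    GROUP_MEMBERS = [
        "coap://[2001:db8::1]/sensors/lux",
        "coap://[2001:db8::2]/sensors/lux",
        "coap://[2001:db8::3]/sensors/lux",
    ]

    async def read_member(protocol, uri):
        # Plain unicast CoAP GET to one group member.
        request = aiocoap.Message(code=aiocoap.GET, uri=uri)
        response = await protocol.request(request).response
        return float(response.payload.decode())

    async def group_get_max():
        # Fan the request out over unicast and return only the aggregated result,
        # mirroring the "maximum of all members" processing mentioned above.
        protocol = await aiocoap.Context.create_client_context()
        readings = await asyncio.gather(*(read_member(protocol, m) for m in GROUP_MEMBERS))
        return max(readings)

    if __name__ == "__main__":
        print("group maximum:", asyncio.run(group_get_max()))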

    Path signalling in a wireless back-haul network integrating unidirectional broadcast technologies

    The back-haul infrastructures of today's wireless operators must support the triple-play services demanded by the market or regulatory bodies. To cope with increasing capacity demand, in our previous work we have developed a cost-effective heterogeneous layer 2.5 wireless back-haul (WiBACK) architecture, which leverages the native multicast capabilities of broadcast technologies such as DVB to off-load high-bandwidth broadcast content delivery. Furthermore, our architecture provides support for unidirectional technologies on the data and the control plane. It adopts a centralized coordinator approach, in which coordinator nodes install so-called management and data pipes. No routing state is kept at plain WiBACK nodes, which merely store QoS-aware pipe forwarding state. Consequently, the architecture requires a reliable protocol to push resource allocation and pipe forwarding state into the network, considering possibly unidirectional connectivity. Such a protocol, whose task is related to MPLS label distribution, is essential during the initial formation of WiBACK topologies and during regular network operations to reliably manage the data pipes. In this paper, we present a novel approach to extend our IEEE 802.21-inspired WiBACK TransportService and, based upon this, the design of an RSVP-TE-style pipe signalling protocol using nested hop-by-hop request/response MIH transactions that supports signalling over unidirectional technologies. A thorough evaluation and successful testbed deployments show that this protocol reliably signals pipe state even under high loss conditions
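
The nested hop-by-hop request/response pattern underlying the pipe signalling can be sketched conceptually as follows. The function and parameter names are invented, and the sketch abstracts away the MIH message formats and the handling of unidirectional links that the actual WiBACK protocol addresses.

    def signal_pipe(path, pipe_id, qos, install, rollback):
        """Conceptual sketch of RSVP-TE-style nested hop-by-hop pipe signalling:
        each node installs its local forwarding/QoS state, then issues the request
        to the next hop and only reports success upstream once the whole tail of
        the path has confirmed.  A downstream failure rolls the local state back.

        path             -- list of node identifiers from ingress to egress
        install(node)    -- try to reserve resources / install pipe state, returns bool
        rollback(node)   -- undo a previously successful installation
        pipe_id, qos     -- stand in for the request payload; a real implementation
                            would carry them inside the MIH messages
        """
        node, tail = path[0], path[1:]
        if not install(node):
            return False                      # local admission/installation failed
        if tail and not signal_pipe(tail, pipe_id, qos, install, rollback):
            rollback(node)                    # nested response reported a failure
            return False
        return True                           # response cascades back hop by hop

    # Toy run: the third hop rejects the reservation, so upstream state is undone.
    state = {}
    ok = signal_pipe(
        ["ingress", "relay-1", "relay-2", "egress"], pipe_id=7, qos={"rate_mbps": 4},
        install=lambda n: n != "relay-2" and (state.update({n: "installed"}) or True),
        rollback=lambda n: state.pop(n, None),
    )
    print(ok, state)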