14 research outputs found

    TCP Goes to Hollywood

    Real-time multimedia applications use either TCP or UDP at the transport layer, yet neither of these protocols offers all of the features they require. Deploying a new protocol that does offer these features is made difficult by ossification: firewalls and other middleboxes in the network expect TCP or UDP, and block other types of traffic. We present TCP Hollywood, a protocol that is wire-compatible with TCP while offering an unordered, partially reliable, message-oriented transport service that is well suited to multimedia applications. Analytical results show that TCP Hollywood extends the feasibility of using TCP for real-time multimedia applications by reducing latency and increasing utility. Preliminary evaluations also show that TCP Hollywood is deployable on the public Internet, with safe failure modes. Measurements across all major UK fixed-line and cellular networks validate the possibility of deployment.
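
    To make the message-oriented service model concrete, the sketch below shows one way an application-level intermediary could frame messages with lifetimes over an ordinary TCP socket, so that stale frames are skipped rather than blocking newer ones. The class names, wire format, and deadline handling are illustrative assumptions, not TCP Hollywood's actual API or wire behaviour.

```python
import socket
import struct
import time

# Hypothetical sketch: message framing over a TCP bytestream, with a per-message
# deadline so stale media frames are skipped rather than delaying fresher ones.
# Names, wire format, and semantics are assumptions, not TCP Hollywood's API.

HEADER = struct.Struct("!IHd")   # payload length, sequence number, deadline (unix time)

class MessageSender:
    def __init__(self, sock: socket.socket):
        self.sock = sock             # an already-connected TCP socket
        self.seq = 0

    def send_message(self, payload: bytes, deadline: float) -> bool:
        """Frame and send one message; skip it if its deadline has already passed."""
        if time.time() >= deadline:
            return False             # message is stale, do not spend bandwidth on it
        self.sock.sendall(HEADER.pack(len(payload), self.seq, deadline) + payload)
        self.seq = (self.seq + 1) % 65536
        return True

class MessageReceiver:
    def __init__(self, sock: socket.socket):
        self.sock = sock

    def recv_message(self):
        """Read one framed message; return (seq, payload), or None if it expired in transit."""
        length, seq, deadline = HEADER.unpack(self._recv_exact(HEADER.size))
        payload = self._recv_exact(length)
        if time.time() > deadline:
            return None              # arrived too late to be useful for playout
        return seq, payload

    def _recv_exact(self, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection")
            buf += chunk
        return buf
```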

    Architectures and dynamic bandwidth allocation algorithms for next generation optical access networks


    Model based analysis of some high speed network issues

    The study of complex problems in science and engineering today typically involves large-scale data, and many large-scale scientific breakthroughs depend critically on large, multi-disciplinary, geographically dispersed research teams, for which high-speed networks have become an integral part. To serve the growing bandwidth and scalability requirements of these networks, TCP variants for high-speed networks have evolved continuously. Testing these protocols on a real network would be expensive, time consuming, and moreover not easily available to researchers worldwide. Network simulation is a well-accepted and widely used method for performance evaluation, yet packet-based simulators such as NS2 and Opnet are not adequate for high-speed or large-scale networks because of their inherent bottlenecks in message overhead and execution time. In such cases a model-based approach, using a set of coupled differential equations, is preferred for simulation. This dissertation focuses on the key challenges in the research and development of TCPs for high-speed networks. To address these challenges, the thesis has three objectives: design an analytical simulation methodology; model the behavior of high-speed networks and their components, including TCP flows and queues, using that methodology; and analyze them to explore their impacts and interrelationships. To decrease simulation time and speed up the testing and development of high-speed TCP, we present a scalable simulation methodology for high-speed networks. We present fluid model equations for various high-speed TCP variants and, with their help, describe the behavior of these variants under various scenarios and their effect on queue-size variation. High-speed networking is not feasible unless we understand the effect of bottleneck buffer size on the performance of these TCP variants. A fluid model is introduced to accommodate new observations of synchronization and de-synchronization of packet losses at the bottleneck link, and a microscopic analysis of different buffer sizes under drop-tail queuing is presented. The proposed model-based methods promote a principled understanding of future heterogeneous networks and accelerate protocol development.
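
    As a concrete illustration of the model-based approach described above, the sketch below numerically integrates a classic TCP/AQM-style fluid model for a single bottleneck: a window equation driven by a loss signal and a queue equation driven by the aggregate arrival rate. The parameter values, the crude drop-tail loss indicator, and the plain forward-Euler integration are assumptions for illustration; the dissertation's equations for specific high-speed TCP variants will differ.

```python
# Minimal sketch (assumed parameters): coupled fluid-model ODEs for N identical
# TCP flows sharing one bottleneck, integrated with a simple forward-Euler step.

N = 50          # number of flows
C = 12500.0     # bottleneck capacity in packets/s (~100 Mbit/s with 1 kB packets)
R0 = 0.05       # round-trip propagation delay in seconds
B = 500.0       # bottleneck buffer size in packets
dt = 0.001      # integration step in seconds

W = 1.0         # per-flow congestion window (packets)
q = 0.0         # bottleneck queue length (packets)

history = []
t = 0.0
while t < 30.0:
    R = R0 + q / C                              # RTT = propagation + queueing delay
    p = 1.0 if q >= B else 0.0                  # crude drop-tail loss indicator
    dW = 1.0 / R - (W / 2.0) * (W / R) * p      # AIMD: additive increase, halve on loss
    dq = N * W / R - C                          # queue grows when arrivals exceed capacity
    W = max(W + dW * dt, 1.0)
    q = min(max(q + dq * dt, 0.0), B)
    history.append((t, W, q))
    t += dt

# Inspect the last sample: window and queue occupancy at the end of the run.
print("t=%.1fs  W=%.1f pkts  q=%.1f pkts" % history[-1])
```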

    Adaptive delay-constrained internet media transport

    Reliable transport-layer Internet protocols do not satisfy the requirements of packetized, real-time multimedia streams. This thesis motivates and defines predictable reliability as a novel, capacity-approaching transport paradigm that supports an application-specific level of reliability under a strict delay constraint. This paradigm is implemented in a new protocol design, the Predictably Reliable Real-time Transport protocol (PRRT). In order to predictably achieve the desired level of reliability, proactive and reactive error control must be optimized under the application's delay constraint. Hence, predictably reliable error control relies on stochastic modeling of the protocol response to the modeled packet-loss behavior of the network path. The result of the joint modeling is periodically evaluated by a reliability control policy that validates the protocol configuration under the application constraints and under consideration of the available network bandwidth. The adaptation of the protocol parameters is formulated as a combinatorial optimization problem that is solved by a fast search algorithm incorporating explicit knowledge about the search space. Experimental evaluation of PRRT in real Internet scenarios demonstrates that predictably reliable transport meets the strict QoS constraints of high-quality audio-visual streaming applications.
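
    To illustrate the kind of delay-constrained error-control trade-off described above, the sketch below estimates, for a simple independent-loss channel, how many reactive repair rounds fit within a delay budget and what residual loss a given amount of proactive redundancy leaves, then searches a small configuration space. The loss model, function names, and exhaustive search are assumptions for illustration, not PRRT's actual reliability control algorithm.

```python
import math
from itertools import product

# Hypothetical sketch: pick (FEC redundancy, retransmission rounds) so that the
# predicted residual loss stays below a target while repairs fit the delay budget,
# assuming independent packet losses with probability p.

def repair_rounds_allowed(delay_budget_s: float, rtt_s: float) -> int:
    """How many ARQ rounds fit into the delay budget (very simplified)."""
    return max(int(delay_budget_s // rtt_s), 0)

def residual_loss(p: float, k: int, n: int, rounds: int) -> float:
    """Probability a source packet is still missing after FEC(k of n) plus ARQ.

    A block of n packets carries k source packets; it decodes if at most n - k
    packets are lost. Unrepaired packets then get `rounds` retransmissions,
    each lost independently with probability p.
    """
    p_block_fail = sum(
        math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n - k + 1, n + 1)
    )
    return p_block_fail * (p ** rounds)

def choose_config(p, rtt, delay_budget, target_loss):
    rounds = repair_rounds_allowed(delay_budget, rtt)
    best = None
    for k, extra in product(range(1, 11), range(0, 6)):   # small search space
        n = k + extra
        loss = residual_loss(p, k, n, rounds)
        redundancy = n / k
        if loss <= target_loss and (best is None or redundancy < best[0]):
            best = (redundancy, k, n, rounds, loss)
    return best

# Example: 2% loss, 50 ms RTT, 200 ms delay budget, residual-loss target 1e-4
print(choose_config(p=0.02, rtt=0.05, delay_budget=0.2, target_loss=1e-4))
```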

    Técnicas de optimización de parámetros de red para la mejora de la calidad de servicio en servicios IP

    This doctoral thesis presents contributions to the implementation of a 5G system that provides management, orchestration, and monitoring mechanisms. The system makes it possible to deploy different scenarios such as Internet of Things, Internet of Skills, Video Surveillance Systems, and/or Internet of Video Things. The proposed architecture supports the dynamic management of applications across a set of distributed nodes. In addition, a proof of concept is carried out by implementing an intelligent video-surveillance system on public transport vehicles based on Internet of Things devices. The thesis also provides a methodology, with a procedure to follow, for modifying how the traffic of services that generate bursts of small packets is sent to the network. The underlying idea is for this methodology to be applied at compute nodes at the edge of the cloud, as if they were network functions, applying the Edge Cloud concept. In this way, the aim is to minimize congestion in the buffers of access devices, optimizing the traffic of those services and thereby improving the end-user experience. In this context, two traffic-optimization methods are studied: multiplexing several small packets into a larger one, and smoothing, which reduces throughput peaks.
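
    As a rough illustration of the first optimization mentioned above (multiplexing several small packets into a larger one at an edge node), the sketch below aggregates packets arriving within a short time window into a single larger datagram, flushing when a size limit is reached or when the window expires. The framing, limits, and names are assumptions for illustration, not the thesis's actual implementation.

```python
import struct
import time

# Hypothetical sketch: aggregate small packets for up to FLUSH_INTERVAL seconds
# (or until MAX_MUX_SIZE bytes) and emit one multiplexed datagram, so bursts of
# tiny packets occupy fewer slots in access-device buffers.

MAX_MUX_SIZE = 1400        # keep the multiplexed datagram below a typical MTU
FLUSH_INTERVAL = 0.005     # 5 ms aggregation window

class Multiplexer:
    def __init__(self, send):
        self.send = send                            # callable that transmits one datagram
        self.buffer = []
        self.buffered_bytes = 0
        self.window_start = None

    def enqueue(self, packet: bytes):
        framed_len = len(packet) + 2                # 2-byte length prefix per packet
        if self.buffer and self.buffered_bytes + framed_len > MAX_MUX_SIZE:
            self.flush()                            # keep the datagram under the limit
        if self.window_start is None:
            self.window_start = time.monotonic()
        self.buffer.append(packet)
        self.buffered_bytes += framed_len

    def poll(self):
        """Call periodically; flush when the aggregation window has expired."""
        if self.buffer and time.monotonic() - self.window_start >= FLUSH_INTERVAL:
            self.flush()

    def flush(self):
        # Each small packet is prefixed with its length so the far end can demultiplex.
        payload = b"".join(struct.pack("!H", len(p)) + p for p in self.buffer)
        self.send(payload)
        self.buffer, self.buffered_bytes, self.window_start = [], 0, None

def demultiplex(payload: bytes):
    """Inverse operation at the receiving edge node."""
    packets, offset = [], 0
    while offset < len(payload):
        (length,) = struct.unpack_from("!H", payload, offset)
        packets.append(payload[offset + 2: offset + 2 + length])
        offset += 2 + length
    return packets
```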

    Data Movement Challenges and Solutions with Software Defined Networking

    With the recent rise in cloud computing, applications are routinely accessing and interacting with data on remote resources. Interaction with such remote resources for the operation of media-rich applications in mobile environments is also on the rise. As a result, the performance of the underlying network infrastructure can have a significant impact on the quality of service experienced by the user. Despite receiving significant attention from both academia and industry, computer networks still face a number of challenges. Users oftentimes report and complain about poor experiences with their devices and applications, which can oftentimes be attributed to network performance when downloading or uploading application data. This dissertation investigates problems that arise with data movement across computer networks and proposes novel solutions to address these issues through software defined networking (SDN). SDN is lauded as the paradigm of choice for next generation networks. While academia explores use cases in various contexts, industry has focused on data center and wide area networks. There is a significant range of complex and application-specific network services that can potentially benefit from SDN, but introduction and adoption of such solutions remains slow in production networks. One impeding factor is the lack of a simple yet expressive enough framework applicable to all SDN services across production network domains. Without a uniform framework, SDN developers create disjoint solutions, resulting in untenable management and maintenance overhead. The SDN-based solutions developed in this dissertation make use of a common agent-based approach. The architecture facilitates application-oriented SDN design with an abstraction composed of software agents on top of the underlying network. There are three key components modern and future networks require to deliver exceptional data transfer performance to the end user: (1) user and application mobility, (2) high throughput data transfer, and (3) efficient and scalable content distribution. Meeting these key components will not only ensure the network can provide robust and reliable end-to-end connectivity, but also that network resources will be used efficiently. First, mobility support is critical for user applications to maintain connectivity to remote, cloud-based resources. Today's network users are frequently accessing such resources while on the go, transitioning from network to network with the expectation that their applications will continue to operate seamlessly. As users perform handovers between heterogeneous networks or between networks across administrative domains, the application becomes responsible for maintaining or establishing new connections to remote resources. Although application developers often account for such handovers, the result is oftentimes visible to the user through diminished quality of service (e.g. rebuffering in video streaming applications). Many intra-domain solutions exist for handovers in WiFi and cellular networks, such as mobile IP, but they are architecturally complex and have not been integrated to form a scalable, inter-domain solution. A scalable framework is proposed that leverages SDN features to implement both horizontal and vertical handovers for heterogeneous wireless networks within and across administrative domains. User devices can select an appropriate network using an on-board virtual SDN implementation that manages available network interfaces.
An SDN-based counterpart operates in the network core and edge to handle user migrations as they transition from one edge attachment point to another. The framework was developed and deployed as an extension to the Global Environment for Network Innovations (GENI) testbed; however, the framework can be deployed on any OpenFlow enabled network. Evaluation revealed that users can maintain existing application connections without breaking the sockets and requiring the application to recover. Second, high throughput data transfer is essential for user applications to acquire large remote data sets. As data sizes become increasingly large, often combined with their locations being far from the applications, the well known impact of lower Transmission Control Protocol (TCP) throughput over large delay-bandwidth product paths becomes more significant to these applications. While myriad solutions exist to alleviate the problem, they require specialized software and/or network stacks at both the application host and the remote data server, making it hard to scale up to a large range of applications and execution environments. This results in high throughput data transfer that is available to only a select subset of network users who have access to such specialized software. An SDN-based solution called Steroid OpenFlow Service (SOS) has been proposed as a network service that transparently increases the throughput of TCP-based data transfers across large networks. SOS shifts the complexity of high performance data transfer from the end user to the network; users do not need to configure anything on the client and server machines participating in the data transfer. The SOS architecture supports seamless high performance data transfer at scale for multiple users and for high bandwidth connections. Emphasis is placed on the use of SOS as a part of a larger, richer data transfer ecosystem, complementing and compounding the efforts of existing data transfer solutions. Non-TCP-based solutions, such as Aspera, can operate seamlessly alongside an SOS deployment, while those based on TCP, such as wget, curl, and GridFTP, can leverage SOS for throughput improvement beyond what a single TCP connection can provide. Through extensive evaluation in real-world environments, the SOS architecture is proven to be flexibly deployable on a variety of network architectures, from cloud-based, to production networks, to scaled up, high performance data center environments. Evaluation showed that the SOS architecture scales linearly through the addition of SOS "agents" to the SOS deployment, providing data transfer performance improvement to multiple users simultaneously. An individual data transfer enhanced by SOS was shown to achieve throughput nearly forty times that of the same data transfer without SOS assistance. Third, efficient and scalable video content distribution is imperative as the demand for multimedia content over the Internet increases. Current state-of-the-art solutions consist of vast content distribution networks (CDNs) where content is oftentimes hosted in duplicate at various geographically distributed locations. Although CDNs are useful for the dissemination of static content, they do not provide a clear and scalable model for the on-demand production and distribution of live, streaming content. IP multicast is a popular solution for scalable video content distribution; however, it is seldom used due to deployment and operational complexity.
Inspired by the distributed design of today's CDNs and the distribution trees used by IP multicast, an SDN-based framework called GENI Cinema (GC) is proposed to allow for the distribution of live video content at scale. GC allows for the efficient management and distribution of live video content at scale without the added architectural complexity and inefficiencies inherent to contemporary solutions such as IP multicast. GC has been deployed as an experimental, nation-wide live video distribution service using the GENI network, broadcasting live and prerecorded video streams from conferences for remote attendees, from the classroom for distance education, and for live sporting events. GC clients can easily and efficiently switch back and forth between video streams with improved switching latency over cable, satellite, and other live video providers. The real-world deployments and evaluation of the proposed solutions show how SDN can be used as a novel way to solve current data transfer problems across computer networks. In addition, this dissertation is expected to provide guidance for designing, deploying, and debugging SDN-based applications across a variety of network topologies.
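
    To make the agent-based, application-oriented SDN approach more concrete, the sketch below shows how a controller-side helper might express a redirection rule that steers a client's TCP data-transfer flow toward a nearby transparent agent (in the spirit of SOS terminating and relaying TCP inside the network). The rule representation, field names, placement policy, and helper functions are simplified assumptions, not the dissertation's actual controller code or any specific OpenFlow library's API.

```python
# Hypothetical sketch: represent an OpenFlow-style match/action rule as plain
# data and pick the agent closest to the client; handing the rule to a concrete
# controller is out of scope here.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    datapath_id: int      # switch where the agent is attached
    port: int             # switch port leading to the agent

AGENTS = [
    Agent("sos-agent-east", datapath_id=0x1, port=5),
    Agent("sos-agent-west", datapath_id=0x2, port=7),
]

def redirect_rule(client_ip: str, server_ip: str, server_port: int, agent: Agent) -> dict:
    """Build a flow rule that sends the client's TCP flow toward the agent."""
    return {
        "datapath_id": agent.datapath_id,
        "priority": 100,
        "match": {
            "eth_type": 0x0800,        # IPv4
            "ip_proto": 6,             # TCP
            "ipv4_src": client_ip,
            "ipv4_dst": server_ip,
            "tcp_dst": server_port,
        },
        "actions": [{"type": "OUTPUT", "port": agent.port}],
    }

def nearest_agent(client_datapath_id: int) -> Agent:
    """Toy placement policy: prefer an agent attached to the client's own switch."""
    for agent in AGENTS:
        if agent.datapath_id == client_datapath_id:
            return agent
    return AGENTS[0]

rule = redirect_rule("10.0.0.42", "198.51.100.10", 443, nearest_agent(0x1))
print(rule)
```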

    Reducing Internet Latency : A Survey of Techniques and their Merit

    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint.

    SIMULATION OF A MULTIPROCESSOR COMPUTER SYSTEM

    The introduction of computers and software engineering in telephone switching systems has dictated the need for powerful design aids for such complex systems. Among these design aids, simulators - real-time environment simulators and flat-level simulators - have been found particularly useful in stored program controlled switching systems design and evaluation. However, both types of simulators suffer from certain disadvantages. An alternative methodology for the simulation of stored program controlled switching systems is proposed in this research. The methodology is based on the development of a process-based, multilevel, hierarchically structured software simulator. This methodology eliminates the disadvantages of environment and flat-level simulators. It enables the modelling of the system in a 1-to-1 transformation process, retaining the sub-system interfaces and hence making it easier to see the resemblance between the model and the modelled system and to incorporate design modifications and/or additions in the simulator. This methodology has been applied in building a simulation package for the System X family of exchanges. The Processor Utility Sub-system used to control the exchanges is first simulated, verified and validated. The application sub-system models are then added one level higher, resulting in an open-ended simulator having sub-system models at different levels of detail and capable of simulating any member of the System X family of exchanges. The viability of the methodology is demonstrated by conducting experiments to tune the real-time operating system and by simulating a particular exchange - the Digital Main Network Switching Centre - in order to determine its performance characteristics. The General Electric Company Ltd, GEC Hirst Research Centre, Wembley.
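
    As a small, modern illustration of the process-based simulation style described above, the sketch below models call-handling jobs contending for a shared control processor using the SimPy discrete-event library (assumed to be installed). The parameters, process structure, and names are illustrative assumptions and are unrelated to the actual System X simulation package.

```python
import random
import simpy

# Hypothetical sketch: call-handling jobs arrive as a Poisson stream, queue for a
# shared control processor, and hold it for an exponential service time. This is
# a process-based discrete-event model in miniature (assumed parameters).

ARRIVAL_RATE = 80.0     # calls per simulated second
SERVICE_TIME = 0.010    # mean processor occupancy per call, in seconds
SIM_TIME = 60.0         # simulated seconds

def call_handler(env, processor, stats):
    arrived = env.now
    with processor.request() as req:
        yield req                                   # wait for the processor
        stats["wait"].append(env.now - arrived)     # record queueing delay
        yield env.timeout(random.expovariate(1.0 / SERVICE_TIME))

def call_generator(env, processor, stats):
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        env.process(call_handler(env, processor, stats))

random.seed(42)
env = simpy.Environment()
processor = simpy.Resource(env, capacity=1)         # the shared control processor
stats = {"wait": []}
env.process(call_generator(env, processor, stats))
env.run(until=SIM_TIME)

waits = stats["wait"]
print(f"handled {len(waits)} calls, mean queueing delay {1000 * sum(waits) / len(waits):.2f} ms")
```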

    IMPROVING QoS OF VoWLAN VIA CROSS-LAYER BASED ADAPTIVE APPROACH

    Voice over Internet Protocol (VoIP) is a technology that allows the transmission of voice packets over Internet Protocol (IP). Recently, the integration of VoIP and Wireless Local Area Network (WLAN), known as Voice over WLAN (VoWLAN), has become popular, driven by the mobility requirements of users as well as by its tangible cost-effectiveness. However, the WLAN network architecture was primarily designed to support the transmission of data, not voice traffic, which leaves it unable to provide the stringent Quality of Service (QoS) that VoIP applications require. WLAN operates based on IEEE 802.11 standards that support the Link Adaptation (LA) technique. However, LA leads to a network with multi-rate transmissions, causing network bandwidth variation and hence degrading voice quality. Therefore, it is important to develop an algorithm able to overcome the negative effect of the multi-rate issue on VoIP quality. Hence, the main goal of this research work is to develop an agent that utilizes IP protocols by applying a cross-layering approach to eliminate the above-mentioned negative effect. This is achieved through the interaction between the Medium Access Control (MAC) layer and the Application layer, where the proposed agent adapts the voice packet size at the Application layer according to changes in the MAC transmission data rate, to prevent network congestion. The agent also monitors the quality of conversations from the periodically generated Real-Time Control Protocol (RTCP) reports. If voice quality degradation is detected, the agent performs further rate adaptation to improve the quality. The agent's performance has been evaluated by carrying out an extensive series of simulations using OPNET Modeler. The obtained results for different performance parameters are presented, comparing the performance of a VoWLAN network that uses the proposed agent with that of the standard network without the agent. The results of all measured quality parameters have
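
    As an illustration of the cross-layer idea described above, the sketch below maps the current MAC transmission rate, plus the loss fraction reported in the latest RTCP receiver report, to a voice payload size (i.e., how many codec frames to bundle per packet). The rate-to-size table, thresholds, and names are assumptions for illustration rather than the agent's actual algorithm.

```python
# Hypothetical sketch: choose how many 20-byte, 20 ms codec frames (G.729-style,
# 8 kbit/s) to bundle per packet based on the MAC data rate and RTCP-reported loss.

FRAME_BYTES = 20        # payload of one codec frame
FRAME_MS = 20           # audio duration covered by one frame

# Assumed mapping: lower MAC rates -> larger bundles (fewer packets, less
# per-packet MAC/PHY overhead), at the cost of extra packetization delay.
RATE_TO_FRAMES = {      # IEEE 802.11b/g PHY rates in Mbit/s
    54.0: 1,
    36.0: 1,
    24.0: 2,
    11.0: 3,
    5.5: 4,
    2.0: 5,
    1.0: 6,
}

def frames_per_packet(mac_rate_mbps: float, rtcp_loss_fraction: float) -> int:
    """Pick a bundle size from the MAC rate, then back off further on reported loss."""
    # Use the table entry for the nearest rate at or below the current one.
    eligible = [r for r in RATE_TO_FRAMES if r <= mac_rate_mbps]
    base = RATE_TO_FRAMES[max(eligible)] if eligible else max(RATE_TO_FRAMES.values())
    if rtcp_loss_fraction > 0.05:       # sustained loss reported by the receiver
        base += 1                       # bundle more frames to reduce the packet rate
    return min(base, 8)                 # cap packetization delay at 8 * 20 ms = 160 ms

def voice_packet_size(mac_rate_mbps: float, rtcp_loss_fraction: float) -> int:
    n = frames_per_packet(mac_rate_mbps, rtcp_loss_fraction)
    return n * FRAME_BYTES              # payload bytes, excluding RTP/UDP/IP headers

# Example: station dropped to 5.5 Mbit/s and the last RTCP report shows 8% loss
print(frames_per_packet(5.5, 0.08), "frames per packet,",
      voice_packet_size(5.5, 0.08), "payload bytes")
```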