6 research outputs found

    More is Better? Measurement of MPTCP based Cellular Bandwidth Aggregation in the Wild

    4G/3G networks have been widely deployed around the world to provide high wireless bandwidth for mobile users. However, the achievable 3G/4G bandwidth is still much lower than its theoretical maximum. Signal strengths and available backhaul capacities may vary significantly across locations and times, often leading to unsatisfactory performance. Bandwidth aggregation, which uses multiple interfaces concurrently for data transfer, is a readily deployable solution. Specifically, Multi-Path TCP (MPTCP) has been advocated as a promising approach for leveraging multiple source-destination paths simultaneously at the transport layer. In this paper, we investigate the efficiency of an MPTCP-based bandwidth aggregation framework based on extensive measurements. In particular, we evaluate the aggregation gain across up to 4 cellular operators' networks with respect to factors such as time, user location, data size, aggregation proxy location and congestion control algorithm. Our measurement studies reveal that (1) bandwidth aggregation in general improves the cellular network bandwidth experienced by mobile users, but the performance gain is significant only for bandwidth-intensive, delay-tolerant flows; (2) the effectiveness of aggregation depends on many network factors, including the QoS of individual cellular interfaces and the location of the aggregation proxy; (3) contextual factors, including the time of day and the mobility of a user, also affect the aggregation performance.
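The headline metric in such a measurement study, the aggregation gain over the best single path, can be sketched in a few lines. The throughput figures and the `efficiency` overhead factor below are hypothetical examples, not the paper's data:

```python
# Sketch: bandwidth-aggregation gain relative to the best single cellular path.
# All throughput numbers and the efficiency factor are hypothetical examples,
# not measurements from the paper.

def aggregation_gain(path_throughputs_mbps, efficiency=1.0):
    """Ratio of aggregated throughput to the best single path.

    efficiency models scheduler/reordering overhead (1.0 = ideal striping).
    """
    best_single = max(path_throughputs_mbps)
    aggregated = efficiency * sum(path_throughputs_mbps)
    return aggregated / best_single

# Four cellular operators with uneven bandwidth:
paths = [12.0, 8.0, 5.0, 3.0]  # Mbps (hypothetical)
print(round(aggregation_gain(paths), 2))       # ideal aggregation: 2.33x
print(round(aggregation_gain(paths, 0.8), 2))  # with 20% overhead: 1.87x
```

A gain close to 1.0 would mean aggregation is not worth its overhead, which matches the paper's observation that the benefit is significant mainly for bandwidth-intensive, delay-tolerant flows.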

    A network-layer proxy for bandwidth aggregation and reduction of IP packet reordering

    With today's widespread deployment of wireless technologies, it is often the case that a single communication device can select from a variety of access networks. At the same time, there is an ongoing trend towards the integration of multiple network interfaces into end-hosts, such as cell phones with HSDPA, Bluetooth and WLAN. By using multiple Internet connections concurrently, network applications can benefit from aggregated bandwidth and increased fault tolerance. However, the heterogeneity of wireless environments introduces challenges with respect to implementation, deployment, and protocol compatibility. Variable link characteristics cause reordering when IP packets of the same flow are sent over multiple paths. This paper introduces a multilink proxy that is able to transparently stripe traffic destined for multihomed clients. Operating on the network layer, the proxy uses path monitoring statistics to adapt to changes in throughput and latency. Experimental results obtained from a proof-of-concept implementation verify that our approach is able to fully aggregate the throughput of heterogeneous downlink streams, even if the path characteristics change over time. In addition, our novel method of equalizing delays by buffering packets on the proxy significantly reduces IP packet reordering and the buffer requirements of clients. Index Terms—Wireless networks, heterogeneous systems, traffic analysis, network protocols, scheduling, measurements
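The two mechanisms the abstract names, throughput-proportional striping and delay equalization through proxy-side buffering, can be sketched as follows. Path counts and figures are illustrative, not taken from the paper:

```python
# Sketch of the two mechanisms named in the abstract: (1) striping packets over
# paths in proportion to their monitored throughput, and (2) equalizing delays
# by buffering packets on the faster paths at the proxy. All path counts and
# figures are illustrative, not taken from the paper.

def stripe(num_packets, throughputs):
    """Weighted round-robin: assign each packet to a path in proportion
    to that path's share of the total throughput."""
    total = sum(throughputs)
    credits = [0.0] * len(throughputs)
    assignment = []
    for _ in range(num_packets):
        for i, t in enumerate(throughputs):
            credits[i] += t / total          # accrue share each scheduling round
        best = max(range(len(credits)), key=lambda i: credits[i])
        credits[best] -= 1.0                 # spend one packet's worth of credit
        assignment.append(best)
    return assignment

def equalization_delay_ms(path_latencies_ms):
    """Extra proxy-side buffering per path so every path exhibits the
    worst-case latency, removing reordering caused by latency differences."""
    worst = max(path_latencies_ms)
    return [worst - latency for latency in path_latencies_ms]

print(stripe(8, [30.0, 10.0]))                    # roughly a 3:1 split
print(equalization_delay_ms([20.0, 80.0, 50.0]))  # [60.0, 0.0, 30.0]
```

The equalization output shows how much each faster path must be delayed to match the 80 ms worst path, which is exactly the buffering the proxy performs on the clients' behalf.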

    Practical Multi-Interface Network Access for Mobile Devices

    Despite the growing number of mobile devices equipped with multiple networking interfaces, these devices do not use the available networks in parallel. Simple network selection techniques allow only a single network to be used at a time, and switching between networks interrupts all existing connections. This work presents a system that improves network connectivity in the presence of multiple network adapters, not only through better network handovers, smarter network selection and failure detection, but also through the increased bandwidth offered to the device over aggregated channels. The biggest challenge such a system has to face is the heterogeneity of networks in a mobile environment. Different wireless technologies, and even different networks of the same type, offer inconsistent link parameters such as available bandwidth, latency or packet loss. The wireless nature of these networks also means that most of these parameters fluctuate in unpredictable ways. Given the intended practicality of the designed system, all of that complexity has to be hidden from both client-side applications and remote servers. These factors combined make the task of designing and implementing an efficient solution difficult. The system incorporates client-side software as well as a network proxy that assists in splitting data traffic, tunnelling it over a number of available network interfaces, and reassembling it on the remote side. These operations are transparent both to applications running on the client and to any network servers those applications communicate with. This property allows the system to meet one of its most important requirements: being practical and deployable in real-life scenarios, using network protocols available today and on existing devices.
This work also studies the most critical cost associated with increased data processing and parallel interface usage: the increase in energy usage, which needs to remain within reasonable bounds for this kind of solution to be usable on mobile devices with limited battery life. The properties of the designed and deployed system are evaluated in multiple experiments across different scenarios. The collected results confirm that our approach can provide applications with increased bandwidth when multiple networks are available. We also discover that even though per-second energy usage increases when multiple interfaces are used in parallel, multi-interface connectivity can actually reduce the total energy cost of performing specific tasks, effectively saving energy.
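The energy finding can be illustrated with a back-of-the-envelope sketch: total energy is power draw times transfer time, so a dual-interface transfer that draws more power but finishes sooner can still cost less energy overall. All power and throughput figures are hypothetical:

```python
# Back-of-the-envelope sketch of the energy result: energy = power x time, so a
# dual-interface transfer that draws more power but finishes sooner can cost
# less total energy. All power and throughput figures below are hypothetical.

def task_energy_joules(data_mbit, throughput_mbps, power_watts):
    """Total energy to transfer a fixed amount of data at a given rate."""
    transfer_time_s = data_mbit / throughput_mbps
    return power_watts * transfer_time_s

DATA = 800.0  # Mbit to transfer (hypothetical task size)

single = task_energy_joules(DATA, throughput_mbps=10.0, power_watts=1.2)
dual = task_energy_joules(DATA, throughput_mbps=25.0, power_watts=2.0)

print(round(single, 1))  # 96.0 J: one interface, lower power, longer transfer
print(round(dual, 1))    # 64.0 J: two interfaces, higher power, much shorter
```

This is the "race to sleep" effect the abstract reports: per-second consumption rises, yet the per-task energy drops because the radios are active for less time.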

    Concurrent multipath transmission to improve performance for multi-homed devices in heterogeneous networks

    Recent network technology developments have led to the emergence of a variety of access network technologies - such as IEEE 802.11 wireless local area networks (WLAN), IEEE 802.16 Worldwide Interoperability for Microwave Access (WiMAX) and Long Term Evolution (LTE) - which can be integrated to offer ubiquitous access in a heterogeneous network environment. User devices also come equipped with multiple network interfaces to connect to the different network technologies, making it possible to establish multiple network paths between end hosts. However, current connectivity settings confine user devices to using a single network path at a time, leading to low utilization of the resources in a heterogeneous network and poor performance for demanding applications, such as high definition video streaming. The simultaneous use of multiple network interfaces, also called bandwidth aggregation, can increase application throughput and reduce packets' end-to-end delays. However, multiple independent paths often have heterogeneous characteristics in terms of offered bandwidth, latency and loss rate, making it challenging to achieve efficient bandwidth aggregation. For instance, striping a flow's packets over multiple network paths with different latencies can cause packet reordering, which can significantly degrade the performance of current transport protocols. This thesis proposes three new solutions to mitigate the effects of network path heterogeneity on the performance of various concurrent multipath transmission settings. First, a network layer solution is proposed to stripe packets of delay-sensitive and high-bandwidth applications for concurrent transmission across multiple network paths. The solution leverages the paths' latency heterogeneity to reduce packet reordering, leading to minimal reordering delay, which improves the performance of delay-sensitive applications.
Second, multipath video streaming is developed for H.264 scalable video, where the reference video packets are adaptively assigned to low-loss network paths to reduce drifting errors, thus combating H.264 video distortion effectively. Finally, a new segment scheduling framework - which carefully considers path heterogeneity - is incorporated into IETF Multipath TCP to improve throughput performance. The proposed solutions have been validated using a series of simulation experiments. The results reveal that the proposed solutions can enable efficient bandwidth aggregation for concurrent multipath transmission over heterogeneous network paths.
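The idea behind heterogeneity-aware scheduling, assigning each packet to the path where it will arrive earliest so that unequal latencies cause less reordering, can be sketched as below. This is a generic earliest-arrival heuristic under made-up numbers, not the thesis's actual scheduler:

```python
# Generic earliest-arrival scheduling sketch for heterogeneous paths: each
# packet goes to the path with the earliest estimated delivery time, which
# limits reordering despite unequal latencies. Illustrative numbers only; this
# is not the thesis's exact scheduler.

def schedule(num_packets, latencies_ms, service_ms):
    """Greedy earliest-arrival-time packet-to-path assignment.

    latencies_ms: one-way propagation delay per path.
    service_ms:   per-packet transmission time per path.
    """
    free_at = [0.0] * len(latencies_ms)  # when each path can take the next packet
    order = []
    for _ in range(num_packets):
        arrival = [free_at[i] + service_ms[i] + latencies_ms[i]
                   for i in range(len(latencies_ms))]
        best = min(range(len(arrival)), key=lambda i: arrival[i])
        free_at[best] += service_ms[best]
        order.append(best)
    return order

# The slower path 1 is used only once its extra latency is amortized:
print(schedule(6, latencies_ms=[10.0, 20.0], service_ms=[8.0, 8.0]))
# → [0, 0, 1, 0, 1, 0]
```

Because every packet is placed where it arrives first, consecutive packets tend to arrive in send order even when one path is noticeably slower.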

    Partage d'infrastructures et convergence fixe/mobile dans les réseaux 3GPP de prochaine génération

    Fourth generation cellular network trials began in the first half of 2010, notably in Sweden and Norway. As a first step, these networks only offer Internet access and rely on existing second and third generation networks for providing telephony and text messaging. Only after the deployment of the IP Multimedia Subsystem (IMS) will all services be supported on the new all-IP architecture. Fourth generation mobile networks should enable end users to benefit from data throughputs of at least 100 Mbps on the downlink, when the user is stationary, and from Quality of Service (QoS) support that allows guarantees on throughput, maximum delay, maximum jitter and the packet loss rate. These networks will efficiently support applications that rely on geolocation in order to improve the user's Quality of Experience (QoE). Today's terminals can communicate using several radio technologies. Indeed, in addition to the cellular modem, terminals often support the Bluetooth technology, which is used for connecting hands-free devices and headsets. Moreover, most cell phones feature a Wi-Fi interface that enables users to transfer huge volumes of data without congesting the cellular network. However, Wi-Fi connectivity is often restricted to the user's home network or his workplace. Finally, a vertical handover is nearly always done manually and forces the terminal to change its IP address, which ultimately disrupts all active data sessions. A few years ago, a trend known as Fixed-Mobile Convergence (FMC) emerged in the mobile communications industry. FMC aims to provide Internet access and telephony on a single device capable of switching between local- and wide-area networks.
At this time, very few operators (e.g., NTT Docomo) offer terminals capable of switching to another access automatically. However, the access point must belong to the user or be installed at his workplace. At the same time, another kind of convergence has begun in which the dedicated networks for public safety (such as police, fire prevention and ambulance services) are being progressively migrated (because of their high operational costs) toward a single highly reliable and redundant network. Indeed, these services exhibit QoS requirements that are similar to residential customers', except that they need prioritized access, which can terminate a non-priority user's session during congestion situations. In addition to the public services that seek to reduce their operational costs by sharing commercial communications networks, the network operators have also entered a cost reduction phase. This situation is a result of the high degree of maturity that the mobile communications industry has reached. As an example, the branding or the coverage offered by each operator is no longer a sufficient sales argument to enroll new subscribers. Operators must now distinguish themselves from their competition with a superior service offering. Some operators have already started to outsource their less profitable business activities in order to concentrate on their key functions. As a complement to this trend, operators have begun to share an ever increasing portion of their physical infrastructure with their competitors. As a first step, infrastructure sharing was limited to the base station sites and antenna masts. Later, the shelters were shared to further reduce the cooling and hosting costs of the equipment. Then, operators started to share radio equipment, with each of them operating on different frequency bands. Infrastructure sharing beyond the first core network node is not currently supported in standardization.
There is an additional trend in the mobile communications industry: the specialization of operators (i.e., the identification of target customers by the operators). As a result, these operators experience disjoint traffic peaks because their customer bases have different behaviors. They therefore have a strong incentive to share infrastructure, because network dimensioning mostly depends on the peak demand. Consequently, sharing infrastructure increases the average traffic load without significantly increasing the peak load, because the peaks occur at different times. This allows operators to boost their return on investment. Every existing Next Generation Network (NGN) architecture proposal features an all-IP core network, offers QoS to applications and provides downlink bandwidth in the order of 100 Mbps. Moreover, these NGNs propose a number of Policy and Charging Control (PCC) mechanisms that determine how services are delivered to the subscribers and what charging method to apply. There are three main categories of policies: those that are related to the subscriber (e.g., gold/silver/bronze subscription, prepaid vs. billed access), those that apply to services (e.g., for a given service, bandwidth limitation, QoS class assignment, and allocation and retention priority of resources) and finally policies that depend on the current state of the network (e.g., congestion level, traffic engineering, etc.). In a first paper entitled “A Potential Evolution of the Policy and Charging Control/QoS Architecture for the 3GPP IETF-based Evolved Packet Core”, FMC and Core Network (CN) sharing aspects are treated simultaneously because it is important that the logical PCC architecture reflects the realities of the industry trends described above.
Following the description of the industry trends, a list of four requirements for a PCC architecture was presented: service convergence (the capacity to use a service from any type of access), CN sharing that allows several Mobile Virtual Network Operators (MVNOs) to coexist, the creation of local access network policies, and efficient micro-mobility in roaming scenarios. As a second step, two NGN architectures were evaluated against the requirements mentioned above. This evaluation concluded that a hybrid solution (based on the key features of each architecture but without their respective drawbacks) would offer a very promising foundation for a complete solution. The proposed solution achieves its goal with a clearer separation of the business roles (e.g., access and network providers) and the introduction of a Network Policy Function (NPF) for the management of the CN. Indeed, the business roles that were defined allow the creation of distinct policy/QoS and administrative domains. The roles become mandatory in infrastructure sharing scenarios; otherwise, they maintain compatibility with the current vertically-integrated operator model, in which a single operator plays all of the business roles. Introducing the NPF into the CN enables CN policy management to be separated from policy management related to subscribers, services and access networks. Additionally, the NPF allows the CN to be shared by multiple Network Service Providers (NSPs) while respecting the Service Level Agreements (SLAs) that link the IP Aggregation Network (IPAN) to the NSPs, as well as those that tie the IPAN to the Access Network Providers (ANPs). Another benefit of the NPF is that it can share a number of advanced functions between several NSPs. Those functions include audio/video transcoding, file caches (e.g., that can be used for multimedia content delivery), antivirus based on Deep Packet Inspection (DPI), etc.
The main advantage of integrating those infrastructure services at the IP transport level is to allow both IMS and non-IMS applications to benefit from them. A second paper entitled “A Network Policy Function Node for a Potential Evolution of the 3GPP Evolved Packet Core” is an extension of the first paper, which extensively described the industry trends, two existing PCC architectures and their characteristics, and finally offered an overview of the proposed solution. The second paper, in contrast, thoroughly describes all of the impacts that the proposal has on the existing 3GPP PCC architecture. Indeed, a significant contribution of this second paper is that it provides an extensive list of potential simplifications that the proposed solution allows. The main contribution of the second paper is that the proposed solution can now be deployed over an existing PCC architecture with a minimum of impacts. Indeed, a small modification to the NPF's reference points enables this enhancement. As a consequence, this enhancement provides a solution that is compatible with both PCC architecture variants, based on either the GPRS Tunneling Protocol (GTP) or Proxy Mobile IPv6 (PMIPv6). A last contribution of the second paper is to demonstrate the NPF's internals when the latter controls an IPAN based on tunneling mechanisms such as Multi-Protocol Label Switching (MPLS) or Provider Backbone Bridge-Traffic Engineering (PBB-TE). A traffic engineering process allows traffic flow aggregates to pass around a congested node, to better balance the load between the network elements and to make sure that the QoS requirements are respected at all times. The third paper, entitled “A MultiAccess Resource ReSerVation Protocol (MARSVP) for the 3GPP Evolved Packet System”, deals with QoS provisioning in FMC scenarios, especially for applications that are not directly supported by the network.
Examples include peer-to-peer applications (such as online gaming) that represent a small fraction of the total peer-to-peer traffic, or applications that are new and relatively unknown. Second and third generation networks were designed such that the User Equipment (UE) would provide the network with the application's QoS parameters. However, the number of possible combinations of QoS parameters was very large and too complex to manage. As a result, for the fourth generation of networks, an application server provides the PCC architecture with the right QoS parameters. In addition, a limited number of QoS classes were defined, which greatly simplified QoS management. When FMC aspects are taken into account, it becomes evident that the above mechanism only applies to 3GPP accesses. Indeed, each access type uses its own mechanisms, which must often be controlled by the network instead of the user. Moreover, some accesses do not feature a control channel on which QoS reservation requests could be carried. Also, existing QoS protocols are often heavyweight and defined end-to-end, so they are not appropriate for the envisioned usage. Consequently, the proposed solution is a new multiaccess resource reservation protocol: MARSVP uses the data channel found on every access and confines the message exchanges between the user and the first IP node. QoS needs are expressed in terms of QoS Class Indicators (QCIs), which makes MARSVP simple to use. Once a resource reservation request has been accepted by the network, the latter configures the access and returns to the terminal the information required for sending packets (at layers 2 and 3).
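As a rough illustration of a MARSVP-style request, the hypothetical sketch below models a reservation keyed on a 3GPP QoS class, carrying a guaranteed bitrate only for guaranteed-bit-rate (GBR) classes. All field names are invented for illustration; the actual message format is defined in the cited paper:

```python
# Hypothetical sketch of a MARSVP-style reservation request: QoS needs are
# expressed through a 3GPP QoS Class Indicator (QCI) and carried over the
# ordinary data channel to the first IP node. Field names are invented for
# illustration; the real message format is defined in the MARSVP paper.

from dataclasses import dataclass
from typing import Optional

# In 3GPP TS 23.203, QCIs 1-4 are guaranteed-bit-rate (GBR) classes;
# 5-9 are non-GBR (e.g., QCI 9 is the default best-effort class).
GBR_QCIS = {1, 2, 3, 4}

@dataclass
class ReservationRequest:
    flow_id: int
    qci: int
    guaranteed_bitrate_kbps: Optional[int] = None  # only meaningful for GBR QCIs

    def is_valid(self) -> bool:
        # GBR classes must carry a bitrate; non-GBR classes must not.
        return (self.qci in GBR_QCIS) == (self.guaranteed_bitrate_kbps is not None)

print(ReservationRequest(1, qci=1, guaranteed_bitrate_kbps=64).is_valid())  # True
print(ReservationRequest(2, qci=9, guaranteed_bitrate_kbps=64).is_valid())  # False
```

Expressing the request through a small, fixed set of QCIs, rather than free-form QoS parameter combinations, is precisely the simplification the abstract attributes to MARSVP.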

    Optimizing transmission protocols for enhancement of quality of service in telemedical realtime applications

    In the dissertation entitled "Optimization of Transmission Protocols to Improve Quality of Service in Telemedicine Real-Time Applications", the aim is to develop a protocol modification for Multipath TCP to improve the support for telemedical real-time applications over the Internet in a scenario between globally distributed locations. The goal of redundant Multipath TCP (rMPTCP) is to use multiple connections at the same time to compensate for delay spikes and data losses by means of redundancy, and thus to improve the quality of service of the transmission. The protocol adapts to the current circumstances of the data connection by relating redundancy, connection quality, the required data transmission rate and the data throughput offered by the network. Applications in telemedicine differ in their communicative and interactive characteristics and thus in their quality of service requirements.
For this purpose, basic applications in telemedicine as well as specifications of the quality requirements of the resulting data streams are treated and classified. This is done with regard to a research scenario running between the University of Duisburg-Essen (UDE) and the Universiti Kebangsaan Malaysia (UKM). It is followed by a presentation of quality of service mechanisms on the Internet. The basic functions as well as ways to improve them are described. The transmission distance between the two universities is evaluated according to the basic scenario with regard to different quality of service parameters with appropriate measuring tools in a way that conditions and problems can be identified. An evaluation of the various available connections between UDE and UKM is used to determine a combined usage and the possibilities for a multiple connection. The modeling and development of the protocol modification is performed under the previously-derived requirements. The basic mathematical connections are discussed and an introduction to the functionalities of the protocol is given. The properties and functions of the new protocol are modeled and additional tools developed for utilization within the scenario are designed. The functionalities are evaluated in practical tests and a final assessment is discussed. The result of this dissertation is the development of an Internet-compatible redundant protocol extension for multipath TCP, which is able to adapt to situations in the network by means of different algorithms and to take various measures in case of disturbances. The protocol extension is capable of achieving an improvement in service quality with regard to delays as well as delay variances for applications with "near real-time" requirements
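The abstract describes rMPTCP as relating redundancy, the required data rate, and the throughput offered by the network. As a rough illustration of that trade-off (not the dissertation's actual algorithm; the function name and the fixed cap of 4 are assumptions for this sketch), one can pick a redundancy factor only as large as the aggregate offered throughput can cover:

```python
# Illustrative sketch only: choose a redundancy factor for an rMPTCP-style
# sender by relating the required data rate to the aggregate throughput
# offered by the available subflows. Names and the cap are hypothetical.

def choose_redundancy(required_rate_kbps, subflow_rates_kbps, max_redundancy=4):
    """Return how many subflows each segment should be duplicated across.

    Redundancy is raised only while the aggregate offered throughput still
    covers the required rate times the redundancy factor, so duplicated
    segments never starve the primary stream.
    """
    offered = sum(subflow_rates_kbps)
    if required_rate_kbps <= 0 or offered < required_rate_kbps:
        return 1  # no headroom: send each segment once
    # Largest factor r such that r * required_rate fits into the offered rate,
    # capped by the configured maximum and the number of subflows available.
    headroom = offered // required_rate_kbps
    return min(int(headroom), max_redundancy, len(subflow_rates_kbps))

print(choose_redundancy(1000, [1500, 1200, 800]))  # aggregate 3500 kbps -> 3
```

A real implementation would additionally weigh per-path delay and loss measurements, as the abstract notes; this sketch captures only the rate-versus-redundancy relation.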