20 research outputs found

    Internet-of-Things Streaming over Realtime Transport Protocol: A reusability-oriented approach to enable IoT Streaming

    Get PDF
    The Internet of Things (IoT) as a group of technologies is gaining momentum and becoming a prominent factor in novel applications. High computing capability and a vast number of IoT devices can be observed in the market today; however, transport protocols are required to bridge these two advantages. This thesis discusses the delivery of IoT data through the lens of selected streaming protocols: the Realtime Transport Protocol (RTP) and its companion protocols, the RTP Control Protocol (RTCP) and the Session Initiation Protocol (SIP). These protocols support multimedia content transfer with heavy-stream characteristics. The main contribution of this work is a multi-layer reusability schema for IoT streaming over RTP. IoT streaming is defined as a new concept, and its characteristics are introduced to clarify its requirements. The RTP stack and its commercial implementation, VoLTE (Voice over LTE), are then investigated to collect technical insights. Based on this distilled knowledge, application areas for IoT usage and the methods for adopting RTP are described. As a proof of concept, prototypes were built for streaming IoT data with RTP functionality between distant devices. These prototypes demonstrate that the dual-plane architecture (signaling/data transfer) widely used in RTP-based multimedia services can also be applied to IoT. Following the relevant IETF standards, this implementation is a minimal example of adopting an existing standard for IoT streaming applications.
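    As a rough sketch of the RTP data plane such a prototype builds on, a minimal 12-byte RTP header (RFC 3550) can be packed in front of an IoT payload as follows. The dynamic payload type 96 and the JSON sensor reading are illustrative assumptions, not the thesis's actual format:

```python
import struct

def build_rtp_packet(payload: bytes, seq: int, timestamp: int,
                     ssrc: int, payload_type: int = 96) -> bytes:
    """Prepend a minimal 12-byte RTP header (RFC 3550) to the payload.

    version=2, no padding/extension/CSRC, marker bit clear; payload
    type 96 is taken from the dynamic range, a common choice for
    application-defined payloads such as sensor readings (assumption).
    """
    byte0 = 2 << 6                      # V=2, P=0, X=0, CC=0
    byte1 = payload_type & 0x7F         # M=0, PT
    header = struct.pack("!BBHII", byte0, byte1,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

# Example: one hypothetical temperature reading as an RTP payload
pkt = build_rtp_packet(b'{"temp": 21.5}', seq=1, timestamp=160, ssrc=0x1234)
```

    Such packets would then travel over UDP on the data plane, while RTCP and SIP handle the signaling plane.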

    Quality of service and dependability of cellular vehicular communication networks

    Get PDF
    Improving the dependability of mobile network applications is a complicated task for many reasons: Especially in Germany, the development of cellular infrastructure has not always been fast enough to keep up with the growing demand, resulting in many blind spots that cause communication outages. However, even when the infrastructure is available, the mobility of the users still poses a major challenge when it comes to the dependability of applications: As the user moves, the capacity of the channel can experience major changes. This can mean that applications like adjustable bitrate video streaming cannot infer future performance by analyzing past download rates, as those rates only carry stale information about the channel at a different location. In this work, we explore the use of 4G LTE for dependable communication in mobile vehicular scenarios. For this, we first look at the performance of LTE, especially in mobile environments, and how it has developed over time. We compare measurements performed several years apart and look at performance differences in urban and rural areas. We find that even though the continued development of the 4G standard has enabled better performance in theory, this has not always been reflected in real-life performance due to the slow development of infrastructure, especially along highways. We also explore the possibility of performance prediction in LTE networks without the need to perform active measurements. For this, we look at the relationship between the measured signal quality and the achievable data rates and latencies. We find that while there is a strong correlation between some of the signal quality indicators and the achievable data rates, the relationship between them is stochastic, i.e., a higher signal quality makes better performance more probable but does not guarantee it. We then use our empirical measurement results as a basis for a model that uses signal quality measurements to predict a throughput distribution.
The resulting estimate of the obtainable throughput can then be used in adjustable bitrate applications like video streaming to improve their dependability. Mobile networks also pose a new challenge for TCP congestion control algorithms: Usually, senders use TCP congestion control to avoid congesting the network by sending too many packets and to share the network bandwidth fairly. This can be a challenging task since it is not known how many senders are in the network, and the network load can change at any time. In mobile vehicular networks, TCP congestion control is confronted with the additional problem of a constantly changing capacity: As users change their location, the quality of the channel also changes, and the capacity of the channel can experience drastic reductions even when the change in location is very small. Additionally, in our measurements, we have observed that packet losses only rarely occur (instead, packets are delayed and retransmitted), meaning that loss-based algorithms like Reno or CUBIC can be at a significant disadvantage. In this thesis, we compare several popular congestion control algorithms in both stationary and mobile scenarios. We find that many loss-based algorithms tend to cause bufferbloat and thus overly increase delays. At the same time, many delay-based algorithms tend to underestimate the network capacity and thus achieve data rates that are too low. The algorithm that performed the best in our measurements was TCP BBR, as it was able to utilize the full capacity of the channel without causing bufferbloat and also react to changes in capacity by adjusting its window. However, since TCP BBR can be unfair towards other algorithms in wired networks, its use could be problematic. Finally, we also propose how our model for data rate prediction can be used to improve the dependability of mobile video streaming.
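The stochastic signal-quality-to-throughput relationship described above can be captured by a simple empirical model: bucket past (signal quality, throughput) samples and report quantiles of the bucket matching the current measurement. A minimal sketch, assuming RSRP alone and a fixed bucket width (the thesis's model is more elaborate):

```python
from collections import defaultdict
import bisect

class ThroughputPredictor:
    """Empirical mapping from a signal-quality bucket to a throughput
    distribution. Bucket width and the use of RSRP alone are
    illustrative assumptions, not the thesis's actual model."""

    def __init__(self, bucket_db: float = 5.0):
        self.bucket_db = bucket_db
        self.samples = defaultdict(list)  # bucket -> sorted throughputs

    def _bucket(self, rsrp_dbm: float) -> int:
        return int(rsrp_dbm // self.bucket_db)

    def add_sample(self, rsrp_dbm: float, throughput_mbps: float):
        # Keep each bucket's sample list sorted for quantile lookups
        bisect.insort(self.samples[self._bucket(rsrp_dbm)], throughput_mbps)

    def quantile(self, rsrp_dbm: float, q: float) -> float:
        """q-quantile of throughput observed at this signal quality."""
        s = self.samples[self._bucket(rsrp_dbm)]
        if not s:
            raise ValueError("no samples for this signal-quality bucket")
        idx = min(int(q * len(s)), len(s) - 1)
        return s[idx]

p = ThroughputPredictor()
for t in (4.0, 6.0, 8.0, 10.0):
    p.add_sample(-95.0, t)
# A low quantile gives a conservative rate estimate for ABR decisions
conservative_rate = p.quantile(-95.0, 0.25)
```

A low quantile yields exactly the kind of conservative throughput estimate an adjustable-bitrate application can act on.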
For this, we develop an algorithm for adaptive bitrate streaming that guarantees that the video freeze probability does not exceed a pre-selected upper threshold. For the algorithm to work, it needs to know the distribution of obtainable throughput. We use a simulation to verify the behavior of this algorithm using a distribution obtained through the previously proposed data rate prediction method. In our simulation, the algorithm limited the video freeze probability as intended. However, it did so at the cost of frequent video bitrate switches, which can diminish the quality of user experience. In future work, we want to explore algorithms that offer a trade-off between the video freeze probability and the frequency of bitrate switches.
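The freeze-probability-bounded bitrate selection described above can be sketched as follows: choose the highest bitrate whose estimated freeze probability, read off the throughput distribution, stays below the chosen threshold. Buffer dynamics and segment timing, which the actual algorithm must handle, are omitted in this illustration:

```python
def select_bitrate(bitrates, throughput_samples, eps=0.05):
    """Pick the highest bitrate whose estimated freeze probability
    P(throughput < bitrate) stays at or below eps.

    A simplified sketch of the freeze-probability-bounded ABR idea;
    the thesis's algorithm also accounts for buffer state.
    """
    n = len(throughput_samples)
    best = min(bitrates)  # fall back to the lowest bitrate
    for b in sorted(bitrates):
        p_freeze = sum(1 for t in throughput_samples if t < b) / n
        if p_freeze <= eps:
            best = b  # monotone in b, so the last passing b is highest
    return best

# Hypothetical predicted-throughput samples (Mbit/s)
samples = [3.0, 5.0, 5.5, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0]
chosen = select_bitrate([2.0, 4.0, 6.0, 8.0], samples, eps=0.1)
```

Tightening eps pushes the choice toward lower bitrates, which is precisely the freeze-probability/quality trade-off the abstract describes.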

    Analyzing Energy Efficiency for IoT Devices with DRX Capability and Poisson Arrivals

    Get PDF
    Energy-efficient communication is an important consideration for Internet of Things (IoT) devices, and it can be achieved via discontinuous reception (DRX) technology. In this paper, we consider an IoT device with DRX capability. The device operates according to the LTE standard and communicates with the base station over a Nakagami-m fading channel. Data packets have a fixed length and arrive according to a Poisson process. Under these settings, we develop a cross-layer analytical model to analyze 1) the energy efficiency, 2) the stationary probabilities, and 3) the state holding times of this device. Simulation results show that the proposed model approximates these three performance metrics of an IoT device accurately.
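    The DRX trade-off the paper analyzes (longer sleep saves energy but delays Poisson-arriving packets) can be illustrated with a toy Monte Carlo model. All power levels and timer values below are illustrative placeholders, not the paper's parameters or its analytical cross-layer model:

```python
import random

def simulate_drx(sim_time_s=1000.0, arrival_rate=0.5, on_s=0.01,
                 cycle_s=0.32, p_active_w=1.0, p_sleep_w=0.01, seed=1):
    """Toy DRX model: the device wakes for `on_s` every `cycle_s`;
    a Poisson-arriving packet is served at the start of the next
    on-duration. Returns (mean_delay_s, energy_j). All parameter
    values are assumptions for illustration only."""
    random.seed(seed)
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(arrival_rate)
        if t >= sim_time_s:
            break
        arrivals.append(t)
    # Waiting time until the next cycle boundary (on-duration start)
    delays = [(cycle_s - (a % cycle_s)) % cycle_s for a in arrivals]
    n_cycles = sim_time_s / cycle_s
    energy = n_cycles * (on_s * p_active_w + (cycle_s - on_s) * p_sleep_w)
    return sum(delays) / len(delays), energy

delay, energy = simulate_drx()
```

    Lengthening `cycle_s` lowers the energy term but raises the mean delay toward `cycle_s / 2`, which is the tension the paper's analytical model quantifies exactly.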

    Towards a programmable and virtualized mobile radio access network architecture

    Get PDF
    Emerging 5G mobile networks are envisioned to become multi-service environments, enabling the dynamic deployment of services with a diverse set of performance requirements and accommodating the needs of mobile network operators, verticals and over-the-top service providers. The Radio Access Network (RAN) part of mobile networks is expected to play a very significant role in this evolution. Unfortunately, such a vision cannot be efficiently supported by the conventional RAN architecture, which adopts a fixed and rigid design. For the network to evolve, flexibility in the creation, management and control of the RAN components is of paramount importance. The key elements that can allow us to attain this flexibility are the programmability and the virtualization of the network functions. While in the case of the mobile core these issues have been extensively studied, due to the advent of technologies like Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) and the similarities that the core shares with other wired networks like data centers, research in the domain of the RAN is still in its infancy. The contributions made in this thesis significantly advance the state of the art in the domain of RAN programmability and virtualization in three dimensions. First, we design and implement a software-defined RAN (SD-RAN) platform called FlexRAN, which provides a flexible control plane designed to support real-time RAN control applications, flexibility to realize various degrees of coordination among RAN infrastructure entities, and programmability to adapt control over time, easing evolution toward future architectures following SDN/NFV principles.
Second, we leverage the capabilities of the FlexRAN platform to design and implement Orion, a novel RAN slicing system that enables the dynamic on-the-fly virtualization of base stations and the flexible customization of slices to meet their respective service needs, and which can be used in an end-to-end network slicing setting. Third, we focus on the use case of multi-tenancy in a neutral-host indoor small-cell environment, where we design Iris, a system that builds on the capabilities of FlexRAN and Orion and introduces a dynamic pricing mechanism for the efficient and flexible allocation of shared spectrum to the tenants. A number of additional use cases that highlight the benefits of the developed systems are also presented. The lessons learned through this research are summarized, and interesting topics for future work in this domain are discussed. The prototype systems presented in this thesis have been made publicly available and are being used by various research groups worldwide in the context of 5G research.
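As a toy illustration of slice customization in such a system, resource blocks can be split among slices with per-slice guarantees plus demand-proportional sharing of the remainder. This is a generic sketch of the idea, not Orion's actual scheduling algorithm:

```python
def allocate_prbs(total_prbs, slices):
    """Split physical resource blocks (PRBs) among slices: each slice
    first receives its guaranteed minimum, then the remainder is shared
    in proportion to unmet demand. A hypothetical toy policy for
    illustration, not Orion's scheduler.

    slices: dict name -> (min_prbs, demand_prbs)
    """
    alloc = {name: min_p for name, (min_p, _) in slices.items()}
    leftover = total_prbs - sum(alloc.values())
    extra = {name: max(0, d - m) for name, (m, d) in slices.items()}
    total_extra = sum(extra.values())
    if leftover > 0 and total_extra > 0:
        for name in slices:
            alloc[name] += leftover * extra[name] // total_extra
    return alloc

# Hypothetical slice names and demands
a = allocate_prbs(100, {"embb": (20, 70), "urllc": (30, 40)})
```

Integer division leaves a few PRBs unassigned in the worst case; a real scheduler would distribute the remainder and re-run this decision every scheduling interval.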

    Seamless Multimedia Delivery Within a Heterogeneous Wireless Networks Environment: Are We There Yet?

    Get PDF
    The increasing popularity of live video streaming from mobile devices, through services such as Facebook Live, Instagram Stories and Snapchat, puts pressure on network operators to increase the capacity of their networks. However, a simple increase in system capacity will not be enough without considering the provisioning of quality of experience (QoE) as the basis for network control, customer loyalty and retention, and thus increased network operator revenue. As QoE is gaining strong momentum, especially with increasing user quality expectations, the focus is now on proposing innovative solutions that ensure QoE when delivering video content over heterogeneous wireless networks. In this context, this paper presents an overview of multimedia delivery solutions, identifies the open problems and provides a comprehensive classification of related state-of-the-art approaches along three key directions: 1) adaptation; 2) energy efficiency; and 3) multipath content delivery. Discussions, challenges and open issues on seamless multimedia provisioning in current and next-generation wireless networks are also provided.

    XIII Jornadas de ingeniería telemática (JITEL 2017)

    Full text link
    The Jornadas de Ingeniería Telemática (JITEL), organized by the Asociación de Telemática (ATEL), are a forum for meeting, debate and dissemination for the groups that teach and conduct research on topics related to telematic networks and services. The event aims to foster the exchange of experiences and results, as well as communication and cooperation among the research groups working on telematics-related topics. Alongside the traditional sessions that characterize scientific conferences, more open activities are encouraged to stimulate the exchange of ideas between experienced and novice researchers and to create links and meeting points between the different research groups and teams. To this end, in addition to inviting relevant figures in the corresponding fields, sessions will be included to present and discuss the active research lines and projects of these teams. Lloret Mauri, J.; Casares Giner, V. (2018). XIII Jornadas de ingeniería telemática (JITEL 2017). Editorial Universitat Politècnica de València. http://hdl.handle.net/10251/97612

    Prediction-based techniques for the optimization of mobile networks

    Get PDF
    International Mention in the doctoral degree. Mobile cellular networks are complex systems whose behavior is characterized by the superposition of several random phenomena, most of them related to human activities such as mobility, communication and network usage. However, when observed in their totality, the many individual components merge into more deterministic patterns, and trends start to become identifiable and predictable. In this thesis we analyze a recent branch of network optimization commonly referred to as anticipatory networking, which combines prediction solutions with network optimization schemes. The main intuition behind anticipatory networking is that knowing in advance what is going on in the network can help identify potentially severe problems and mitigate their impact by applying solutions while they are still in their initial states. Conversely, a network forecast might also indicate a future improvement in the overall network condition (i.e., load reduction or better signal quality reported by users). In such a case, resources can be assigned more sparingly, requiring users to rely on buffered information while waiting for the better condition, when it will be more convenient to grant more resources. At the beginning of this thesis we survey the current anticipatory networking panorama and the many prediction and optimization solutions proposed so far. In the main body of the work, we propose our novel solutions to the problem, along with the tools and methodologies we designed to evaluate them and to perform a real-world evaluation of our schemes. By the end of this work it will be clear that anticipatory networking is not only a very promising theoretical framework, but that it is also feasible and can deliver substantial benefit to current and next-generation mobile networks. In fact, with both our theoretical and practical results we show evidence that more than one third of the resources can be saved, and even larger gains can be achieved for data rate enhancements. Programa Oficial de Doctorado en Ingeniería Telemática. Presidente: Albert Banchs Roca. Presidente: Pablo Serrano Yañez-Mingot. Secretario: Jorge Ortín Gracia. Vocal: Guevara Noubi
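    The anticipatory idea of buffering during predicted good conditions can be sketched with a greedy prefetch plan. The rates, demand values, and the greedy rule below are illustrative assumptions, not the thesis's optimization schemes:

```python
def plan_prefetch(predicted_mbps, demand_mb_per_slot, slot_s=1.0,
                  buffer_cap_mb=50.0):
    """Greedy anticipatory schedule: in each slot, download as much as
    the predicted rate allows (up to the buffer capacity), so that
    high-rate slots fill a buffer that carries the steady demand
    through low-rate slots. A hypothetical sketch of the anticipatory
    networking idea. Returns the buffer level after each slot
    (a negative level would indicate a stall)."""
    buf, levels = 0.0, []
    for rate in predicted_mbps:
        fetch = min(rate * slot_s / 8.0, buffer_cap_mb - buf)  # Mbit -> MB
        buf = buf + fetch - demand_mb_per_slot
        levels.append(buf)
    return levels

# High predicted rates early let the buffer absorb the later rate dip.
levels = plan_prefetch([40, 40, 8, 8], demand_mb_per_slot=2.0)
```

    Without the forecast, a reactive downloader would fetch only on demand and stall as soon as the rate drops below the demand; knowing the dip in advance turns spare early capacity into buffered slack.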