
    A Cross-layer Approach for MPTCP Path Management in Heterogeneous Vehicular Networks

    Multipath communication has recently arisen as a promising tool for reliable communication in vehicular networks. Multipath TCP (MPTCP) is designed to use multiple network interfaces concurrently, enabling the system to optimize network throughput. In vehicular environments, MPTCP offers a promising solution for seamless roaming, as it can maintain a stable connection by switching between available network interfaces. This paper investigates the suitability of MPTCP for resilient and efficient Vehicle-to-Infrastructure (V2I) communication over heterogeneous networks. First, we identify and discuss several challenges that arise in heterogeneous vehicular networks, including Head-of-Line (HoL) blocking and service interruptions during handover events. Then, we propose a cross-layer path management scheme for MPTCP that leverages real-time network information to improve the reliability and efficiency of multipath vehicular communication. Our emulation results demonstrate that the proposed scheme not only achieves seamless mobility across heterogeneous networks but also significantly reduces handover latency, packet loss, and out-of-order packet delivery. These improvements have a direct impact on the quality of experience of vehicular users, as they lead to lower application-layer delay and higher throughput.
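    The cross-layer idea can be illustrated with a short sketch: link-layer measurements feed a path manager that decides which MPTCP subflows should carry traffic. This is a minimal illustration of the general approach, not the authors' implementation; the class names, the RSSI threshold, and the handover flag are assumptions introduced here.

```python
# Minimal sketch of cross-layer MPTCP path management: link-layer measurements
# drive which subflows are kept active. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class LinkReport:
    """Cross-layer snapshot for one interface (e.g. from a Wi-Fi or cellular driver)."""
    iface: str
    rssi_dbm: float          # received signal strength
    handover_pending: bool   # link layer announced an imminent handover

class PathManager:
    """Decides, per interface, whether its MPTCP subflow should carry traffic."""
    def __init__(self, rssi_threshold_dbm: float = -85.0):
        self.rssi_threshold_dbm = rssi_threshold_dbm
        self.active = {}

    def update(self, report: LinkReport) -> None:
        # Disable a subflow early (before packets are lost) when the link is fading
        # or a handover is imminent; re-enable it once the link recovers.
        usable = report.rssi_dbm >= self.rssi_threshold_dbm and not report.handover_pending
        self.active[report.iface] = usable

    def usable_paths(self):
        return [iface for iface, ok in self.active.items() if ok]

if __name__ == "__main__":
    pm = PathManager()
    pm.update(LinkReport("wlan0", rssi_dbm=-60.0, handover_pending=False))
    pm.update(LinkReport("wwan0", rssi_dbm=-95.0, handover_pending=True))
    print(pm.usable_paths())   # -> ['wlan0']
```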

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning, and optimization of emerging 5G networks. Designing, dimensioning, and optimizing communication network resources and services have always been an inseparable part of telecom network development. Such networks must convey a large volume of traffic and serve traffic streams with highly differentiated requirements in terms of bit rate, service time, and required quality-of-service and quality-of-experience parameters. This communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, lower-layer network design, network management and security issues, and new technologies in general, all of which are discussed in this book.

    Resource Management and Backhaul Routing in Millimeter-Wave IAB Networks Using Deep Reinforcement Learning

    Thesis (PhD (Electronic Engineering)), University of Pretoria, 2023. The increased densification of wireless networks has led to the development of integrated access and backhaul (IAB) networks. In this thesis, deep reinforcement learning was applied to solve resource management and backhaul routing problems in millimeter-wave IAB networks. First, a resource management solution that aims to avoid congestion for access users in an IAB network was proposed and implemented. The proposed solution applies deep reinforcement learning to learn an optimized policy that achieves effective resource allocation while minimizing congestion and satisfying user requirements. In addition, a deep reinforcement learning-based backhaul adaptation strategy that leverages a recursive discrete choice model was implemented in simulation. Simulation results, in which the proposed algorithms were compared with two baseline methods, showed that the proposed scheme provides better throughput and delay performance. (Sentech Chair in Broadband Wireless Multimedia Communications; Electrical, Electronic and Computer Engineering; PhD (Electronic Engineering); Unrestricted.)
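    As a rough illustration of the reinforcement-learning framing (state: congestion level at an IAB node; action: how many resources to grant access users; reward: penalizing congestion), the toy sketch below uses tabular Q-learning in place of the deep networks used in the thesis. The state/action space, dynamics, and reward are invented for illustration.

```python
# Toy stand-in for a DRL resource manager: tabular Q-learning over a tiny
# state/action space. The real work uses deep networks and a far richer IAB model.
import random

N_LOAD_LEVELS = 4          # discretised congestion level at an IAB node (assumption)
ACTIONS = [0, 1, 2]        # fraction of slots granted to access users: low/med/high (assumption)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_LOAD_LEVELS) for a in ACTIONS}

def step(state, action):
    """Invented dynamics: generous grants relieve congestion but may starve the backhaul."""
    next_state = max(0, min(N_LOAD_LEVELS - 1, state - action + random.choice([0, 1])))
    reward = -next_state - (0.5 if action == 2 else 0.0)   # penalise congestion and over-allocation
    return next_state, reward

state = N_LOAD_LEVELS - 1
for _ in range(5000):
    # Epsilon-greedy action selection, then the standard Q-learning update.
    action = random.choice(ACTIONS) if random.random() < EPS else max(ACTIONS, key=lambda a: q[(state, a)])
    nxt, r = step(state, action)
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
    state = nxt

# Learned allocation policy per congestion level.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_LOAD_LEVELS)})
```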

    Situation-aware Edge Computing

    Future wireless networks must cope with an increasing amount of data that needs to be transmitted to or from mobile devices. Furthermore, novel applications, e.g., augmented reality games or autonomous driving, require low latency and high bandwidth at the same time. To address these challenges, the paradigm of edge computing has been proposed. It brings computing closer to the users and takes advantage of the capabilities of telecommunication infrastructures, e.g., cellular base stations or wireless access points, but also of end user devices such as smartphones, wearables, and embedded systems. However, edge computing introduces its own challenges, e.g., economic and business-related questions or device mobility. Being aware of the current situation, i.e., the domain-specific interpretation of environmental information, makes it possible to develop approaches targeting these challenges. In this thesis, the novel concept of situation-aware edge computing is presented. It is divided into three areas: situation-aware infrastructure edge computing, situation-aware device edge computing, and situation-aware embedded edge computing. To this end, the concepts of situation and situation-awareness are introduced. Furthermore, challenges are identified for each area, and corresponding solutions are presented. In the area of situation-aware infrastructure edge computing, economic and business-related challenges are addressed, since companies offering services and infrastructure edge computing facilities have to reach agreements on the prices for allowing others to use them. In the area of situation-aware device edge computing, the main challenge is to find suitable nodes that can execute a service and to predict a node's connectivity in the near future. Finally, to enable situation-aware embedded edge computing, two novel programming and data analysis approaches are presented that allow programmers to develop situation-aware applications. To show the feasibility, applicability, and importance of situation-aware edge computing, two case studies are presented. The first case study shows how situation-aware edge computing can provide services for emergency response applications, while the second case study presents an approach where network transitions can be implemented in a situation-aware manner.
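    A minimal sketch of the core idea follows: raw environmental readings are interpreted into a domain-specific situation, which then drives an edge-computing decision such as choosing a device that can execute an offloaded service. The data fields, situation labels, and selection rule are assumptions made for illustration and are not taken from the thesis.

```python
# Sketch of situation-aware device edge computing: interpret raw readings into a
# situation, then pick a node expected to stay connected. All names are illustrative.
from dataclasses import dataclass

@dataclass
class NodeObservation:
    node_id: str
    battery_pct: float
    link_quality: float         # 0..1, current wireless link quality
    predicted_contact_s: float  # predicted remaining connection time

def interpret_situation(obs: NodeObservation) -> str:
    """Domain-specific interpretation of environmental information into a situation label."""
    if obs.predicted_contact_s < 10 or obs.link_quality < 0.3:
        return "about_to_disconnect"
    if obs.battery_pct < 15:
        return "energy_critical"
    return "stable"

def pick_execution_node(candidates: list[NodeObservation]) -> str | None:
    stable = [o for o in candidates if interpret_situation(o) == "stable"]
    if not stable:
        return None   # fall back to the infrastructure edge or cloud
    # Prefer the node expected to stay reachable the longest.
    return max(stable, key=lambda o: o.predicted_contact_s).node_id

if __name__ == "__main__":
    nodes = [NodeObservation("phone-A", 80, 0.9, 120),
             NodeObservation("wearable-B", 10, 0.8, 300),
             NodeObservation("phone-C", 60, 0.2, 5)]
    print(pick_execution_node(nodes))   # -> 'phone-A'
```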

    Machine learning enabled millimeter wave cellular system and beyond

    Millimeter-wave (mmWave) communication, with the advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for the next-generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as addressing the massive growth required in coverage, capacity, and traffic, providing a better quality of service and experience to users, supporting ultra-high data rates and reliability, and ensuring ultra-low latency. However, the characteristics of mmWave, such as short transmission distance, high sensitivity to blockage, and large propagation path loss, pose challenges for mmWave cellular network design. In this context, to enjoy the benefits of mmWave networks, the architecture of the next-generation cellular network will be more complex, and with a more complex network come more complex problems. The plethora of possibilities makes planning and managing such a complex network system more difficult. Specifically, to provide better quality of service and quality of experience for users in such a network, efficient and effective handover for mobile users is essential. The probability of a handover being triggered will increase significantly in the next-generation network due to dense small-cell deployment, and since the resources at the base station (BS) are limited, handover management will be a great challenge. Further, to achieve the maximum transmission rate for users, the line-of-sight (LOS) channel would be the main transmission channel. However, due to the characteristics of mmWave and the complexity of the environment, the LOS channel is not always available, so non-line-of-sight (NLOS) channels should be explored and used as backup links to serve the users. With all these problems tending to be complex and nonlinear, and with data traffic increasing dramatically, conventional methods are no longer effective or efficient. In this case, how to solve these problems in the most efficient manner becomes important, and new concepts, as well as novel technologies, need to be explored. Among them, one promising solution is the use of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data, allowing the system to use adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, complexity and workload can be reduced, while the huge number of devices and the data they generate can be managed efficiently. Therefore, in this thesis, different ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, namely non-line-of-sight (NLOS) beam tracking, handover management, and beam management. Specifically, first, a procedure to predict the angle of arrival (AOA) and angle of departure (AOD), in both azimuth and elevation, in non-line-of-sight mmWave communications based on a deep neural network is proposed. Moreover, along with the AOA and AOD prediction, trajectory prediction is employed based on the dynamic window approach (DWA). The simulation scenario is built with ray-tracing technology and used to generate data. Based on the generated data, two deep neural networks (DNNs) predict the AOA/AOD in azimuth (AAOA/AAOD) and the AOA/AOD in elevation (EAOA/EAOD).
Furthermore, under the assumption that the UE mobility and precise location are unknown, the UE trajectory is predicted and fed into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD, showing the performance under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and we conclude that DNNs are a promising tool to predict AOA and AOD in an NLOS scenario. Second, a novel handover scheme is designed, aiming to optimize the overall system throughput and the total system delay while guaranteeing the quality of service (QoS) of each user equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a reinforcement learning (RL) algorithm and optimization theory. An RL algorithm known as multi-agent proximal policy optimization (MAPPO) determines the handover trigger conditions. Further, an optimization problem is formulated in conjunction with MAPPO to select the target base station and determine beam selection; it evaluates and optimizes the system performance in terms of total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made. Third, a multi-agent RL-based beam management scheme is proposed, where multi-agent deep deterministic policy gradient (MADDPG) is applied at each small-cell base station (SCBS) to maximize the system throughput while guaranteeing the quality of service. With MADDPG, smart beam management methods can serve the UEs more efficiently and accurately. Specifically, the mobility of UEs causes dynamic changes in the network environment, and the MADDPG algorithm learns from the experience of these changes. Based on that, the beam management at each SCBS is optimized according to the reward or penalty received when serving different UEs. This approach improves the overall system throughput and delay performance compared with traditional beam management methods. The work presented in this thesis demonstrates the potential of ML in addressing problems in the mmWave cellular network and provides specific solutions for optimizing NLOS beam tracking, handover management, and beam management. For the NLOS beam tracking part, simulation results show that the prediction errors of the AOA and AOD can be maintained within an acceptable range of ±2. For the handover optimization part, numerical results show that the system throughput and delay are improved by 10% and 25%, respectively, compared with two typical RL algorithms, Deep Deterministic Policy Gradient (DDPG) and Deep Q-Learning (DQL). Lastly, for the intelligent beam management part, numerical results reveal the convergence performance of MADDPG and its superiority in improving the system throughput compared with other typical RL algorithms and the traditional beam management method.
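    To make the NLOS beam-tracking step more concrete, the sketch below shows a small fully connected network that maps trajectory/position features to the four predicted angles (AAOA, AAOD, EAOA, EAOD). The feature layout, network size, and randomly generated training data are assumptions introduced here; the thesis trains separate DNNs on ray-tracing data.

```python
# Minimal stand-in for DNN-based AOA/AOD prediction: a small fully connected
# regressor trained on random placeholder data instead of ray-traced samples.
import torch
import torch.nn as nn

FEATURES = 6   # e.g. predicted UE position/velocity features (assumption)
TARGETS = 4    # AAOA, AAOD, EAOA, EAOD

model = nn.Sequential(
    nn.Linear(FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, TARGETS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random tensors stand in for ray-traced training samples.
x = torch.randn(256, FEATURES)
y = torch.randn(256, TARGETS)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # mean-squared error on the four angles
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.4f}")
```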

    A Survey on the Communication Protocols and Security in Cognitive Radio Networks

    A cognitive radio (CR) is a radio that can change its transmission parameters based on the perceived availability of the spectrum bands in its operating environment. CRs support dynamic spectrum access and can enable a secondary unlicensed user to efficiently utilize the underutilized spectrum allocated to primary licensed users. A cognitive radio network (CRN) is composed of both the secondary users with CR-enabled radios and the primary users, whose radios need not be CR-enabled. Most of the active research conducted in the area of CRNs has so far focused on spectrum sensing, allocation, and sharing. There is no comprehensive review paper available on the strategies for medium access control (MAC), routing, and transport layer protocols, and the appropriate representative solutions for CRNs. In this paper, we provide an exhaustive analysis of the various techniques and mechanisms that have been proposed in the literature for communication protocols (at the MAC, routing, and transport layers) in the context of a CRN, and we discuss in detail several security attacks that could be launched on CRNs and the countermeasures that have been proposed to avoid or mitigate them. This paper serves as a comprehensive review and analysis of the strategies for MAC, routing, and transport protocols and of the security issues for CRNs, and lays a strong foundation for readers to delve further into any particular aspect in greater depth.
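    As a concrete illustration of the sensing step that underlies dynamic spectrum access, the sketch below implements a simple energy detector: a secondary user measures the energy in a band and only transmits if it falls below a threshold. Real CRNs use more robust and often cooperative detectors; the signal model and threshold here are illustrative assumptions.

```python
# Minimal energy-detection sketch for opportunistic spectrum access by a secondary user.
import numpy as np

def band_is_free(samples: np.ndarray, threshold: float) -> bool:
    """Energy detector: declare the band idle if the average sample energy is below the threshold."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy < threshold

rng = np.random.default_rng(0)
noise_only = rng.normal(0, 1, 1000)                       # primary user absent
primary_tx = noise_only + 2.0 * np.sin(np.arange(1000))   # primary user present

print(band_is_free(noise_only, threshold=2.0))   # True  -> secondary user may transmit
print(band_is_free(primary_tx, threshold=2.0))   # False -> defer to the primary user
```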

    Hybrid Vehicular Communications

    Vehicular communications are a promising research field, with great potential for the development of new applications capable of improving road safety, traffic efficiency, and passenger comfort and infotainment. Vehicular communication technologies can be short-range, such as ETSI ITS-G5 or the 5G PC5 sidelink channel, or long-range, using the cellular network (LTE or 5G). However, no single technology alone can support the expected variety of applications for a large number of vehicles, nor all the temporal and spatial requirements of connected and autonomous vehicles. Thus, the collaborative or hybrid use of short-range communications, with lower latency, and long-range technologies, potentially with higher latency but integrating aggregated data of wider geographic scope, is proposed. In this context, this work presents a hybrid vehicular communications model capable of providing connectivity through two Radio Access Technologies (RATs), namely ETSI ITS-G5 and LTE, to increase the probability of message delivery and, consequently, achieve a more robust, efficient, and secure vehicular communication system. The short-range communication channels are implemented using raw packet sockets, while the cellular connection is established using the Advanced Message Queuing Protocol (AMQP). The main contribution of this dissertation is the design, implementation, and evaluation of a Hybrid Routing Sublayer, capable of isolating the forming and decoding of messages from the transmission and reception processes. This layer is therefore able to manage traffic coming from, or destined to, the application layer of intelligent transport systems (ITS), adapting and passing ITS messages between the upper layers of the protocol stack and the available radio access technologies. The Hybrid Routing Sublayer also reduces the financial cost of using cellular communications and increases the efficiency of use of the available electromagnetic spectrum by introducing a cellular link controller based on a Beacon Detector, which takes informed decisions about the need to connect to a cellular network according to different scenarios. The experimental results show that hybrid vehicular communications meet the requirements of cooperative intelligent transport systems by taking advantage of the benefits of both communication technologies. When evaluated independently, the ITS-G5 technology has clear advantages in terms of latency over the LTE technology, while the LTE technology performs better than ITS-G5 in terms of throughput and reliability. (MSc in Electronics and Telecommunications Engineering.)
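    The per-message decision made by the Hybrid Routing Sublayer can be sketched as follows: safety-related ITS messages are sent over ITS-G5 while a roadside beacon has recently been heard, and fall back to the cellular (LTE/AMQP) link otherwise. The message classes, beacon timeout, and function names are assumptions for illustration, not the dissertation's exact design.

```python
# Sketch of a per-message RAT selection driven by a Beacon Detector.
import time

BEACON_TIMEOUT_S = 2.0        # how long an ITS-G5 beacon is considered fresh (assumption)
last_beacon_seen = 0.0        # would be updated by a beacon-detector thread (not shown)

def its_g5_available(now: float) -> bool:
    return (now - last_beacon_seen) <= BEACON_TIMEOUT_S

def route_message(msg_type: str, payload: bytes) -> str:
    """Pick a radio access technology for one ITS message."""
    now = time.time()
    if msg_type in ("CAM", "DENM") and its_g5_available(now):
        return "ITS-G5"       # short range, low latency, no cellular cost
    return "LTE/AMQP"         # wider coverage, used when no roadside beacon is heard

if __name__ == "__main__":
    last_beacon_seen = time.time()                     # pretend a beacon just arrived
    print(route_message("CAM", b"position-update"))    # -> ITS-G5
    last_beacon_seen -= 10                             # beacon is stale now
    print(route_message("CAM", b"position-update"))    # -> LTE/AMQP
```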

    Programmable Seamless Multiconnectivity

    We have become accustomed to our devices being always connected to the Internet. Our devices, from handhelds such as smartphones and tablets to laptops and even desktop PCs, can use both wired and wireless networks, ranging from mobile networks such as 5G (and 6G in the future) to Wi-Fi, Bluetooth, and Ethernet. The applications running on these devices can use different transport protocols, from traditional TCP and UDP to state-of-the-art protocols such as QUIC. However, most of our applications still use TCP, UDP, and other protocols in much the same way as they were originally designed in the 1980s, four decades ago. Transport connections follow a single path from source to destination, following the end-to-end principle without taking advantage of the multiple available transports. Over the years, there have been many studies on both multihoming and multipath protocols, i.e., protocols that allow transports to use multiple paths and interfaces to the destination. Using these would allow better mobility and more efficient use of the available transports. However, Internet ossification has hindered their deployment. One of the main reasons for this ossification is IPv4 Network Address Translation (NAT), introduced in 1993, which allowed whole networks to be hosted behind a single public IP address. Unfortunately, how this many-to-one translation should be done was never thoroughly standardized, allowing vendors to implement their own versions of NAT. Besides breaking the end-to-end principle, the different versions of NAT also behave unpredictably when encountering transport protocols other than the traditional TCP and UDP, ranging from forwarding packets without translating the packet headers to simply discarding packets they do not recognize. Similarly, in the context of multiconnectivity, NATs and other middleboxes such as firewalls and load balancers are likely to prevent connection establishment for multipath protocols unless they are specifically designed to support the particular protocol. One promising avenue for solving these issues is Software-Defined Networking (SDN). SDN allows the forwarding elements of the network to remain relatively simple by separating the data plane from the control plane. In SDN, the control plane is realized through SDN controllers, which determine how traffic is forwarded by the data plane. This gives controllers full control over the traffic inside the network, granting fine-grained control of connections and allowing faster deployment of new protocols. Unfortunately, SDN-capable network elements are still rare in Small Office / Home Office (SOHO) networks, as legacy forwarding elements that do not support SDN can handle the majority of contemporary protocols. The most glaring example is Wi-Fi networks, where the Access Points (APs) typically do not support SDN and allow traffic to flow between clients without the control of SDN controllers. In this thesis, we provide background on why multiconnectivity is still hard, even though there have been decades' worth of research on solving it. We also demonstrate how the same devices that made multiconnectivity hard can be used to bring SDN-based traffic control to wireless and SOHO networks. We further explore how this SDN-based traffic control can be leveraged to build a network orchestrator for controlling and managing networks consisting of heterogeneous devices and their controllers.
With the insights provided by legacy devices and programmable networks, we demonstrate two different methods for providing multiconnectivity: one using network-driven programmability, and one using a userspace library that brings different multihoming and multipathing methods under one roof. (Abstract in Finnish, translated:) Nowadays, practically all the devices we use are always connected to the Internet. Our devices can use many different kinds of connections, both wired and wireless, such as Wi-Fi and mobile networks. Nevertheless, our devices still mainly use communication protocols that were originally designed in the 1980s. Back then, devices could communicate directly with each other, without intermediate network devices hiding parts of the network behind them, and this is reflected in protocol design: every connection has fixed source and destination addresses. Our devices still use the same connection paradigm today, even though they could bundle several connections together and thus make better use of the performance and other properties the network offers. Over the years, various multipath and multihoming protocols have been developed that allow devices to use several paths across the network to their destination. These protocols have not yet become widespread, however, because not all network devices support them, and we can only indirectly influence which connection our devices use. One solution is to adopt Software-Defined Networking (SDN), a paradigm that brings intelligence into networks and enables, among other things, more efficient traffic routing. The purpose of this dissertation is to examine the problems of and solutions for multiconnectivity. The work sheds light on why multiconnectivity is still hard to realize and presents two techniques for implementing it: the first applies software-defined networking, building on the research carried out during the dissertation, and the second gathers several different multipath and multihoming protocols under one roof as a single multiconnectivity library. The dissertation also presents two methods for bringing software-defined networking to devices that were not designed with it in mind; these methods can be used to manage existing networks and to introduce new capabilities into them. Finally, the dissertation presents an intelligent machine-learning-based system that automatically detects and removes vulnerable devices from the network.
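    As a minimal illustration of what a userspace multiconnectivity library can do even without kernel or in-network support, the sketch below opens one UDP socket per local interface address and duplicates a datagram over every path. The addresses are placeholders, and a real library such as the one described above would additionally handle path probing, scheduling, and NAT traversal.

```python
# Simplest userspace multihoming fallback: one UDP socket per local interface
# address, with redundant transmission over every path. Addresses are placeholders.
import socket

LOCAL_ADDRESSES = ["192.0.2.10", "198.51.100.7"]   # e.g. Wi-Fi and cellular addresses (placeholders)
DESTINATION = ("203.0.113.5", 5000)                 # placeholder server

def open_paths(local_addresses):
    """Open one socket per interface, each bound to that interface's address."""
    paths = []
    for addr in local_addresses:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind((addr, 0))          # the source address selects the outgoing interface
        paths.append(s)
    return paths

def send_redundant(paths, payload: bytes):
    """Duplicate the payload over every available path for reliability."""
    for s in paths:
        s.sendto(payload, DESTINATION)

if __name__ == "__main__":
    try:
        paths = open_paths(LOCAL_ADDRESSES)
        send_redundant(paths, b"hello over every path")
    except OSError as exc:
        # Binding fails unless the listed addresses actually exist on this host.
        print("adjust LOCAL_ADDRESSES to your own interfaces:", exc)
```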