
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in understanding the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures

    Adaptive Data-driven Optimization using Transfer Learning for Resilient, Energy-efficient, Resource-aware, and Secure Network Slicing in 5G-Advanced and 6G Wireless Systems

    Title from PDF of title page, viewed January 31, 2023. Dissertation advisor: Cory Beard. Vita. Includes bibliographical references (pages 134-141). Dissertation (Ph.D.)--Department of Computer Science and Electrical Engineering, University of Missouri--Kansas City, 2022.
    5G-Advanced is the next step in the evolution of fifth-generation (5G) technology. It will introduce a new level of expanded capabilities beyond connectivity and enable a broader range of advanced applications and use cases. 5G-Advanced will support modern applications with greater mobility and high dependability. Artificial intelligence and machine learning will enhance network performance through improvements in spectral efficiency and energy savings. This research established a framework to optimally control and manage the selection of network slices for incoming requests from diverse applications and services in Beyond 5G networks. The developed DeepSlice model optimizes network and per-slice load efficiency across isolated slices and manages the slice lifecycle in case of failure. The DeepSlice framework can predict unknown connection types by drawing on a trained deep-learning neural network model. The research also addresses threats to the performance, availability, and robustness of B5G networks by proactively preventing and resolving them. The study proposes a Secure5G framework for authentication, authorization, trust, and control in a network slicing architecture for 5G systems. The developed model protects the 5G infrastructure from Distributed Denial of Service attacks by analyzing incoming connections and learning from the trained model, and preventive measures against volume, flooding, and masking (spoofing) attacks are demonstrated. This research builds the framework toward the zero-trust objective (never trust, always verify, and verify continuously), which improves resilience. Another fundamental difficulty for wireless network systems is providing a desirable user experience under varying network conditions, such as fluctuating network loads and bandwidth. Mobile Network Operators (MNOs) have long battled unforeseen network traffic events. This research proposes ADAPTIVE6G, which tackles the network load estimation problem using knowledge-inspired transfer learning on radio network Key Performance Indicators (KPIs) from network slices. These algorithms enable MNOs to optimally coordinate their computational tasks in stochastic and time-varying network states. Energy efficiency is another significant KPI for tracking the sustainability of network slicing. Increasing traffic demands in 5G dramatically increase the energy consumption of mobile networks, which is unsustainable in terms of both dollar cost and environmental impact. This research proposes an ECO6G model to attain sustainability and energy efficiency. Findings suggest that the developed model can reduce network energy costs without negatively impacting performance or the end-customer experience, compared with classical machine-learning and statistics-driven models.
    The proposed model is validated against the industry-standardized energy efficiency definition, and operational expenditure savings are derived, showing significant cost savings to MNOs. Introduction -- A deep neural network framework towards a resilient, efficient, and secure network slicing in Beyond 5G Networks -- Adaptive resource management techniques for network slicing in Beyond 5G networks using transfer learning -- Energy and cost analysis for network slicing deployment in Beyond 5G networks -- Conclusion and future scope
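    To make the transfer-learning idea behind ADAPTIVE6G concrete, the following is a minimal, hypothetical sketch (not taken from the dissertation): a small load regressor is pretrained on abundant KPI history from one slice and only its output layers are fine-tuned on scarce data from a new slice. The model shape, the eight KPI features, and the synthetic data are illustrative assumptions.

```python
# Hedged sketch of transfer learning for network load estimation: pretrain on a
# source slice, freeze the shared feature extractor, fine-tune the head on a
# target slice with little data. All shapes and data are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_mlp():
    return nn.Sequential(
        nn.Linear(8, 32), nn.ReLU(),   # 8 hypothetical slice KPIs (PRB usage, users, ...)
        nn.Linear(32, 16), nn.ReLU(),
        nn.Linear(16, 1),              # predicted load for the next interval
    )

def train(model, x, y, params, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# Synthetic stand-ins for source-slice (plentiful) and target-slice (scarce) KPI data.
x_src, y_src = torch.randn(2000, 8), torch.randn(2000, 1)
x_tgt, y_tgt = torch.randn(50, 8), torch.randn(50, 1)

model = make_mlp()
train(model, x_src, y_src, model.parameters())   # pretraining on the source slice

for layer in model[:4]:                          # freeze the shared feature extractor
    for p in layer.parameters():
        p.requires_grad = False
head_params = [p for p in model.parameters() if p.requires_grad]
print("fine-tune loss:", train(model, x_tgt, y_tgt, head_params, epochs=100))
```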

    Research on Reliable Low-Power Wide-Area Communications Utilizing Multi-RAT LPWAN Technologies for IoT Applications

    This doctoral thesis addresses the "Research on Reliable Low-Power Wide-Area Communications Utilizing Multi-RAT LPWAN Technologies for IoT Applications". Despite the immense progress in massive Machine-Type Communication (mMTC) technology enablers such as Low-Power Wide-Area (LPWA) networks, their performance may not satisfy the requirements of novel Internet of Things (IoT) applications. The main goal of this Ph.D. work is to explore and evaluate the limitations of current LPWA technologies and to propose novel mechanisms facilitating coverage planning and assessment. The proposed frameworks are fine-tuned and cross-validated using extensive measurement campaigns conducted in public LPWA networks. This doctoral thesis further introduces a novel approach based on multi-RAT LPWA devices to overcome the performance limitations of individual LPWA technologies. The current implementation primarily focuses on diminishing the greatest disadvantage of multi-RAT solutions, i.e., increased power consumption, by employing a machine learning approach to radio interface selection.
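    As a rough illustration of machine-learning-assisted radio interface selection for a multi-RAT LPWAN node, the sketch below trains a classifier to pick the interface expected to cost the least energy for a pending uplink. The features, the labelling rule, and the candidate technologies are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch: choose among multi-RAT LPWAN interfaces from link metrics,
# using a decision tree trained on (synthetic) historical energy outcomes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
IFACES = ["NB-IoT", "LoRaWAN", "Sigfox"]

# Synthetic training set: [RSSI dBm, SNR dB, payload bytes] -> index of the
# interface that historically finished the transfer with the lowest energy.
X = np.column_stack([
    rng.uniform(-130, -70, 3000),   # RSSI
    rng.uniform(-10, 25, 3000),     # SNR
    rng.integers(12, 240, 3000),    # payload size
])
# Toy labelling rule standing in for measured energy logs.
y = np.where(X[:, 2] > 120, 0, np.where(X[:, 1] > 5, 1, 2))

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)

pending = np.array([[-95.0, 12.0, 60.0]])   # current link state and payload
print("selected interface:", IFACES[int(clf.predict(pending)[0])])
```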

    Optimization of Handover, Survivability, Multi-Connectivity and Secure Slicing in 5G Cellular Networks using Matrix Exponential Models and Machine Learning

    Title from PDF of title page, viewed January 31, 2023. Dissertation advisor: Cory Beard. Vita. Includes bibliographical references (pages 173-194). Dissertation (Ph.D.)--Department of Computer Science and Electrical Engineering, University of Missouri--Kansas City, 2022.
    This work proposes optimization of cellular handovers, cellular network survivability modeling, multi-connectivity, and secure network slicing using matrix exponential models and machine learning techniques. We propose matrix exponential (ME) modeling of handover arrivals, with the potential to characterize arrivals much more accurately and to prioritize resource allocation for handovers, especially handovers for emergency or public safety needs. With the use of a 'B' matrix to represent a handover arrival, we have a rich set of dimensions for modeling system handover behavior: we can study multiple parameters and the interactions between system events, along with the user mobility that would trigger a handoff in any given scenario. Additionally, unlike traditional handover improvement schemes, we develop a 'Deep-Mobility' model by implementing a deep learning neural network (DLNN) to manage network mobility, utilizing in-network deep learning and prediction. We use radio and network key performance indicators (KPIs) to train the model to analyze network traffic and handover requirements. Cellular network design must incorporate disaster response, recovery, and repair scenarios, yet requirements for high reliability and low latency often fail to incorporate network survivability for mission-critical and emergency services. Our matrix exponential model shows how survivable networks can be designed by controlling the number of repair crews, the times taken for individual repair stages, and the balance between fast and slow repairs. Transient and steady-state representations of system repair models, namely fast and slow repairs for networks with multiple repair crews, are analyzed. Failures are modeled exponentially as per common practice, but ME distributions describe the more complex recovery processes. In some mission-critical communications, the availability requirements may exceed five or even six nines (99.9999%). To meet such a requirement and minimize the impact of mobility during handover, a Fade Duration Outage Probability (FDOP) based multi-radio-link connectivity handover method is proposed, in which a high degree of availability is achieved by utilizing two or more uncorrelated links selected on minimum FDOP values. Packet duplication (PD) via multi-connectivity is a method of compensating for lost packets on a wireless channel; utilizing two or more uncorrelated links, a high degree of availability can likewise be attained with this strategy. However, complete packet duplication is inefficient and frequently unnecessary, so we provide a novel adaptive fractional packet duplication (A-FPD) mechanism for enabling and disabling packet duplication based on a variety of parameters. We have developed a 'DeepSlice' model by implementing a deep learning neural network to manage network load efficiency and network availability, utilizing in-network deep learning and prediction, and our neural network based 'Secure5G' network slicing model proactively detects and eliminates threats from incoming connections before they reach the 5G core network elements.
    These models will enable network operators to sell network slicing as a service, serving diverse services efficiently over a single infrastructure with a higher level of security and reliability. Introduction -- Matrix exponential and deep learning neural network modeling of cellular handovers -- Survivability modeling in cellular networks -- Multi connectivity based handover enhancement and adaptive fractional packet duplication in 5G cellular networks -- Deepslice and Secure5G: a deep learning framework towards an efficient, reliable and secure network slicing in 5G networks -- Conclusion and future scope
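    For readers unfamiliar with matrix exponential distributions, the following numerical sketch illustrates the kind of (alpha, B) representation described above, using the standard survival function S(t) = alpha * exp(Bt) * 1 and mean E[T] = alpha (-B)^{-1} 1. The two-phase fast/slow repair parameters are illustrative assumptions, not the dissertation's matrices.

```python
# Hedged sketch of a matrix-exponential (ME) repair-time model with a fast and a
# slow repair stage; scipy's expm evaluates the matrix exponential.
import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0])          # repairs start in the "fast" phase
B = np.array([[-3.0, 0.5],            # fast phase: total outflow 3/h, 0.5/h of it to the slow phase
              [0.0, -0.4]])           # slow phase: completes at 0.4/h
ones = np.ones(2)

def survival(t):
    """P(repair still ongoing after t hours) under the ME model (alpha, B)."""
    return float(alpha @ expm(B * t) @ ones)

mean_repair = float(alpha @ np.linalg.inv(-B) @ ones)   # E[T] = alpha (-B)^-1 1
for t in (0.5, 1.0, 4.0):
    print(f"S({t}) = {survival(t):.4f}")
print(f"mean repair time = {mean_repair:.3f} h")
```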

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead, and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. After highlighting the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of a low-complexity Q-learning approach in the mMTC scenario, along with recent advances towards enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
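    The low-complexity Q-learning approach mentioned above is often presented as stateless, per-device learning over random access resources. The toy simulation below, an illustrative sketch rather than the paper's scheme, has each MTC device keep one Q-value per preamble and learn from collision feedback alone to settle on a preamble unused by its neighbours; device counts, rewards, and learning rates are assumptions.

```python
# Hedged toy simulation of stateless Q-learning for RACH preamble selection.
import numpy as np

rng = np.random.default_rng(0)
N_DEVICES, N_PREAMBLES, ROUNDS = 20, 32, 3000
ALPHA, EPSILON = 0.1, 0.1

Q = np.zeros((N_DEVICES, N_PREAMBLES))

def successes_per_round():
    explore = rng.random(N_DEVICES) < EPSILON
    choice = np.where(explore,
                      rng.integers(0, N_PREAMBLES, N_DEVICES),
                      Q.argmax(axis=1))
    counts = np.bincount(choice, minlength=N_PREAMBLES)
    reward = np.where(counts[choice] == 1, 1.0, -1.0)     # unique preamble -> success
    Q[np.arange(N_DEVICES), choice] += ALPHA * (reward - Q[np.arange(N_DEVICES), choice])
    return int((reward > 0).sum())

history = [successes_per_round() for _ in range(ROUNDS)]
print("avg successful accesses, first 100 rounds:", sum(history[:100]) / 100)
print("avg successful accesses, last 100 rounds: ", sum(history[-100:]) / 100)
```

    With more preambles than devices, the learned policy tends to spread devices across preambles, which is the intuition behind using Q-learning to relieve RAN congestion.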

    Flexible Resource Management in Next-Generation Networks with SDN (Gestion flexible des ressources dans les réseaux de nouvelle génération avec SDN)

    Abstract: 5G and beyond-5G/6G are expected to shape the future economic growth of multiple vertical industries by providing the network infrastructure required to enable innovation and new business models. They have the potential to offer a wide spectrum of services, namely higher data rates, ultra-low latency, and high reliability. To achieve their promises, 5G and beyond-5G/6G rely on software-defined networking (SDN), edge computing, and radio access network (RAN) slicing technologies. In this thesis, we aim to use SDN as a key enabler to enhance resource management in next-generation networks. SDN allows programmable management of edge computing resources and dynamic orchestration of RAN slicing. However, achieving efficient performance based on SDN capabilities is a challenging task due to the constant traffic fluctuations in next-generation networks and the diversified quality-of-service requirements of emerging applications. Toward this objective, we address the load balancing problem in distributed SDN architectures, and we optimize the RAN slicing of communication and computation resources at the edge of the network. In the first part of this thesis, we present a proactive approach to balance the load in a distributed SDN control plane using the data plane component migration mechanism. First, we propose prediction models that forecast the load of SDN controllers over the long term. Using these models, we can preemptively detect whether the load will become unbalanced in the control plane and thus schedule migration operations in advance. Second, we improve migration performance by optimizing the tradeoff between a load balancing factor and the cost of migration operations. This proactive load balancing approach not only prevents SDN controllers from being overloaded, but also allows a judicious selection of which data plane component should be migrated and where the migration should happen. In the second part of this thesis, we propose two RAN slicing schemes that efficiently allocate the communication and computation resources at the edge of the network. The first RAN slicing scheme allocates radio resource blocks (RBs) to end-users on two time scales, namely a large time scale and a small time scale. On the large time scale, an SDN controller allocates to each base station a number of RBs from a shared radio RB pool, according to its requirements in terms of delay and data rate. On the small time scale, each base station assigns its available resources to its end-users and requests, if needed, additional resources from adjacent base stations. The second RAN slicing scheme jointly allocates the RBs and the computation resources available in edge computing servers, based on an open RAN architecture. For the proposed RAN slicing schemes, we develop reinforcement learning and deep reinforcement learning algorithms to dynamically allocate RAN resources.
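    To illustrate the proactive load-balancing loop described above, the sketch below forecasts each controller's load from a sliding window and, if the predicted imbalance crosses a threshold, plans a data-plane (switch) migration in advance. The linear-trend forecaster, the threshold, and the load traces are illustrative assumptions, not the thesis's prediction models.

```python
# Hedged sketch of proactive SDN control-plane load balancing via forecasting
# and data-plane component migration.
import numpy as np

rng = np.random.default_rng(2)
WINDOW, HORIZON, THRESHOLD = 30, 5, 1.5    # history length, look-ahead, max/min load ratio

def forecast(history, horizon):
    """Least-squares linear trend extrapolated `horizon` steps ahead."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * (len(history) + horizon) + intercept

# Synthetic per-controller load traces (requests/s) and the switches they host.
loads = {"c1": 100 + 3.0 * np.arange(WINDOW) + rng.normal(0, 5, WINDOW),  # rising
         "c2": 80 + rng.normal(0, 5, WINDOW)}                             # flat
switch_load = {"c1": {"s1": 60.0, "s2": 45.0, "s3": 30.0}, "c2": {"s4": 40.0}}

pred = {c: forecast(h, HORIZON) for c, h in loads.items()}
hot = max(pred, key=pred.get)
cold = min(pred, key=pred.get)

if pred[hot] / max(pred[cold], 1e-9) > THRESHOLD:
    victim = max(switch_load[hot], key=switch_load[hot].get)   # heaviest hosted switch
    print(f"predicted loads {pred}; plan migration of {victim}: {hot} -> {cold}")
else:
    print(f"predicted loads {pred}; control plane stays balanced")
```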

    Machine Learning-based Orchestration Solutions for Future Slicing-Enabled Mobile Networks

    The fifth-generation mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major interest from both academic and industrial stakeholders. Building on these concepts, Network Slicing has emerged as the main driver of a novel business model where mobile operators may open, i.e., "slice", their infrastructure to new business players and offer independent, isolated, and self-contained sets of network functions and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed. End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthauling/backhauling links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users' mobility, and instantaneous wireless channel statistics. Certainly, a mobile network represents a fast-changing scenario characterized by complex spatio-temporal relationships connecting end-users' traffic demand with social activities and the economy. Legacy models that aim to provide dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects. To close this gap, machine learning-aided solutions are quickly emerging as promising technologies to sustain, in a scalable manner, the set of operations required by the network slicing context. How to implement such resource allocation schemes among slices, while making the most efficient use of the networking resources composing the mobile infrastructure, is the key problem underlying the network slicing paradigm and is addressed in this thesis.
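    Requirement i) above, revenue-maximizing slice admission under limited resources, can be pictured as a small knapsack-style selection. The sketch below is a toy brute-force illustration only; the slice requests, revenues, and the single aggregate "capacity" are assumptions, whereas a real orchestrator would span RAN, transport, and cloud domains.

```python
# Hedged toy illustration of revenue-maximizing slice admission (exhaustive
# search over subsets; fine for a handful of requests).
from itertools import combinations

CAPACITY = 100  # abstract end-to-end resource units available
requests = [    # (name, resource demand, revenue)
    ("eMBB-video", 55, 90), ("URLLC-factory", 35, 80),
    ("mMTC-metering", 25, 30), ("eMBB-AR", 45, 70),
]

best_revenue, best_set = 0, ()
for r in range(len(requests) + 1):
    for subset in combinations(requests, r):
        demand = sum(d for _, d, _ in subset)
        revenue = sum(v for _, _, v in subset)
        if demand <= CAPACITY and revenue > best_revenue:
            best_revenue, best_set = revenue, subset

print("admit:", [name for name, _, _ in best_set], "revenue:", best_revenue)
```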

    Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G

    The next wave of wireless technologies is proliferating in connecting things among themselves as well as to humans. In the era of the Internet of things (IoT), billions of sensors, machines, vehicles, drones, and robots will be connected, making the world around us smarter. The IoT will encompass devices that must wirelessly communicate a diverse set of data gathered from the environment for myriad new applications. The ultimate goal is to extract insights from this data and develop solutions that improve quality of life and generate new revenue. Providing large-scale, long-lasting, reliable, and near real-time connectivity is the major challenge in enabling a smart connected world. This paper provides a comprehensive survey on existing and emerging communication solutions for serving IoT applications in the context of cellular, wide-area, as well as non-terrestrial networks. Specifically, wireless technology enhancements for providing IoT access in fifth-generation (5G) and beyond cellular networks, and communication networks over the unlicensed spectrum are presented. Aligned with the main key performance indicators of 5G and beyond 5G networks, we investigate solutions and standards that enable energy efficiency, reliability, low latency, and scalability (connection density) of current and future IoT networks. The solutions include grant-free access and channel coding for short-packet communications, non-orthogonal multiple access, and on-device intelligence. Further, a vision of new paradigm shifts in communication networks in the 2030s is provided, and the integration of the associated new technologies like artificial intelligence, non-terrestrial networks, and new spectra is elaborated. Finally, future research directions toward beyond 5G IoT networks are pointed out. Comment: Submitted for review to IEEE CS&

    Enabling Technologies for Ultra-Reliable and Low Latency Communications: From PHY and MAC Layer Perspectives

    Future fifth-generation (5G) networks are expected to enable three key services: enhanced mobile broadband, massive machine type communications, and ultra-reliable and low latency communications (URLLC). As per the 3rd Generation Partnership Project URLLC requirements, it is expected that the reliability of one transmission of a 32-byte packet will be at least 99.999% and the latency will be at most 1 ms. This unprecedented level of reliability and latency will yield various new applications, such as smart grids, industrial automation, and intelligent transport systems. In this survey, we present potential future URLLC applications and summarize the corresponding reliability and latency requirements. We provide a comprehensive discussion of physical (PHY) and medium access control (MAC) layer techniques that enable URLLC, addressing both licensed and unlicensed bands. This paper evaluates the relevant PHY and MAC techniques for their ability to improve reliability and reduce latency. We identify that enabling long-term evolution to coexist in the unlicensed spectrum is also a potential enabler of URLLC in the unlicensed band, and we provide numerical evaluations. Lastly, this paper discusses potential future research directions and challenges in achieving the URLLC requirements.
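    A back-of-the-envelope view of the quoted URLLC target (a 32-byte packet delivered with 99.999% one-shot reliability) can be obtained from the normal approximation of the maximal coding rate over AWGN, k ≈ nC − sqrt(nV)·Q⁻¹(ε). The blocklength and channel model in the sketch below are illustrative assumptions, not values from the survey, and the 0.5·log2(n) correction term is omitted for simplicity.

```python
# Hedged finite-blocklength estimate: smallest SNR that carries a 32-byte packet
# in n channel uses at error probability 1e-5, via the normal approximation.
import numpy as np
from scipy.stats import norm

def max_bits(snr_db, n, eps):
    """Approximate information bits deliverable in n AWGN channel uses at error eps."""
    snr = 10 ** (snr_db / 10)
    capacity = np.log2(1 + snr)                               # bits / channel use
    dispersion = (1 - 1 / (1 + snr) ** 2) * np.log2(np.e) ** 2
    return n * capacity - np.sqrt(n * dispersion) * norm.isf(eps)

K_BITS, N_USES, EPS = 32 * 8, 200, 1e-5   # 32-byte packet, short blocklength, 1e-5 error

# Sweep SNR to find the smallest value that carries the packet in one shot.
for snr_db in np.arange(-5.0, 20.0, 0.25):
    if max_bits(snr_db, N_USES, EPS) >= K_BITS:
        print(f"~{snr_db:.2f} dB SNR needed for {K_BITS} bits in {N_USES} uses at eps={EPS}")
        break
```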

    Research on Enhancing Radio Access Technologies for Beyond-5G Wireless Networks (5G 이후 무선 네트워크를 위한 무선 접속 기술 향상 연구)

    Dissertation (Ph.D.)--Graduate School, Seoul National University: Department of Electrical and Computer Engineering, College of Engineering, August 2020. Advisor: Saewoong Bahk.
    Operators are now creating services using 5G systems in various fields, e.g., manufacturing, automotive, and health care. 5G use cases range from small-packet transmissions by IoT devices to high-data-rate services such as high-definition video streaming. When massive numbers of IoT devices transmit small packets, power saving is important, so a device disconnects from the base station and later re-establishes a connection through random access in order to transmit data. However, existing random access procedures struggle to satisfy diverse latency requirements. For high data rates, the wide bandwidth of the millimeter wave spectrum is attractive, and beamforming is applied to overcome its channel characteristics; however, interference is not considered when determining a beam pair between a transmitter and a receiver. In this dissertation, we consider the following three enhancements to enable 5G and beyond use cases: (i) a two-step random access procedure for delay-sensitive devices, (ii) a self-uplink synchronization framework that resolves the preamble collision problem, and (iii) interference-aware beam adjustment for interference coordination. First, RAPID, a two-step random access procedure for delay-sensitive devices, is proposed to reduce the latency required to satisfy a target reliability. When devices performing RAPID coexist with devices performing contention-based random access, the number of preambles allocated to RAPID must be chosen so as to keep the random access load low. Simulation results show that RAPID achieves 99.999% reliability with 80.8% shorter uplink latency, and also decreases random access load by 30.5%, compared with state-of-the-art techniques. Second, to solve the preamble collision problem, we develop a self-uplink synchronization framework called EsTA. A preamble collision occurs when multiple devices transmit the same preamble. Specifically, the framework helps the UE estimate the timing advance (TA) command using a deep neural network model and determine the TA value. Estimation accuracy reaches 98-99% when the subcarrier spacing is 30 or 60 kHz. Finally, we propose IBA, an interference-aware beam adjustment method that reduces interference in millimeter wave networks. Unlike existing methods that reduce interference by scheduling time and frequency resources differently, IBA controls interference through beam adjustment. Searching all beam-pair combinations is impractical, so IBA uses a Monte Carlo method to shrink the search space and reach a local optimum. IBA improves the lower 50% of throughput by up to 50% compared with standard beam adjustment alone. In summary, we propose two-step random access, a self-uplink synchronization framework, and interference-aware beam adjustment for 5G and beyond use cases. These contributions improve network performance, such as latency and throughput, compared with state-of-the-art techniques.
    Introduction -- RAPID: Contention Resolution-based Random Access Procedure using Context ID for IoT -- EsTA: Self-Uplink Synchronization in 2-Step Random Access -- IBA: Interference-Aware Beam Adjustment for 5G mmWave Networks -- Concluding Remarks
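    The Monte Carlo search-space reduction behind IBA can be pictured with the toy sketch below: instead of exhaustively evaluating every combination of beam pairs, random per-link beam reassignments are sampled and the one with the best worst-link SINR is kept. The gain model, link count, and codebook size are illustrative assumptions, not the dissertation's system-level simulation.

```python
# Hedged toy Monte Carlo / random local search over beam assignments,
# maximizing the worst-link SINR under a synthetic gain model.
import numpy as np

rng = np.random.default_rng(3)
LINKS, BEAMS, SAMPLES = 4, 16, 500

# Toy gain tables: gain[l, b] is link l's own gain with beam b;
# xgain[l, k, b] is the interference link l receives from link k using beam b.
gain = rng.uniform(5.0, 20.0, (LINKS, BEAMS))
xgain = rng.uniform(0.0, 4.0, (LINKS, LINKS, BEAMS))
NOISE = 1.0

def min_sinr(assign):
    """Worst-link SINR (linear) for a beam assignment, one beam index per link."""
    sinrs = []
    for l in range(LINKS):
        interference = sum(xgain[l, k, assign[k]] for k in range(LINKS) if k != l)
        sinrs.append(gain[l, assign[l]] / (NOISE + interference))
    return min(sinrs)

best = rng.integers(0, BEAMS, LINKS)                  # start from an arbitrary assignment
for _ in range(SAMPLES):
    cand = best.copy()
    cand[rng.integers(LINKS)] = rng.integers(BEAMS)   # perturb one link's beam
    if min_sinr(cand) > min_sinr(best):
        best = cand

print("beam assignment:", best.tolist(), "worst-link SINR:", round(min_sinr(best), 2))
```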