160 research outputs found

    Multiuser Diversity Management for Multicast/Broadcast Services in 5G and Beyond Networks

    Get PDF
    The envisaged fifth-generation (5G) and beyond networks represent a paradigm shift for global communications, offering unprecedented breakthroughs in media service delivery with novel capabilities and use cases. Addressing the critical research verticals and challenges that characterize the International Mobile Telecommunications (IMT)-2030 framework requires a compelling mix of enabling radio access technologies (RATs) and native softwarized, disaggregated, and intelligent radio access network (RAN) conceptions. In such a context, the multicast/broadcast service (MBS) capability is an appealing feature to address the ever-growing traffic demands, disruptive multimedia services, massive connectivity, and low-latency applications. Embracing the MBS capability as a primary component of the envisaged 5G and beyond networks comes with multiple open challenges. In this research, we contextualize and address the necessity of ensuring stringent quality of service (QoS)/quality of experience (QoE) requirements, multicasting over millimeter-wave (mmWave) and sub-Terahertz (THz) frequencies, and handling complex mobility behaviors. In the broad problem space around these three significant challenges, we focus on the specific research problems of effectively handling the trade-off between multicasting gain and multiuser diversity, along with the trade-off between optimal network performance and computational complexity. We cover essential aspects at the intersection of MBS, radio resource management (RRM), machine learning (ML), and the Open RAN (O-RAN) framework. We characterize and address the dynamic multicast multiuser diversity through low-complexity RRM solutions aided by ML, orthogonal multiple access (OMA), and non-orthogonal multiple access (NOMA) techniques in 5G MBS and beyond networks. We characterize the performance of the multicast access techniques: the conventional multicast scheme (CMS), subgrouping based on OMA (S-OMA), and subgrouping based on NOMA (S-NOMA). We provide conditions for their appropriate selection under specific network conditions (Chapter 4). Consequently, we propose heuristic methods for dynamic multicast access technique selection and resource allocation that take advantage of the multiuser diversity (Chapter 5.1). Moreover, we propose a multicasting strategy based on fixed pre-computed multiple-input multiple-output (MIMO) multi-beams and S-NOMA (Chapter 5.2). Our approach tackles the specific throughput requirements for enabling extended reality (XR) applications, serving multiple users while handling their spatial and channel quality diversity. We address the computational complexity (CC) associated with dynamic multicast RRM strategies and highlight the implications of fast variations in the reception conditions of the multicast group (MG) members. We propose a low-complexity ML-based solution structured around a multicast-oriented trigger that avoids unnecessary algorithm runs, K-Means clustering for group-oriented detection and splitting, and a classifier for selecting the most suitable multicast access technique (Chapter 6.1). Our proposed approaches address the trade-off between optimal network performance and CC by maximizing specific QoS parameters through non-optimal solutions, considerably reducing the CC of conventional exhaustive mechanisms. Moreover, we discuss the insertion of ML-based multicasting RRM solutions into the envisioned disaggregated O-RAN framework (Chapter 6.2.5). We analyze specific MBS tasks and the importance of a native decentralized, softwarized, and intelligent conception. We assess the effectiveness of our proposal through multiple numerical and link-level simulations of recreated 5G MBS use cases operating at μWave and mmWave frequencies, evaluating various network conditions, service constraints, and users' mobility behaviors.
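
    To make the described pipeline concrete, the sketch below illustrates one plausible realization of its three stages (trigger, K-Means splitting, access technique selection). It is not the thesis implementation: the CQI feature, the threshold values, and the rule-based selector standing in for the trained classifier are all assumptions for illustration only.

```python
# Illustrative sketch (not the thesis implementation): split a multicast group
# by channel quality with K-Means, then pick an access technique with a simple
# rule-based selector. Thresholds and feature choices are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def select_multicast_technique(cqi, spread_threshold=6.0):
    """Return ('CMS'|'S-OMA'|'S-NOMA', subgroup labels) for a multicast group.

    cqi: 1-D array of per-user channel quality indicators (e.g., reported CQI).
    """
    cqi = np.asarray(cqi, dtype=float)
    # Trigger: if users are nearly homogeneous, serve everyone with CMS at the
    # rate of the worst user and skip the costlier subgrouping step.
    if cqi.max() - cqi.min() < spread_threshold / 2:
        return "CMS", np.zeros(len(cqi), dtype=int)

    # Group-oriented detection/splitting: two subgroups via K-Means on CQI.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cqi.reshape(-1, 1))

    # Rule-based selection: a large inter-subgroup gap favors S-NOMA
    # (power-domain superposition), otherwise S-OMA (orthogonal resources).
    gap = abs(cqi[labels == 0].mean() - cqi[labels == 1].mean())
    technique = "S-NOMA" if gap >= spread_threshold else "S-OMA"
    return technique, labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cqi = np.concatenate([rng.normal(4, 1, 10), rng.normal(13, 1, 10)])  # two CQI clusters
    print(select_multicast_technique(cqi))
```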

    A Tutorial on Clique Problems in Communications and Signal Processing

    Full text link
    Since its first use by Euler on the problem of the seven bridges of Königsberg, graph theory has shown excellent abilities in solving and unveiling the properties of multiple discrete optimization problems. The study of the structure of some integer programs reveals equivalence with graph theory problems, making a large body of the literature readily available for solving and characterizing the complexity of these problems. This tutorial presents a framework for utilizing a particular graph theory problem, known as the clique problem, for solving communications and signal processing problems. In particular, the paper aims to illustrate the structural properties of integer programs that can be formulated as clique problems through multiple examples in communications and signal processing. To that end, the first part of the tutorial provides various optimal and heuristic solutions for the maximum clique, maximum weight clique, and k-clique problems. The tutorial further illustrates the use of the clique formulation through numerous contemporary examples in communications and signal processing, mainly in maximum access for non-orthogonal multiple access networks, throughput maximization using index and instantly decodable network coding, collision-free radio frequency identification networks, and resource allocation in cloud-radio access networks. Finally, the tutorial sheds light on the recent advances of such applications, and provides technical insights on ways of dealing with mixed discrete-continuous optimization problems.
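
    As a flavor of the heuristic solutions such tutorials survey, here is a minimal greedy heuristic for the maximum weight clique problem. The toy graph and weights are illustrative; the example also shows that the greedy result can miss the optimum, which is precisely why exact and heuristic methods are treated separately.

```python
# Minimal greedy heuristic for the maximum weight clique problem; the graph
# and weights are illustrative, and optimality is not guaranteed.
def greedy_max_weight_clique(adj, weight):
    """adj: dict node -> set of neighbors; weight: dict node -> positive weight."""
    clique, candidates = [], set(adj)
    while candidates:
        # Pick the heaviest candidate still adjacent to every clique member.
        v = max(candidates, key=lambda u: weight[u])
        clique.append(v)
        candidates = {u for u in candidates if u in adj[v] and u != v}
    return clique

if __name__ == "__main__":
    # Toy conflict graph: an edge means the two items can be scheduled together.
    adj = {
        "a": {"b", "c"},
        "b": {"a", "c", "d"},
        "c": {"a", "b"},
        "d": {"b"},
    }
    weight = {"a": 2.0, "b": 3.0, "c": 1.5, "d": 2.5}
    # Prints ['b', 'd'] (weight 5.5), while the optimum {a, b, c} has weight 6.5:
    # a reminder that greedy heuristics trade optimality for speed.
    print(greedy_max_weight_clique(adj, weight))
```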

    Energy Efficiency Optimization of Massive MIMO Systems Based on the Particle Swarm Optimization Algorithm

    Get PDF
    As one of the key technologies in the fifth generation of mobile communications, massive multi-input multi-output (MIMO) can improve system throughput and transmission reliability. However, if all antennas are used to transmit data, the same number of radio-frequency chains is required, which not only increases the cost of the system but also reduces its energy efficiency (EE). To address these problems, in this paper we propose an EE optimization method based on the particle swarm optimization (PSO) algorithm. First, we consider the base station (BS) antennas and terminal users, and analyze their impact on EE in the uplink and downlink of a single-cell multiuser massive MIMO system. Second, a dynamic power consumption model is used under zero-forcing processing to obtain the expression of EE, which serves as the fitness function of the PSO algorithm under perfect and imperfect channel state information (CSI) in single-cell scenarios and under imperfect CSI in multicell scenarios. Finally, the optimal EE value is obtained by updating the global optimal positions of the particles. The simulation results show that, compared with the traditional iterative algorithm and the artificial bee colony algorithm, the proposed algorithm not only has the lowest complexity but also attains the highest optimal EE value under the single-cell perfect-CSI scenario. In the single-cell and multicell scenarios with imperfect CSI, the proposed algorithm obtains the same or a slightly lower optimal EE value than the traditional iterative algorithm, but its running time is at most 1/12 of that of the iterative algorithm.
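
    The sketch below shows the general shape of such a PSO-based EE search: particles explore (number of antennas, number of users, transmit power) and the best fitness found is returned. The fitness used here is a crude log-rate over a linear power model, an assumption standing in for the paper's closed-form zero-forcing EE expression, and all parameter ranges are illustrative.

```python
# Illustrative PSO sketch for an energy-efficiency-style fitness. The EE model
# below is an assumption, not the paper's ZF expression under perfect/imperfect CSI.
import numpy as np

def ee_fitness(x):
    m, k, p = x          # antennas, users, per-user transmit power (continuous relaxation)
    m, k = max(m, k + 1.0), max(k, 1.0)
    rate = k * np.log2(1.0 + p * (m - k) / k)          # crude ZF-like sum rate
    power = k * p + 0.3 * m + 5.0                      # transmit + per-antenna + fixed power
    return rate / power                                # bits/Joule (up to bandwidth scaling)

def pso(fitness, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()   # update the global best position
    return gbest, pbest_val.max()

if __name__ == "__main__":
    best, val = pso(ee_fitness, lo=[8, 1, 0.01], hi=[256, 32, 1.0])
    print("M, K, p ~", np.round(best, 2), " EE ~", round(val, 3))
```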

    Contributions to the development of the CRO-SL algorithm: Engineering applications problems

    Get PDF
    This Ph.D. thesis discusses advanced design issues of the evolutionary-based "Coral Reef Optimization" algorithm, in its Substrate-Layer (CRO-SL) version, for optimization problems in engineering applications. The range of problems that can be tackled with meta-heuristic approaches is very wide and varied, and it is not exclusive to engineering; however, we focus the thesis on this area, one of the most prominent of our time. One of the proposed applications is the battery scheduling problem in Micro-Grids (MGs). Specifically, we consider an MG that includes renewable distributed generation and different loads, defined by their power profiles, and is equipped with an energy storage device (battery) whose schedule (occurrence and duration of charging/discharging) must be determined in a real scenario with variable electricity prices. We also discuss a problem of vibration cancellation over structures of two and four floors using Tuned Mass Dampers (TMDs). The optimization algorithm tries to find the best solution by obtaining three physical parameters and the TMD location. As another related application, CRO-SL is used to design Multi-Input-Multi-Output Active Vibration Control (MIMO-AVC) via inertial-mass actuators for structures subjected to human-induced vibration. In this problem, we optimize the location of each actuator and tune the control gains. Finally, we tackle the optimization of a textile modified meander-line Inverted-F Antenna (IFA) with variable meander width and spacing for RFID systems. Specifically, the CRO-SL is used to obtain an optimal antenna design, with a good bandwidth and radiation pattern, ideal for RFID readers. Radio Frequency Identification (RFID) devices have become among the most widely manufactured devices worldwide, as they provide a reliable and inexpensive means of locating people. They are used in access and payment cards, product labels, and many other applications. Comment: arXiv admin note: text overlap with arXiv:1806.02654 by other authors
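
    To ground the battery scheduling problem mentioned above, the following sketch is a plain greedy baseline under variable hourly prices: charge during the cheapest hours and discharge during the most expensive ones. It is explicitly not the CRO-SL metaheuristic of the thesis, and the prices, capacity, rate, and the omission of intra-day state-of-charge ordering are simplifying assumptions.

```python
# Greedy baseline for battery scheduling under variable prices; NOT CRO-SL.
# Capacity, charge/discharge rate, and prices are assumed values, and the
# intra-day state-of-charge ordering constraint is ignored for brevity.
def greedy_battery_schedule(prices, capacity_kwh=10.0, rate_kw=2.0):
    """Return per-hour battery action in kW (+charge / -discharge)."""
    hours = len(prices)
    n_slots = int(capacity_kwh / rate_kw)              # hours needed for a full cycle
    cheap = sorted(range(hours), key=lambda h: prices[h])[:n_slots]
    dear = sorted(range(hours), key=lambda h: -prices[h])[:n_slots]
    actions = [0.0] * hours
    for h in cheap:
        actions[h] = +rate_kw                          # buy energy at low price
    for h in dear:
        if actions[h] == 0.0:
            actions[h] = -rate_kw                      # serve load / sell at high price
    return actions

if __name__ == "__main__":
    prices = [0.10, 0.08, 0.07, 0.09, 0.15, 0.22, 0.30, 0.28, 0.20, 0.12, 0.11, 0.10]
    print(greedy_battery_schedule(prices))
```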

    Power Beacon’s deployment optimization for wirelessly powering massive Internet of Things networks

    Get PDF
    The fifth-generation (5G) and beyond wireless cellular networks promise native support for, among other use cases, the so-called Internet of Things (IoT). Different from human-based cellular services, IoT networks implement a novel vision where ordinary machines possess the ability to autonomously sense, actuate, compute, and communicate throughout the Internet. However, as the number of connected devices grows, an urgent demand for energy-efficient communication technologies arises. A key challenge related to IoT devices is that their very small form factor allows them to carry only a tiny battery, which might not even be possible to replace due to installation conditions, or might be too costly to maintain because of the massiveness of the network. This issue limits the lifetime of the network and compromises its reliability. Wireless energy transfer (WET) has emerged as a potential candidate to replenish sensors' batteries or to sustain the operation of battery-free devices, as it provides a controllable source of energy over the air. Therefore, WET eliminates the need for regular maintenance, allows reducing the sensors' form factor, and reduces the battery disposal that contributes to environmental pollution. In this thesis, we review some WET-enabled scenarios and state-of-the-art techniques for implementing WET in IoT networks. In particular, we focus our attention on the deployment optimization of the so-called power beacons (PBs), which are the energy transmitters charging a massive IoT deployment subject to a network-wide probabilistic energy outage constraint. We assume that the IoT sensors' positions are unknown at the PBs, and hence we maximize the average incident power at the worst network location. We propose a linear-time-complexity algorithm for optimizing the PBs' positions that outperforms benchmark methods in terms of minimum average incident power and computation time. Then, we also present some insights on the maximum coverage area under certain propagation conditions.
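
    The sketch below illustrates the deployment objective described above: place a few PBs so that the minimum (worst-location) average incident power over the coverage area is maximized. A plain random-restart search stands in for the thesis's linear-time algorithm, and the path-loss exponent, disk radius, transmit power, and grid resolution are assumed values.

```python
# Illustrative PB placement: maximize the worst-location average incident power.
# Random restarts stand in for the thesis's linear-time algorithm; the
# log-distance model and all numeric parameters are assumptions.
import numpy as np

def incident_power(points, pbs, tx_power=1.0, ple=2.7, d0=1.0):
    # Sum of average received powers from all PBs (simple log-distance model).
    d = np.maximum(np.linalg.norm(points[:, None, :] - pbs[None, :, :], axis=2), d0)
    return (tx_power * (d0 / d) ** ple).sum(axis=1)

def place_pbs(n_pbs=4, radius=50.0, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    # Grid of test locations inside the coverage disk (worst case over these).
    g = np.linspace(-radius, radius, 41)
    xx, yy = np.meshgrid(g, g)
    pts = np.c_[xx.ravel(), yy.ravel()]
    pts = pts[np.linalg.norm(pts, axis=1) <= radius]

    best_layout, best_worst = None, -np.inf
    for _ in range(trials):
        r = radius * np.sqrt(rng.random(n_pbs))        # uniform over the disk
        a = 2 * np.pi * rng.random(n_pbs)
        pbs = np.c_[r * np.cos(a), r * np.sin(a)]
        worst = incident_power(pts, pbs).min()         # worst network location
        if worst > best_worst:
            best_layout, best_worst = pbs, worst
    return best_layout, best_worst

if __name__ == "__main__":
    layout, worst = place_pbs()
    print("PB positions:\n", np.round(layout, 1), "\nworst-case incident power:", worst)
```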

    A Review of Wireless Sensor Networks with Cognitive Radio Techniques and Applications

    Get PDF
    The advent of Wireless Sensor Networks (WSNs) and their applications has inspired various sciences and telecommunications, and there is a growing demand for robust methodologies that can ensure an extended network lifetime. Sensor nodes are small devices that hold limited electrical energy and must conserve it while relaying data toward the destination of the network. The main concern is to carry out the sensor routing process along with transferring information. Choosing the best route for transmission from a sensor node is necessary to reach the destination and conserve energy. Clustering in the network is considered an effective method for data gathering and routing through the nodes in wireless sensor networks. The primary requirement is to extend network lifetime by minimizing energy consumption. Furthermore, integrating cognitive radio techniques into sensor networks, enabling them to make smart choices based on knowledge acquisition, reasoning, and information sharing, may support the network's overall purposes amid several limitations and optimization targets. This review focuses on routing and clustering using metaheuristic techniques and machine learning, because these operations strongly affect the lifetime of cognitive radio wireless sensor nodes.

    A Vision and Framework for the High Altitude Platform Station (HAPS) Networks of the Future

    Full text link
    A High Altitude Platform Station (HAPS) is a network node that operates in the stratosphere at an altitude of around 20 km and is instrumental in providing communication services. Precipitated by technological innovations in the areas of autonomous avionics, array antennas, solar panel efficiency levels, and battery energy densities, and fueled by flourishing industry ecosystems, the HAPS has emerged as an indispensable component of the next generations of wireless networks. In this article, we provide a vision and framework for the HAPS networks of the future, supported by a comprehensive and state-of-the-art literature review. We highlight the unrealized potential of HAPS systems and elaborate on their unique ability to serve metropolitan areas. The latest advancements and promising technologies in HAPS energy and payload systems are discussed. The integration of the emerging Reconfigurable Smart Surface (RSS) technology in the communications payload of HAPS systems for providing a cost-effective deployment is proposed. A detailed overview of radio resource management in HAPS systems is presented, along with synergistic physical layer techniques, including Faster-Than-Nyquist (FTN) signaling. Numerous aspects of handoff management in HAPS systems are described. The notable contributions of Artificial Intelligence (AI) in HAPS, including machine learning in the design, topology management, handoff, and resource allocation aspects, are emphasized. The extensive overview of the literature we provide is crucial for substantiating our vision, which depicts the expected deployment opportunities and challenges in the next 10 years (next-generation networks), as well as in the subsequent 10 years (next-next-generation networks). Comment: To appear in IEEE Communications Surveys & Tutorials

    A Comprehensive Overview on 5G-and-Beyond Networks with UAVs: From Communications to Sensing and Intelligence

    Full text link
    Due to the advancements in cellular technologies and the dense deployment of cellular infrastructure, integrating unmanned aerial vehicles (UAVs) into fifth-generation (5G) and beyond cellular networks is a promising solution to achieve safe UAV operation as well as to enable diversified applications with mission-specific payload data delivery. In particular, 5G networks need to support three typical usage scenarios, namely enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC). On the one hand, UAVs can be leveraged as cost-effective aerial platforms to provide ground users with enhanced communication services by exploiting their high cruising altitude and controllable maneuverability in three-dimensional (3D) space. On the other hand, providing such communication services simultaneously for both UAV and ground users poses new challenges due to the need for ubiquitous 3D signal coverage as well as the strong air-ground network interference. Besides the requirement for high-performance wireless communications, the ability to support effective and efficient sensing as well as network intelligence is also essential for 5G-and-beyond 3D heterogeneous wireless networks with coexisting aerial and ground users. In this paper, we provide a comprehensive overview of the latest research efforts on integrating UAVs into cellular networks, with an emphasis on how to exploit advanced techniques (e.g., intelligent reflecting surfaces, short packet transmission, energy harvesting, joint communication and radar sensing, and edge intelligence) to meet the diversified service requirements of next-generation wireless systems. Moreover, we highlight important directions for further investigation in future work. Comment: Accepted by IEEE JSAC

    A Survey and Future Directions on Clustering: From WSNs to IoT and Modern Networking Paradigms

    Get PDF
    Many Internet of Things (IoT) networks are created as an overlay over traditional ad-hoc networks such as Zigbee. Moreover, IoT networks can resemble ad-hoc networks over networks that support device-to-device (D2D) communication, e.g., D2D-enabled cellular networks and WiFi-Direct. In these ad-hoc types of IoT networks, efficient topology management is a crucial requirement, particularly in massive-scale deployments. Traditionally, clustering has been recognized as a common approach for topology management in ad-hoc networks, e.g., in Wireless Sensor Networks (WSNs). Topology management in WSNs and ad-hoc IoT networks has many design commonalities, as both need to transfer data to the destination hop by hop. Thus, WSN clustering techniques can presumably be applied for topology management in ad-hoc IoT networks. This requires a comprehensive study of WSN clustering techniques and an investigation of their applicability to ad-hoc IoT networks. In this article, we conduct a survey of this field based on the objectives for clustering, such as reducing energy consumption and load balancing, as well as the network properties relevant for efficient clustering in IoT, such as network heterogeneity and mobility. Beyond that, we investigate the advantages and challenges of clustering when IoT is integrated with modern computing and communication technologies such as Blockchain, Fog/Edge computing, and 5G. This survey provides useful insights into research on IoT clustering, allows a broader understanding of its design challenges for IoT networks, and sheds light on its future applications in modern technologies integrated with IoT.
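
    As one concrete example of the WSN clustering techniques such surveys cover, the sketch below runs a single LEACH-style clustering round: nodes elect themselves cluster heads with a fixed probability and the remaining nodes join the nearest head. The field size, node count, and head probability are illustrative assumptions, and the full LEACH rotation threshold is omitted for brevity.

```python
# Minimal LEACH-style clustering round, shown as an example of WSN clustering;
# field size, node count, and cluster-head probability are assumed values.
import random
import math

def leach_round(nodes, p_ch=0.1, rnd=0):
    """nodes: list of (x, y) positions; returns {head_index: [member_indices]}."""
    random.seed(rnd)
    # Probabilistic self-election of cluster heads (fallback: node 0).
    heads = [i for i in range(len(nodes)) if random.random() < p_ch] or [0]
    clusters = {h: [] for h in heads}
    for i, (x, y) in enumerate(nodes):
        if i in clusters:
            continue
        # Each non-head node joins the nearest cluster head (lowest TX energy).
        h = min(heads, key=lambda j: math.dist((x, y), nodes[j]))
        clusters[h].append(i)
    return clusters

if __name__ == "__main__":
    random.seed(42)
    nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
    print(leach_round(nodes))
```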