
    Data Collection and Information Freshness in Energy Harvesting Networks

    An Internet of Things (IoT) network consists of multiple devices with sensor(s) and one or more access points or gateways. These devices monitor and sample targets, such as valuable assets, before transmitting their samples to an access point or the cloud for storage and/or analysis. A critical issue is that devices have limited energy, which constrains their operational lifetime. To this end, researchers have proposed various solutions to extend the lifetime of devices. A popular solution involves optimizing the duty cycle of devices, i.e., the ratio of their active and inactive/sleep time. Another solution is to employ energy harvesting technologies. Specifically, devices rely on one or more energy sources, such as wind, solar or Radio Frequency (RF) signals, to power their operations. Apart from energy, another fundamental problem is the limited spectrum shared by devices. This means they must take turns transmitting to a gateway; equivalently, they need a transmission schedule that determines when they transmit their samples to a gateway. To this end, this thesis addresses three novel device/sensor selection problems. It first aims to determine the best devices to transmit in each time slot in an RF Energy-Harvesting Wireless Sensor Network (EH-WSN) in order to maximize throughput or sum-rate. Briefly, a Hybrid Access Point (HAP) is responsible for charging devices via downlink RF energy transfer. After that, the HAP selects a subset of devices to transmit their data. A key challenge is that the HAP has neither channel state information nor energy level information of devices. In this respect, this thesis outlines two centralized algorithms that are based on cross-entropy optimization and Gibbs sampling. Next, this thesis considers information freshness when selecting devices, where the HAP aims to minimize the average Age of Information (AoI) of samples from devices. Specifically, the HAP must select devices to sample and transmit frequently, and it must select devices without channel state information. To this end, this thesis outlines a decentralized Q-learning algorithm that allows the HAP to select devices according to their AoI. Lastly, this thesis considers targets with time-varying states. As before, the aim is to determine the best set of devices to be active in each frame in order to monitor targets. However, the objective is to optimize a novel metric called the age of incorrect information. Further, devices cooperate with one another to monitor target(s). To choose the best set of devices and minimize the said metric, this thesis proposes two decentralized algorithms: a decentralized Q-learning algorithm and a novel state-space-free learning algorithm. Different from the decentralized Q-learning algorithm, the state-space-free learning algorithm does not require devices to store Q-tables, which record the expected reward of actions taken by devices.
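    To make the AoI-driven selection concrete, the following is a minimal sketch of decentralized Q-learning in which each device learns whether to transmit in a shared slot based on its own age. The device count, transmit-success probability, stateless Q-tables and negative-age reward are illustrative assumptions, not the thesis's actual algorithm or parameters.

```python
# Hedged sketch: decentralized, AoI-aware Q-learning device selection.
# All parameters below are assumed for illustration only.
import random

N = 5                # number of devices (assumed)
SLOTS = 10_000       # simulation horizon
EPS, ALPHA, GAMMA = 0.1, 0.1, 0.9
P_TX = 0.8           # assumed success probability (CSI is unknown)

aoi = [1] * N                          # age of information per device
Q = [[0.0, 0.0] for _ in range(N)]     # per-device Q-values: 0 = idle, 1 = transmit

for t in range(SLOTS):
    # Each device independently picks an action (epsilon-greedy).
    acts = [random.randrange(2) if random.random() < EPS
            else int(Q[i][1] > Q[i][0]) for i in range(N)]
    # Single shared channel: success only if exactly one device
    # transmits and the (unknown) channel realization is good.
    txers = [i for i, a in enumerate(acts) if a == 1]
    winner = txers[0] if len(txers) == 1 and random.random() < P_TX else None
    for i in range(N):
        aoi[i] = 1 if i == winner else aoi[i] + 1
        r = -aoi[i]                    # reward: negative age (freshness)
        a = acts[i]
        Q[i][a] += ALPHA * (r + GAMMA * max(Q[i]) - Q[i][a])

print("mean AoI after training:", sum(aoi) / N)
```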

    Preallocation-Based Combinatorial Auction for Efficient Fair Channel Assignments in Multi-Connectivity Networks

    We consider a general multi-connectivity framework, intended for ultra-reliable low-latency communications (URLLC) services, and propose a novel, preallocation-based combinatorial auction approach for the efficient allocation of channels. We compare the performance of the proposed method with several other state-of-the-art and alternative channel-allocation algorithms. Performance is evaluated in two proposed contexts: capacity-based and utility-based. In the first case, every unit of additional capacity is regarded as beneficial for any tenant, independent of the already allocated quantity, and the main measure is the total throughput of the system. In the second case, we assume a minimal and maximal required capacity value for each tenant, and consider the implied utility values accordingly. In addition to the total system performance, we also analyze fairness and computational requirements in both contexts. We conclude that, at the cost of higher but still plausible computational time, the fairness-enhanced version of the proposed preallocation-based combinatorial auction algorithm outperforms every other considered method when one considers total system performance and fairness simultaneously, and performs especially well in the utility context. Therefore, the proposed algorithm may be regarded as a candidate scheme for URLLC channel-allocation problems where minimal and maximal capacity requirements have to be considered.
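    The two evaluation contexts can be illustrated with a small sketch. The piecewise utility with per-tenant minimum and maximum capacities, and the greedy marginal-utility baseline below, are assumptions for illustration; they stand in for the "alternative algorithms" compared in the paper, not for its preallocation-based auction itself.

```python
# Hedged sketch of the capacity-based vs. utility-based contexts.
def capacity_utility(alloc: float) -> float:
    # Capacity context: every unit of capacity counts equally.
    return alloc

def tenant_utility(alloc: float, c_min: float, c_max: float) -> float:
    # Utility context: no value below the minimum requirement,
    # linear gain up to the maximum, saturation beyond it.
    if alloc < c_min:
        return 0.0
    return min(alloc, c_max) - c_min

def greedy_assign(channels, tenants):
    # Baseline: give each channel to the tenant with the largest
    # marginal utility (an assumed comparison scheme).
    alloc = {t: 0.0 for t in tenants}
    for cap in channels:
        best = max(tenants,
                   key=lambda t: tenant_utility(alloc[t] + cap, *tenants[t])
                                 - tenant_utility(alloc[t], *tenants[t]))
        alloc[best] += cap
    return alloc

tenants = {"A": (2.0, 6.0), "B": (1.0, 3.0)}   # (c_min, c_max) per tenant
print(greedy_assign([1.0] * 8, tenants))
```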

    A Survey of Scheduling in 5G URLLC and Outlook for Emerging 6G Systems

    Future wireless communication is expected to mark a paradigm shift beyond the three basic service classes of the 5th Generation (5G): enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low-Latency Communication (URLLC) and massive Machine-Type Communication (mMTC). Integrating the three heterogeneous services into a single system is a challenging task, raising several design issues, including scheduling network resources across the various services. In particular, scheduling URLLC packets alongside eMBB and mMTC packets needs more attention, as URLLC is a promising service of 5G and beyond systems: it must meet stringent Quality of Service (QoS) requirements and is used in time-critical applications. Thus, a thorough understanding of packet scheduling issues in existing systems and of potential future challenges is necessary. This paper surveys recent work on packet scheduling algorithms for 5G and beyond systems. It provides a state-of-the-art review covering three main perspectives: decentralised, centralised and joint scheduling techniques. The conventional decentralised algorithms are discussed first, followed by the centralised algorithms, with a specific focus on single- and multi-connected network perspectives. Joint scheduling algorithms are also discussed in detail. In order to provide an in-depth understanding of the key scheduling approaches, the performances of some prominent scheduling algorithms are evaluated and analysed. This paper also provides insight into the potential challenges and future research directions from the scheduling perspective.
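    As a concrete example of the class of centralised schedulers such surveys typically compare, below is a minimal proportional-fair (PF) scheduler sketch; the synthetic per-slot rates and the averaging window are assumptions, and this is not any specific algorithm evaluated in the paper.

```python
# Hedged sketch of a proportional-fair scheduler over synthetic rates.
import random

def pf_schedule(users, slots, window=100):
    avg = {u: 1e-6 for u in users}          # smoothed throughput per user
    for _ in range(slots):
        inst = {u: random.uniform(0.1, 1.0) for u in users}  # instantaneous rate
        # PF metric: instantaneous rate divided by long-term average rate,
        # trading total throughput against fairness.
        chosen = max(users, key=lambda u: inst[u] / avg[u])
        for u in users:
            served = inst[u] if u == chosen else 0.0
            avg[u] += (served - avg[u]) / window
    return avg

print(pf_schedule(["u1", "u2", "u3"], slots=1000))
```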

    Reinforcement Learning Based Resource Allocation for Energy-Harvesting-Aided D2D Communications in IoT Networks

    It is anticipated that mobile data traffic and the demand for higher data rates will increase dramatically as a result of the explosion of wireless devices, such as the Internet of Things (IoT) and machine-to-machine communication. There are numerous location-based peer-to-peer services available today that allow mobile users to communicate directly with one another, which can help offload traffic from congested cellular networks. In cellular networks, Device-to-Device (D2D) communication has been introduced to exploit direct links between devices instead of transmitting through the Base Station (BS). However, it is critical to note that D2D and IoT communications are hindered heavily by the high energy consumption of mobile and IoT devices, because their battery capacity is restricted. Energy-constrained wireless devices may extend their lifespan by drawing upon reusable external sources of energy such as solar, wind, vibration, thermoelectric, and radio frequency (RF) energy in order to overcome the limited-battery problem. Such approaches are commonly referred to as Energy Harvesting (EH). A promising EH approach is Simultaneous Wireless Information and Power Transfer (SWIPT). As the number of wireless users rises, it is imperative that resource allocation techniques be implemented in modern wireless networks. This facilitates cooperation among users for limited resources, such as time and frequency bands. As well as ensuring that there is an adequate supply of energy for reliable and efficient communication, resource allocation also provides a roadmap for each individual user to follow in order to consume the right amount of energy. In D2D networks with time, frequency, and power constraints, significant computing power is generally required to achieve a joint resource management design. Thus, the purpose of this study is to develop a resource allocation scheme that is based on spectrum sharing and enables low-cost computation for EH-assisted D2D and IoT communication. Until now, there has been no study examining resource allocation design for EH-enabled IoT networks with SWIPT-enabled D2D schemes that utilizes both learning techniques and convex optimization. Most existing works use optimization and iterative approaches with a high level of computational complexity, which is not feasible in many IoT applications. In order to overcome these obstacles, a learning-based resource allocation mechanism based on the SWIPT scheme in IoT networks is proposed, where users are able to harvest energy from different sources. The system model consists of multiple IoT users, one BS, and multiple D2D pairs in EH-based IoT networks. As a means of developing an energy-efficient system, we consider the SWIPT scheme with D2D pairs employing the time-switching (TS) method to capture energy from the environment, whereas IoT users employ the power-splitting (PS) method to harvest energy from the BS. A mixed-integer nonlinear programming (MINLP) approach is presented for the solution of the Energy Efficiency (EE) problem by jointly optimizing subchannel allocation, power-splitting factor, power, and time. As part of the optimization approach, the original EE optimization problem is decomposed into three subproblems, namely: (a) subchannel assignment and power-splitting factor, (b) power allocation, and (c) time allocation.
In order to solve the subchannel assignment subproblem, which involves discrete variables, the Q-learning approach is employed. Due to the large size of the overall problem and the continuous nature of certain variables, it is impractical to optimize all variables using the learning technique. Instead, for the continuous-variable problems, namely power and time allocation, the original non-convex problem is first transformed into a convex one, and then the Majorization-Minimization (MM) approach is applied together with the Dinkelbach method. The performance of the proposed joint Q-learning and optimization algorithm has been evaluated in detail. In particular, the solution was compared with a linear EH model, as well as two heuristic algorithms, namely the constrained allocation algorithm and the random allocation algorithm, in order to determine its performance. The results indicate that the technique is superior to conventional approaches. For example, for a distance of d = 10 m, our proposed algorithm improves EE compared to the prematching, constrained allocation, and random allocation methods by about 5.26%, 110.52%, and 143.90%, respectively. Considering the simulation results, the proposed algorithm is superior to other methods in the literature. Spectrum sharing and energy harvesting by D2D and IoT devices achieve impressive EE gains. This superior performance can be seen both in terms of the average and sum EEs, as well as in comparison with other baseline schemes.
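    The Dinkelbach step of such a decomposition can be illustrated in isolation. The sketch below maximizes a fractional energy-efficiency objective EE(p) = R(p)/P(p) for a single link; the Shannon-rate model, circuit power, and grid search are assumptions for illustration, whereas the thesis applies Dinkelbach together with MM to a much richer joint problem.

```python
# Hedged sketch: Dinkelbach iteration for EE(p) = R(p) / P(p).
import math

def rate(p, g=1.0, noise=1.0):
    return math.log2(1.0 + g * p / noise)       # Shannon rate (bits/s/Hz)

def total_power(p, p_circuit=0.5):
    return p + p_circuit                        # transmit + circuit power (assumed)

def dinkelbach(p_max=10.0, tol=1e-6, grid=10_000):
    lam = 0.0                                   # current EE estimate
    while True:
        # Inner problem: maximize R(p) - lam * P(p) (here by grid search;
        # the thesis uses MM to convexify the true inner problem).
        best_p = max((i * p_max / grid for i in range(grid + 1)),
                     key=lambda p: rate(p) - lam * total_power(p))
        f = rate(best_p) - lam * total_power(best_p)
        lam = rate(best_p) / total_power(best_p)
        if abs(f) < tol:                        # converged: F(lam) ~ 0
            return best_p, lam

p_opt, ee = dinkelbach()
print(f"optimal power {p_opt:.3f}, energy efficiency {ee:.3f}")
```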

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning and optimization of emerging 5G networks. The design, dimensioning and optimization of communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, providing service to traffic streams with highly differentiated requirements in terms of bit rate and service time, as well as required quality-of-service and quality-of-experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, which are discussed in this book.

    Data Collection in Two-Tier IoT Networks with Radio Frequency (RF) Energy Harvesting Devices and Tags

    The Internet of Things (IoT) is expected to connect physical objects and end-users using technologies such as wireless sensor networks and radio frequency identification (RFID). In addition, it will employ a wireless multi-hop backhaul to transfer data collected by a myriad of devices to users or applications, such as digital twins operating in a Metaverse. A critical issue is that the number of packets collected and transferred to the Internet is bounded by limited network resources such as bandwidth and energy. In this respect, IoT networks have adopted technologies such as time division multiple access (TDMA), successive interference cancellation (SIC) and multiple-input multiple-output (MIMO) in order to increase network capacity. Another fundamental issue is energy. To this end, researchers have exploited radio frequency (RF) energy-harvesting technologies to prolong the lifetime of energy-constrained sensors and smart devices. Specifically, devices with RF energy-harvesting capabilities can rely on ambient RF sources such as access points, television towers, and base stations. Further, an operator may deploy dedicated power beacons that serve as RF energy sources. Apart from that, in order to reduce energy consumption, devices can adopt ambient backscatter communication technologies. Advantageously, backscattering allows devices to communicate using a negligible amount of energy by modulating ambient RF signals. To address the aforementioned issues, this thesis first considers data collection in a two-tier MIMO ambient RF energy-harvesting network. The first tier consists of routers with MIMO capability and a set of source-destination pairs/flows. The second tier consists of energy-harvesting devices that rely on RF transmissions from routers for their energy supply. The problem is to determine a minimum-length TDMA link schedule that satisfies the traffic demand of source-destination pairs and the energy demand of energy-harvesting devices. The thesis formulates the problem as a linear program (LP) and outlines a heuristic to construct transmission sets that are then used by the said LP. In addition, it outlines a new routing metric that considers the energy demand of energy-harvesting devices to cope with the routing requirements of IoT networks. Simulation results show that the proposed algorithm on average achieves 31.25% shorter schedules compared to competing schemes. In addition, the said routing metric results in link schedules that are at most 24.75% longer than those computed by the LP.
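    The core of such a minimum-length scheduling LP is: given precomputed transmission sets, choose nonnegative activation durations so that every link's traffic demand is met while the total schedule length is minimized. The sketch below shows this structure with toy sets, rates, and demands; the thesis's full LP also includes energy-demand constraints, which would add further rows of the same form.

```python
# Hedged sketch of a minimum-length TDMA scheduling LP over
# precomputed transmission sets (toy data, assumed rates/demands).
import numpy as np
from scipy.optimize import linprog

# rates[s][l] = rate of link l while transmission set s is active
rates = np.array([
    [1.0, 0.0, 1.0],    # set 0: links 0 and 2 active concurrently
    [0.0, 1.0, 0.0],    # set 1: link 1 alone
    [1.0, 1.0, 0.0],    # set 2: links 0 and 1 (assumed interference-free)
])
demand = np.array([4.0, 3.0, 2.0])   # traffic demand per link

# minimize sum_s t_s  subject to  rates^T @ t >= demand, t >= 0
# (written as -rates^T @ t <= -demand for linprog's A_ub form)
res = linprog(c=np.ones(len(rates)),
              A_ub=-rates.T, b_ub=-demand,
              bounds=[(0, None)] * len(rates))
print("schedule length:", res.fun, "set durations:", res.x)
```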

    Millimeter-Wave Communications in 5G and Beyond Networks: New System Models and Performance Analysis

    The dissertation investigates different network models, focusing on three important features for next-generation cellular networks with respect to millimeter-wave (mmWave) communications: the impact of fading and co-channel interference (CCI), energy efficiency, and spectrum efficiency. To address the first aim, the dissertation contains a study of a non-orthogonal multiple access (NOMA) technique in a multi-hop relay network whose relays harvest energy from power beacons (PBs). This part derives exact throughput expressions for NOMA and provides a performance analysis of three different NOMA schemes to determine the optimal parameters for the proposed system's throughput. A self-learning clustering protocol (SLCP), in which a node learns its neighbors' information to determine the node density and the residual energy used for cluster head (CH) selection, is also proposed to improve energy efficiency, thereby prolonging sensor network lifetime and achieving higher throughput. Second, NOMA provides many opportunities for massive connectivity at lower latencies, but it may also cause co-channel interference by reusing frequencies. CCI and fading play a major role in deciding the quality of the received signal. The dissertation takes into account the presence of η-µ fading channels in a network using NOMA. Closed-form expressions for the outage probability (OP) and throughput were derived under both perfect and imperfect successive interference cancellation (SIC). The dissertation also addresses the integration of NOMA into a satellite communications network and evaluates its system performance under the effects of imperfect channel state information (CSI) and CCI. Finally, the dissertation presents a new model for a NOMA-based hybrid satellite-terrestrial relay network (HSTRN) using mmWave communications. The satellite deploys the NOMA scheme, whereas the ground relays are equipped with multiple antennas and employ the amplify-and-forward (AF) protocol. The rain attenuation coefficient is considered as the fading factor of the mmWave band when choosing the best relay, and the widely applied hybrid shadowed-Rician and Nakagami-m channels characterize the transmission environment of the HSTRN. Closed-form formulas for the OP and ergodic capacity (EC) were derived to evaluate the system performance of the proposed model and then verified with Monte Carlo simulations.
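    Monte Carlo verification of this kind estimates the OP by drawing random channel realizations and counting decoding failures. The sketch below does this for a two-user downlink NOMA system with perfect SIC; Rayleigh fading, power coefficients, SNR, and rate targets are assumptions for illustration, in place of the η-µ and shadowed-Rician/Nakagami-m channels analyzed in the dissertation.

```python
# Hedged sketch: Monte Carlo outage probability for two-user downlink
# NOMA over Rayleigh fading with perfect SIC (all parameters assumed).
import random

def noma_outage(snr_db=20.0, a_near=0.2, a_far=0.8,
                r_near=1.0, r_far=0.5, trials=200_000):
    snr = 10 ** (snr_db / 10)
    th_n, th_f = 2 ** r_near - 1, 2 ** r_far - 1   # SINR thresholds
    out_n = out_f = 0
    for _ in range(trials):
        g_n = random.expovariate(1.0)        # |h|^2 near user (unit mean)
        g_f = 0.3 * random.expovariate(1.0)  # |h|^2 far user (weaker, assumed)
        # Far user decodes its own signal, treating the near user's as noise.
        sinr_f = a_far * snr * g_f / (a_near * snr * g_f + 1)
        if sinr_f < th_f:
            out_f += 1
        # Near user first decodes (and cancels) the far user's signal,
        # then decodes its own interference-free signal.
        sic = a_far * snr * g_n / (a_near * snr * g_n + 1)
        if sic < th_f or a_near * snr * g_n < th_n:
            out_n += 1
    return out_n / trials, out_f / trials

print("estimated OP (near, far):", noma_outage())
```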