
    Wireless Information and Energy Transfer for Two-Hop Non-Regenerative MIMO-OFDM Relay Networks

    This paper investigates simultaneous wireless information and energy transfer for a non-regenerative multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) relaying system. Considering two practical receiver architectures, we present two protocols: time switching-based relaying (TSR) and power splitting-based relaying (PSR). To explore the system performance limit, we formulate two optimization problems to maximize the end-to-end achievable information rate under the assumption of full channel state information (CSI). Since both problems are non-convex and have no known solution method, we first derive some explicit results through theoretical analysis and then design effective algorithms for them. Numerical results show that the performance of both protocols is greatly affected by the relay position. Specifically, PSR and TSR behave very differently as the relay position varies: the achievable information rate of PSR decreases monotonically as the relay moves from the source towards the destination, whereas for TSR the performance is relatively worse when the relay is placed midway between the source and the destination. This is the first time such a phenomenon has been observed. In addition, it is shown that PSR always outperforms TSR in such a MIMO-OFDM relaying system. Moreover, the effects of the number of antennas and the number of subcarriers are also discussed. Comment: 16 pages, 12 figures, to appear in IEEE Journal on Selected Areas in Communications
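    The abstract does not spell out the rate expressions, but the contrast between the two protocols can be illustrated with a minimal sketch. The Python snippet below assumes a simplified single-antenna, single-subcarrier amplify-and-forward model with the standard TS/PS energy-harvesting relaying expressions; all parameter values (channel gains, conversion efficiency, noise power) are illustrative assumptions, not values from the paper, and a grid search stands in for the paper's optimization algorithms.

```python
import numpy as np

# Minimal SISO, single-subcarrier sketch of the two SWIPT relaying protocols.
# All parameter values are illustrative assumptions, not taken from the paper.
P_s    = 1.0       # source transmit power
eta    = 0.7       # energy-conversion efficiency at the relay
h1, h2 = 0.8, 0.5  # source->relay and relay->destination channel power gains
sigma2 = 1e-2      # noise power at the relay and the destination

def af_rate(gamma1, gamma2, prelog):
    """End-to-end rate of a non-regenerative (amplify-and-forward) two-hop link."""
    gamma_e2e = gamma1 * gamma2 / (gamma1 + gamma2 + 1.0)
    return prelog * np.log2(1.0 + gamma_e2e)

def psr_rate(rho):
    """Power splitting: a fraction rho of the received power is harvested."""
    P_r = eta * rho * P_s * h1                # harvested power reused for relaying
    gamma1 = (1.0 - rho) * P_s * h1 / sigma2  # SNR of the information branch
    gamma2 = P_r * h2 / sigma2
    return af_rate(gamma1, gamma2, prelog=0.5)

def tsr_rate(alpha):
    """Time switching: a fraction alpha of the block is used for harvesting."""
    E = eta * P_s * h1 * alpha                # energy harvested in the EH phase
    P_r = E / ((1.0 - alpha) / 2.0)           # spent during the relay's transmit phase
    gamma1 = P_s * h1 / sigma2
    gamma2 = P_r * h2 / sigma2
    return af_rate(gamma1, gamma2, prelog=(1.0 - alpha) / 2.0)

grid = np.linspace(0.01, 0.99, 99)
print("best PSR rate:", max(psr_rate(r) for r in grid))
print("best TSR rate:", max(tsr_rate(a) for a in grid))
```

    Rescaling h1 and h2 to mimic different relay positions in this toy model is a quick way to explore the position-dependent behavior the abstract describes, although the paper's conclusions rest on the full MIMO-OFDM formulation.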

    Energy-efficient non-orthogonal multiple access for wireless communication system

    Non-orthogonal multiple access (NOMA) has been recognized as a potential solution for enhancing the throughput of next-generation wireless communications. NOMA is a promising option for 5G networks due to its superior spectrum efficiency (SE) compared to orthogonal multiple access (OMA). From the perspective of green communication, energy efficiency (EE) has become a key performance indicator. A systematic literature review is conducted to investigate the energy-efficient approaches researchers have employed in NOMA. We identified 19 subcategories related to EE in NOMA across 108 publications, 92 of which are from the IEEE website. To aid the reader's comprehension, each category is summarized and elaborated in detail. From the literature review, it has been observed that NOMA can enhance the EE of wireless communication systems. At the end of this survey, future research directions, particularly machine learning algorithms such as reinforcement learning (RL) and deep reinforcement learning (DRL) for NOMA, are also discussed.
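    As a toy illustration of the SE/EE comparison the survey refers to, the sketch below contrasts two-user downlink power-domain NOMA with successive interference cancellation (SIC) against an OMA baseline. The channel gains, power split, and circuit power are assumptions made up for the example, not figures from the survey.

```python
import numpy as np

# Toy two-user downlink comparison of power-domain NOMA (with SIC) and OMA.
# Channel gains, the power split, and the circuit power are illustrative assumptions.
P, B = 1.0, 1.0            # total transmit power and bandwidth (normalized)
g_near, g_far = 10.0, 1.0  # channel-to-noise ratios of the near and far user
a_far = 0.8                # fraction of power allocated to the weaker (far) user

# NOMA: both users share the whole band; the near user removes the far user's
# signal via SIC before decoding, the far user treats the rest as interference.
r_far_noma  = B * np.log2(1 + a_far * P * g_far / ((1 - a_far) * P * g_far + 1))
r_near_noma = B * np.log2(1 + (1 - a_far) * P * g_near)

# OMA baseline: each user gets an orthogonal half of the band with full power.
r_far_oma  = 0.5 * B * np.log2(1 + P * g_far)
r_near_oma = 0.5 * B * np.log2(1 + P * g_near)

sum_noma, sum_oma = r_far_noma + r_near_noma, r_far_oma + r_near_oma
P_circuit = 0.5  # fixed circuit power assumed for the EE ratio
print(f"NOMA sum rate {sum_noma:.2f}, EE {sum_noma / (P + P_circuit):.2f} bits/Joule")
print(f"OMA  sum rate {sum_oma:.2f}, EE {sum_oma / (P + P_circuit):.2f} bits/Joule")
```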

    IEEE Access Special Section Editorial: Cloud and Big Data-Based Next-Generation Cognitive Radio Networks

    In cognitive radio networks (CRN), secondary users (SUs) are required to detect the presence of licensed users, known as primary users (PUs), and to find spectrum holes for opportunistic spectrum access without causing harmful interference to PUs. However, due to complicated data processing, non-real-time information exchange, and limited memory, SUs often suffer from imperfect sensing and unreliable spectrum access. Cloud computing can address this problem by allowing data to be stored and processed in a shared environment. Furthermore, the information from a massive number of SUs allows for more comprehensive information exchange to assist the…
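    The editorial does not prescribe a particular sensing algorithm, but the local detection step it refers to is commonly illustrated with an energy detector. The sketch below is a minimal, hypothetical example: the noise power, sample count, and false-alarm target are assumptions, and the per-SU statistics it produces are the kind of data a cloud back end could aggregate across many SUs.

```python
import numpy as np

# Minimal energy-detection sensing sketch for one secondary user (SU).
# Noise power, sample count, and the false-alarm target are assumptions.
rng = np.random.default_rng(0)
N        = 1024   # samples per sensing slot
noise_p  = 1.0    # known noise power
pu_power = 0.5    # primary-user signal power when the PU is active

def sense(pu_active: bool) -> float:
    """Return the test statistic: average received energy over one slot."""
    y = rng.normal(0.0, np.sqrt(noise_p), N)
    if pu_active:
        y = y + rng.normal(0.0, np.sqrt(pu_power), N)
    return float(np.mean(y ** 2))

# Threshold set for roughly a 5% false-alarm rate using the Gaussian
# approximation of the noise-only statistic (mean noise_p, variance 2*noise_p**2/N).
threshold = noise_p * (1.0 + 1.645 * np.sqrt(2.0 / N))

for active in (False, True):
    stat = sense(active)
    verdict = "occupied" if stat > threshold else "spectrum hole"
    print(f"PU active={active}: statistic={stat:.3f} -> {verdict}")
```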

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures

    Reinforcement Learning Based Resource Allocation for Energy-Harvesting-Aided D2D Communications in IoT Networks

    It is anticipated that mobile data traffic and the demand for higher data rates will increase dramatically as a result of the explosion of wireless devices, such as the Internet of Things (IoT) and machine-to-machine communication. Numerous location-based peer-to-peer services available today allow mobile users to communicate directly with one another, which can help offload traffic from congested cellular networks. In cellular networks, Device-to-Device (D2D) communication has been introduced to exploit direct links between devices instead of transmitting through the Base Station (BS). However, D2D and IoT communications are hindered heavily by the high energy consumption of mobile and IoT devices, because their battery capacity is restricted. Energy-constrained wireless devices may extend their lifespan by drawing on reusable external sources of energy such as solar, wind, vibration, thermoelectric, and radio frequency (RF) energy in order to overcome the limited-battery problem. Such approaches are commonly referred to as Energy Harvesting (EH). A promising approach to energy harvesting is Simultaneous Wireless Information and Power Transfer (SWIPT).

    Because the number of wireless users is on the rise, resource allocation techniques must be implemented in modern wireless networks to facilitate cooperation among users for limited resources, such as time and frequency bands. As well as ensuring an adequate supply of energy for reliable and efficient communication, resource allocation provides a roadmap for each user to consume the right amount of energy. In D2D networks with time, frequency, and power constraints, significant computing power is generally required to achieve a joint resource management design. Thus, the purpose of this study is to develop a resource allocation scheme that is based on spectrum sharing and enables low-cost computation for EH-assisted D2D and IoT communication. Until now, no study has examined resource allocation design for EH-enabled IoT networks with SWIPT-enabled D2D schemes that combines learning techniques with convex optimization; most existing works use optimization and iterative approaches with a high level of computational complexity, which is not feasible in many IoT applications.

    To overcome these obstacles, a learning-based resource allocation mechanism based on the SWIPT scheme in IoT networks is proposed, where users are able to harvest energy from different sources. The system model consists of multiple IoT users, one BS, and multiple D2D pairs in an EH-based IoT network. To develop an energy-efficient system, we consider the SWIPT scheme in which the D2D pairs employ the time switching (TS) method to capture energy from the environment, whereas the IoT users employ the power splitting (PS) method to harvest energy from the BS. A mixed-integer nonlinear programming (MINLP) approach is presented for the solution of the Energy Efficiency (EE) problem by jointly optimizing subchannel allocation, the power-splitting factor, power, and time. As part of the optimization approach, the original EE optimization problem is decomposed into three subproblems, namely: (a) subchannel assignment and power-splitting factor, (b) power allocation, and (c) time allocation.
    To solve the subchannel assignment subproblem, which involves discrete variables, the Q-learning approach is employed. Due to the large size of the overall problem and the continuous nature of certain variables, it is impractical to optimize all variables with the learning technique. Instead, for the continuous-variable problems, namely power and time allocation, the original non-convex problem is first transformed into a convex one, and then the Majorization-Minimization (MM) approach is applied together with the Dinkelbach method. The performance of the proposed joint Q-learning and optimization algorithm has been evaluated in detail. In particular, the solution was compared with a linear EH model, as well as two heuristic algorithms, namely the constrained allocation algorithm and the random allocation algorithm. The results indicate that the technique is superior to conventional approaches. For example, for a distance of d = 10 m, the proposed algorithm improves EE compared to the pre-matching, constrained allocation, and random allocation methods by about 5.26%, 110.52%, and 143.90%, respectively. Considering the simulation results, the proposed algorithm outperforms other methods in the literature: using spectrum sharing and harvesting energy at D2D and IoT devices achieves impressive EE gains, both in terms of the average and sum EE and in comparison with the other baseline schemes.
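    The thesis summary above names the decomposition and the solution tools but not their implementation details. As a rough, hypothetical illustration of the discrete subproblem (a), the sketch below runs tabular Q-learning over a toy subchannel-assignment task with the continuous power and time variables held fixed; the problem sizes, channel gains, and reward are invented for the example, and in the actual scheme the MM and Dinkelbach updates for power and time would alternate with this step.

```python
import numpy as np

# Toy tabular Q-learning for the discrete subchannel-assignment subproblem.
# Sizes, gains, and the reward are illustrative; power and time are held fixed.
rng = np.random.default_rng(1)
n_pairs, n_channels = 3, 4
gain = rng.uniform(0.1, 1.0, size=(n_pairs, n_channels))  # toy channel gains
P_tx, P_circ, sigma2 = 0.5, 0.2, 1e-2

def ee_reward(assignment):
    """Toy energy efficiency: sum rate divided by total consumed power."""
    rate = sum(np.log2(1 + P_tx * gain[p, c] / sigma2)
               for p, c in enumerate(assignment))
    return rate / (n_pairs * (P_tx + P_circ))

# State = index of the D2D pair currently being assigned; action = subchannel.
Q = np.zeros((n_pairs, n_channels))
lr, gamma, eps = 0.1, 0.9, 0.2

for episode in range(2000):
    # Build one full assignment with an epsilon-greedy policy.
    assignment = [int(rng.integers(n_channels)) if rng.random() < eps
                  else int(np.argmax(Q[p])) for p in range(n_pairs)]
    r = ee_reward(assignment)  # EE reward observed once the assignment is complete
    for p, a in enumerate(assignment):
        if p == n_pairs - 1:
            target = r                         # terminal step receives the EE reward
        else:
            target = gamma * np.max(Q[p + 1])  # no intermediate reward
        Q[p, a] += lr * (target - Q[p, a])

best = [int(np.argmax(Q[p])) for p in range(n_pairs)]
print("learned assignment:", best, "EE:", round(ee_reward(best), 3))
```

    Because the toy reward is separable across pairs, the learned policy converges to each pair's best subchannel; the actual thesis problem additionally couples the pairs through interference, power, and time constraints, which is what the joint Q-learning and optimization design addresses.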