
    Relaying in the Internet of Things (IoT): A Survey

    The deployment of relays between Internet of Things (IoT) end devices and gateways can improve link quality. In cellular-based IoT, relays have the potential to reduce base station overload. The energy expended in single-hop long-range communication can be reduced if relays listen to transmissions of end devices and forward these observations to gateways. However, incorporating relays into IoT networks faces some challenges. IoT end devices are designed primarily for uplink communication of small-sized observations toward the network; hence, opportunistically using end devices as relays requires a redesign of the medium access control (MAC) layer protocol of such end devices and possibly the addition of new communication interfaces. Additionally, the wake-up time of IoT end devices needs to be synchronized with that of the relays. For cellular-based IoT, the possibility of using infrastructure relays exists, and noncellular IoT networks can leverage the presence of mobile devices for relaying, for example, in remote healthcare. However, the latter presents the problems of incentivizing relay participation and managing the mobility of relays. Furthermore, although relays can increase the lifetime of IoT networks, deploying relays implies the need for additional batteries to power them, which can erode the energy efficiency gain that relays offer. Therefore, designing relay-assisted IoT networks that provide acceptable trade-offs is key, and this goes beyond adding an extra transmit RF chain to a relay-enabled IoT end device. There has been increasing research interest in IoT relaying, as demonstrated in the available literature. Works that consider these issues are surveyed in this paper to capture the state of the art, provide design insights for network designers, and motivate future research directions.
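The energy trade-off described in this abstract can be illustrated with a back-of-the-envelope model: relaying shortens each hop, so radiated energy drops with the path-loss exponent, but the relay's own electronics add a fixed cost. The path-loss exponent, hardware constant, and relay overhead below are hypothetical numbers chosen for illustration, not values from the survey.

```python
# Toy energy model: radiated energy per bit grows as distance^alpha
# (log-distance path loss). All constants are illustrative assumptions.

def tx_energy(distance_m, alpha=3.0, k=1e-9):
    """Radiated energy per bit to cover distance_m; k is a hypothetical
    hardware constant."""
    return k * distance_m ** alpha

def single_hop(d):
    """Direct end-device -> gateway transmission."""
    return tx_energy(d)

def two_hop(d, relay_overhead=2e-7):
    """Relay at the midpoint: two shorter hops plus the relay's own
    receive/forward electronics cost per bit."""
    return 2 * tx_energy(d / 2) + relay_overhead

d = 1000.0  # metres
print(single_hop(d))                      # direct: 1.0 (toy units)
print(two_hop(d))                         # relayed: ~0.25, relay wins
print(two_hop(d, relay_overhead=1.0))     # heavy relay cost erodes the gain
```

With these toy numbers, halving the hop distance cuts radiated energy by a factor of 2^alpha per hop, but a sufficiently expensive relay (last call) makes relaying worse than the direct link, which is exactly the trade-off the survey highlights.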

    A survey of multi-access edge computing in 5G and beyond : fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has become a pressing concern. Mobile edge computing (MEC), a key technology in the emerging fifth generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large volumes of data before sending them to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. Therefore, MEC enables a wide variety of applications where real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works and discuss challenges and potential future directions for MEC research.
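The core offloading intuition behind MEC can be sketched as a toy latency comparison: a task is shipped to the edge server only if upload time plus edge compute time beats local compute time. All the numbers below (CPU rates, uplink rate, task size) are hypothetical, chosen purely for illustration.

```python
# Toy MEC offloading decision (illustrative numbers, not from the survey).

def local_latency(cycles, cpu_hz):
    """Time to finish the task on the device's own CPU."""
    return cycles / cpu_hz

def edge_latency(cycles, data_bits, uplink_bps, edge_cpu_hz):
    """Transmit the input data, then compute on the faster edge server."""
    return data_bits / uplink_bps + cycles / edge_cpu_hz

cycles = 2e9                 # CPU cycles the task needs
data = 8e6                   # 1 MB of input, in bits

t_local = local_latency(cycles, cpu_hz=1e9)                              # 2.0 s
t_edge = edge_latency(cycles, data, uplink_bps=20e6, edge_cpu_hz=10e9)   # 0.4 + 0.2 s

print("offload" if t_edge < t_local else "run locally")
```

The same comparison flips for tasks with large inputs and little computation, which is why real MEC schemes decide per task rather than offloading everything.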

    Design of Network Coding Schemes and RF Energy Transfer in Wireless Communication Networks

    This thesis focuses on the design of network coding schemes and radio frequency (RF) energy transfer in wireless communication networks. During the past few years, network coding has attracted significant attention because of its capability to transmit the maximum possible information in a network from multiple sources to multiple destinations via a relay. Normally, the destinations are only able to decode the information with sufficient prior knowledge. To enable the destinations to decode the information in cases with little or no prior knowledge, a pattern of nested codes with multiple interpretations using binary convolutional codes is constructed in a multi-source multi-destination wireless relay network. Then, I reconstruct nested codes with convolutional codes and lattice codes in multi-way relay channels to improve the spectrum efficiency. Moreover, to reduce the high decoding complexity caused by the adopted convolutional codes, a network-coded non-binary low-density generator matrix (LDGM) code structure is proposed for a multi-access relay system. Another focus of this thesis is the design of RF-enabled wireless energy transfer (WET) schemes. RF-enabled WET technology has attracted much attention because of its capability to enable wireless devices to harvest energy from wireless signals for their intended applications. I first configure a power beacon (PB)-assisted wireless-powered communication network (PB-WPCN), which consists of a set of hybrid access point (AP)-source pairs and a PB. Both cooperative and non-cooperative scenarios are considered, based on whether or not the PB cooperates with the APs. In addition, I develop a new distributed power control scheme for a power splitting-based interference channel (IFC) with simultaneous wireless information and power transfer (SWIPT), where the considered IFC consists of multiple source-destination pairs.
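The "prior knowledge" requirement mentioned in this abstract is easiest to see in the textbook XOR network-coding example: the relay broadcasts a single XOR of two sources' packets, and each destination recovers the packet it is missing only because it already knows the other one. This is a minimal sketch of that classic scheme, not the nested-code construction from the thesis itself.

```python
# Classic XOR network coding at a relay (illustrative sketch).

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"\x0f\x0f"   # packet from source A
pkt_b = b"\xf0\x01"   # packet from source B

# The relay transmits one coded packet instead of two separate ones:
coded = xor_bytes(pkt_a, pkt_b)

# Destination 1 overheard pkt_a earlier, so it can recover pkt_b:
assert xor_bytes(coded, pkt_a) == pkt_b
# Destination 2 overheard pkt_b, so it recovers pkt_a:
assert xor_bytes(coded, pkt_b) == pkt_a
```

A destination with neither packet cannot invert the XOR, which is precisely the limitation that nested codes with multiple interpretations are designed to relax.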

    Reinforcement and deep reinforcement learning for wireless Internet of Things: A survey

    Nowadays, many research studies and industrial investigations have allowed the integration of the Internet of Things (IoT) into current and future networking applications by deploying a diversity of wireless-enabled devices ranging from smartphones and wearables to sensors, drones, and connected vehicles. The growing number of IoT devices, the increasing complexity of IoT systems, and the large volume of generated data have made the monitoring and management of these networks extremely difficult. Numerous research papers have applied Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) techniques to overcome these difficulties by building IoT systems with effective and dynamic decision-making mechanisms that deal with incomplete information about their environments. The paper first reviews pre-existing surveys covering the application of RL and DRL techniques in IoT communication technologies and networking. The paper then analyzes the research papers that apply these techniques in wireless IoT to resolve issues related to routing, scheduling, resource allocation, dynamic spectrum access, energy, mobility, and caching. Finally, a discussion of the proposed approaches and their limits is followed by the identification of open issues to establish grounds for proposing future research directions.
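A minimal instance of the RL techniques this survey covers is a stateless (bandit-style) Q-learner for dynamic spectrum access: an IoT node learns which channel succeeds more often by trial and error. The channel success probabilities and learning parameters below are invented for illustration; real DRL schemes replace the table with a neural network and add state.

```python
import random

# Toy epsilon-greedy Q-learning for channel selection (illustrative).
random.seed(0)

SUCCESS_PROB = [0.2, 0.8]      # hypothetical: channel 1 is less busy
q = [0.0, 0.0]                 # one Q-value per channel (stateless bandit)
alpha, epsilon = 0.1, 0.1      # learning rate, exploration rate

for step in range(2000):
    if random.random() < epsilon:
        ch = random.randrange(2)            # explore a random channel
    else:
        ch = 0 if q[0] >= q[1] else 1       # exploit the best estimate
    reward = 1.0 if random.random() < SUCCESS_PROB[ch] else 0.0
    q[ch] += alpha * (reward - q[ch])       # incremental Q-value update

print(q)  # q[1] should dominate: the node has learned to prefer channel 1
```

Each Q-value converges toward its channel's success probability, so greedy action selection ends up picking the cleaner channel without the node ever being told the traffic statistics.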

    IRS-aided UAV for Future Wireless Communications: A Survey and Research Opportunities

    Both unmanned aerial vehicles (UAVs) and intelligent reflecting surfaces (IRS) are gaining traction as transformative technologies for upcoming wireless networks. IRS-aided UAV communication, which introduces IRSs into UAV communications, has emerged in an effort to improve system performance while also overcoming UAV communication constraints and issues. The purpose of this paper is to provide a comprehensive overview of IRS-assisted UAV communications. First, we provide five examples of how IRSs and UAVs can be combined to achieve unrivaled potential in difficult situations. We then discuss the technological features of the most recent research on IRS-aided UAV communications from the perspective of the main performance criteria, i.e., energy efficiency, security, and spectral efficiency. Additionally, we review prior studies that adopt enabling techniques such as machine learning algorithms. Lastly, some promising research directions and open challenges for IRS-aided UAV communication are presented.
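The basic reason IRSs help, which underlies the performance criteria discussed above, is passive beamforming: with N reflecting elements, setting each element's phase shift to cancel its cascaded channel phase makes all reflected paths add coherently, so received amplitude grows like N (power like N^2) instead of like sqrt(N). The sketch below uses unit-gain paths with hypothetical random phases; it is a toy, not a result from the survey.

```python
import cmath
import math
import random

# Toy IRS co-phasing illustration (all channel values are assumptions).
random.seed(1)
N = 64

# Hypothetical phases of the cascaded UAV -> IRS-element -> user paths:
channel_phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

# No IRS control: unit-gain paths add with random phases (~ sqrt(N)).
uncontrolled = abs(sum(cmath.exp(1j * p) for p in channel_phases))

# IRS co-phasing: element i applies reflection phase -p_i, so every
# path arrives with zero phase and all N contributions align.
coherent = abs(sum(cmath.exp(1j * p) * cmath.exp(-1j * p)
                   for p in channel_phases))

print(uncontrolled, coherent)  # coherent amplitude equals N exactly
```

The roughly N^2 / N = N-fold power gain from co-phasing is what makes a purely passive surface worthwhile on an energy-constrained UAV link.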

    Power Allocation in Wireless Relay Networks

    This thesis is mainly concerned with power allocation issues in wireless relay networks in which a single relay or multiple relays assist transmission from one or more sources to a destination. First, a network model with a single source and multiple relays is considered, in which both orthogonal and non-orthogonal relaying are investigated. For the case of orthogonal relaying, two power allocation schemes corresponding to two partial channel state information (CSI) assumptions are proposed. Given the lack of full and perfect CSI, appropriate signal processing at the relays and/or destination is also developed. The performance behavior of the system with power allocation between the source and the relays is also analyzed. For the case of non-orthogonal relaying, it is demonstrated that optimal power allocation alone is not sufficiently effective; instead, a relay beamforming scheme is proposed. A comprehensive comparison between orthogonal relaying with power allocation and non-orthogonal relaying with beamforming is then carried out, which reveals several interesting conclusions with respect to both error performance and system throughput. In the second part of the thesis, a network model with multiple sources and a single relay is considered. The transmission model is applicable to uplink channels in cellular mobile systems in which multiple mobile terminals communicate with the base station with the help of a single relay station. The single-carrier frequency division multiple access (SC-FDMA) technique with frequency-domain equalization is adopted in order to avoid amplification of the multiple access interference at the relay. Minimizing the transmit power at the relay and optimizing fairness among the sources in terms of throughput are the two objectives considered in implementing power allocation schemes. The problems are visualized as water-filling and water-discharging models, and two optimal power allocation schemes are proposed accordingly. Finally, the last part of the thesis extends to a network model with multiple sources and multiple relays. The orthogonal multiple access technique is employed in order to avoid multiple access interference. A joint optimal beamforming and power allocation scheme is proposed, in which an alternating optimization technique is applied to deal with the non-convexity of the power allocation problem. Furthermore, recognizing the high complexity and large overhead of information exchange when the number of sources and relays increases, a relay selection scheme is proposed. Since each source is supported by at most one relay, the feedback information from the destination to each relay can be significantly reduced. Even with an equal power allocation scheme, relay selection remains an NP-hard combinatorial optimization problem. Nevertheless, the proposed sub-optimal scheme yields comparable performance with much lower computational complexity and is well suited for practical systems.
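The water-filling model mentioned in this abstract has a standard form: given a total power budget and per-channel noise levels, allocate p_i = max(0, mu - n_i), where the water level mu is chosen so the budget is exactly used. A minimal sketch using bisection on mu (the noise levels and budget are illustrative, not values from the thesis):

```python
# Classic water-filling power allocation via bisection (illustrative).

def water_fill(noise, total_power, iters=100):
    """Return per-channel powers p_i = max(0, mu - n_i) with mu chosen
    by bisection so that sum(p_i) == total_power."""
    lo, hi = min(noise), max(noise) + total_power
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - n) for n in noise)
        if used > total_power:
            hi = mu            # water level too high, budget exceeded
        else:
            lo = mu            # water level too low, budget unused
    return [max(0.0, mu - n) for n in noise]

# Three channels with hypothetical noise levels; 1.0 unit of total power.
alloc = water_fill([0.1, 0.5, 1.0], total_power=1.0)
print(alloc)   # water level mu = 0.8: powers [0.7, 0.3, 0.0]
```

Note that the noisiest channel ends up "under water" and receives no power at all, which is the characteristic behavior that gives the model its name.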