7 research outputs found

    Performance Analysis of 5G Cooperative-NOMA for IoT-Intermittent Communication

    Non-orthogonal multiple access (NOMA) is a promising 5G-era multiple-access scheme proposed for future mobile Internet and IoT applications, which will require an enormous increase in data traffic, massive device connectivity, high spectral efficiency, low overhead, and low latency. NOMA serves all users on the same time slots, frequencies, and spreading codes, separating them in the power domain by assigning different power levels to different users. Uplink (UL) communication in present 4G networks is coordinated by the base station (BS) through a request-grant mechanism that introduces large overhead and latency, and this issue will become more severe in upcoming 5G networks. For this purpose, a grant-free NOMA scheme for UL communication is proposed, in which a dynamic compressed-sensing (DCS) algorithm performs both multi-user detection (MUD) and data detection. It exploits the temporal correlation of the active-user sets (AUS) in adjacent time slots, using the estimated AUS of one slot as prior knowledge for estimating the AUS of the next slot. For downlink (DL) communication, the performance of the proposed system is evaluated over Rician fading channels for cooperative relaying system (CRS) NOMA. Simulation results show that the proposed DCS-MUD and CRS-NOMA over Rician fading channels perform much better than conventional CS-MUD and traditional CRS.
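
    The power-domain principle this abstract builds on can be illustrated with a small sketch: two users are superposed on the same resource with different power levels, and the stronger (higher-power) signal is removed by successive interference cancellation (SIC) before the weaker one is decoded. This is a minimal NumPy sketch of that idea, not the paper's DCS-MUD or CRS-NOMA setup; the power-allocation coefficients, SNR, BPSK modulation, and unit channel gains are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): two users, BPSK symbols,
# more power to the far (weak-channel) user, less to the near user.
P_total = 1.0
alpha_far, alpha_near = 0.8, 0.2      # power-allocation coefficients
snr_db = 20
noise_var = P_total / 10 ** (snr_db / 10)

n_sym = 10_000
x_far = 2 * rng.integers(0, 2, n_sym) - 1
x_near = 2 * rng.integers(0, 2, n_sym) - 1

# Superposition coding: both users share the same time/frequency resource.
tx = np.sqrt(alpha_far * P_total) * x_far + np.sqrt(alpha_near * P_total) * x_near

# Near user's received signal (unit channel gain for simplicity).
rx_near = tx + np.sqrt(noise_var) * rng.standard_normal(n_sym)

# SIC at the near user: decode the far user's (stronger) signal first ...
x_far_hat = np.sign(rx_near)
# ... subtract it, then decode the near user's own signal.
residual = rx_near - np.sqrt(alpha_far * P_total) * x_far_hat
x_near_hat = np.sign(residual)

ber_near = np.mean(x_near_hat != x_near)
print(f"near-user BER after SIC: {ber_near:.4f}")
```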

    PERFORMANCE ANALYSIS IN WIRELESS POWERED D2D- AIDED NON-ORTHOGONAL MULTIPLE ACCESS NETWORKS

    This paper examines how to integrate energy harvesting (EH) into non-orthogonal multiple access (NOMA) networks. Recently, device-to-device (D2D) communication underlaying a licensed network has been introduced as a novel transmission mode that lets two nearby user equipment units (UEs) communicate directly without routing their signals through the nearest base station (BS). Wireless power transfer can further enable such D2D communications: a UE may harvest energy from the RF signal of dedicated power beacons (PBs) to help EH-assisted UEs communicate with each other, or to assist these UEs in communicating with the BS. In particular, we investigate the outage and throughput performance of a scenario of RF-powered D2D communications in which one UE helps two other UEs exchange information with optimal throughput.
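
    The outage/throughput evaluation described here can be illustrated with a rough Monte Carlo sketch of an RF-powered relay link: a UE harvests energy from a power beacon under a linear, time-switching EH model and then forwards over a Rayleigh-faded D2D link. All constants (conversion efficiency, harvesting fraction, target rate) are assumptions for illustration, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions (not from the paper).
eta = 0.7            # energy-conversion efficiency
tau = 0.3            # fraction of each block spent harvesting
P_beacon = 1.0       # power-beacon transmit power (normalized)
noise_var = 1e-2
rate_target = 1.0    # target rate in bit/s/Hz
n_trials = 200_000

# Rayleigh fading: exponentially distributed channel power gains.
g_pb = rng.exponential(1.0, n_trials)   # power beacon -> relaying UE
g_d2d = rng.exponential(1.0, n_trials)  # relaying UE -> destination UE

# Linear EH model: energy harvested over tau is spent over the remaining 1 - tau.
P_tx = eta * tau * P_beacon * g_pb / (1.0 - tau)

snr = P_tx * g_d2d / noise_var
rate = (1.0 - tau) * np.log2(1.0 + snr)

outage = np.mean(rate < rate_target)
throughput = rate_target * (1.0 - outage)
print(f"outage ~ {outage:.3f}, throughput ~ {throughput:.3f} bit/s/Hz")
```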

    Offloading Decisions in a Mobile Edge Computing Node with Time and Energy Constraints

    This article describes a simulated-annealing-based offloading decision with processing-time, energy-consumption, and resource constraints in a mobile edge computing node. Edge computing mostly deals with mobile devices subject to constraints: because of their limited processing capacity and battery availability in particular, these devices have to offload some of their heavy, computation-intensive tasks. We consider a single mobile device with a list of offloadable heavy tasks. The formulated optimization problem takes into account both the dedicated energy capacity and the total execution time. We propose a heuristic solution scheme and evaluate it with a set of simulation experiments. The results obtained in terms of processing time and energy consumption are very encouraging.
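
    As a rough illustration of how a simulated-annealing offloading heuristic of this kind can be structured, the sketch below searches over a binary offload/local decision vector and penalizes energy-budget violations. The task list, budget, penalty weight, and cooling schedule are made-up values, not the article's actual model.

```python
import math
import random

random.seed(0)

# Hypothetical task list: (local_time, local_energy, offload_time, offload_energy).
tasks = [(8, 5, 3, 1), (6, 4, 4, 2), (9, 7, 2, 1), (5, 3, 5, 2), (7, 6, 3, 1)]
ENERGY_BUDGET = 15.0
PENALTY = 100.0  # cost added per unit of energy-budget violation


def cost(decision):
    """Total completion time plus a penalty if the energy budget is exceeded."""
    time = sum(t[2] if off else t[0] for t, off in zip(tasks, decision))
    energy = sum(t[3] if off else t[1] for t, off in zip(tasks, decision))
    return time + PENALTY * max(0.0, energy - ENERGY_BUDGET)


def simulated_annealing(n_iter=5_000, t0=10.0, cooling=0.999):
    current = [random.randint(0, 1) for _ in tasks]  # 1 = offload, 0 = run locally
    best, temp = list(current), t0
    for _ in range(n_iter):
        neighbor = list(current)
        neighbor[random.randrange(len(tasks))] ^= 1  # flip one task's decision
        delta = cost(neighbor) - cost(current)
        # Accept improvements always, worse moves with probability exp(-delta/T).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbor
            if cost(current) < cost(best):
                best = list(current)
        temp *= cooling
    return best, cost(best)


decision, value = simulated_annealing()
print("offload decisions:", decision, "cost:", value)
```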

    Performance Analysis of Mobility Impact on IEEE 802.11ah Standard with Traffic Pattern Scheme

    The Internet of Things (IoT) offers a new dimension of technology and information in which connectivity is available anywhere, anytime, and for any purpose. The IEEE 802.11 Wireless Local Area Network group develops standards that answer the needs of wireless communication technology (Wi-Fi). Recently, the IEEE 802.11 working group released the 802.11ah technology, or Wi-Fi HaLow, as a Wi-Fi standard. This standard works in the sub-1 GHz frequency band and targets broader coverage, massive numbers of devices, and energy efficiency. This research analyzes the influence of the Random Walk, Gauss-Markov, and Random Waypoint mobility models on 802.11ah under different traffic pattern schemes. The simulation system is designed by varying the node density. Based on the results, it can be concluded that the overall network performance in all parameter scenarios decreases as the number of stations increases. In the node density scenario, the Random Waypoint mobility model has the best performance, with an average delay of about 0.65805 s, a throughput of about 0.53811 Mbps, a PDR of about 96.75%, and an energy consumption of about 5.2530 joules.
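
    For readers unfamiliar with the mobility models compared here, the following is a minimal sketch of the Random Waypoint model, the one that performs best in the node-density scenario: a node repeatedly picks a random destination, moves toward it at a random speed, and pauses before choosing the next waypoint. The area size, speed range, pause time, and sampling step are assumed values, not the paper's NS-3 configuration.

```python
import random

random.seed(0)

AREA_X, AREA_Y = 200.0, 200.0     # simulation area in metres (assumed)
SPEED_MIN, SPEED_MAX = 0.5, 2.0   # node speed range in m/s (assumed)
PAUSE_MAX = 5.0                   # maximum pause time in seconds (assumed)
DT = 0.1                          # position-sampling step in seconds


def random_waypoint(duration=60.0):
    """Return a list of (time, x, y) samples for one node."""
    x, y = random.uniform(0, AREA_X), random.uniform(0, AREA_Y)
    trace, t = [(0.0, x, y)], 0.0
    while t < duration:
        dest_x, dest_y = random.uniform(0, AREA_X), random.uniform(0, AREA_Y)
        speed = random.uniform(SPEED_MIN, SPEED_MAX)
        dist = ((dest_x - x) ** 2 + (dest_y - y) ** 2) ** 0.5
        steps = max(1, int(dist / (speed * DT)))
        for i in range(1, steps + 1):       # move toward the waypoint
            t += DT
            trace.append((t, x + (dest_x - x) * i / steps,
                             y + (dest_y - y) * i / steps))
            if t >= duration:
                return trace
        x, y = dest_x, dest_y
        t += random.uniform(0, PAUSE_MAX)   # pause before the next waypoint
    return trace


positions = random_waypoint()
print(f"{len(positions)} sampled positions, last at t = {positions[-1][0]:.1f} s")
```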

    Performance Analysis of User Speed Impact on IEEE 802.11ah Standard affected by Doppler Effect

    The Internet of Things (IoT) offers a new dimension of technology and information in which connectivity is available anywhere, anytime, and for any purpose. The IEEE 802.11 Wireless Local Area Network group develops standards that answer the needs of wireless communication technology (Wi-Fi). Recently, the IEEE 802.11 working group released the 802.11ah technology, or Wi-Fi HaLow, as a Wi-Fi standard. This standard works in the sub-1 GHz frequency band and targets broader coverage, massive numbers of devices, and energy efficiency. This research analyzes the influence of the Doppler effect on 802.11ah at different user speeds using the Random Waypoint mobility model. The simulation system is designed by varying the user speed and MCS. Based on the results, it can be concluded that the overall network performance in all parameter scenarios decreases as the user speed, RAW group, and bandwidth increase. In the user speed scenario, MCS 5 with RAW group = 2 and bandwidth = 2 MHz at v = 10 km/h has the worst performance, with an average delay of about 0.065463 s, a throughput of about 0.328120 Mbps, and a PDR of about 99.8901%.
    Keywords: Restricted Access Window (RAW), IEEE 802.11ah, Random Waypoint, Modulation and Coding Scheme (MCS), Network Simulator 3
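
    To put the user speeds in context, a quick sketch of the maximum Doppler shift f_d = v*f_c/c and the corresponding approximate coherence time at a representative sub-1 GHz carrier; the 900 MHz value is an assumption for illustration, not a figure quoted in the paper.

```python
# Back-of-the-envelope Doppler calculation for the 802.11ah scenario above.
C = 3e8          # speed of light, m/s
F_C = 900e6      # carrier frequency, Hz (assumed representative sub-1 GHz value)

for v_kmh in (1, 5, 10):
    v = v_kmh / 3.6                       # convert km/h to m/s
    f_d = v * F_C / C                     # maximum Doppler shift, Hz
    t_c = 0.423 / f_d                     # approximate channel coherence time, s
    print(f"v = {v_kmh:2d} km/h -> f_d ~ {f_d:5.2f} Hz, "
          f"coherence time ~ {t_c * 1e3:6.1f} ms")
```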

    An Energy-Efficient Scheme for IoT Networks

    With the advent of the Internet of Things era, "things-to-things interconnection" has become a new concept: through the informatization and networking of the physical world, the traditionally separate physical and information worlds are interconnected and integrated. Unlike the Internet, which connects people in the information world, the Internet of Things extends its reach to all aspects of the physical world. The proposed algorithm considers periodic uplink data transmission in an IEEE 802.11ah LWPAN and uses a real-time RAW (Restricted Access Window) setting method. The uplink channel resources are divided into beacon periods over which multiple nodes send data to the access point. First, during the current beacon period the access point predicts the next data-upload time and the total number of devices that will upload data in the next beacon period. Then, the optimal RAW parameters are calculated for minimum energy cost and broadcast to all nodes, after which all devices upload their data according to the RAW schedule. Simulation results show that the proposed algorithm effectively improves the network-state prediction accuracy and dynamically adjusts the configuration parameters, which improves network energy efficiency in the IoT environment.
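
    The predict-then-configure loop sketched in the abstract (estimate the number of uploading devices for the next beacon period, then choose RAW settings that minimize energy) can be illustrated roughly as follows; the moving-average predictor, the devices-per-slot target, and the energy constants are all hypothetical stand-ins, not the paper's model.

```python
# A loose sketch of a per-beacon prediction and RAW-configuration step.
ALPHA = 0.5             # EMA smoothing factor for the device-count predictor (assumed)
DEVICES_PER_SLOT = 4    # target contenders per RAW slot (assumed)
E_AWAKE_PER_SLOT = 1.0  # energy a device spends awake in one slot (arbitrary units)
E_SLEEP_PER_SLOT = 0.1  # energy a device spends asleep in one slot (arbitrary units)


def predict_active(history):
    """Exponential moving average over past per-beacon upload counts."""
    est = history[0]
    for count in history[1:]:
        est = ALPHA * count + (1 - ALPHA) * est
    return est


def configure_raw(predicted_devices):
    """Pick the number of RAW slots so each slot sees a target number of contenders."""
    n_slots = max(1, round(predicted_devices / DEVICES_PER_SLOT))
    # Each device is awake in its own slot and dozes in the other slots.
    energy_per_device = E_AWAKE_PER_SLOT + (n_slots - 1) * E_SLEEP_PER_SLOT
    return n_slots, energy_per_device


history = [12, 15, 14, 18, 16]          # uploads observed in recent beacon periods
predicted = predict_active(history)
slots, energy = configure_raw(predicted)
print(f"predicted devices ~ {predicted:.1f}, RAW slots = {slots}, "
      f"energy/device ~ {energy:.2f} a.u.")
```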

    Smart Relay Selection Scheme Based on Fuzzy Logic with Optimal Power Allocation and Adaptive Data Rate Assignment

    In this paper, a fuzzy logic-based algorithm with an improved relay-selection process is presented that not only allocates optimal transmission power but also helps in choosing an adaptive data rate. The algorithm uses channel gain, cooperative gain, and signal-to-noise ratio, and two cases are considered: in case I the nodes do not have their geographical location information, while in case II they do. Monte Carlo simulations show that both cases improve the selection process along with data-rate assignment and power allocation, but case II is the more reliable, with an almost-zero probability of error at the cost of roughly ten times the computational complexity of case I.
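
    To make the fuzzy relay-selection idea concrete, here is a toy sketch in which two of the paper's inputs (SNR and channel gain) are fuzzified with triangular membership functions and a tiny rule base scores each candidate relay. The membership ranges, rules, and candidate values are illustrative assumptions, not the presented algorithm, which also uses cooperative gain and drives power allocation and data-rate assignment.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def relay_score(snr_db, gain_db):
    # Fuzzify the two inputs into low/high grades (ranges are assumptions).
    snr_low, snr_high = tri(snr_db, -5, 0, 15), tri(snr_db, 5, 20, 35)
    g_low, g_high = tri(gain_db, -30, -20, -5), tri(gain_db, -15, 0, 10)

    # Mamdani-style rules with crisp consequents (poor=0.2, fair=0.5, good=0.9),
    # combined by a weighted average as a simple defuzzification.
    rules = [
        (min(snr_high, g_high), 0.9),   # high SNR and strong channel -> good relay
        (min(snr_high, g_low), 0.5),    # high SNR, weak channel -> fair
        (min(snr_low, g_high), 0.5),    # low SNR, strong channel -> fair
        (min(snr_low, g_low), 0.2),     # both poor -> poor relay
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total > 0 else 0.0


# Hypothetical candidate relays: (SNR in dB, channel gain in dB).
candidates = {"relay_A": (18.0, -3.0), "relay_B": (8.0, -18.0), "relay_C": (25.0, 5.0)}
best = max(candidates, key=lambda r: relay_score(*candidates[r]))
print({r: round(relay_score(*v), 3) for r, v in candidates.items()}, "->", best)
```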