23 research outputs found
Robust Schemes to Enhance Energy Consumption Efficiency for Millimeter Wave-Based Microcellular Network in Congested Urban Environments
Future wireless communication networks will be largely characterized by small-cell deployments, with cell radii typically on the order of 200 meters at most. Meanwhile, recent studies show that base stations (BSs) account for about 80 to 95% of total network power. This implies that future wireless networks will consume more energy, since small cells entail massive BS deployment, which makes energy efficiency (EE) a central consideration in the design of future wireless networks. This paper proposes and investigates the performance of two energy-saving approaches, namely adaptive-sleep sectorization (AS) and adaptive hybrid partitioning (AH), for small cellular networks using smart antenna techniques. We formulate a generic base model for these schemes and apply a spatial Poisson process to reduce system complexity and to improve flexibility in the beam-angle reconfiguration of the adaptive antenna, also known as a smart antenna (SA). The SA uses scalable algorithms to track active users in different segments/sectors of the microcell, making the proposed schemes capable of targeting specific users or groups of users in periods of sparse traffic and of performing optimally when the network is highly congested. The capabilities of the proposed smart/adaptive antenna approaches can be readily adapted and integrated into massive MIMO for future deployment. Rigorous numerical analysis at different orders of sectorization shows that, among the proposed schemes, the AH strategy outperforms AS in terms of energy saving by about 52%. Overall, the proposed schemes significantly increase the power consumption efficiency of micro base stations for future-generation cellular systems compared with traditional design methodologies.
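To make the adaptive-sleep idea concrete, the minimal sketch below draws per-sector user counts from a Poisson distribution (as implied by a homogeneous spatial Poisson process over equal-area sectors) and puts empty sectors to sleep under a linear, EARTH-style micro-BS power model. The power-model parameters are illustrative assumptions, not the paper's measured values, and the sketch does not reproduce the AS/AH beam reconfiguration itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative EARTH-style micro-BS power parameters (assumed, not from the paper)
P0_W, DELTA_P, PTX_W, PSLEEP_W = 56.0, 2.6, 6.3, 39.0

def micro_bs_power(active_users_per_sector):
    """Linear micro-BS power model with per-sector sleep:
    sectors with no active users are switched to sleep mode."""
    active = active_users_per_sector > 0
    p_active = active.sum() * (P0_W + DELTA_P * PTX_W)
    p_sleep = (~active).sum() * PSLEEP_W
    return p_active + p_sleep

# Users arrive as a spatial Poisson process, so per-sector counts are Poisson
n_sectors, mean_users_per_sector = 6, 0.4        # sparse-traffic period
users = rng.poisson(mean_users_per_sector, size=n_sectors)
always_on = n_sectors * (P0_W + DELTA_P * PTX_W)
print(f"adaptive-sleep: {micro_bs_power(users):.1f} W vs always-on: {always_on:.1f} W")
```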
Propagation characterization and analysis for 5G mmWave through field experiments
The 5G network has been intensively investigated, and early deployment is under way, in an effort to match the exponential growth in the number of connected users and their increasing demands for high throughput, bandwidth with Quality of Service (QoS), and low latency. Given that most of the spectrum below 6 GHz is nearly used up, the traditional bands currently in use cannot meet these demands. A promising and highly feasible way to address this spectrum shortage is to acquire new frequency bands for next-generation mobile communications. To this end, the primary effort has focused on the millimeter-wave (mmWave) bands as the most promising candidate spectrum. However, although the mmWave band can fulfill the desired bandwidth requirements, it suffers from several impairments, such as scattering, atmospheric absorption, fading, and especially penetration loss, compared with the existing sub-6 GHz bands. It is therefore essential to characterize the mmWave propagation channel to facilitate practical 5G deployment by network operators. This study investigates the outdoor channel characteristics of the 26, 28, 36, and 38 GHz frequency bands for a building-to-ground-floor link in both Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) environments. The experimental campaign evaluates the Floating-Intercept (FI) and Close-In (CI) path loss models for this environment in LOS and NLOS scenarios. The findings obtained from the field experiments clearly show that the CI model delivers much better performance than the FI model, owing to its simpler setup and greater accuracy.
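As a sketch of the two models compared above, the snippet below evaluates the CI equation, PL = FSPL(f, 1 m) + 10n·log10(d) + X_sigma, and the FI equation, PL = alpha + 10·beta·log10(d) + X_sigma. The path loss exponent n and the FI parameters alpha and beta used here are illustrative placeholders, not the values fitted in the measurement campaign.

```python
import numpy as np

def fspl_1m_db(freq_ghz):
    """Free-space path loss at the 1 m reference distance, in dB."""
    return 32.4 + 20.0 * np.log10(freq_ghz)

def ci_path_loss_db(freq_ghz, d_m, n, sigma_db=0.0, rng=None):
    """Close-In (CI) reference-distance model: FSPL(f, 1 m) + 10*n*log10(d) + X_sigma."""
    shadow = rng.normal(0.0, sigma_db, size=np.shape(d_m)) if rng else 0.0
    return fspl_1m_db(freq_ghz) + 10.0 * n * np.log10(d_m) + shadow

def fi_path_loss_db(d_m, alpha, beta, sigma_db=0.0, rng=None):
    """Floating-Intercept (FI) model: alpha + 10*beta*log10(d) + X_sigma."""
    shadow = rng.normal(0.0, sigma_db, size=np.shape(d_m)) if rng else 0.0
    return alpha + 10.0 * beta * np.log10(d_m) + shadow

d = np.array([5.0, 20.0, 50.0])                  # Tx-Rx separations in metres
print(ci_path_loss_db(28.0, d, n=2.1))           # illustrative PLE for a 28 GHz LOS link
print(fi_path_loss_db(d, alpha=65.0, beta=1.9))  # illustrative FI parameters
```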
MBMQA: A Multicriteria-Aware Routing Approach for the IoT 5G Network Based on D2D Communication
With the rapid development of future wireless networks, device-to-device (D2D) technology is widely used as the communication system in the Internet of Things (IoT) fifth-generation (5G) network. The IoT 5G network based on D2D communication technology enables pervasive intelligent applications. However, to realize this technology reliably, several issues must be addressed. First, a device's energy is constrained by its limited battery, so connectivity suffers from link failures when that energy is exhausted. Similarly, device mobility alters the network topology in an arbitrary manner, which affects the stability of established routes. Meanwhile, traffic congestion arises from backlogged packets in device queues. This paper presents a Mobility, Battery, and Queue length Multipath-Aware (MBMQA) routing scheme for the D2D-based IoT 5G network to cope with these key challenges. A back-pressure strategy is employed to divert packet flows and to inform the estimated value used for device selection. Furthermore, a Multiple-Attributes Route Selection (MARS) metric is applied for optimal route selection with load balancing in the D2D-based IoT 5G network. Overall, the simulation results demonstrate that the proposed MBMQA routing scheme significantly improves network performance and quality of service (QoS) compared with existing routing schemes.
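To make the back-pressure idea above concrete, the following sketch selects a next hop by maximizing the product of the queue-backlog differential and the achievable link rate. It is a single-commodity simplification with hypothetical queue and rate values; the paper's MBMQA scheme additionally folds mobility and battery state into its MARS metric, which is not reproduced here.

```python
def backpressure_next_hop(node, queues, neighbors, link_rate):
    """Pick the neighbor with the largest positive queue differential,
    weighted by the achievable link rate (single-commodity back-pressure)."""
    best, best_weight = None, 0.0
    for nbr in neighbors[node]:
        weight = (queues[node] - queues[nbr]) * link_rate[(node, nbr)]
        if weight > best_weight:
            best, best_weight = nbr, weight
    return best  # None means: hold the packet this slot (no positive backlog gradient)

# toy topology: queue backlogs (packets) and link rates (packets/slot)
queues = {"A": 9, "B": 4, "C": 7, "D": 1}
neighbors = {"A": ["B", "C", "D"]}
link_rate = {("A", "B"): 1.0, ("A", "C"): 2.0, ("A", "D"): 0.5}
print(backpressure_next_hop("A", queues, neighbors, link_rate))  # -> "B"
```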
Optimizing the number of fog nodes for finite fog radio access networks under multi-slope path loss model
Fog Radio Access Network (F-RAN) is a promising technology for addressing bandwidth bottlenecks and network latency by providing cloud-like services to the end nodes (ENs) at the edge of the network. Network latency can be decreased further by minimizing the transmission delay, which can be achieved by optimizing the number of Fog Nodes (FNs). In this context, we propose a stochastic geometry model to optimize the number of FNs in a finite F-RAN by exploiting the multi-slope path loss model (MS-PLM), which more precisely characterizes the dependence of path loss on the propagation environment. The proposed approach shows that the optimum probability of a node being an FN is determined by the real root of a polynomial equation whose degree is set by the far-field path loss exponent (PLE) of the MS-PLM. The results analyze the impact of the path loss parameters and the number of deployed nodes on the optimum number of FNs. They show that the optimum number of FNs is less than 7% of the total number of deployed nodes for all considered scenarios, and that optimizing the number of FNs achieves a significant reduction in average transmission delay over the unoptimized scenarios.
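For reference, the sketch below implements a two-slope special case of the multi-slope path loss model exploited above: a near-field exponent up to a corner distance and a steeper far-field exponent beyond it, with the constant chosen so the two segments meet continuously. The corner distance and exponents are illustrative assumptions, not the paper's fitted parameters, and the FN-fraction optimization itself is not reproduced.

```python
import numpy as np

def dual_slope_path_loss(d, rc=50.0, a0=2.0, a1=4.0, k0=1.0):
    """Dual-slope path loss: PLE a0 up to the corner distance rc, a1 beyond it,
    with the far-field constant chosen so both segments meet continuously at rc."""
    d = np.asarray(d, dtype=float)
    near = k0 * d ** (-a0)
    far = k0 * rc ** (a1 - a0) * d ** (-a1)   # continuity at d = rc
    return np.where(d <= rc, near, far)

# near-field PLE 2.0 up to 50 m, far-field PLE 4.0 beyond
print(dual_slope_path_loss(np.array([10.0, 50.0, 200.0])))
```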
Enhancing the QoS performance for mobile station over LTE and WiMAX networks / Mhd Nour Hindia
Nowadays, one of the most important challenges in heterogeneous networks is maintaining consistent connections between mobile stations and base stations while preserving a high degree of quality of service. Furthermore, during roaming between a mobile station and a base station, system performance degrades significantly due to interference from neighboring base stations, handovers to the wrong base station, inappropriate technology selection, and insufficient scheduling schemes.
In this study, several algorithms (flat-priority-based and specific-priority-based scheduling algorithms) are proposed to guarantee highly efficient resource allocation among a variety of smart grid applications, such as distribution automation, voice, video surveillance, and advanced metering infrastructure. These algorithms can improve mobile stations' quality of service as well as guarantee seamless mobility across the Long-Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX) technologies.
Firstly, to solve the handover issues (predicting the target base station and selecting the best target technology), an enhanced global positioning system (GPS) algorithm and a novel received signal strength (RSS) prediction algorithm are utilized to accurately predict the target base station. Then, a multi-criteria, two-threshold algorithm is proposed to prioritize the selection between LTE and WiMAX as target technologies. In addition, this thesis covers inter-cell and co-channel interference reduction by adapting frequency reuse ratio 3 (FRR3) to work with LTE and WiMAX. Moreover, for flat-priority-based scheduling, this thesis proposes a two-level scheduling scheme composed of cooperative game theory (bankruptcy and Shapley value) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). On the first level, the bankruptcy and Shapley value algorithm fairly distributes resources among smart grid applications. On the second level, the TOPSIS algorithm allocates resources among an application's users based on their criteria and the application's preferences, as sketched below. The proposed algorithms provide an effective scheduling technique for smart grid applications. For specific-priority-based scheduling, bandwidth estimation and allocation together with multi-attribute decision making are proposed.
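As a concrete illustration of the second scheduling level, the sketch below ranks users with a generic TOPSIS procedure: vector-normalize the decision matrix, apply the criterion weights, and score each alternative by its relative closeness to the ideal solution. The criteria, weights, and user values here are hypothetical and do not correspond to the thesis's exact attribute set.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Rank alternatives with TOPSIS: vector-normalize, weight, measure the
    Euclidean distance to the ideal and anti-ideal solutions, and score by
    relative closeness (higher is better)."""
    X = np.asarray(decision_matrix, dtype=float)
    W = np.asarray(weights, dtype=float) / np.sum(weights)
    R = X / np.linalg.norm(X, axis=0)          # vector normalization per criterion
    V = R * W                                  # weighted normalized matrix
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness coefficient per alternative

# hypothetical users competing for resources: [throughput demand, delay, priority]
users = [[2.0, 40.0, 3.0],
         [1.5, 10.0, 5.0],
         [3.0, 80.0, 1.0]]
weights = [0.3, 0.4, 0.3]
benefit = np.array([True, False, True])        # delay is a cost criterion
print(topsis(users, weights, benefit))         # allocate to users with higher scores
```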
The results obtained show that, in terms of handover issues, the modified multi-criteria, two-threshold algorithm increases the efficiency of target network selection. The selection is based on user preferences, since a self-learning algorithm determines triggers and handover thresholds dynamically. In addition, adding the FRR3 technique to the system increases the efficiency of target base station prediction and target technology selection, and decreases delay by approximately 15%. Similarly, for flat-priority-based scheduling, the proposed algorithm maintains higher performance for up to 42 users for video applications and 60 users for metering data applications, and achieves the lowest delay and the highest fairness of approximately 0.98. For specific-priority-based scheduling, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay, and lower packet loss rate for real-time applications, while providing a degree of service for non-real-time applications. In terms of fairness, the proposed algorithm performs 3%, 7%, and 9% better than the exponential rule, modified largest weighted delay first, and exponential/proportional fairness, respectively.
An overview of internet of things (IoT) and data analytics in agriculture: benefits and challenges
The surge in the global population is compelling a shift toward smart agriculture practices. This, coupled with diminishing natural resources, limited availability of arable land, and increasingly unpredictable weather, makes food security a major concern for most countries. As a result, the Internet of Things (IoT) and data analytics (DA) are employed to enhance operational efficiency and productivity in the agriculture sector. There is a paradigm shift from the use of wireless sensor networks (WSNs) as the major driver of smart agriculture to the use of IoT and DA. The IoT integrates several existing technologies, such as WSN, radio frequency identification, cloud computing, middleware systems, and end-user applications. In this paper, several benefits and challenges of IoT are identified. We present the IoT ecosystem and how the combination of IoT and DA is enabling smart agriculture. Furthermore, we outline future trends and opportunities, categorized into technological innovations, application scenarios, business, and marketability.
Investigation of QoS performance evaluation over 5G network for indoor environment at millimeter wave bands
One of the crucial advancements in next-generation 5G wireless networks is the use of high-frequency signals, specifically those in the millimeter-wave (mm-wave) bands. Using mm-wave frequencies allows more bandwidth, resulting in higher user data rates than currently available networks. However, several challenges emerge (such as fading, scattering, and propagation loss) whenever mm-wave bands are used for signal propagation, so optimizing the propagation parameters of mm-wave channels is essential for real-world deployment. With this in mind, this paper examines the potential of high-frequency signals by characterizing the indoor small-cell propagation channel for the 28, 38, 60, and 73 GHz frequency bands, which many researchers consider the leading candidates. The Close-In (CI) propagation model, among the most promising for mm-wave frequencies, is used as the large-scale path loss model. Results are reported in terms of metrics that directly affect the user experience: fairness index, average cell throughput, spectral efficiency, cell-edge user throughput, and average user throughput. The statistical results show that these mm-wave bands deliver sufficiently high overall performance and are suitable for use in next-generation 5G mobile communication networks.
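Two of the user-experience metrics listed above have simple closed forms, sketched below: Jain's fairness index, (sum x)^2 / (n * sum x^2), and cell-edge user throughput, conventionally taken as the 5th percentile of the per-user rates. The per-user throughput values are hypothetical, included only to show how the metrics are computed.

```python
import numpy as np

def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); equals 1 for perfect fairness."""
    x = np.asarray(throughputs, dtype=float)
    return x.sum() ** 2 / (x.size * np.sum(x ** 2))

def cell_edge_throughput(throughputs, percentile=5):
    """Cell-edge user throughput, conventionally the 5th percentile of user rates."""
    return np.percentile(throughputs, percentile)

# hypothetical per-user throughputs (Mbps) from a system-level simulation
rates = np.array([120.0, 95.0, 60.0, 210.0, 35.0, 150.0, 80.0])
print(f"fairness index     : {jain_fairness(rates):.3f}")
print(f"cell-edge (5th pct): {cell_edge_throughput(rates):.1f} Mbps")
print(f"average throughput : {rates.mean():.1f} Mbps")
```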
A Stochastic Geometry Approach to Full-Duplex MIMO Relay Network
Cellular networks are extensively modeled by placing the base stations on a grid, with relays and destinations placed deterministically. Such models are idealized because they neglect interference when evaluating coverage/outage and capacity, so more realistic models are desirable. Specifically, in a cellular downlink environment, the full-duplex (FD) relay and destination are prone to interference from unintended sources and relays. This paper considers a two-hop cellular network in which mobile nodes aid the sources by relaying the signal to dead zones. We model the locations of the sources, relays, and destination nodes as a point process on the plane and analyze the performance of the two hops in the downlink. We then obtain the success probability and the ergodic capacity of the two-hop MIMO relay scheme, accounting for interference from all adjacent cells. We employ stochastic geometry and point process theory to rigorously analyze the two-hop scheme with and without interference cancellation. The resulting expressions are amenable to numerical evaluation and are corroborated by simulation results.
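To illustrate the kind of quantity such an analysis yields, the sketch below estimates the success probability P[SIR > theta] by Monte Carlo for a single interference-limited hop: the serving link distance is fixed, interferers are drawn from a homogeneous PPP on a disc, and every link experiences Rayleigh fading. The densities, distances, and threshold are illustrative, and this single-hop snapshot omits the paper's full-duplex two-hop structure and interference cancellation.

```python
import numpy as np

rng = np.random.default_rng(1)

def success_probability(lam, radius, d0, alpha, theta_db, trials=20000):
    """Monte Carlo estimate of P[SIR > theta] for a receiver at the origin,
    its serving transmitter at distance d0, and interferers drawn from a
    homogeneous PPP of density `lam` on a disc of radius `radius`.
    Rayleigh fading on every link; noise neglected (interference-limited)."""
    theta = 10.0 ** (theta_db / 10.0)
    hits = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius ** 2)          # number of interferers
        r = radius * np.sqrt(rng.uniform(size=n))           # uniform positions on the disc
        interference = np.sum(rng.exponential(size=n) * r ** (-alpha)) if n else 0.0
        signal = rng.exponential() * d0 ** (-alpha)          # Rayleigh-faded serving link
        if interference == 0.0 or signal / interference > theta:
            hits += 1
    return hits / trials

# illustrative density (nodes/m^2), serving distance, PLE, and a 0 dB SIR threshold
print(success_probability(lam=1e-4, radius=500.0, d0=30.0, alpha=4.0, theta_db=0.0))
```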
A Stochastically Geometrical Poisson Point Process Approach for the Future 5G D2D Enabled Cooperative Cellular Network
Owing to recent developments in cellular communication systems, stochastic modeling has become necessary. The cellular communication system exhibits random patterns in various domains, compelling the use of stochastic processes to achieve an optimal output. User behavior with respect to geographical pattern, population density, architecture, data usage, and mobility over various cells is random in nature. Therefore, the stochastic-geometry-based Poisson point process (PPP) technique can be implemented to accurately analyze these random processes in device-to-device (D2D)-based cooperative cellular networks. Stochastic modeling treats transmitters and receivers as elements of stochastic point processes. The hexagonal method is not applicable to heterogeneous network topologies, as it is unsuitable for topologies in which the cell size is not fixed. Therefore, a randomly designed heterogeneous network uses stochastic geometry as a viable solution for predicting probabilistic parameters, including cell interference, load distribution, coverage probability, base station (BS) mapping, and signal-to-interference-plus-noise ratio (SINR). Moreover, in a network architecture based on relay nodes (RNs), cellular and D2D users can be modeled as independent homogeneous Poisson processes. In this paper, the stochastic-geometry-based PPP approach is introduced for modeling the SINR, success probability, ergodic capacity, and outage probability of the D2D-enabled cooperative cellular network. The proposed realistic PPP model uses the positions of BSs, RNs, cellular users (CUs), and D2D users to design an interference-free network. The success probability, ergodic capacity, and outage probability for cellular and D2D users serve as metrics for evaluating the results with respect to various SINR threshold values and node densities. Moreover, the total success probability, ergodic capacity, and outage probability are calculated for various multiple-input-multiple-output (MIMO) antenna configurations to validate the results. The results confirm that the proposed PPP model outperforms the grid model and conventional multi-antenna ultra-dense network (UDN) approaches.
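As a final illustration of the MIMO comparison mentioned above, the sketch below estimates the ergodic capacity E[log2 det(I + (SNR/Nt) HH^H)] of an i.i.d. Rayleigh channel with equal power allocation for a few antenna configurations. It ignores the PPP interference geometry and the D2D/relay structure of the paper; the SNR value and configurations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def mimo_ergodic_capacity(nt, nr, snr_db, samples=5000):
    """Ergodic capacity E[log2 det(I + (SNR/Nt) * H H^H)] of an Nt x Nr
    i.i.d. Rayleigh channel with equal power allocation (bits/s/Hz)."""
    snr = 10.0 ** (snr_db / 10.0)
    cap = 0.0
    for _ in range(samples):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        cap += np.log2(np.linalg.det(M).real)
    return cap / samples

# compare antenna configurations at an illustrative 10 dB SNR
for nt, nr in [(1, 1), (2, 2), (4, 4)]:
    print(f"{nt}x{nr}: {mimo_ergodic_capacity(nt, nr, 10.0):.2f} bit/s/Hz")
```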