Power Saving Techniques in 5G Technology for Multiple-Beam Communications
The evolution of mobile technology and computation systems enables User Equipment (UE) to manage tremendous amounts of data transmission. With current 5G technology, several types of wireless traffic in millimeter-wave (mmWave) bands can be transmitted at high data rates with ultra-reliable, low-latency communication. 5G networks rely on directional beamforming in mmWave bands to overcome propagation and penetration losses. To align the best beam pairs and achieve high data rates, 5G uses beam-search operations; combined with multibeam reception and high-order modulation techniques, these drain the battery of the UE. In the previous 4G mobile radio system, Discontinuous Reception (DRX) techniques were successfully used to save energy. To reduce the energy consumption and latency of multiple-beam 5G radio communications, this paper proposes the DRX Beam Measurement (DRX-BM) technique. Based on an analysis of the power-saving factor and the delayed response, we model DRX-BM as a semi-Markov process to reduce the tracking time. MATLAB simulations are used to assess the effectiveness of the proposed model and to avoid unnecessary time spent on beam search. The simulations indicate that the proposed technique saves 14% of energy with minimal delay.
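The sleep/wake trade-off behind DRX can be illustrated with a minimal two-state sketch. This is an illustrative model, not the paper's actual semi-Markov formulation; the holding times and the beam re-measurement overhead are assumed values.

```python
def drx_power_saving_factor(t_active, t_sleep, wakeup_overhead=0.0):
    """Long-run fraction of time the UE radio can stay off.

    t_active        -- mean holding time in the active state (ms)
    t_sleep         -- mean holding time in the sleep state (ms)
    wakeup_overhead -- assumed extra active time per cycle for beam
                       re-measurement after waking (ms)
    """
    cycle = t_active + wakeup_overhead + t_sleep
    return t_sleep / cycle


def mean_delay(t_sleep):
    """Mean extra delay for a packet arriving uniformly during sleep (ms)."""
    return t_sleep / 2.0


# Longer sleep cycles save more power but add wake-up delay:
for t_sleep in (10.0, 40.0, 160.0):
    psf = drx_power_saving_factor(t_active=4.0, t_sleep=t_sleep,
                                  wakeup_overhead=1.0)
    print(f"sleep={t_sleep:6.1f} ms  saving={psf:.2f}  "
          f"mean delay={mean_delay(t_sleep):.1f} ms")
```

The tension the paper targets is visible even in this toy form: the power-saving factor and the mean delay both grow with the sleep duration, which is why beam measurement has to be scheduled carefully rather than simply sleeping longer.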
Energy Efficiency in Communications and Networks
The topic of "Energy Efficiency in Communications and Networks" attracts growing attention for economic and environmental reasons. The amount of power consumed by information and communication technologies (ICT) is rapidly increasing, as is the energy bill of service providers. According to a number of studies, ICT alone is responsible for between 2% and 10% of world power consumption, driving rising costs and sustainability concerns about the energy footprint of the IT infrastructure. Energy efficiency is an aspect that until recently was considered only for battery-driven devices. Today, energy efficiency is becoming a pervasive issue that will need to be considered in all technology areas, from device technology to systems management. This book seeks to provide a compilation of novel research contributions on hardware design, architectures, protocols, and algorithms that will improve the energy efficiency of communication devices and networks and lead to a more energy-proportional technology infrastructure.
Analysis of Energy Efficiency in IEEE 802.11ah
Recently, machine-to-machine (M2M) communication has evolved considerably and now occupies a large proportion of the wireless market. The distinct features of M2M applications bring new challenges to the design of wireless systems. To increase competitiveness in M2M markets, several enhancements have been proposed across different wireless technologies. This thesis introduces these M2M enhancements with a focus on the Wi-Fi solution, 802.11ah.
802.11ah is a new amendment of Wi-Fi technology for M2M applications. In 802.11ah, a new mechanism named TIM segmentation has been introduced to provide scalable operation for a large number of devices and to reduce energy consumption. The scope of the thesis is to evaluate the energy efficiency of TIM segmentation for uplink traffic, assuming a Poisson arrival process. To explain the principles of this mechanism, the fundamental MAC-layer functions of Wi-Fi technologies are also introduced. In addition, the thesis proposes an energy-saving solution called additional sleeping (AS) cycles.
The performance evaluation is based on a MATLAB system-level simulator. The simulations cover various TIM segmentation deployments for a selected M2M use case, an agriculture scenario. The results show that TIM segmentation can deteriorate uplink transmission performance: with sporadic traffic, restricting uplink access increases packet buffering, and the buffered packets lead to simultaneous transmissions. This can be a serious issue, especially for networks with a large number of devices, and the random backoff procedure in Wi-Fi cannot efficiently resolve the resulting collisions. In addition, the results show that AS cycles can reduce the energy consumed in busy-channel sensing and also decrease the collision probability by adding extra randomness.
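The buffering effect behind this result can be sketched with a toy simulation. This is a drastic simplification of the thesis's MATLAB simulator: the Bernoulli-per-interval approximation of Poisson arrivals, the round-robin segment schedule, and the single shared access window per segment are all assumptions made here for illustration.

```python
import random

def simulate(n_devices, n_groups, rate_per_interval, n_intervals, seed=0):
    """Fraction of access windows that see a collision (>= 2 devices ready).

    Each device's uplink packets arrive as a Poisson-like process; a
    device may only contend in its own TIM segment's window, so packets
    buffer between windows and are then released together.
    """
    rng = random.Random(seed)
    buffered = [0] * n_devices
    collisions = windows = 0
    for t in range(n_intervals):
        group = t % n_groups                  # segments served round-robin
        for d in range(n_devices):
            # Poisson arrivals approximated by one Bernoulli trial per interval
            if rng.random() < rate_per_interval:
                buffered[d] += 1
        ready = [d for d in range(n_devices)
                 if d % n_groups == group and buffered[d] > 0]
        windows += 1
        if len(ready) >= 2:
            collisions += 1                   # simultaneous release -> collision
        for d in ready:
            buffered[d] = 0                   # buffered packets flushed
    return collisions / windows
```

Increasing `n_groups` reduces how many devices share a window but lengthens the buffering interval per device, which is the mechanism the thesis identifies: restricting uplink access turns sporadic traffic into synchronized bursts.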
Dissecting Energy Consumption of NB-IoT Devices Empirically
3GPP has recently introduced NB-IoT, a new mobile communication standard
offering a robust and energy-efficient connectivity option to the rapidly
expanding market of Internet of Things (IoT) devices. To unleash its full
potential, end-devices are expected to work in a plug and play fashion, with
zero or minimal parameters configuration, still exhibiting excellent energy
efficiency. We perform the most comprehensive set of empirical measurements
with commercial IoT devices and different operators to date, quantifying the
impact of several parameters on energy consumption. Our campaign proves that
parameter settings do impact energy consumption, so proper configuration is
necessary. We shed light on this aspect by first illustrating how the nominal
necessary. We shed light on this aspect by first illustrating how the nominal
standard operational modes map into real current consumption patterns of NB-IoT
devices. Further, we investigate which device reported metadata metrics better
reflect performance and implement an algorithm to automatically identify device
state in current time series logs. Then, we provide a measurement-driven
analysis of the energy consumption and network performance of two popular
NB-IoT boards under different parameter configurations and with two major
western European operators. We observed that energy consumption is mostly
affected by the paging interval in Connected state, set by the base station.
However, not all operators correctly implement such settings. Furthermore,
under the default configuration, energy consumption is not strongly affected by
packet size or by signal quality, unless the latter is extremely bad. Our
observations indicate that simple modifications to the default parameter
settings can yield great energy savings.
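The state-identification step described above can be sketched as a simple threshold classifier over a current trace. The thresholds, state names, and sampling assumptions here are hypothetical; the paper's actual identification algorithm is not reproduced.

```python
def label_states(current_ma, tx_thresh=100.0, idle_thresh=5.0):
    """Assign a coarse NB-IoT radio state to each sample of a current trace.

    current_ma -- sequence of instantaneous current samples in mA
    Thresholds are assumed, illustrative values; real boards differ.
    """
    labels = []
    for i in current_ma:
        if i >= tx_thresh:
            labels.append("transmit")      # PA active: highest draw
        elif i >= idle_thresh:
            labels.append("idle/paging")   # radio on, listening for paging
        else:
            labels.append("psm")           # Power Saving Mode floor current
    return labels
```

A real pipeline would additionally smooth the trace and merge short runs, since transients at state boundaries would otherwise be mislabelled; this sketch only shows the core mapping from current level to state.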
System optimisation and radio planning for future LTE-advanced
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. This work relates to wireless communication. In this thesis, three main issues are addressed for future cellular networks: power consumption, interference, and mobility. These issues continue to burden system performance as technology keeps evolving. The focus of the presented chapters is to introduce greater intelligence into the LTE system algorithms and to bring to them a dynamic, self-organizing approach. The first approach concerns power consumption in wireless terminals. The currently applied energy-saving solution is the DRX mechanism, which organizes when the terminal wakes up and starts receiving data, and when it goes into sleep mode to save battery power. The current DRX is static, or fixed, which makes its parameters unsuitable for the nature of bursty traffic. In this work, an adaptive DRX mechanism is proposed and evaluated as the wireless terminal's battery-saving algorithm. The second approach is co-channel interference mitigation. To increase system capacity and avoid spectrum scarcity, small cells such as Femtocells are deployed and operate on the same frequency bands as the Macrocell. Although these small nodes increase system capacity, challenges remain in Femtocell planning and management, in addition to the interference issues. Here a dynamic interference cancellation approach is presented to enable the Femtocell to track the resources allocated to the Macro-users and to avoid using them. The third approach concerns mobility management in heterogeneous networks. The wireless terminal may have different mobility levels during handover, which increases handover failures due to failed handover commands and aging of the reported parameters.
This issue is presented in detail with the aim of avoiding performance degradation and improving the reporting mechanisms at high mobility levels. To this end, the presented method proposes closer cooperation between the serving cell and the end user so that the large amount of overhead and measurement is reduced. Simulations with different configurations are conducted to present the results of the proposed models. The results show that the proposed models bring improvements to the LTE system. The enhanced self-organized architecture in the three presented approaches performs well in terms of power saving, dynamic spectrum utilization by Femtocells, and mitigation of sudden throughput degradation due to the serving cell's downlink signal outage during mobility.
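An adaptive DRX timer of the kind described in the first approach could, for example, track recent packet inter-arrival times with an exponentially weighted moving average and clamp the result to allowed bounds. This is a hypothetical sketch; the thesis's actual adaptation rule and parameter values are not reproduced here.

```python
def adapt_inactivity_timer(inter_arrivals_ms, t_min=10.0, t_max=200.0,
                           alpha=0.5):
    """Adapt the DRX inactivity timer to observed traffic.

    inter_arrivals_ms -- recent packet inter-arrival gaps (ms)
    t_min, t_max      -- assumed bounds on the configurable timer (ms)
    alpha             -- EWMA smoothing weight (assumed value)

    Bursty traffic (small gaps) yields a short timer, so the terminal
    sleeps sooner; sparse traffic pushes the timer toward t_max only
    as far as the clamp allows.
    """
    est = t_min
    for gap in inter_arrivals_ms:
        est = alpha * gap + (1 - alpha) * est   # exponential smoothing
    return max(t_min, min(t_max, est))
```

The design intent matches the abstract's criticism of fixed DRX: a static timer is either too long for bursty traffic (wasting energy awake) or too short for sparse traffic (causing frequent wake-ups), whereas a smoothed estimate follows the traffic pattern.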
Quantifying Potential Energy Efficiency Gain in Green Cellular Wireless Networks
Conventional cellular wireless networks were designed with the purpose of
providing high throughput for the user and high capacity for the service
provider, without any provisions of energy efficiency. As a result, these
networks have an enormous carbon footprint. In this paper, we describe the
sources of the inefficiencies in such networks. First, we present results of
studies estimating the carbon footprint such networks generate. We also discuss
the expected growth in mobile traffic, which will increase this carbon
footprint tremendously. We then discuss specific
sources of inefficiency and potential sources of improvement at the physical
layer as well as at higher layers of the communication protocol hierarchy. In
particular, considering that most of the energy inefficiency in cellular
wireless networks is at the base stations, we discuss multi-tier networks and
point to the potential of exploiting mobility patterns in order to use base
station energy judiciously. We then investigate potential methods to reduce
this inefficiency and quantify their individual contributions. By a
consideration of the combination of all potential gains, we conclude that an
improvement in energy consumption in cellular wireless networks by two orders
of magnitude, or even more, is possible.
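The "combination of all potential gains" argument amounts to multiplying individual improvement factors together. A minimal sketch follows; the per-technique factors shown are hypothetical placeholders, not the paper's figures, and treating the gains as independent is itself an assumption.

```python
def combined_gain(factors):
    """Product of independent energy-efficiency improvement factors.

    If technique A alone cuts energy 4x and technique B alone cuts it
    5x, and the savings do not overlap, together they cut it 20x.
    Correlated techniques would overlap and yield less.
    """
    total = 1.0
    for f in factors:
        total *= f
    return total


# Hypothetical per-technique factors: deeper base-station sleep modes,
# a multi-tier cell layout, and more efficient power amplifiers.
print(combined_gain([4.0, 5.0, 5.0]))   # multiplies to 100.0: two orders of magnitude
```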
Towards efficient support for massive Internet of Things over cellular networks
The usage of Internet of Things (IoT) devices over cellular networks has seen
tremendous growth in recent years, and that growth is only expected to increase in the near
future. While existing 4G and 5G cellular networks offer several desirable features for
this type of application, their design has historically focused on accommodating traditional
mobile devices (e.g. smartphones). As IoT devices have very different characteristics
and use cases, they create a range of problems for current networks, which often
struggle to accommodate them at scale. Although newer cellular network technologies,
such as Narrowband-IoT (NB-IoT), were designed to focus on the IoT characteristics,
they were largely based on 4G and 5G networks to preserve interoperability and
decrease their deployment cost. As such, several inefficiencies of 4G/5G were also
carried over to the newer technologies.
This thesis focuses on identifying the core issues that hinder the large scale deployment
of IoT over cellular networks, and proposes novel protocols to largely alleviate
them. We find that the most significant challenges arise mainly in three distinct areas:
connection establishment, network resource utilisation and device energy efficiency.
Specifically, we make the following contributions. First, we focus on the connection
establishment process and argue that the current procedures, when used by IoT devices,
result in increased numbers of collisions, network outages and a signalling overhead
that is disproportionate to the size of the data transmitted, and the connection duration
of IoT devices. Therefore, we propose two mechanisms to alleviate these inefficiencies.
Our first mechanism, named ASPIS, focuses on both the number of collisions
and the signalling overhead simultaneously, and provides enhancements to increase the
number of successful IoT connections, without disrupting existing background traffic.
Our second mechanism focuses specifically on collisions during the connection establishment
process, and uses a novel Reinforcement Learning approach to decrease
their number and allow a larger number of IoT devices to access the network with fewer
attempts.
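A learning approach to contention of this general flavour can be sketched as a multi-armed bandit choosing among candidate contention-window sizes. This is a toy epsilon-greedy sketch under assumed success probabilities, not the mechanism proposed in the thesis.

```python
import random

def pick_backoff_window(q, eps, rng):
    """Epsilon-greedy choice over candidate contention-window sizes."""
    if rng.random() < eps:
        return rng.randrange(len(q))          # explore a random window
    return max(range(len(q)), key=lambda a: q[a])  # exploit best estimate

def train(success_prob, episodes=2000, eps=0.1, lr=0.1, seed=0):
    """Learn which window size succeeds most often.

    success_prob -- assumed per-window probability that an access
                    attempt avoids collision (stand-in for the real
                    network's response)
    Returns the learned value estimate for each window size.
    """
    rng = random.Random(seed)
    q = [0.0] * len(success_prob)
    for _ in range(episodes):
        a = pick_backoff_window(q, eps, rng)
        reward = 1.0 if rng.random() < success_prob[a] else 0.0
        q[a] += lr * (reward - q[a])          # incremental value update
    return q
```

The value estimates converge toward each window's success rate, so the agent gravitates to the window size that resolves contention with the fewest attempts, which mirrors the goal stated in the abstract.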
Second, we propose a new multicasting mechanism to reduce network resource
utilisation in NB-IoT networks, by delivering common content (e.g. firmware updates)
to multiple similar devices simultaneously. Notably, our mechanism is not only more efficient
during multicast data transmission but also frees up resources that would otherwise
be perpetually reserved for multicast signalling under the existing scheme.
Finally, we focus on energy efficiency and propose novel protocols that are designed
for the unique usage characteristics of NB-IoT devices, in order to reduce the
device power consumption. Towards this end, we perform a detailed energy consumption
analysis, which we use as a basis to develop an energy consumption model for
realistic energy consumption assessment. We then take the insights from our analysis,
and propose optimisations to significantly reduce the energy consumption of IoT
devices, and assess their performance.
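An energy-consumption model of the general form described, with per-state current draws multiplied by the time spent in each state, can be sketched as follows. The state names, current values, and supply voltage below are assumptions for illustration, not the thesis's measured figures.

```python
def energy_mj(profile, currents_ma, voltage_v=3.6):
    """Energy in millijoules for a device activity profile.

    profile     -- sequence of (state, duration_s) pairs
    currents_ma -- per-state mean current draw in mA (assumed values)
    voltage_v   -- supply voltage in volts (assumed)

    E = V * sum(I_state * t_state); units: mA * V * s = mJ.
    """
    return sum(voltage_v * currents_ma[state] * t for state, t in profile)


# Hypothetical NB-IoT duty cycle: one transmission, a paging window,
# then deep sleep (PSM) for the remainder of an hour.
currents = {"tx": 220.0, "paging": 40.0, "psm": 0.003}
profile = [("tx", 2.0), ("paging", 20.0), ("psm", 3578.0)]
print(round(energy_mj(profile, currents), 1))
```

Even with made-up numbers, such a model makes the optimisation target obvious: the brief transmit and paging phases dominate the energy budget despite the device spending almost the entire hour in PSM, so shrinking time in the high-current states matters far more than lowering the sleep floor.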