Analyzing Energy-efficiency and Route-selection of Multi-level Hierarchal Routing Protocols in WSNs
Recent advances in the field of Wireless Sensor Networks (WSNs) have produced
extremely small and low-cost sensors that possess sensing, signal processing
and wireless communication capabilities. These sensors are cheap enough to be
treated as expendable and can detect physical conditions such as temperature
and sound. A good
protocol design should scale well in both energy-heterogeneous and
energy-homogeneous environments, meet the demands of different application
scenarios and guarantee reliability. On this basis, we compare six protocols,
each targeting a different scenario and presenting its own scheme for energy
minimization, clustering and route selection to achieve more effective
communication. This research aims to provide insight into which of the
protocols under consideration suits which application, and it can serve as a
guideline for the design of a more robust and efficient protocol.
MATLAB simulations are performed to analyze and compare the performance of
LEACH, multi-level hierarchal LEACH and multihop LEACH.
Comment: NGWMN with 7th IEEE International Conference on Broadband and
Wireless Computing, Communication and Applications (BWCCA 2012), Victoria,
Canada, 2012
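As background for the comparison, every LEACH variant analyzed here relies on the same randomized cluster-head election rule. The sketch below is a minimal Python illustration of that standard threshold mechanism from the LEACH literature, not code from the paper; the dictionary-based node representation is an assumption for readability.

```python
import random

def leach_threshold(p, r):
    """Standard LEACH election threshold T(n) for round r, where p is
    the desired fraction of nodes serving as cluster heads per round."""
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(nodes, p, r):
    """Each still-eligible node promotes itself to cluster head when a
    uniform draw falls below T(n); a head then becomes ineligible until
    the current epoch of 1/p rounds ends (the reset is not shown)."""
    threshold = leach_threshold(p, r)
    heads = [n for n in nodes if n["eligible"] and random.random() < threshold]
    for head in heads:
        head["eligible"] = False
    return heads

# Example: 100 nodes, 5% desired cluster heads, first round.
nodes = [{"id": i, "eligible": True} for i in range(100)]
print(len(elect_cluster_heads(nodes, p=0.05, r=0)))  # ~5 on average
```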
Green Cellular Networks: A Survey, Some Research Issues and Challenges
Energy efficiency in cellular networks is a growing concern for cellular
operators to not only maintain profitability, but also to reduce the overall
environmental effects. This emerging trend of achieving energy efficiency in
cellular networks is motivating the standardization authorities and network
operators to continuously explore future technologies in order to bring
improvements in the entire network infrastructure. In this article, we present
a brief survey of methods to improve the power efficiency of cellular networks,
explore some research issues and challenges and suggest some techniques to
enable an energy efficient or "green" cellular network. Since base stations
consume the largest portion of the total energy used in a cellular system, we
will first provide a comprehensive survey on techniques to obtain energy
savings in base stations. Next, we discuss how heterogeneous network deployment
based on micro, pico and femto-cells can be used to achieve this goal. Since
cognitive radio and cooperative relaying are undisputed future technologies in
this regard, we propose a research vision to make these technologies more
energy efficient. Lastly, we explore some broader perspectives in realizing a
"green" cellular network technologyComment: 16 pages, 5 figures, 2 table
Energy-delay bounds analysis in wireless multi-hop networks with unreliable radio links
Energy efficiency and transmission delay are very important parameters for
wireless multi-hop networks. Previous works that study energy efficiency and
delay are based on the assumption of reliable links. However, the unreliability
of the channel is inevitable in wireless multi-hop networks. This paper
investigates the trade-off between the energy consumption and the end-to-end
delay of multi-hop communications in a wireless network using an unreliable
link model. It provides a closed form expression of the lower bound on the
energy-delay trade-off for different channel models (AWGN, Rayleigh flat fading
and Nakagami block-fading) in a linear network. These analytical results are
also verified in 2-dimensional Poisson networks using simulations. The main
contribution of this work is the use of a probabilistic link model to define
the energy efficiency of the system and capture the energy-delay trade-offs.
Hence, it provides a more realistic lower bound on both the energy efficiency
and the energy-delay trade-off since it does not restrict the study to the set
of perfect links as proposed in earlier works.
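To see how a probabilistic link model reshapes the trade-off, consider a toy version of the analysis: under Rayleigh block fading each hop succeeds with a probability set by the mean SNR, and retransmissions multiply both energy and delay. This sketch is my own illustration under assumed parameters, not the paper's closed-form bound.

```python
import math

def hop_success_prob(tx_power, dist, theta=1.0, noise=1e-9, alpha=3.0):
    """Success probability of one hop under Rayleigh block fading: the
    received SNR exceeds the decoding threshold theta when the
    exponentially distributed channel gain is large enough."""
    mean_snr = tx_power * dist ** (-alpha) / noise
    return math.exp(-theta / mean_snr)

def energy_delay(tx_power, hop_dist, n_hops, slot_time=1.0):
    """Expected total energy and end-to-end delay of a linear route with
    per-hop retransmission until success (geometric number of retries)."""
    p = hop_success_prob(tx_power, hop_dist)
    expected_tx = 1.0 / p  # mean of a geometric distribution
    energy = n_hops * expected_tx * tx_power * slot_time
    delay = n_hops * expected_tx * slot_time
    return energy, delay

# Sweeping transmit power exposes the trade-off: less power makes each
# transmission cheaper but forces more retries, hence longer delay.
for p_tx in (0.001, 0.01, 0.1):
    e, d = energy_delay(p_tx, hop_dist=50.0, n_hops=5)
    print(f"P={p_tx:5.3f} W  energy={e:8.4f} J  delay={d:6.2f} slots")
```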
EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design
The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuit and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) against communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
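The first thrust, balancing signal-dependent processing against communication, reduces at its simplest to a per-packet energy comparison: compress only when the CPU cost plus the smaller radio bill beats sending raw samples. The per-bit costs below are illustrative assumptions, not measurements from the paper.

```python
def should_compress(n_bits, ratio, e_tx_per_bit, e_cpu_per_bit):
    """Return True if compressing before transmission saves energy.
    ratio: compressed size / raw size (e.g. 0.4 means 60% smaller)."""
    e_raw = n_bits * e_tx_per_bit
    e_compressed = n_bits * e_cpu_per_bit + n_bits * ratio * e_tx_per_bit
    return e_compressed < e_raw

# Illustrative: radio costs 200 nJ/bit, compression costs 5 nJ/bit,
# and the codec shrinks the payload to 40% of its raw size.
print(should_compress(n_bits=8_000, ratio=0.4,
                      e_tx_per_bit=200e-9, e_cpu_per_bit=5e-9))  # True
```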
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to make decisions pertaining
to the proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
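To give a flavor of the network-data analysis the survey classifies, the sketch below trains a small classifier to flag lightpaths that miss a quality-of-transmission target from configuration features. The feature set, the labeling rule and the synthetic data are all invented for illustration; the survey itself prescribes no specific model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic lightpath records: [path length (km), symbol rate (GBd),
# modulation order (bits/symbol)]. Labels mark whether the path meets
# a BER target; the rule below is a toy stand-in for real
# quality-of-transmission measurements.
X = np.column_stack([
    rng.uniform(50, 3000, 2000),   # path length
    rng.choice([32, 64], 2000),    # symbol rate
    rng.choice([2, 4, 6], 2000),   # modulation order
])
y = (X[:, 0] * X[:, 2] < 6000).astype(int)  # toy feasibility rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```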