EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design
The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuit and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation, in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) and communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
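The processing/communication balance in thrust 1 can be made concrete with a minimal sketch: choose the compression level whose total energy (CPU plus radio) is lowest among those meeting the application's distortion cap. All names and numbers below are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: trading signal distortion for energy by picking a compression level.
# Energy figures and options are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class CompressionOption:
    ratio: float            # output bits / input bits
    distortion: float       # e.g., normalized MSE introduced by compression
    cpu_energy_uj: float    # processing energy per original bit (uJ)

TX_ENERGY_UJ_PER_BIT = 0.2  # assumed radio cost per transmitted bit

def energy_per_bit(opt: CompressionOption) -> float:
    """Total energy per original bit: processing plus transmitting the compressed bits."""
    return opt.cpu_energy_uj + opt.ratio * TX_ENERGY_UJ_PER_BIT

def pick_option(options, max_distortion):
    """Most energy-efficient option whose distortion the application tolerates."""
    feasible = [o for o in options if o.distortion <= max_distortion]
    return min(feasible, key=energy_per_bit) if feasible else None

options = [
    CompressionOption(ratio=1.0, distortion=0.00, cpu_energy_uj=0.00),  # send raw
    CompressionOption(ratio=0.5, distortion=0.02, cpu_energy_uj=0.03),
    CompressionOption(ratio=0.2, distortion=0.10, cpu_energy_uj=0.06),
]

best = pick_option(options, max_distortion=0.05)
print(best)  # -> the 0.5-ratio option: halving radio cost for 2% distortion
```

Tightening or relaxing `max_distortion` at runtime is one way such a framework could let the application steer the lifetime/distortion trade-off dynamically.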
Enabling Micro-level Demand-Side Grid Flexibility in Resource Constrained Environments
The increased penetration of uncertain and variable renewable energy presents various resource and operational electric grid challenges. Micro-level (household and small commercial) demand-side grid flexibility could be a cost-effective strategy to integrate high penetrations of wind and solar energy, but literature and field deployments exploring the necessary information and communication technologies (ICTs) are scant. This paper presents an exploratory framework for enabling information-driven grid flexibility through the Internet of Things (IoT), and a proof-of-concept wireless sensor gateway (FlexBox) to collect the parameters necessary for adequately monitoring and actuating the micro-level demand side. In the summer of 2015, thirty sensor gateways were deployed in the city of Managua (Nicaragua) to develop a baseline for a near-future small-scale demand response pilot implementation. FlexBox field data has begun shedding light on relationships between ambient temperature and load energy consumption, on load and building-envelope energy efficiency challenges, on communication network latency challenges, and on opportunities to engage existing demand-side user behavioral patterns. Information-driven grid flexibility strategies present a great opportunity to develop new technologies, system architectures, and implementation approaches that can easily scale across regions, incomes, and levels of development.
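To make the gateway's role concrete, here is a minimal sketch of the kind of timestamped measurement record a demand-side gateway such as FlexBox might report upstream. The field names, units, and transport are assumptions for illustration, not the deployed schema.

```python
# Hypothetical sketch of a demand-side gateway measurement report.
import json
import time

def read_sensors():
    # Placeholders standing in for real driver calls on the gateway hardware.
    return {"ambient_temp_c": 31.5, "load_power_w": 842.0, "relay_on": True}

def build_report(gateway_id: str) -> str:
    """Bundle one timestamped sample for upstream transmission."""
    record = {"gateway": gateway_id, "ts": int(time.time()), **read_sensors()}
    return json.dumps(record)

print(build_report("flexbox-017"))
```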
Async-HFL: Efficient and Robust Asynchronous Federated Learning in Hierarchical IoT Networks
Federated Learning (FL) has gained increasing interest in recent years as a distributed on-device learning paradigm. However, multiple challenges remain to be addressed for deploying FL in real-world Internet-of-Things (IoT) networks with hierarchies. Although existing works have proposed various approaches to account for data heterogeneity, system heterogeneity, unexpected stragglers and scalability, none of them provides a systematic solution addressing all of these challenges in a hierarchical and unreliable IoT network. In this paper, we propose an asynchronous and hierarchical framework (Async-HFL) for performing FL in a common three-tier IoT network architecture. In response to the widely varying delays, Async-HFL employs asynchronous aggregations at both the gateway and the cloud levels, thus avoiding long waiting times. To fully unleash the potential of Async-HFL in convergence speed under system heterogeneities and stragglers, we design device selection at the gateway level and device-gateway association at the cloud level. Device selection chooses edge devices to trigger local training in real time, while device-gateway association determines the network topology periodically after several cloud epochs, both subject to bandwidth limitations. We evaluate Async-HFL's convergence speedup using large-scale simulations based on ns-3 and a network topology from NYCMesh. Our results show that Async-HFL converges 1.08-1.31x faster in wall-clock time and saves up to 21.6% in total communication cost compared to state-of-the-art asynchronous FL algorithms (with client selection). We further validate Async-HFL on a physical deployment and observe robust convergence under unexpected stragglers.
Comment: Accepted by IoTDI'2
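The gateway-tier idea can be sketched as follows: merge each device update the moment it arrives, down-weighting stale updates instead of blocking on a synchronous round. The staleness-decay rule below is a generic choice used in asynchronous FL literature, not necessarily the paper's exact formulation.

```python
# Sketch of asynchronous, staleness-weighted aggregation at one gateway.
import numpy as np

class GatewayAggregator:
    def __init__(self, dim: int, base_lr: float = 0.5):
        self.model = np.zeros(dim)   # current gateway model
        self.version = 0             # incremented on every merge
        self.base_lr = base_lr

    def dispatch(self):
        """Hand the current model (and its version) to a selected device."""
        return self.model.copy(), self.version

    def receive(self, device_model: np.ndarray, model_version: int) -> None:
        """Merge a device's update immediately; staler updates get less weight."""
        staleness = self.version - model_version
        alpha = self.base_lr / (1 + staleness)          # polynomial staleness decay
        self.model = (1 - alpha) * self.model + alpha * device_model
        self.version += 1

agg = GatewayAggregator(dim=4)
w, v = agg.dispatch()
# ... device trains locally on w and returns updated weights ...
agg.receive(w + 0.1, model_version=v)   # fresh update, full weight
agg.receive(w - 0.2, model_version=0)   # stale update, damped
```

The same pattern would repeat one tier up, with the cloud asynchronously merging gateway models.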
Management And Security Of Multi-Cloud Applications
Single-cloud management platform technology has reached maturity and is quite successful in information technology applications. Enterprises and application service providers are increasingly adopting a multi-cloud strategy to reduce the risk of cloud service provider lock-in and cloud blackouts and, at the same time, get benefits like competitive pricing, flexibility of resource provisioning and better points of presence. Another class of applications that cloud service providers are increasingly interested in is carriers' virtualized network services. However, virtualized carrier services require high levels of availability and performance and impose stringent requirements on cloud services. They necessitate the use of multi-cloud management and innovative techniques for placement and performance management. We consider two classes of distributed applications – virtual network services and the next generation of healthcare – that would benefit immensely from deployment over multiple clouds. This thesis deals with the design and development of new processes and algorithms to enable these classes of applications. We have evolved a method for optimization of multi-cloud platforms that paves the way for obtaining optimized placement for both classes of services. The approach that we have followed for placement itself is predictive, cost-optimized, latency-controlled virtual resource placement for both types of applications. To improve the availability of virtual network services, we have made innovative use of machine and deep learning to develop a framework for fault detection and localization. Finally, to secure patient data flowing through the wide expanse of sensors, cloud hierarchy, virtualized network, and visualization domain, we have evolved hierarchical autoencoder models for data in motion between the IoT domain and the multi-cloud domain and within the multi-cloud hierarchy.
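The autoencoder idea for securing data in motion can be illustrated with a minimal sketch: train on normal traffic, then flag records whose reconstruction error exceeds a threshold. The network sizes, threshold rule, and synthetic data below are assumptions, not the thesis's hierarchical models.

```python
# Sketch: autoencoder-based anomaly detection via reconstruction error.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 4), nn.ReLU())
        self.dec = nn.Linear(4, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def fit(model, normal_data, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(normal_data), normal_data)
        loss.backward()
        opt.step()

def is_anomalous(model, x, threshold):
    return nn.functional.mse_loss(model(x), x).item() > threshold

torch.manual_seed(0)
normal = torch.randn(256, 16) * 0.1            # stand-in for benign telemetry
model = AE()
fit(model, normal)
errors = ((model(normal) - normal) ** 2).mean(dim=1)
threshold = errors.max().item() * 1.5           # simple margin over training error
print(is_anomalous(model, torch.randn(1, 16) * 3.0, threshold))  # likely True
```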
Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions
The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as eMBB, mMTC and URLLC, mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in mMTC scenarios. Finally, we discuss some open research challenges and promising future research directions.
Comment: 37 pages, 8 figures, 7 tables, submitted for possible future publication in IEEE Communications Surveys and Tutorials
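In the spirit of the low-complexity Q-learning approach the survey highlights, a common formulation has each MTC device independently learn which RA slot to use, so devices spread out and collisions drop. The reward scheme and parameters below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: stateless (bandit-style) Q-learning for random-access slot selection.
import random

N_DEVICES, N_SLOTS, EPS, LR = 8, 10, 0.1, 0.1
Q = [[0.0] * N_SLOTS for _ in range(N_DEVICES)]   # one Q-row per device

def choose_slot(q_row):
    """Epsilon-greedy action selection over RA slots."""
    if random.random() < EPS:
        return random.randrange(N_SLOTS)
    return max(range(N_SLOTS), key=q_row.__getitem__)

for frame in range(2000):
    picks = [choose_slot(Q[d]) for d in range(N_DEVICES)]
    for d, s in enumerate(picks):
        # Reward +1 if the device was alone in its slot, -1 on collision.
        reward = 1.0 if picks.count(s) == 1 else -1.0
        Q[d][s] += LR * (reward - Q[d][s])

collisions = sum(1 for s in picks if picks.count(s) > 1)
print(f"devices colliding in final frame: {collisions} of {N_DEVICES}")
```

Because a device that keeps succeeding in a slot reinforces that choice while collisions are penalized, the population tends to settle into near-orthogonal slot assignments without any central coordination, which is what makes the approach attractive for massive MTC.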