Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability
Internet-of-Things (IoT) envisions an intelligent infrastructure of networked
smart devices offering task-specific monitoring and control services. The
unique features of IoT include extreme heterogeneity, a massive number of
devices, and unpredictable dynamics due in part to human interaction. These
features call for foundational innovations in network design and management.
Ideally, such a design should allow efficient adaptation to changing environments
and low-cost implementation that scales to a massive number of devices, subject to stringent
latency constraints. To this end, the overarching goal of this paper is to
outline a unified framework for online learning and management policies in IoT
through joint advances in communication, networking, learning, and
optimization. From the network architecture vantage point, the unified
framework leverages a promising fog architecture that enables smart devices to
have proximity access to cloud functionalities at the network edge, along the
cloud-to-things continuum. From the algorithmic perspective, key innovations
target online approaches adaptive to different degrees of nonstationarity in
IoT dynamics, and their scalable model-free implementation under limited
feedback that motivates blind or bandit approaches. The proposed framework
aspires to offer a stepping stone that leads to systematic designs and analysis
of task-specific learning and management schemes for IoT, along with a host of
new research directions to build on.
Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Networks.
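Since the abstract points to bandit approaches for management under limited feedback, a minimal sketch may help fix ideas: the UCB1 selector below chooses among candidate fog nodes using only the per-decision rewards it observes. The node names, latency model, and reward definition are illustrative assumptions, not details from the paper.

```python
import math
import random

class UCB1NodeSelector:
    """UCB1 bandit over candidate service nodes (an illustrative sketch,
    not the paper's algorithm)."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.counts = {n: 0 for n in nodes}    # times each node was chosen
        self.means = {n: 0.0 for n in nodes}   # running mean reward per node
        self.t = 0

    def select(self):
        self.t += 1
        for n in self.nodes:                   # try every node once first
            if self.counts[n] == 0:
                return n
        # UCB1 rule: empirical mean plus an exploration bonus.
        return max(self.nodes,
                   key=lambda n: self.means[n]
                   + math.sqrt(2 * math.log(self.t) / self.counts[n]))

    def update(self, node, reward):
        self.counts[node] += 1
        self.means[node] += (reward - self.means[node]) / self.counts[node]

# Toy loop: reward is a hypothetical inverse of observed latency (lower is better).
selector = UCB1NodeSelector(["fog-a", "fog-b", "cloud"])
for _ in range(1000):
    node = selector.select()
    latency_ms = random.gauss({"fog-a": 20, "fog-b": 35, "cloud": 80}[node], 5.0)
    selector.update(node, reward=1.0 / max(latency_ms, 1.0))
```

The appeal of such bandit selectors in the IoT setting described here is that they need no model of the environment, only the scalar feedback each decision produces.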
Keep Your Nice Friends Close, but Your Rich Friends Closer -- Computation Offloading Using NFC
The increasing complexity of smartphone applications and services demands ever
more battery power, but the growth of smartphones' battery capacity is not
keeping pace with these increasing power demands. To overcome this problem,
researchers established the Mobile Cloud Computing (MCC) research area. In
this paper, we advance on previous ideas by proposing and implementing the
first known Near Field Communication (NFC)-based computation offloading
framework. This research is motivated by the advantages of NFC's short-distance
communication: better security and low battery consumption. We
design a new NFC communication protocol that overcomes the limitations of the
default protocol: it removes the need for constant user interaction, the one-way
communication constraint, and the small limit on data transfer size. We present
experimental results on the energy consumption and execution time of two
representative computationally intensive applications: (i) RSA key generation
and encryption, and (ii) gaming/puzzles. We show that when the helper device is
more powerful than the device offloading the computations, the execution time
of the tasks is reduced. Finally, we show that devices that offload application
parts considerably reduce their energy consumption due to the low-power NFC
interface and the benefits of offloading.
Comment: 9 pages, 4 tables, 13 figures.
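To make the offloading trade-off concrete, here is a minimal back-of-the-envelope energy model in the spirit of the paper's finding; the device constants and the linear model are hypothetical, not the authors' measurements (NFC's nominal peak rate of 424 kbit/s is the one real constant used).

```python
def should_offload(cycles, local_speed_hz, helper_speed_hz,
                   data_bytes, nfc_rate_bps, p_compute_w, p_nfc_w, p_idle_w):
    """Decide whether offloading over NFC saves energy (illustrative model;
    all parameters are hypothetical device constants)."""
    # Executing locally: time and energy spent at the local CPU.
    t_local = cycles / local_speed_hz
    e_local = p_compute_w * t_local

    # Offloading: ship the input over NFC, then wait near-idle while the helper computes.
    t_tx = (8 * data_bytes) / nfc_rate_bps
    t_remote = cycles / helper_speed_hz
    e_offload = p_nfc_w * t_tx + p_idle_w * t_remote

    return e_offload < e_local, e_local, e_offload

offload, e_local, e_offload = should_offload(
    cycles=5e9, local_speed_hz=1.2e9, helper_speed_hz=2.4e9,
    data_bytes=4_096, nfc_rate_bps=424_000,   # NFC peaks around 424 kbit/s
    p_compute_w=2.0, p_nfc_w=0.05, p_idle_w=0.3,
)
print(f"offload={offload}, local={e_local:.2f} J, offloaded={e_offload:.2f} J")
```

Even with NFC's very low data rate, the low radio power and the near-idle wait while a faster helper computes can make offloading the cheaper option, which mirrors the paper's qualitative conclusion.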
Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks
An online resource scheduling framework is proposed for minimizing the sum of weighted task latencies of all Internet-of-Things (IoT) users by optimizing the offloading decision, transmission power, and resource allocation in a large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed with the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to compress and represent high-dimensional channel quality information (CQI) data, which reduces the state space for DRL. Second, we present an adaptive simulated annealing approach (ASA) as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration is proposed to enhance search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL in training the policy network and finding the optimal offloading policy. Numerical results demonstrate that the proposed algorithm achieves near-optimal performance while significantly reducing computation time compared with existing benchmarks.
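As a rough illustration of using simulated annealing as a DRL action-search method, the sketch below anneals a candidate action vector against a black-box Q-value function. It is a generic textbook annealer, not the paper's ASA: the adaptive h-mutation and adaptive iteration are replaced by a fixed Gaussian mutation and geometric cooling, and all hyperparameters are illustrative.

```python
import math
import random

def anneal_action(q_value, init_action, n_iters=200, t0=1.0, alpha=0.98, step=0.1):
    """Simulated-annealing search for a high-Q action (illustrative sketch).

    `q_value(action)` is assumed to score a candidate action vector, e.g. via
    the critic of a DRL agent; actions are assumed normalized to [0, 1].
    """
    current = list(init_action)
    best, best_q = list(current), q_value(current)
    current_q = best_q
    temp = t0
    for _ in range(n_iters):
        # Mutate one coordinate; the paper's adaptive h-mutation would tune this step.
        cand = list(current)
        i = random.randrange(len(cand))
        cand[i] = min(1.0, max(0.0, cand[i] + random.gauss(0.0, step)))
        cand_q = q_value(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cand_q > current_q or random.random() < math.exp((cand_q - current_q) / temp):
            current, current_q = cand, cand_q
            if current_q > best_q:
                best, best_q = list(current), current_q
        temp *= alpha  # geometric cooling
    return best, best_q

# Toy objective standing in for a critic network's Q-estimate.
action, score = anneal_action(lambda a: -sum((x - 0.7) ** 2 for x in a), [0.5, 0.5, 0.5])
```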
Edge Computing for Extreme Reliability and Scalability
The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all of these data at the central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the task of processing the data is pushed to the network edges, introducing the concept of Edge Computing. Processing information closer to its source (e.g., on gateways and edge micro-servers) not only reduces the heavy workload of the central cloud but also decreases latency for real-time applications by avoiding the unreliable and unpredictable network delay of communicating with the central cloud.
A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing
Edge computing is promoted to meet increasing performance needs of
data-driven services using computational and storage resources close to the end
devices, at the edge of the current network. To achieve higher performance in
this new paradigm, one has to consider how to use resources efficiently across
all three layers of the architecture: end devices, edge devices, and the
cloud. While cloud capacity is elastically extendable, end devices and edge
devices are resource-constrained to various degrees. Hence, efficient
resource management is essential to make edge computing a reality. In this
work, we first present terminology and architectures to characterize current
works within the field of edge computing. Then, we review a wide range of
recent articles and categorize relevant aspects along four perspectives:
resource type, resource management objective, resource location, and resource
use. This taxonomy and the ensuing analysis are used to identify gaps in
the existing research. Among several research gaps, we found that research on
data, storage, and energy as resources is less prevalent, and that the
estimation, discovery, and sharing objectives are less extensively covered. As for resource
types, the most well-studied resources are computation and communication
resources. Our analysis shows that resource management at the edge requires a
deeper understanding of how methods applied at different levels and geared
towards different resource types interact. Specifically, the impact of mobility
and of collaboration schemes requiring incentives is expected to differ in
edge architectures compared to classic cloud solutions. Finally, we find
that fewer works are dedicated to the study of non-functional properties or to
quantifying the footprint of resource management techniques, including
edge-specific means of migrating data and services.
Comment: Accepted in the Special Issue on Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
Runtime Management of Artificial Intelligence Applications for Smart Eyewears
Artificial Intelligence (AI) applications are gaining popularity as they
seamlessly integrate into end-user devices, enhancing the quality of life.
In recent years, there has been a growing focus on designing Smart EyeWear (SEW) that can optimize user experiences based on specific usage
domains. However, SEWs face limitations in computational capacity and
battery life. This paper investigates SEWs and proposes an algorithm to
minimize energy consumption and 5G connection costs while ensuring a high
Quality-of-Experience. To achieve this, management software based on
Q-learning offloads some Deep Neural Network (DNN) computations to the user’s
smartphone and/or the cloud, leveraging the ability to partition the DNNs. The
performance evaluation considers variability in 5G and WiFi bandwidth as well
as in cloud latency. Results indicate execution-time violations below 14%,
demonstrating that the approach is promising for efficient resource allocation
and user satisfaction.
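To illustrate the kind of Q-learning controller the abstract describes, the sketch below maintains a tabular Q-function over coarse bandwidth states and (split point, target) actions. The state discretization, action set, and reward convention are illustrative assumptions, not the paper's actual design.

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch for choosing a DNN split point and execution target
# (smartphone vs. cloud). All modeling choices here are hypothetical.
ACTIONS = [(split, target) for split in range(4) for target in ("phone", "cloud")]

def bandwidth_state(wifi_mbps, lte_mbps):
    # Coarsely bucket observed bandwidths so the Q-table stays small.
    return (min(int(wifi_mbps // 10), 5), min(int(lte_mbps // 10), 5))

q = defaultdict(float)          # maps (state, action) -> Q-estimate
alpha, gamma, eps = 0.1, 0.9, 0.1

def choose(state):
    if random.random() < eps:                         # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])  # exploit

def update(state, action, reward, next_state):
    # Standard Q-learning update; reward would combine energy, cost, and QoE terms.
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

s = bandwidth_state(wifi_mbps=42.0, lte_mbps=18.5)
a = choose(s)
```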