Application Management in Fog Computing Environments: A Taxonomy, Review and Future Directions
The Internet of Things (IoT) paradigm is being rapidly adopted for the
creation of smart environments in various domains. The IoT-enabled
Cyber-Physical Systems (CPSs) associated with smart city, healthcare, Industry
4.0 and Agtech handle a huge volume of data and require data processing
services from different types of applications in real-time. The Cloud-centric
execution of IoT applications barely meets such requirements as the Cloud
datacentres reside at a multi-hop distance from the IoT devices. \textit{Fog
computing}, an extension of Cloud at the edge network, can execute these
applications closer to data sources. Thus, Fog computing can improve
application service delivery time and mitigate network congestion. However, Fog
nodes are highly distributed and heterogeneous, and most are constrained in
both resources and spatial sharing. Therefore, efficient management
of applications is necessary to fully exploit the capabilities of Fog nodes. In
this work, we investigate the existing application management strategies in Fog
computing and review them in terms of architecture, placement and maintenance.
Additionally, we propose a comprehensive taxonomy and highlight the research
gaps in Fog-based application management. We also discuss a perspective model
and provide future research directions for further improvement of application
management in Fog computing.
Survivable Probability of Network Slicing with Random Physical Link Failure
The fifth generation of communication technology (5G) revolutionizes mobile
networks and the associated ecosystems through the integration of cross-domain
networks. Network slicing is an enabling technology for 5G as it provides
dynamic, on-demand, and reliable logical network slices (i.e., network
services) over a common physical network/infrastructure. Since a network slice
is subject to failures originating from disruptions in the physical
infrastructure, namely node or link failures, our utmost interest is to evaluate
the reliability of a network slice before assigning it to customers. In this
paper, we propose an evaluation metric, \textit{survivable probability}, to
quantify the reliability of a network slice under random physical link
failure(s). We prove the existence of a \textit{base protecting spanning tree
set} which has the same survivable probability as that of a network slice. We
establish necessary and sufficient conditions to identify a base protecting
spanning tree set and develop corresponding mathematical formulations, which
can be used to generate reliable network slices in the 5G environment. In
addition to proving the viability of our approaches with simulation results, we
also discuss how our problems and approaches are related to the Steiner tree
problems and present their computational complexity and approximability.
Comment: 12 pages, 11 figures
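The survivable-probability metric described in this abstract can be illustrated with a small Monte Carlo sketch. This is not the paper's base protecting spanning tree construction; the example graph, per-link failure probability, and function names below are illustrative assumptions only.

```python
import random
from collections import defaultdict

def connected(nodes, edges):
    """Check whether all `nodes` lie in one connected component of `edges`."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    nodes = set(nodes)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return nodes <= seen

def survivable_probability(phys_links, slice_nodes, p_fail,
                           trials=20000, seed=7):
    """Estimate the probability that the slice nodes remain mutually
    reachable when each physical link fails independently with p_fail."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        up = [e for e in phys_links if rng.random() >= p_fail]
        ok += connected(slice_nodes, up)
    return ok / trials
```

For a four-node ring with slice nodes {0, 2} and per-link failure probability 0.1, the estimate approaches the exact value 1 - (1 - 0.9^2)^2 ≈ 0.964, since the two node-disjoint physical paths each survive with probability 0.81.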
Self-enforcing Game Theory-based Resource Allocation for LoRaWAN Assisted Public Safety Communications
Public safety networks disseminate information during emergencies through their
dedicated servers. They host public safety communication (PSC) applications
that track the location of their users and sustain transmissions even in
critical scenarios. However, if the traditional infrastructure responsible for
PSCs becomes unavailable, handling safety applications becomes extremely
difficult, which can cause havoc in society. Relying on a secondary network can
help solve this issue, but such a network should be easily deployable and must
not incur excessive cost and operational overhead. For
this, LoRaWAN can be considered as an ideal solution as it provides low power
and long-range communication. However, excessive utilization of the secondary
network may rapidly deplete its own resources and lead to a complete shutdown
of services. As a
solution, this paper proposes a novel network model via a combination of
LoRaWAN and traditional public safety networks, and uses a self-enforcing
agreement based game theory for allocating resources efficiently amongst the
available servers. The proposed approach adopts memory and energy constraints
as agreements, which are satisfied through Nash equilibrium. The numerical
results show that the proposed approach is capable of efficiently allocating
the resources, with sufficiently high gains in resource conservation, network
sustainability, resource restoration, and the probability of continuing
operation even in the complete absence of traditional Access Points (APs),
compared with a baseline scenario with no node failures.
Comment: 16 pages, 11 figures, 2 tables
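The self-enforcing idea of servers repeatedly best-responding until no one benefits from deviating can be sketched with a toy allocation game. The quadratic payoff, parameters, and function name below are illustrative assumptions, not the paper's memory/energy agreement model.

```python
def best_response_allocation(n_servers=3, capacity=10.0, cost=1.0,
                             iters=200, tol=1e-9):
    """Sequential best-response dynamics in a symmetric Cournot-style
    allocation game: server i's utility is x_i * (capacity - sum(x))
    - cost * x_i, so its best response to the others' total load is
    x_i = (capacity - cost - others) / 2, clipped at zero.
    The fixed point of these updates is the Nash equilibrium."""
    x = [0.0] * n_servers
    for _ in range(iters):
        delta = 0.0
        for i in range(n_servers):
            others = sum(x) - x[i]
            xi = max(0.0, (capacity - cost - others) / 2.0)
            delta = max(delta, abs(xi - x[i]))
            x[i] = xi
        if delta < tol:
            break
    return x
```

In this symmetric linear game the unique equilibrium is x_i = (capacity - cost) / (n_servers + 1), i.e. 2.25 per server for the defaults, and sequential (in-place) updates contract toward it; simultaneous updates could oscillate for three or more players.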
CitizenGrid: An Online Middleware for Crowdsourcing Scientific Research
In the last few years, contributions of the general public to scientific
projects have increased due to advances in communication and computing
technologies. The Internet has played an important role in connecting
scientists with volunteers interested in participating in their scientific projects.
However, despite potential benefits, only a limited number of crowdsourcing
based large-scale science (citizen science) projects have been deployed due to
the complexity involved in setting them up and running them. In this paper, we
present CitizenGrid - an online middleware platform which addresses security
and deployment complexity issues by making use of cloud computing and
virtualisation technologies. CitizenGrid incentivises scientists to make their
small-to-medium scale applications available as citizen science projects by: 1)
providing a directory of projects through a web-based portal that makes
applications easy to discover; 2) providing flexibility to participate in,
monitor, and control multiple citizen science projects from a common interface;
3) supporting diverse categories of citizen science projects. The paper
describes the design, development and evaluation of CitizenGrid and its use
cases.
Comment: 11 pages
A Survey on Low Latency Towards 5G: RAN, Core Network and Caching Solutions
The fifth generation (5G) wireless network technology is to be standardized
by 2020, with the main goals of improving capacity, reliability, and energy
efficiency, while reducing latency and massively increasing connection density.
An integral part of 5G is the capability to support touch-perception-type
real-time communication, enabled by suitable robotics and haptics equipment
at the network edge. In this regard, we need drastic changes in network
architecture including core and radio access network (RAN) for achieving
end-to-end latency on the order of 1 ms. In this paper, we present a detailed
survey on the emerging technologies to achieve low latency communications
considering three different solution domains: RAN, core network, and caching.
We also present a general overview of 5G cellular networks composed of software
defined network (SDN), network function virtualization (NFV), caching, and
mobile edge computing (MEC) capable of meeting latency and other 5G
requirements.
Comment: Accepted in IEEE Communications Surveys and Tutorials
Software Defined Networking Enabled Wireless Network Virtualization: Challenges and Solutions
Next generation (5G) wireless networks are expected to support massive data
volumes and accommodate a wide range of services/use cases with distinct
requirements in a cost-effective, flexible, and agile manner. As a promising
solution, wireless network virtualization (WNV), or network slicing, enables
multiple virtual networks to share the common infrastructure on demand, and to
be customized for different services/use cases. This article focuses on
network-wide resource allocation for realizing WNV. Specifically, the
motivations, the enabling platforms, and the benefits of WNV are first
reviewed. Then, resource allocation for WNV along with the technical challenges
is discussed. Afterwards, a software defined networking (SDN) enabled resource
allocation framework is proposed to facilitate WNV, including the key
procedures and the corresponding modeling approaches. Furthermore, a case study
is provided as an example of resource allocation in WNV. Finally, some open
research topics essential to WNV are discussed.
Comment: 16 pages, 5 figures. To appear in IEEE Network Magazine
Adaptive Event Dispatching in Serverless Computing Infrastructures
Serverless computing is an emerging Cloud service model. It is currently
gaining momentum as the next step in the evolution of hosted computing from
capacitated machine virtualisation and microservices towards utility computing.
The term "serverless" has become a synonym for the entirely
resource-transparent deployment model of cloud-based event-driven distributed
applications. This work investigates how adaptive event dispatching can improve
serverless platform resource efficiency and contributes a novel approach that
allows for better scaling and fitting of the platform's resource consumption to
actual demand.
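As a rough illustration of event dispatching on a serverless platform, a minimal least-loaded dispatcher might look like the sketch below. The adaptive policies investigated in the work above are more sophisticated; the class and method names here are invented for illustration.

```python
class LeastLoadedDispatcher:
    """Minimal event dispatcher sketch: route each incoming event to the
    worker with the fewest outstanding (dispatched but uncompleted) events."""

    def __init__(self, workers):
        # outstanding event count per worker
        self.load = {w: 0 for w in workers}

    def dispatch(self, event):
        # pick the least-loaded worker (ties broken by insertion order)
        worker = min(self.load, key=self.load.get)
        self.load[worker] += 1
        return worker

    def complete(self, worker):
        # a worker finished handling one event
        self.load[worker] -= 1
```

An adaptive platform would additionally grow or shrink the worker pool as observed load changes; this sketch only shows the per-event routing decision that such a policy refines.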
A collaborative citizen science platform for real-time volunteer computing and games
Volunteer computing (VC) or distributed computing projects are common in the
citizen cyberscience (CCS) community and present extensive opportunities for
scientists to make use of computing power donated by volunteers to undertake
large-scale scientific computing tasks. Volunteer computing is generally a
non-interactive process for those contributing computing resources to a
project, whereas volunteer thinking (VT), or distributed thinking, allows
volunteers to participate interactively in citizen cyberscience projects to
solve human computation tasks. In this paper we describe the integration of
three tools, the Virtual Atom Smasher (VAS) game developed by CERN, LiveQ, a
job distribution middleware, and CitizenGrid, an online platform for hosting
and providing computation to CCS projects. This integration demonstrates how
volunteer computing and volunteer thinking can be combined to help address the
scientific and educational goals of games like VAS. The paper introduces the
three tools and provides details of the integration process along with further
potential usage scenarios for the resulting platform.
Comment: 12 pages, 13 figures
Reinforcement Learning-based Application Autoscaling in the Cloud: A Survey
Reinforcement Learning (RL) has demonstrated a great potential for
automatically solving decision-making problems in complex uncertain
environments. RL proposes a computational approach that allows learning through
interaction in an environment with stochastic behavior, where agents take
actions to maximize some cumulative short-term and long-term rewards. Some of
the most impressive results have been shown in game playing, where agents
exhibited superhuman performance in games like Go or StarCraft II, which led to
its gradual adoption in many other domains, including Cloud Computing.
Therefore, RL appears as a promising approach for Autoscaling in Cloud since it
is possible to learn transparent (with no human intervention), dynamic (no
static plans), and adaptable (constantly updated) resource management policies
to execute applications. These are three important distinctive aspects to
consider in comparison with other widely used autoscaling policies that are
defined in an ad-hoc way or statically computed as in solutions based on
meta-heuristics. Autoscaling exploits the Cloud elasticity to optimize the
execution of applications according to given optimization criteria, which
requires deciding when and how to scale computational resources up or down, and
how to assign them to the upcoming processing workload. Such actions have to be
taken considering that the Cloud is a dynamic and uncertain environment.
Motivated by this, many works apply RL to the autoscaling problem in the Cloud.
In this work, we exhaustively survey those proposals from major venues, and
uniformly compare them based on a set of proposed taxonomies. We also discuss
open problems and prospective research in the area.
Comment: 40 pages, 9 figures
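A minimal flavour of RL-based autoscaling, far simpler than the surveyed proposals, is tabular Q-learning over replica counts. The demand model, reward function, and hyperparameters below are illustrative assumptions.

```python
import random

def train_autoscaler(demand=3, max_replicas=6, episodes=500,
                     alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning sketch for autoscaling: state = replica count,
    actions = scale down / hold / scale up, and the reward
    -|replicas - demand| penalizes both under-provisioning (SLA risk)
    and over-provisioning (wasted resources)."""
    rng = random.Random(seed)
    actions = (-1, 0, 1)
    Q = {(s, a): 0.0
         for s in range(1, max_replicas + 1) for a in actions}
    for _ in range(episodes):
        s = rng.randint(1, max_replicas)  # random initial replica count
        for _ in range(10):  # short episode
            # epsilon-greedy action selection
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda a_: Q[(s, a_)]))
            s2 = min(max_replicas, max(1, s + a))
            r = -abs(s2 - demand)
            # standard Q-learning update
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                                  - Q[(s, a)])
            s = s2
    # greedy policy: preferred scaling action in each state
    return {s: max(actions, key=lambda a_: Q[(s, a_)])
            for s in range(1, max_replicas + 1)}
```

After training, the greedy policy scales up when under-provisioned, holds at the demand level, and scales down when over-provisioned, learned purely from interaction, with no static scaling plan.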
iSTRICT: An Interdependent Strategic Trust Mechanism for the Cloud-Enabled Internet of Controlled Things
The cloud-enabled Internet of controlled things (IoCT) envisions a network of
sensors, controllers, and actuators connected through a local cloud in order to
intelligently control physical devices. Because cloud services are vulnerable
to advanced persistent threats (APTs), each device in the IoCT must
strategically decide whether to trust cloud services that may be compromised.
In this paper, we present iSTRICT, an interdependent strategic trust mechanism
for the cloud-enabled IoCT. iSTRICT is composed of three interdependent layers.
In the cloud layer, iSTRICT uses FlipIt games to conceptualize APTs. In the
communication layer, it captures the interaction between devices and the cloud
using signaling games. In the physical layer, iSTRICT uses optimal control to
quantify the utilities in the higher level games. Best response dynamics link
the three layers in an overall "game-of-games," for which the outcome is
captured by a concept called Gestalt Nash equilibrium (GNE). We prove the
existence of a GNE under a set of natural assumptions and develop an adaptive
algorithm to iteratively compute the equilibrium. Finally, we apply iSTRICT to
trust management for autonomous vehicles that rely on measurements from remote
sources. We show that strategic trust in the communication layer guarantees a
worst-case probability of compromise for any attack and defense costs in the
cyber layer.
Comment: To appear in IEEE Transactions on Information Forensics and Security
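The FlipIt games used in the cloud layer can be illustrated with a Monte Carlo sketch of the basic control-fraction quantity. iSTRICT's actual formulation also involves move costs, strategy choices, and coupling to the other layers; the rates and function name here are assumptions.

```python
import random

def flipit_control_fraction(rate_d, rate_a, horizon=50000.0, seed=3):
    """Simulate FlipIt with defender and attacker both flipping at
    memoryless exponential (Poisson) rates; the resource belongs to
    whoever moved last. Returns the fraction of time the attacker
    controls the resource."""
    rng = random.Random(seed)
    t, owner, attacker_time = 0.0, "D", 0.0  # defender starts in control
    next_d = rng.expovariate(rate_d)  # next defender flip (absolute time)
    next_a = rng.expovariate(rate_a)  # next attacker flip (absolute time)
    while t < horizon:
        nxt = min(next_d, next_a, horizon)
        if owner == "A":
            attacker_time += nxt - t
        t = nxt
        if t >= horizon:
            break
        if next_d <= next_a:
            owner = "D"
            next_d += rng.expovariate(rate_d)
        else:
            owner = "A"
            next_a += rng.expovariate(rate_a)
    return attacker_time / horizon
```

With both players memoryless, the most recent flip in the merged Poisson process belongs to the attacker with probability rate_a / (rate_a + rate_d), so that ratio is the expected long-run fraction of attacker control.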