Mobile Edge Computing, Fog et al.: A Survey and Analysis of Security Threats and Challenges
For various reasons, the cloud computing paradigm is unable to meet certain
requirements (e.g. low latency and jitter, context awareness, mobility support)
that are crucial for several applications (e.g. vehicular networks, augmented
reality). To fulfil these requirements, various paradigms, such as fog
computing, mobile edge computing, and mobile cloud computing, have emerged in
recent years. While these edge paradigms share several features, most of the
existing research is compartmentalised; no synergies have been explored. This
is especially true in the field of security, where most analyses focus only on
one edge paradigm, while ignoring the others. The main goal of this study is to
holistically analyse the security threats, challenges, and mechanisms inherent
in all edge paradigms, while highlighting potential synergies and avenues of
collaboration. In our results, we show that all edge paradigms should
consider the advances in other paradigms.
Comment: In press, accepted manuscript: Future Generation Computer Systems
NFV and SDN - Key Technology Enablers for 5G Networks
Communication networks are undergoing their next evolutionary step towards
5G. The 5G networks are envisioned to provide a flexible, scalable, agile and
programmable network platform over which different services with varying
requirements can be deployed and managed within strict performance bounds. In
order to address these challenges, a paradigm shift is taking place in the
technologies that drive the networks, and thus in their architecture. Innovative
concepts and techniques are being developed to power the next generation mobile
networks. At the heart of this development lie Network Function Virtualization
and Software Defined Networking technologies, which are now recognized as being
two of the key technology enablers for realizing 5G networks, and which have
introduced a major change in the way network services are deployed and
operated. For interested readers who are new to the field of SDN and NFV, this
paper provides an overview of both these technologies with reference to the 5G
networks. Most importantly, it describes how the two technologies complement
each other and how they are expected to drive the networks of the near future.
Comment: This is an accepted version and consists of 11 pages, 9 figures and
32 references
A Taxonomy and Future Directions for Sustainable Cloud Computing: 360 Degree View
The cloud computing paradigm offers on-demand services over the Internet and
supports a wide variety of applications. With the recent growth of Internet of
Things (IoT) based applications, the usage of cloud services is increasing
exponentially. The next generation of cloud computing must be energy-efficient
and sustainable to fulfil end-user requirements, which change dynamically.
Presently, cloud providers face challenges in ensuring the energy efficiency
and sustainability of their services. The use of a large number of cloud
datacenters increases costs as well as carbon footprint, which further affects
the sustainability of cloud services. In this paper, we propose
a comprehensive taxonomy of sustainable cloud computing. The taxonomy is used
to examine existing sustainability techniques proposed by several academic and
industry groups, highlighting those that need careful attention and further
investigation. The current research on sustainable cloud computing is then
organized into several categories: application design, sustainability metrics,
capacity planning, energy management, virtualization, thermal-aware scheduling,
cooling management, renewable energy and waste heat utilization. The existing
techniques are compared and categorized based on their common characteristics
and properties. A conceptual model for sustainable cloud computing is proposed,
along with a discussion of future research directions.
Comment: 68 pages, 38 figures, ACM Computing Surveys, 201
Isolate First, Then Share: a New OS Architecture for Datacenter Computing
This paper presents the "isolate first, then share" OS model in which the
processor cores, memory, and devices are divided up between disparate OS
instances and a new abstraction, subOS, is proposed to encapsulate an OS
instance that can be created, destroyed, and resized on-the-fly. The intuition
is that this avoids shared kernel state between applications, which in turn
reduces performance loss caused by contention. We decompose the OS into a
supervisor and several subOSes running at the same privilege level: a subOS
directly manages physical resources, while the supervisor can create, destroy,
and resize a subOS on-the-fly. The supervisor and subOSes share little state,
but fast inter-subOS communication mechanisms are provided on demand.
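To make the division of labour concrete, below is a minimal sketch of a supervisor that partitions cores and memory among subOS instances and can create, resize, and destroy them on the fly. All class and method names are hypothetical illustrations, not the paper's actual interface.

```python
# Illustrative sketch of "isolate first, then share": a supervisor carves
# disjoint cores and memory out of the machine for each subOS, so subOSes
# share no kernel state. Names are hypothetical, not the RainForest API.

class SubOS:
    def __init__(self, subos_id, cores, memory_mb):
        self.subos_id = subos_id
        self.cores = cores          # core IDs owned exclusively by this subOS
        self.memory_mb = memory_mb  # memory owned exclusively by this subOS

class Supervisor:
    def __init__(self, total_cores, total_memory_mb):
        self.free_cores = set(range(total_cores))
        self.free_memory_mb = total_memory_mb
        self.suboses = {}
        self.next_id = 0

    def create(self, n_cores, memory_mb):
        """Carve out disjoint resources for a new subOS instance."""
        if n_cores > len(self.free_cores) or memory_mb > self.free_memory_mb:
            raise RuntimeError("insufficient free resources")
        cores = {self.free_cores.pop() for _ in range(n_cores)}
        self.free_memory_mb -= memory_mb
        subos = SubOS(self.next_id, cores, memory_mb)
        self.suboses[self.next_id] = subos
        self.next_id += 1
        return subos

    def resize(self, subos_id, extra_cores):
        """Grow a running subOS by transferring free cores to it."""
        if extra_cores > len(self.free_cores):
            raise RuntimeError("insufficient free cores")
        for _ in range(extra_cores):
            self.suboses[subos_id].cores.add(self.free_cores.pop())

    def destroy(self, subos_id):
        """Tear down a subOS and return its resources to the free pool."""
        subos = self.suboses.pop(subos_id)
        self.free_cores |= subos.cores
        self.free_memory_mb += subos.memory_mb

sup = Supervisor(total_cores=16, total_memory_mb=32768)
a = sup.create(n_cores=4, memory_mb=8192)
sup.resize(a.subos_id, extra_cores=2)   # on-the-fly growth
sup.destroy(a.subos_id)
```

Because each subOS owns its cores and memory exclusively, applications in different subOSes contend on no shared kernel state, which is the property the paper credits for the reduced performance loss.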
We present the first implementation, RainForest, which supports unmodified
Linux binaries. Our comprehensive evaluation shows that RainForest outperforms
Linux with four different kernels, LXC, and Xen in terms of worst-case and
average performance most of the time when running a large number of benchmarks.
The source code will be available soon.
Comment: 14 pages, 13 figures, 5 tables
Towards a Virtual Data Centre for Classics
The paper presents some of our work on integrating datasets in Classics,
drawing on the results of several projects in this domain. The conclusions
from LaQuAT concerned limitations of the approach rather than solutions: the
relational model followed by OGSA-DAI was more effective for resources that
consist primarily of structured data (which we call data-centric) than for
largely unstructured text (which we call text-centric), which makes up a
significant component of the datasets we were using. This approach was,
moreover, insufficiently flexible to deal with the semantic issues involved.
The gMan project, on the other hand, addressed these problems by virtualizing
data resources using full-text indexes, which can then provide different views
onto the collections and services. These views more closely match the kinds of
information organization and retrieval activities found in the humanities, in
an environment that is more interactive, researcher-focused, and
researcher-driven.
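As a rough illustration of the full-text virtualization idea, the sketch below flattens heterogeneous records, whether data-centric or text-centric, into one inverted index that supports a single retrieval view. The data structure and record fields are invented for illustration and do not reflect gMan's implementation.

```python
# Toy inverted index showing how heterogeneous Classics resources can be
# virtualized behind one full-text view. A simplification, not gMan's code.
from collections import defaultdict

def tokenize(text):
    return [t for t in text.lower().split() if t.isalnum()]

class FullTextView:
    def __init__(self):
        self.index = defaultdict(set)   # term -> set of document IDs
        self.docs = {}

    def add(self, doc_id, fields):
        """Flatten any record (structured or free text) into index terms."""
        self.docs[doc_id] = fields
        for value in fields.values():
            for term in tokenize(str(value)):
                self.index[term].add(doc_id)

    def search(self, query):
        """Return documents containing every query term."""
        terms = tokenize(query)
        if not terms:
            return []
        hits = set.intersection(*(self.index[t] for t in terms))
        return [self.docs[d] for d in sorted(hits)]

view = FullTextView()
view.add("inscr-1", {"type": "inscription", "text": "dedicated to Apollo at Delphi"})
view.add("coin-7", {"type": "coin", "legend": "Apollo", "mint": "Delphi"})
print(view.search("apollo delphi"))   # finds both records despite different schemas
```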
A Novel Architecture for Improving Performance under Virtualized Environments
Even though virtualization provides many advantages in cloud computing, it
does not provide effective performance isolation between virtual machines: the
performance of one machine may suffer from interference caused by co-located
virtual machines. Effective isolation can be achieved through proper
management of resource allocation among the virtual machines running
simultaneously. This paper proposes a novel architecture based on the Fast
Genetic K-means++ algorithm; test results show performance improvements over a
similar existing approach.
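For readers unfamiliar with the clustering core, the sketch below shows the standard k-means++ seeding step applied to hypothetical per-VM resource profiles; the genetic refinement layer and the paper's actual feature set are not shown here.

```python
# k-means++ seeding over VM resource-usage vectors (cpu%, io%): each new
# centroid is picked with probability proportional to its squared distance
# from the nearest centroid chosen so far. The genetic-search refinement of
# the paper's Fast Genetic K-means++ is omitted; profiles are made up.
import random

def kmeanspp_seeds(points, k, rng=random.Random(0)):
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        d2 = [min(sum((p[i] - c[i]) ** 2 for i in range(len(p)))
                  for c in centroids) for p in points]
        r, acc = rng.uniform(0, sum(d2)), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centroids.append(p)
                break
    return centroids

# Hypothetical per-VM profiles: (cpu_utilization, io_utilization)
vm_profiles = [(0.9, 0.1), (0.85, 0.15), (0.2, 0.8), (0.25, 0.75), (0.5, 0.5)]
print(kmeanspp_seeds(vm_profiles, k=2))
```

One plausible use of such clustering is to avoid co-locating VMs with conflicting resource profiles, which is the kind of allocation management the abstract describes.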
Quantitative Analysis of Active Cyber Defenses Based on Temporal Platform Diversity
Active cyber defenses based on temporal platform diversity have been proposed
as a way to make systems more resistant to attacks. These defenses change the
properties of the platforms in order to make attacks more complicated.
Unfortunately, little work has been done on measuring the effectiveness of
these defenses. In this work, we use four different approaches to
quantitatively analyze these defenses: an abstract analysis studies the
algebraic models of a temporal platform diversity system; a set of experiments
on a test bed measures the metrics of interest for the system; a game-theoretic
analysis studies the impact of preferential selection of platforms and derives
an optimal strategy; finally, a set of simulations evaluates the metrics of
interest on the models. Our results from these approaches all agree and yet are
counter-intuitive. We show that although platform diversity can mitigate some
attacks, it can be detrimental for others. We also illustrate that the benefit
from these systems heavily depends on their threat model and that the
preferential selection of platforms can achieve better protection.
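The flavour of such an analysis can be conveyed with a toy Monte Carlo simulation: an attacker needs several consecutive epochs on one vulnerable platform, and the defender rotates platforms either uniformly or preferentially. All parameters below are invented; the paper's actual models and metrics are not reproduced.

```python
# Toy simulation of temporal platform diversity. The defender migrates among
# platforms each epoch; the attacker succeeds after `dwell` consecutive
# epochs on the (assumed known) vulnerable platform. Illustrative only.
import random

def attack_success_rate(weights, vulnerable, dwell, epochs=200, trials=2000,
                        rng=random.Random(1)):
    platforms = list(range(len(weights)))
    wins = 0
    for _ in range(trials):
        streak = 0
        for _ in range(epochs):
            platform = rng.choices(platforms, weights=weights)[0]
            streak = streak + 1 if platform == vulnerable else 0
            if streak >= dwell:
                wins += 1
                break
    return wins / trials

uniform      = [1, 1, 1, 1]
preferential = [1, 1, 1, 0.2]   # de-prioritize platform 3, the vulnerable one
print("uniform:     ", attack_success_rate(uniform, vulnerable=3, dwell=3))
print("preferential:", attack_success_rate(preferential, vulnerable=3, dwell=3))
```

Even this toy version reflects the paper's qualitative point: the benefit depends on the threat model (here, the dwell time the exploit needs), and weighting platform selection can lower the attacker's success rate.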
vDLT: A Service-Oriented Blockchain System with Virtualization and Decoupled Management/Control and Execution
A wide range of services and applications can be improved or enabled by
distributed ledger technology (DLT). These services and applications have
widely varying quality of service (QoS) requirements. However, most existing
DLT systems do not distinguish different QoS requirements, resulting in
significant performance issues such as poor scalability and high cost. In this
work, we present vDLT -- a service-oriented blockchain system with
virtualization and decoupled management/control and execution. In vDLT,
services and applications are classified into different classes according to
their QoS requirements, including confirmation latency, throughput, cost,
security, privacy, etc. This is a paradigm shift from the existing
"blockchain-oriented" DLT systems to next generation "service-oriented" DLT
systems. Different QoS requirements are fulfilled by advanced schemes inspired
by the development of the traditional Internet, including classification,
queuing, virtualization, resource allocation and orchestration, and
hierarchical architecture. In addition, management/control and execution of
smart contracts are decoupled to support QoS provisioning, improve
decentralization, and facilitate evolution in vDLT. With virtualization,
different virtual DLT systems with widely varying characteristics can be
dynamically created and operated to accommodate different services and
applications.
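A minimal sketch of the classification-and-queuing idea, assuming invented QoS class names: transactions are tagged with a class and drained in priority order when a block is formed, in the spirit of the Internet traffic classes the abstract invokes.

```python
# Sketch of vDLT-style service classification: transactions carry a QoS
# class and are drained in priority order when the next block is filled.
# Class names, fields, and the policy itself are illustrative assumptions.
import heapq
import itertools

QOS_PRIORITY = {"low-latency": 0, "standard": 1, "bulk": 2}

class QoSMempool:
    def __init__(self):
        self.heap = []
        self.counter = itertools.count()   # FIFO tie-break within a class

    def submit(self, tx, qos_class):
        heapq.heappush(self.heap, (QOS_PRIORITY[qos_class], next(self.counter), tx))

    def next_block(self, size):
        """Fill the next block with the highest-priority pending transactions."""
        return [heapq.heappop(self.heap)[2]
                for _ in range(min(size, len(self.heap)))]

pool = QoSMempool()
pool.submit({"id": "pay-1"}, "standard")
pool.submit({"id": "trade-9"}, "low-latency")   # latency-sensitive service
pool.submit({"id": "archive-4"}, "bulk")
print(pool.next_block(2))   # low-latency first, then standard
```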
Efficient Support of Big Data Storage Systems on the Cloud
Due to their advantages over traditional data centers, cloud infrastructures
have seen rapid growth in usage. These include public clouds (e.g., Amazon
EC2) and private clouds, such as those deployed using OpenStack. A common
factor in many well-known infrastructures, for example OpenStack and
CloudStack, is that networked storage is used for storage of
persistent data. However, traditional Big Data systems, including Hadoop, store
data in commodity local storage for reasons of high performance and low cost.
We present an architecture for supporting Hadoop on OpenStack using local
storage. Subsequently, we use benchmarks on OpenStack and Amazon to show that,
for supporting Hadoop, local storage has better performance and lower cost. We
conclude that cloud systems should support local storage for persistent data
(in addition to networked storage) so as to provide efficient support for
Hadoop and other Big Data systems.
Comment: Presented at 2nd International Workshop on Cloud Computing
Applications (ICWA) during IEEE International Conference on High Performance
Computing (HiPC) 201
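A back-of-envelope model shows why local storage tends to win for Hadoop-style scans: networked storage adds a network hop, so scan throughput is capped by the NIC. The figures below are illustrative assumptions, not the paper's benchmark results.

```python
# Effective scan time for a Hadoop-style full-data pass. With networked
# storage, throughput is bounded by min(disk, NIC). Numbers are assumptions.

def scan_time_hours(dataset_tb, disk_mb_s, nic_mb_s=None):
    effective = disk_mb_s if nic_mb_s is None else min(disk_mb_s, nic_mb_s)
    return dataset_tb * 1e6 / effective / 3600   # 1 TB = 1e6 MB

dataset_tb = 10
print(f"local:     {scan_time_hours(dataset_tb, disk_mb_s=800):.1f} h")
print(f"networked: {scan_time_hours(dataset_tb, disk_mb_s=800, nic_mb_s=250):.1f} h")
```

Under these assumed rates, the same scan takes roughly 3.5 hours on local disks versus about 11 hours when NIC-bound, before any per-GB network transfer charges are counted.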
FlashAbacus: A Self-Governing Flash-Based Accelerator for Low-Power Systems
Energy efficiency and computing flexibility are some of the primary design
constraints of heterogeneous computing. In this paper, we present FlashAbacus,
a data-processing accelerator that self-governs heterogeneous kernel executions
and data storage accesses by integrating many flash modules in lightweight
multiprocessors. The proposed accelerator can simultaneously process data from
different applications with diverse types of operational functions, and it
allows multiple kernels to directly access flash without the assistance of a
host-level file system or an I/O runtime library. We prototype FlashAbacus on a
multicore-based PCIe platform that connects to FPGA-based flash controllers
with a 20 nm node process. The evaluation results show that FlashAbacus can
improve the bandwidth of data processing by 127%, while reducing energy
consumption by 78.4%, as compared to a conventional method of heterogeneous
computing.
Comment: This paper is published at the 13th edition of EuroSys
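Conceptually, the self-governing design can be pictured as kernels from different applications being dispatched across lightweight processors that each own a flash module, so no host-side file system or I/O runtime sits on the data path. The sketch below is a structural illustration only; all names and the round-robin policy are assumptions.

```python
# Structural sketch of a FlashAbacus-like accelerator: each lightweight
# processor reads its own flash module directly, and kernels from multiple
# applications are spread across processors. Illustrative, not the design.
from collections import deque

class FlashProcessor:
    def __init__(self, pid):
        self.pid = pid
        self.flash = {}        # block address -> bytes, stands in for flash
        self.queue = deque()   # pending (app, kernel, address) work items

    def run_next(self):
        if not self.queue:
            return None
        app, kernel, addr = self.queue.popleft()
        data = self.flash.get(addr, b"")   # direct flash read, no host FS
        return app, kernel(data)

class Accelerator:
    def __init__(self, n_procs):
        self.procs = [FlashProcessor(i) for i in range(n_procs)]
        self.rr = 0

    def dispatch(self, app, kernel, addr):
        """Round-robin kernels from different apps across processors."""
        self.procs[self.rr].queue.append((app, kernel, addr))
        self.rr = (self.rr + 1) % len(self.procs)

acc = Accelerator(n_procs=2)
acc.procs[0].flash[0x10] = b"sensor-batch"
acc.dispatch("appA", lambda data: len(data), 0x10)   # e.g. a filter kernel
print(acc.procs[0].run_next())                       # ('appA', 12)
```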