Orchestrating Service Migration for Low Power MEC-Enabled IoT Devices
Multi-Access Edge Computing (MEC) is a key enabling technology for Fifth Generation (5G) mobile networks. MEC brings distributed cloud computing capabilities and an information technology service environment for applications and services to the edges of mobile networks. This architectural modification serves to reduce congestion and latency and to improve the performance of edge-colocated applications and devices. In this paper, we demonstrate how reactive service migration can be orchestrated for low-power MEC-enabled Internet of Things (IoT) devices, using open-source Kubernetes as the container orchestration system. Our demo is based on a traditional client-server setup in which user equipment (UE) reaches the MEC server over Long Term Evolution (LTE). As the use-case scenario, we post-process live video received over Web Real-Time Communication (WebRTC). We then integrate Kubernetes orchestration with S1 handovers, demonstrating a MEC-based software-defined network (SDN). Edge applications may thus reactively follow the UE within the radio access network (RAN), preserving low latency. The collected data is used to analyze the benefits of the low-power MEC-enabled IoT device scheme, in which the end-to-end (E2E) latency and power requirements of the UE are improved. We further discuss the challenges of implementing such schemes and future research directions therein.
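The reactive migration the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the eNodeB names, edge-node names, and the mapping between them are hypothetical, and a real orchestrator would apply the resulting patch to a Deployment via the Kubernetes API so the rescheduled pod follows the UE after an S1 handover.

```python
# Illustrative sketch of reactive service migration on S1 handover.
# The eNodeB-to-edge-node mapping below is an assumption for the example.

ENODEB_TO_EDGE = {
    "enb-1": "edge-node-a",
    "enb-2": "edge-node-b",
}

def target_edge_node(serving_enodeb: str) -> str:
    """Pick the MEC host co-located with the UE's serving eNodeB."""
    return ENODEB_TO_EDGE[serving_enodeb]

def migration_patch(serving_enodeb: str) -> dict:
    """Build a Kubernetes Deployment patch pinning the edge service to the
    MEC host nearest the UE, so the rescheduled pod follows the handover."""
    node = target_edge_node(serving_enodeb)
    return {
        "spec": {
            "template": {
                "spec": {
                    "nodeSelector": {"kubernetes.io/hostname": node}
                }
            }
        }
    }

# After an S1 handover to enb-2, the orchestrator would apply this patch:
print(migration_patch("enb-2")["spec"]["template"]["spec"]["nodeSelector"])
# {'kubernetes.io/hostname': 'edge-node-b'}
```

Using `nodeSelector` keeps the sketch simple; a production setup might instead use node affinity rules or a custom scheduler to avoid hard-pinning.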
A STUDY ON DATA STREAMING IN FOG COMPUTING ENVIRONMENT
In recent years, data streaming has become more important by the day, considering the technologies employed to serve it and the number of terminals interacting with such systems, directly or indirectly. Smart devices now play an active role in the data streaming environment, alongside fog and cloud platforms. This affects data collection, which becomes apparent with the new technologies provided and the growing number of users of such systems. Because of the number of users and the resources available, systems have started to move computational power to the fog, that is, to the network edge. Streamed data sources are connected to the system as objects, and these inter-connected objects are expected to produce ever more significant data streams at unique rates, in some cases to be analyzed in near real time. This paper presents a survey of data streaming systems and technologies. It clarifies the main notions behind big data stream concepts as well as fog computing. From the presented study, the industrial and research communities can gain information about the requirements for creating a fog computing environment, with a clearer view of managing resources in the fog. The main objective of this paper is to provide a short brief on data streaming in fog computing environments and to explain the major research fields within this area.
Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud
With the advent of cloud computing, organizations are nowadays able to react
rapidly to changing demands for computational resources. Not only individual
applications can be hosted on virtual cloud infrastructures, but also complete
business processes. This allows the realization of so-called elastic processes,
i.e., processes which are carried out using elastic cloud resources. Despite
the manifold benefits of elastic processes, there is still a lack of solutions
supporting them.
In this paper, we identify the state of the art of elastic Business Process
Management with a focus on infrastructural challenges. We conceptualize an
architecture for an elastic Business Process Management System and discuss
existing work on scheduling, resource allocation, monitoring, decentralized
coordination, and state management for elastic processes. Furthermore, we
present two representative elastic Business Process Management Systems which
are intended to counter these challenges. Based on our findings, we identify
open issues and outline possible research directions for the realization of
elastic processes and elastic Business Process Management.

Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN. http://dx.doi.org/10.1016/j.future.2014.09.00
End-to-end informed VM selection in compute clouds
The selection of resources, particularly VMs, in current public IaaS clouds is usually done in a blind fashion, as cloud users do not have much information about resource consumption by co-tenant third-party tasks. In particular, communication patterns can play a significant part in cloud application performance and responsiveness, especially in the case of novel latency-sensitive applications, increasingly common in today's clouds. We therefore propose an end-to-end approach to the VM allocation problem using policies based solely on round-trip time measurements between VMs. These measurements feed a user-level 'Recommender Service' that receives VM allocation requests with certain network-related demands and matches them to a suitable subset of VMs available to the user within the cloud. We propose and implement end-to-end algorithms for VM selection that cover desirable profiles of communication between VMs in distributed applications in a cloud setting, such as profiles with prevailing pair-wise, hub-and-spokes, or clustered communication patterns between constituent VMs. We quantify the expected benefits of deploying our Recommender Service by comparing our informed VM allocation approaches to conventional, random allocation methods, based on real measurements of latencies between Amazon EC2 instances. We also show that our approach is completely independent of cloud architecture details, is adaptable to different types of applications and workloads, and is lightweight and transparent to cloud providers.

This work is supported in part by the National Science Foundation under grant CNS-0963974.
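The pair-wise selection profile mentioned in the abstract can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the paper's algorithm: the RTT values are invented, and the exhaustive subset search shown here is only reasonable for small candidate pools.

```python
from itertools import combinations

def select_pairwise(rtt: dict, vms: list, k: int) -> tuple:
    """Pick the k-subset of VMs with the lowest total pairwise RTT.
    `rtt` maps frozenset pairs of VM names to measured round-trip times."""
    def cost(subset):
        return sum(rtt[frozenset(pair)] for pair in combinations(subset, 2))
    return min(combinations(vms, k), key=cost)

# Hypothetical RTT measurements (milliseconds) between four candidate VMs.
rtt = {
    frozenset(pair): ms
    for pair, ms in {
        ("a", "b"): 1.2, ("a", "c"): 8.5, ("a", "d"): 0.9,
        ("b", "c"): 7.1, ("b", "d"): 1.4, ("c", "d"): 9.0,
    }.items()
}

print(select_pairwise(rtt, ["a", "b", "c", "d"], 3))  # ('a', 'b', 'd')
```

A hub-and-spokes profile would instead score each candidate hub by its summed RTT to the spokes; the greedy structure stays the same.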
A manifesto for future generation cloud computing: research directions for the next decade
The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.
Creation of a Cloud-Native Application: Building and operating applications that utilize the benefits of the cloud computing distribution approach
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management.

VMware is a world-renowned company in the field of cloud infrastructure and digital workspace technology which supports organizations in digital transformations. VMware accelerates digital transformation for evolving IT environments by empowering clients to adopt a software-defined strategy towards their business and information technology. Previously present in the private cloud segment, the company has recently focused on developing offers related to the public cloud.
Comprehending how to devise cloud-compatible systems has become increasingly crucial in the present times. Cloud computing is rapidly evolving from a specialized technology favored by tech-savvy companies and startups to the cornerstone on which enterprise systems are constructed for future growth. To stay competitive in the current market, both big and small organizations are adopting cloud architectures and methodologies.
As a member of the technical pre-sales team, the main goal of my internship was the design, development, and deployment of a cloud-native application, which is therefore the subject of my internship report. The application is intended to interface with an existing one and demonstrates the possible uses of VMware's virtualization infrastructure and automation offerings. Since its official release, the application has already been presented to various existing and prospective customers and at conferences. The purpose of this work is to provide a permanent record of my internship experience at VMware. Through this undertaking, I am able to reflect on the professional facets of my internship experience and the competencies I gained during the journey. This work is a descriptive and theoretical reflection, methodologically oriented towards the development of a cloud-native application in the context of my internship in the system engineering team at VMware. The scientific content of the report focuses on the benefits, not limited to scalability and maintainability, of moving from a monolithic architecture to microservices.
Creating Intelligent Computational Edge through Semantic Mediation
This research proposes semantic mediation based on reasoning and first-order logic for mediating the best possible configuration of the computational edge, relevant for software applications which may benefit from running computations in proximity to their data sources. The mediation considers the context in which these applications exist and exploits the semantics of that context to decide where computational elements should reside and which data they should use. The application of semantic mediation could address the initiative to accommodate algorithms from predictive and learning technologies, push AI towards computational edges, and potentially contribute towards creating a computing continuum.
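The mediation idea, deciding placement from the semantics of an application's context, can be sketched as a small rule base. This is a minimal illustration, not the paper's reasoner: the fact names (`max_latency_ms`, `data_source`, `nearest_edge`, `edge_cpu_free`) and thresholds are assumptions chosen for the example.

```python
# Rule-based sketch of semantic mediation for edge placement.
# Facts describe the application context; first-order-style rules
# decide whether a computation should run at the edge or in the cloud.

def mediate(facts: dict) -> str:
    """Return 'edge' when the context semantics favour proximity to data."""
    latency_sensitive = facts.get("max_latency_ms", float("inf")) < 50
    data_local = facts.get("data_source") == facts.get("nearest_edge")
    edge_constrained = facts.get("edge_cpu_free", 0.0) < 0.2
    if latency_sensitive and data_local and not edge_constrained:
        return "edge"
    return "cloud"

context = {"max_latency_ms": 20, "data_source": "edge-1",
           "nearest_edge": "edge-1", "edge_cpu_free": 0.6}
print(mediate(context))  # edge
```

A fuller mediator would encode such rules in an actual first-order-logic engine and re-evaluate them as the context changes, which is what would let placement decisions follow moving data sources.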
Real-time ECG Monitoring using Compressive sensing on a Heterogeneous Multicore Edge-Device
The file attached to this record is the author's final peer-reviewed version. The Publisher's final version can be found by following the DOI link.

In a typical ambulatory health monitoring system, wearable medical sensors are deployed on the human body to continuously collect and transmit physiological signals to a nearby gateway that forwards the measured data to a cloud-based healthcare platform. However, this model often fails to respect the strict requirements of healthcare systems: wearable medical sensors are very limited in terms of battery lifetime, and, in addition, the system's reliance on a cloud makes it vulnerable to connectivity and latency issues. Compressive sensing (CS) theory has been widely deployed in electrocardiogram (ECG) monitoring applications to optimize the power consumption of wearable sensors. The solution proposed in this paper aims to tackle these limitations by empowering a gateway-centric connected health solution, where the most power-consuming tasks are performed locally on a multicore processor. This paper explores the efficiency of real-time CS-based recovery of ECG signals on an IoT gateway embedded with ARM's big.LITTLE™ multicore for different signal dimensions and allocated computational resources. Experimental results show that the gateway is able to reconstruct ECG signals in real time. Moreover, they demonstrate that using a higher number of cores speeds up the execution time and further optimizes energy consumption. The paper identifies the configurations of resource allocation that provide the optimal performance, and concludes that multicore processors have the computational capacity and energy efficiency to promote gateway-centric solutions rather than cloud-centric platforms.
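The CS recovery step the abstract refers to can be illustrated with a toy greedy reconstruction. This is a deliberately tiny sketch, not the paper's recovery algorithm: the sensing matrix, sparsity level, and pure-Python matching pursuit below are assumptions for exposition, whereas real ECG recovery uses far larger dimensions and optimized solvers.

```python
# Toy sketch of compressive-sensing recovery via matching pursuit.
# Recovers a k-sparse signal x from measurements y = A @ x, where the
# sensing matrix A is given as a list of its columns.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(A, y, k):
    """Greedy recovery: repeatedly pick the column most correlated with
    the residual and subtract its contribution."""
    x = [0.0] * len(A)
    residual = list(y)
    for _ in range(k):
        j = max(range(len(A)), key=lambda i: abs(dot(A[i], residual)))
        coef = dot(A[j], residual) / dot(A[j], A[j])
        x[j] += coef
        residual = [r - coef * a for r, a in zip(residual, A[j])]
    return x

# 1-sparse toy example with trivially orthogonal columns.
cols = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [0.0, 2.5, 0.0]                  # measurements of x = [0, 2.5, 0]
print(matching_pursuit(cols, y, 1))  # [0.0, 2.5, 0.0]
```

On a big.LITTLE-style gateway, the paper's parallelization would distribute independent recovery windows (or the inner correlation search) across cores, which is where the reported speed-up and energy savings come from.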