Innovative techniques for deployment of microservices in cloud-edge environment
PhD Thesis
The evolution of microservice architecture allows complex applications to be structured
into independent modular components (microservices), making them easier to develop
and manage. Complemented with containers, microservices can be deployed across
any cloud and edge environment. Although containerized microservices are gaining
popularity in industry, comparatively little research is available, especially in the areas
of performance characterization and optimized deployment of microservices.
Depending on the application type (e.g. web, streaming) and the provided functionalities
(e.g. filtering, encryption/decryption, storage), microservices are heterogeneous,
with specific functional and Quality of Service (QoS) requirements. Further, cloud
and edge environments are themselves complex, with a huge number of cloud providers and
edge devices along with their host configurations. Due to these complexities, finding
a suitable deployment solution for microservices becomes challenging.
To handle the deployment of microservices in cloud and edge environments, this thesis
presents multilateral research towards microservice performance characterization,
run-time evaluation and system orchestration. Considering a variety of applications,
numerous algorithms and policies have been proposed, implemented and prototyped.
The main contributions of this thesis are given below:
Characterizes the performance of containerized microservices considering various
types of interference in the cloud environment.
Proposes and models an orchestrator, SDBO, for benchmarking simple web-application
microservices in a multi-cloud environment. SDBO is validated using
an e-commerce test web-application.
Proposes and models an advanced orchestrator, GeoBench, for the deployment of
complex web-application microservices in a multi-cloud environment. GeoBench
is validated using a geo-distributed test web-application.
Proposes and models a run-time deployment framework for distributed streaming
application microservices in a hybrid cloud-edge environment. The model is
validated using a real-world healthcare analytics use case for human activity
recognition.
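The thesis's actual algorithms are not given in this abstract; as a generic illustration of QoS-aware placement of heterogeneous microservices onto cloud and edge hosts, a minimal greedy sketch might look like the following (all class names, fields and numbers are hypothetical, not taken from the thesis):

```python
# Illustrative sketch only: greedily place each microservice on the
# feasible host with the lowest latency, tightest QoS requirement first.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu: float          # available CPU cores
    mem: float          # available memory (GB)
    latency_ms: float   # network latency to clients

@dataclass
class Microservice:
    name: str
    cpu: float
    mem: float
    max_latency_ms: float  # QoS requirement

def place(services, hosts):
    """Map each service to a host satisfying its resources and QoS."""
    plan = {}
    for svc in sorted(services, key=lambda s: s.max_latency_ms):
        feasible = [h for h in hosts
                    if h.cpu >= svc.cpu and h.mem >= svc.mem
                    and h.latency_ms <= svc.max_latency_ms]
        if not feasible:
            plan[svc.name] = None  # no host satisfies the QoS requirement
            continue
        best = min(feasible, key=lambda h: h.latency_ms)
        best.cpu -= svc.cpu      # reserve the chosen host's resources
        best.mem -= svc.mem
        plan[svc.name] = best.name
    return plan
```

A latency-sensitive filtering service would land on a nearby edge host here, while a resource-hungry storage service falls back to the cloud; real orchestrators of the kind the thesis proposes must additionally handle run-time changes and multi-cloud pricing.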
Secure Cloud-Edge Deployments, with Trust
Assessing the security level of IoT applications to be deployed to
heterogeneous Cloud-Edge infrastructures operated by different providers is a
non-trivial task. In this article, we present a methodology that permits
expressing security requirements for IoT applications, as well as infrastructure
security capabilities, in a simple and declarative manner, and automatically
obtaining an explainable assessment of the security level of the possible
application deployments. The methodology also considers the impact of trust
relations among the different stakeholders using or managing Cloud-Edge
infrastructures. A lifelike example is used to showcase the prototyped
implementation of the methodology.
Cloud Storage and Bioinformatics in a private cloud deployment: Lessons for Data Intensive research
This paper describes service portability for a private cloud deployment, including a detailed case study about Cloud Storage and bioinformatics services developed as part of the Cloud Computing Adoption Framework (CCAF). Our Cloud Storage design and deployment is based on Storage Area Network (SAN) technologies, details of which include functionalities, technical implementation, architecture and user support. Experiments for data services (backup automation, data recovery and data migration) are performed, and the results confirm that backup automation completes swiftly and is reliable for data-intensive research. The data recovery result confirms that execution time is proportional to the quantity of recovered data, but the failure rate increases in an exponential manner. The data migration result confirms that execution time is proportional to the disk volume of migrated data, but again the failure rate increases in an exponential manner. In addition, benefits of CCAF are illustrated using several bioinformatics examples such as tumour modelling, brain imaging, insulin molecules and simulations for medical training. Our Cloud Storage solution offers cost reduction, time savings and user friendliness.
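The reported trends, execution time proportional to data volume and failure rate growing exponentially with volume, can be captured in a toy model. The coefficients below are illustrative assumptions, not the paper's measurements:

```python
import math

def recovery_time_min(volume_gb, rate_gb_per_min=2.0):
    # Execution time proportional to the quantity of recovered data;
    # the throughput constant is an assumed, illustrative value.
    return volume_gb / rate_gb_per_min

def failure_probability(volume_gb, k=0.01):
    # Failure likelihood rising exponentially with volume, kept in [0, 1]
    # via 1 - exp(-k*V); k is an assumed coefficient, not fitted data.
    return 1.0 - math.exp(-k * volume_gb)
```

Under such a model, doubling the migrated volume doubles the expected execution time but more than doubles the failure risk, which is why the paper's automation and reliability results matter for data-intensive research.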
Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps
and performance engineering have been growing trends over the past one to two
years, in no small part due to the rising importance of identifying
performance anomalies in the operations (Ops) of cloud and big data systems and
feeding these insights back to development (Dev). However, so far the research
community has treated software engineering, performance engineering, and cloud
computing mostly as individual research areas. We aimed to identify
opportunities for cross-community collaboration and to set the path for
long-lasting collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as PostDocs or Junior
Professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.
Towards an Automatic Microservices Manager for Hybrid Cloud Edge Environments
Cloud computing emerged to make computing resources easier to access, enabling
faster deployment of applications and services that benefit from the scalability provided by
the service providers. An exponential growth of the data volume received by the cloud
has been registered. This is because almost every device used in everyday
life is connected to the internet, sharing information on a global scale (e.g. smartwatches,
clocks, cars, industrial equipment). Increasing data volume results in increased
latency in client applications, and thus in the degradation of Quality of Service (QoS).
With these problems, hybrid systems were born by integrating cloud resources
with the various edge devices between the cloud and the edge, known as Fog/Edge computing. These
devices are very heterogeneous, with different resource capabilities (such as memory
and computational power), and geographically distributed.
Software architectures also evolved, and the microservice architecture emerged to make
application development more flexible and to increase scalability. The microservices
architecture consists of decomposing monolithic applications into small services, each
with a specific functionality, that can be independently developed, deployed and
scaled. Due to their small size, microservices are adequate for deployment on hybrid
Cloud/Edge infrastructures. However, the heterogeneity of those deployment locations
makes microservices' management and monitoring rather complex. Monitoring, in particular,
is essential when considering that microservices may be replicated and migrated
in the cloud/edge infrastructure.
The main problem this dissertation aims to address is building an automatic
microservices management system that can be deployed on hybrid cloud/fog
infrastructures. Such an automatic system will allow edge-enabled applications to
adapt their deployment at runtime in response to variations in workloads and available
computational resources. Towards this end, this work is a first step in integrating two existing
projects that, combined, may support such an automatic system. One project does the automatic
management of microservices but uses only a heavyweight monitor, Prometheus, as a cloud
monitor. The second project is a light adaptive monitor. This thesis integrates the light
monitor into the automatic manager of microservices.
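Neither project's code appears in the abstract; as a hypothetical illustration of what a "light adaptive monitor" can mean, the sketch below widens its sampling interval while a metric is stable and snaps back to fast sampling when the metric changes sharply, in contrast to fixed-interval scraping by a heavyweight monitor such as Prometheus. All class names and thresholds are assumptions:

```python
# Hypothetical light adaptive monitor: sample rarely when a metric is
# stable, quickly when it is volatile, to cut monitoring overhead.
class AdaptiveMonitor:
    def __init__(self, min_interval=1.0, max_interval=30.0, threshold=0.1):
        self.interval = min_interval
        self.min_interval = min_interval
        self.max_interval = max_interval
        self.threshold = threshold  # relative change counted as "active"
        self.last = None

    def observe(self, value):
        """Feed one metric sample; return the next sampling interval (s)."""
        if self.last is not None and self.last != 0:
            change = abs(value - self.last) / abs(self.last)
            if change > self.threshold:
                self.interval = self.min_interval   # volatile: sample fast
            else:
                # stable: back off exponentially, capped at max_interval
                self.interval = min(self.interval * 2, self.max_interval)
        self.last = value
        return self.interval
```

An automatic manager could feed container CPU or latency samples into such a monitor and use the shrinking interval as an early signal to trigger replication or migration decisions.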
System Abstractions for Scalable Application Development at the Edge
Recent years have witnessed an explosive growth of Internet of Things (IoT) devices, which collect or generate huge amounts of data. Given diverse device capabilities and application requirements, data processing takes place across a range of settings, from on-device to a nearby edge server/cloud and remote cloud. Consequently, edge-cloud coordination has been studied extensively from the perspectives of job placement, scheduling and joint optimization. Typical approaches focus on performance optimization for individual applications. This often requires domain knowledge of the applications, but also leads to application-specific solutions. Application development and deployment over diverse scenarios thus incur repetitive manual efforts. There are two overarching challenges to provide system-level support for application development at the edge. First, there is inherent heterogeneity at the device hardware level. The execution settings may range from a small cluster as an edge cloud to on-device inference on embedded devices, differing in hardware capability and programming environments. Further, application performance requirements vary significantly, making it even more difficult to map different applications to already heterogeneous hardware. Second, there are trends towards incorporating edge and cloud and multi-modal data. Together, these add further dimensions to the design space and increase the complexity significantly. In this thesis, we propose a novel framework to simplify application development and deployment over a continuum of edge to cloud. Our framework provides key connections between different dimensions of design considerations, corresponding to the application abstraction, data abstraction and resource management abstraction respectively. First, our framework masks hardware heterogeneity with abstract resource types through containerization, and abstracts away the application processing pipelines into generic flow graphs. 
Further, our framework supports a notion of degradable computing for application scenarios at the edge that are driven by multimodal sensory input. Next, as video analytics is the killer app of edge computing, we include a generic data management service between video query systems and a video store to organize video data at the edge. We propose a video data unit abstraction based on a notion of distance between objects in the video, quantifying the semantic similarity among video data. Last, considering concurrent application execution, our framework supports multi-application offloading with device-centric control, with a userspace scheduler service that wraps over the operating system scheduler.
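The thesis's actual distance metric is not specified in this abstract; one simple stand-in for semantic similarity between video data units is the Jaccard distance over the sets of object labels detected in each unit:

```python
# Illustrative stand-in metric (not the thesis's own): Jaccard distance
# between the object label sets of two video data units.
def jaccard_distance(objects_a, objects_b):
    """Return a distance in [0, 1]: 0.0 for identical label sets,
    1.0 when the two units share no detected objects."""
    a, b = set(objects_a), set(objects_b)
    if not a and not b:
        return 0.0          # two empty units are considered identical
    return 1.0 - len(a & b) / len(a | b)
```

A video store could cluster units whose pairwise distance falls below a threshold, so that a query for "person near car" only scans semantically similar segments.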
Challenges for the comprehensive management of cloud services in a PaaS framework
The 4CaaSt project aims at developing a PaaS framework that enables flexible definition, marketing, deployment and management of Cloud-based services and applications. The major innovations proposed by 4CaaSt are the blueprint and its lifecycle management, a one-stop shop for Cloud services, and PaaS-level resource management featuring elasticity. 4CaaSt also provides a portfolio of ready-to-use Cloud-native services and Cloud-aware immigrant technologies.
Migrating to Cloud-Native Architectures Using Microservices: An Experience Report
Migration to the cloud has been a popular topic in industry and academia in
recent years. Despite many benefits that the cloud presents, such as high
availability and scalability, most of the on-premise application architectures
are not ready to fully exploit the benefits of this environment, and adapting
them to it is a non-trivial task. Microservices have appeared
recently as a novel architectural style that is native to the cloud. These
cloud-native architectures can facilitate migrating on-premise architectures to
fully benefit from cloud environments, because non-functional attributes,
like scalability, are inherent in this style. Most existing approaches to cloud
migration do not consider cloud-native architectures as their
first-class citizens. As a result, the final product may not meet its primary
drivers for migration. In this paper, we report our experience and
lessons learned in an ongoing project on migrating a monolithic on-premise
software architecture to microservices. We concluded that microservices are not
a one-size-fits-all solution, as they introduce new complexities to the system, and
many factors, such as distribution complexities, should be considered before
adopting this style. However, if adopted in a context that needs high
flexibility in terms of scalability and availability, they can deliver their
promised benefits.