Microservices-based IoT Applications Scheduling in Edge and Fog Computing: A Taxonomy and Future Directions
Edge and Fog computing paradigms utilise distributed, heterogeneous and
resource-constrained devices at the edge of the network for efficient
deployment of latency-critical and bandwidth-hungry IoT application services.
Moreover, MicroService Architecture (MSA) is increasingly adopted to keep up
with the rapid development and deployment needs of the fast-evolving IoT
applications. Due to the fine-grained modularity of the microservices along
with their independently deployable and scalable nature, MSA exhibits great
potential in harnessing both Fog and Cloud resources to meet diverse QoS
requirements of the IoT application services, thus giving rise to novel
paradigms like Osmotic computing. However, efficient and scalable scheduling
algorithms are required to exploit these characteristics of MSA while
overcoming novel challenges introduced by the architecture. To this end, we
present a comprehensive taxonomy of recent literature on microservices-based
IoT applications scheduling in Edge and Fog computing environments.
Furthermore, we organise multiple taxonomies to capture the main aspects of the
scheduling problem, analyse and classify related works, identify research gaps
within each category, and discuss future research directions.
Comment: 35 pages, 10 figures, submitted to ACM Computing Surveys
Managing Service-Heterogeneity using Osmotic Computing
Computational resource provisioning that is closer to a user is becoming
increasingly important, with a rise in the number of devices making continuous
service requests and with the significant recent take up of latency-sensitive
applications, such as streaming and real-time data processing. Fog computing
provides a solution to such types of applications by bridging the gap between
the user and public/private cloud infrastructure via the inclusion of a "fog"
layer. Such an approach can reduce the overall processing latency, but issues
remain around redundancy, cost-effective utilization of such computing
infrastructure, and handling services on the basis of differences in their
characteristics. This difference in service characteristics, arising from
variations in required computational resources and processes, is termed
service heterogeneity. A potential solution to these issues is the
use of Osmotic Computing -- a recently introduced paradigm that allows division
of services on the basis of their resource usage, based on parameters such as
energy, load, processing time on a data center vs. a network edge resource.
Service provisioning can then be divided across different layers of a
computational infrastructure, from edge devices, in-transit nodes, and a data
center, and supported through an Osmotic software layer. In this paper, a
fitness-based Osmosis algorithm is proposed to provide support for osmotic
computing by making more effective use of existing Fog server resources. The
proposed approach is capable of efficiently distributing and allocating
services by following the principle of osmosis. The results are presented using
numerical simulations demonstrating gains in terms of lower allocation time and
a higher probability of services being handled with high resource utilization.
Comment: 7 pages, 4 figures, International Conference on Communication,
Management and Information Technology (ICCMIT 2017), Warsaw, Poland, 3-5
April 2017, http://www.iccmit.net/ (Best Paper Award)
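The fitness-driven allocation described above can be sketched in a few lines. The sketch below is only an illustration of the osmosis principle (services flow towards the node where a weighted fitness of free capacity, load, and latency is highest); the specific fitness weights, field names, and the `osmotic_assign` helper are assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float   # free compute units on this Fog server / data center
    load: float       # current utilization in [0, 1]
    latency: float    # expected processing latency (ms)

@dataclass
class Service:
    name: str
    demand: float     # compute units the service requires

def fitness(node: Node, svc: Service) -> float:
    # Higher fitness = more attractive target. A node that cannot host
    # the service at all gets -inf so it is never chosen.
    if node.capacity < svc.demand:
        return float("-inf")
    return (1.0 - node.load) * node.capacity - 0.1 * node.latency

def osmotic_assign(services, nodes):
    """Greedily let each service 'osmose' to its highest-fitness node."""
    placement = {}
    for svc in sorted(services, key=lambda s: s.demand, reverse=True):
        best = max(nodes, key=lambda n: fitness(n, svc))
        if fitness(best, svc) == float("-inf"):
            continue  # no node can host this service
        placement[svc.name] = best.name
        original_capacity = best.capacity
        best.capacity -= svc.demand
        # Update load relative to the node's capacity before allocation.
        best.load = min(1.0, best.load + svc.demand / original_capacity)
    return placement
```

For example, a large service that exceeds a small Fog node's free capacity spills over to the data center, while a small service stays at the lower-latency Fog node.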
Monitoring in Hybrid Cloud-Edge Environments
The increasing number of mobile and IoT (Internet of Things) devices accessing cloud
services contributes to a surge of requests towards the Cloud and consequently, higher
latencies. This is aggravated by the possible congestion of the communication networks
connecting the end devices and remote cloud datacenters, due to the large data volume
generated at the Edge (e.g. in the domains of smart cities, smart cars, etc.). One solution
for this problem is the creation of hybrid Cloud/Edge execution platforms composed of
computational nodes located in the periphery of the system, near data producers and consumers,
as a way to complement the cloud resources. These edge nodes offer computation
and data storage resources to accommodate local services in order to ensure rapid responses
to clients (enhancing the perceived quality of service) and to filter data, reducing
the traffic volume towards the Cloud. Usually these nodes (e.g. ISP access points and
on-premises servers) are heterogeneous, geographically distributed, and resource-restricted
(including in their communication networks), which increases the complexity of their management.
At the application level, the microservices paradigm, represented by applications composed
of small, loosely coupled services, offers an adequate and flexible solution to design
applications that may explore the limited computational resources in the Edge.
Nevertheless, the inherently difficult management of microservices within such a complex
infrastructure demands an agile and lightweight monitoring system that takes into
account the Edge's limitations, which goes beyond traditional monitoring solutions in the
Cloud. Monitoring in these new domains is not a simple process since it requires supporting
the elasticity of the monitored system, the dynamic deployment of services and,
moreover, doing so without overloading the infrastructure’s resources with its own computational
requirements and generated data. Towards this goal, this dissertation presents
a hybrid monitoring architecture where the heavier (resource-wise) components reside
in the Cloud while the lighter (computationally less demanding) components reside in
the Edge. The architecture provides relevant monitoring functionalities such as metrics’
acquisition, their analysis and mechanisms for real-time alerting. The objective is the efficient use of computational resources in the infrastructure while guaranteeing an agile
delivery of monitoring data where and when it is needed.
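The split the dissertation describes, with lightweight components at the Edge and heavier analysis in the Cloud, can be illustrated with a minimal edge-side agent: it samples metrics locally, raises real-time alerts on the spot, and forwards only compact aggregates upstream to reduce traffic. The `EdgeAgent` class, its thresholds, and metric names are hypothetical illustrations, not the dissertation's actual components.

```python
import statistics
from collections import deque

class EdgeAgent:
    """Lightweight edge-side monitoring component: samples metrics locally
    and forwards only small aggregates towards the cloud-side analyzer."""

    def __init__(self, window: int = 10, alert_threshold: float = 0.9):
        self.samples = deque(maxlen=window)   # bounded memory footprint
        self.alert_threshold = alert_threshold

    def record(self, cpu_util: float):
        self.samples.append(cpu_util)

    def local_alert(self) -> bool:
        # Real-time alerting stays at the Edge to keep reaction latency low.
        return bool(self.samples) and self.samples[-1] > self.alert_threshold

    def aggregate(self) -> dict:
        # Only this small summary crosses the network towards the Cloud,
        # instead of every raw sample.
        return {
            "mean": statistics.mean(self.samples),
            "max": max(self.samples),
            "n": len(self.samples),
        }
```

The bounded `deque` and summary-only uploads reflect the Edge constraints the text emphasizes: the monitoring system must not overload the infrastructure with its own data.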
Real-Time QoS Monitoring and Anomaly Detection on Microservice-based Applications in Cloud-Edge Infrastructure
Ph.D. Thesis.
Microservices have emerged as a new approach for developing and deploying cloud
applications that require higher levels of agility, scale, and reliability. A microservice-based
cloud application architecture advocates decomposition of monolithic application
components into independent software components called "microservices". As the
independent microservices can be developed, deployed, and updated independently of
each other, it leads to complex run-time performance monitoring and management
challenges. The deployment environment for microservices in multi-cloud environments
is very complex as there are numerous components running in heterogeneous
environments (VM/container) and communicating frequently with each other using
REST-based/REST-less APIs. In some cases, multiple components can also be executed
inside a VM/container making any failure or anomaly detection very complicated.
It is necessary to monitor the performance variation of all the service components
to detect any reason for failure.
Microservice and container architecture allows designing loosely coupled services and
running them in a lightweight runtime environment for more efficient scaling. Thus,
container-based microservice deployment is now the standard model for hosting cloud
applications across industries. While the strong scalability of this model opens the
door to further optimizations in both application structure and performance, it also
adds an additional level of complexity to monitoring application performance. A
performance monitoring system can itself lead to severe application outages if it
cannot quickly and successfully detect failures and localize their causes. Machine
learning-based techniques have been applied to detect anomalies in microservice-based
cloud applications. Existing research works have used different tracking algorithms
to search for the root cause of observed anomalous behaviour.
However, linking the observed failures of an application with their root causes by the
use of these techniques is still an open research problem.
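As a concrete baseline for the anomaly detection the thesis targets, a deliberately simple statistical detector over response-time metrics is sketched below. The z-score rule and the `detect_anomalies` helper are illustrative assumptions; they are not the thesis's detection model, which relies on machine learning techniques.

```python
import statistics

def detect_anomalies(latencies_ms, threshold=3.0):
    """Flag response-time samples whose z-score exceeds `threshold`.

    A simple statistical baseline for anomaly detection on a single
    metric stream; production systems would use richer models and
    correlate across services to localize root causes."""
    mean = statistics.mean(latencies_ms)
    stdev = statistics.pstdev(latencies_ms)
    if stdev == 0:
        return []  # perfectly flat series: nothing stands out
    return [i for i, x in enumerate(latencies_ms)
            if abs(x - mean) / stdev > threshold]
```

Even this toy detector shows why root-cause localization remains open: it can flag *when* a service's latency is anomalous, but says nothing about *which* upstream component caused it.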
Osmotic computing is a new IoT application programming paradigm driven by the
significant increase in resource capacity/capability at the network edge, along
with support for data transfer protocols that enable such resources to interact more
seamlessly with cloud-based services. Much of the difficulty in Quality of Service (QoS)
and performance monitoring of IoT applications in an osmotic computing environment
is due to the massive scale and heterogeneity (IoT + edge + cloud) of computing
environments.
To handle monitoring and anomaly detection of microservices in cloud and edge
datacenters, this thesis presents multilateral research towards monitoring and anomaly
detection of microservice-based applications' performance in cloud-edge infrastructure.
The key contributions of this thesis are as follows:
• It introduces a novel system, Multi-microservices Multi-virtualization Multi-cloud
monitoring (M3), that provides a holistic approach to monitor the performance
of microservice-based application stacks deployed across multiple cloud
data centers.
• A framework for Monitoring, Anomaly Detection and Localization System (MADLS),
which utilizes a simplified approach that depends on commonly available metrics,
offering a simplified deployment environment for the developer.
• Developing a unified monitoring model for cloud-edge that provides an IoT application
administrator with detailed QoS information related to microservices
deployed across cloud and edge datacenters.
Royal Embassy of Saudi Arabia Cultural Bureau in London, government of Saudi Arabia
Quality of Service-aware matchmaking for adaptive microservice-based applications
Applications that make use of the Internet of Things (IoT) can capture an enormous amount of raw data from sensors and actuators, which is frequently transmitted to cloud data centers for processing and analysis. However, due to varying and unpredictable data generation rates and network latency, this can lead to a performance bottleneck for data processing. With the emergence of fog and edge computing hosted microservices, data processing can be moved towards the network edge. We propose a new method for continuous deployment and adaptation of multi-tier applications along edge, fog, and cloud tiers by considering resource properties and non-functional requirements (e.g., operational cost, response time, and latency). The proposed approach supports matchmaking of application and Cloud-to-Things infrastructure based on a subgraph pattern matching (P-Match) technique. Results show that the proposed approach improves resource utilization and overall application Quality of Service. The approach can also be integrated into software engineering workbenches for the creation and deployment of cloud-native applications, enabling partitioning of an application across the multiple infrastructure tiers outlined above.
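The matchmaking idea, mapping an application graph onto an infrastructure graph so that requirements and communication links are both respected, can be sketched with a small backtracking search. This is only a toy approximation of subgraph pattern matching; the paper's P-Match technique and the field names used here (`cpu`, `latency`, `max_latency`) are assumptions for illustration.

```python
def match(app, edges, infra, links):
    """Map each service in `app` to a node in `infra` so that CPU and
    latency requirements are met, and every pair of communicating
    services lands on the same or directly linked nodes."""
    link_set = {frozenset(l) for l in links}
    services = list(app)

    def connected(a, b):
        return a == b or frozenset((a, b)) in link_set

    def fits(svc, node):
        return (infra[node]["cpu"] >= app[svc]["cpu"]
                and infra[node]["latency"] <= app[svc]["max_latency"])

    def backtrack(i, mapping):
        if i == len(services):
            return dict(mapping)
        svc = services[i]
        for node in infra:
            if not fits(svc, node):
                continue
            # Every already-placed neighbour must be reachable from `node`.
            neighbours = [b if a == svc else a
                          for a, b in edges if svc in (a, b)]
            if any(o in mapping and not connected(node, mapping[o])
                   for o in neighbours):
                continue
            mapping[svc] = node
            result = backtrack(i + 1, mapping)
            if result is not None:
                return result
            del mapping[svc]   # undo and try the next candidate node
        return None

    return backtrack(0, {})
```

A latency-sensitive front-end service naturally matches an edge node, while a resource-hungry back-end falls through to the cloud tier, which is exactly the multi-tier partitioning the abstract describes.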
Contribution to stimulating the use of Cloud Computing solutions: design of a Cloud service intermediator to foster the use of trustworthy, interoperable and law-compliant distributed digital ecosystems. Application in multi-cloud environments.
184 p. The objective of the research work presented in this thesis is to make it easier for developers and operators of applications deployed across multiple Clouds to discover and manage the different Cloud services, supporting their reuse and combination, in order to generate a network of interoperable services that comply with the law and whose service-level agreements can be continuously evaluated. One of the contributions of this thesis is the design and development of a Cloud services broker called ACSmI (Advanced Cloud Services meta-Intermediator). ACSmI makes it possible to evaluate compliance with service-level agreements, including legislation. ACSmI also provides an intermediate abstraction layer for Cloud services through which developers can easily access a catalogue of accredited services compatible with the established non-functional requirements. In addition, this research work proposes the characterization of multi-Cloud native applications and the concept of "extended DevOps", conceived specifically for this type of application. The "extended DevOps" concept aims to solve some of the current problems in the design, development, deployment and adaptation of multi-Cloud applications, providing a novel, extended DevOps approach that adapts current DevOps practices to the multi-Cloud paradigm.
CloudOps: Towards the Operationalization of the Cloud Continuum: Concepts, Challenges and a Reference Framework
The current trend of developing highly distributed, context-aware, heterogeneous, compute-intensive and data-sensitive applications is changing the boundaries of cloud computing. Encouraged by the growing IoT paradigm and with flexible edge devices available, an ecosystem of resources, ranging from high-density compute and storage to very lightweight embedded computers running on batteries or solar power, is available to DevOps teams from what is known as the Cloud Continuum. In this dynamic context, manageability is key, as well as controlled operations and resource monitoring for handling anomalies. Unfortunately, the operation and management of such heterogeneous computing environments (including edge, cloud and network services) is complex, and operators face challenges such as the continuous optimization and autonomous (re-)deployment of context-aware stateless and stateful applications where, however, they must ensure service continuity while anticipating potential failures in the underlying infrastructure. In this paper, we propose a novel CloudOps workflow (extending the traditional DevOps pipeline), proposing techniques and methods for applications' operators to fully embrace the possibilities of the Cloud Continuum. Our approach will support DevOps teams in the operationalization of the Cloud Continuum. Secondly, we provide an extensive explanation of the scope, possibilities and future of CloudOps.
This research was funded by the European project PIACERE (Horizon 2020 Research and Innovation Programme, under grant agreement No. 101000162)
MicroFog: A Framework for Scalable Placement of Microservices-based IoT Applications in Federated Fog Environments
MicroService Architecture (MSA) is gaining rapid popularity for developing
large-scale IoT applications for deployment within distributed and
resource-constrained Fog computing environments. As a cloud-native application
architecture, the true power of microservices comes from their loosely coupled,
independently deployable and scalable nature, enabling distributed placement
and dynamic composition across federated Fog and Cloud clusters. Thus, it is
necessary to develop novel microservice placement algorithms that utilise these
microservice characteristics to improve the performance of the applications.
However, existing Fog computing frameworks lack support for integrating such
placement policies due to their shortcomings in multiple areas, including MSA
application placement and deployment across multi-fog multi-cloud environments,
dynamic microservice composition across multiple distributed clusters,
scalability of the framework, support for deploying heterogeneous microservice
applications, etc. To this end, we design and implement MicroFog, a Fog
computing framework providing a scalable, easy-to-configure control engine that
executes placement algorithms and deploys applications across federated Fog
environments. Furthermore, MicroFog provides a sufficient abstraction over
container orchestration and dynamic microservice composition. The framework is
evaluated using multiple use cases. The results demonstrate that MicroFog is a
scalable, extensible and easy-to-configure framework that can integrate and
evaluate novel placement policies for deploying microservice-based applications
within multi-fog multi-cloud environments. We integrate multiple microservice
placement policies to demonstrate MicroFog's ability to support horizontally
scaled placement, thus reducing the application service response time by up to
54%.
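A placement policy of the kind MicroFog is designed to host can be sketched as a simple greedy heuristic: try the lowest-latency cluster with spare capacity first, spilling over towards the Cloud. This is purely illustrative of a pluggable policy; MicroFog is a framework for integrating such algorithms, and the `place` function and its cluster fields are assumptions, not part of the framework's API.

```python
def place(microservices, clusters):
    """Greedy microservice placement across federated clusters:
    prefer the lowest-latency cluster that still has free capacity."""
    # Fog clusters (low latency) are tried before Cloud clusters.
    order = sorted(clusters, key=lambda c: c["latency"])
    placement = {}
    for svc, demand in microservices.items():
        for cluster in order:
            if cluster["free"] >= demand:
                cluster["free"] -= demand
                placement[svc] = cluster["name"]
                break
        else:
            placement[svc] = None  # no cluster can host this microservice
    return placement
```

Because microservices are independently deployable, such a policy can scatter one application's components across Fog and Cloud clusters, which is exactly the distributed placement and dynamic composition the abstract highlights.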