283 research outputs found
Monitoring in Hybrid Cloud-Edge Environments
The increasing number of mobile and IoT (Internet of Things) devices accessing cloud
services contributes to a surge of requests towards the Cloud and, consequently, higher
latencies. This is aggravated by the possible congestion of the communication networks
connecting the end devices and remote cloud datacenters, due to the large data volume
generated at the Edge (e.g. in the domains of smart cities, smart cars, etc.). One solution
to this problem is the creation of hybrid Cloud/Edge execution platforms composed of
computational nodes located in the periphery of the system, near data producers and consumers,
as a way to complement the cloud resources. These edge nodes offer computation
and data storage resources to accommodate local services, in order to ensure rapid responses
to clients (enhancing the perceived quality of service) and to filter data, reducing
the traffic volume towards the Cloud. Usually these nodes (e.g. ISP access points and on-premises
servers) are heterogeneous, geographically distributed, and resource-restricted
(including in their communication networks), which increases the complexity of their management.
At the application level, the microservices paradigm, represented by applications composed
of small, loosely coupled services, offers an adequate and flexible way to design
applications that can exploit the limited computational resources at the Edge.
Nevertheless, the inherently difficult management of microservices within such a complex
infrastructure demands an agile and lightweight monitoring system that takes into
account the Edge's limitations, which goes beyond traditional monitoring solutions in the
Cloud. Monitoring in these new domains is not a simple process, since it requires supporting
the elasticity of the monitored system and the dynamic deployment of services and,
moreover, doing so without overloading the infrastructure's resources with its own computational
requirements and generated data. Towards this goal, this dissertation presents
a hybrid monitoring architecture where the heavier (resource-wise) components reside
in the Cloud while the lighter (computationally less demanding) components reside in
the Edge. The architecture provides relevant monitoring functionalities such as metric
acquisition, analysis, and mechanisms for real-time alerting. The objective is the efficient use of the infrastructure's computational resources while guaranteeing an agile
delivery of monitoring data where and when it is needed.
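The Cloud/Edge split described above can be sketched in a few lines. This is a minimal illustration, not the dissertation's implementation: the names `EdgeAgent`, `CloudCollector`, the five-sample window, and the 90% CPU alert threshold are all invented for the example. The point it demonstrates is the division of labor: the edge component buffers and pre-aggregates raw samples, so only compact summaries (and alerts) travel towards the Cloud.

```python
import statistics

class CloudCollector:
    """Heavier (Cloud-side) component: stores summaries and raises alerts."""
    def __init__(self, cpu_alert_threshold=90.0):
        self.cpu_alert_threshold = cpu_alert_threshold
        self.summaries = []
        self.alerts = []

    def ingest(self, summary):
        self.summaries.append(summary)
        if summary["max"] > self.cpu_alert_threshold:
            self.alerts.append(
                f"{summary['node']}: cpu peaked at {summary['max']:.1f}%")

class EdgeAgent:
    """Lighter (Edge-side) component: buffers raw samples, ships aggregates."""
    def __init__(self, node, collector, window=5):
        self.node = node
        self.collector = collector
        self.window = window
        self.buffer = []

    def sample(self, cpu_percent):
        self.buffer.append(cpu_percent)
        if len(self.buffer) >= self.window:  # filter at the Edge:
            self.flush()                     # window samples -> 1 summary

    def flush(self):
        self.collector.ingest({
            "node": self.node,
            "mean": statistics.mean(self.buffer),
            "max": max(self.buffer),
            "count": len(self.buffer),
        })
        self.buffer.clear()

collector = CloudCollector(cpu_alert_threshold=90.0)
agent = EdgeAgent("edge-node-1", collector)
for cpu in [40, 55, 95, 60, 50]:  # one window of raw samples at the Edge
    agent.sample(cpu)

print(len(collector.summaries))  # 1 summary crossed the network, not 5 points
print(collector.alerts)
```

Five raw data points become one summary towards the Cloud, which is exactly the traffic-reduction argument the abstract makes for edge-side filtering.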
CoScal: Multi-faceted Scaling of Microservices with Reinforcement Learning
The emerging trend of moving from monolithic applications to microservices has raised new performance challenges in cloud computing environments. Compared with traditional monolithic applications, microservices are lightweight, fine-grained, and must be executed in a shorter time. Efficient scaling approaches are required to ensure microservices' system performance under diverse workloads with strict Quality of Service (QoS) requirements and to optimize resource provisioning. To solve this problem, we investigate the trade-offs between the dominant scaling techniques, including horizontal scaling, vertical scaling, and brownout, in terms of execution cost and response time. We first present a prediction algorithm based on gated recurrent units to accurately predict workloads and assist in achieving efficient scaling. Further, we propose a multi-faceted scaling approach using reinforcement learning, called CoScal, to learn the scaling techniques efficiently. The proposed CoScal approach takes full advantage of data-driven decisions and improves system performance in terms of communication cost and delay. We validate our proposed solution by implementing a containerized microservice prototype system and evaluating it with two microservice applications. The extensive experiments demonstrate that CoScal reduces response time by 19%-29% and decreases the connection time of services by 16% when compared with state-of-the-art scaling techniques for the Sock Shop application. CoScal also improves the number of successful transactions by 6%-10% for Stan's Robot Shop application.
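The idea of learning which scaling technique to apply per workload level can be sketched with plain tabular Q-learning. This is only an illustrative toy, not CoScal itself: the three states, the reward table, and the bandit-style one-step update are invented here, whereas the paper couples a GRU workload predictor with a richer reinforcement-learning formulation.

```python
import random

ACTIONS = ["horizontal", "vertical", "brownout"]
STATES = ["low", "medium", "high"]  # discretized predicted workload

def toy_reward(state, action):
    """Invented reward: cheap brownout pays off at low load,
    horizontal scaling at high load, vertical in between."""
    table = {
        ("low", "brownout"): 1.0, ("low", "vertical"): 0.2, ("low", "horizontal"): -0.5,
        ("medium", "vertical"): 1.0, ("medium", "horizontal"): 0.4, ("medium", "brownout"): -0.2,
        ("high", "horizontal"): 1.0, ("high", "vertical"): 0.3, ("high", "brownout"): -1.0,
    }
    return table[(state, action)]

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)  # stand-in for the workload predictor's output
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)  # explore
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])  # exploit
        # one-step update toward the observed reward (bandit-style)
        q[(s, a)] += alpha * (toy_reward(s, a) - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # learned greedy scaling action per workload level
```

After training, the greedy policy recovers the intended mapping (brownout at low load, vertical scaling at medium, horizontal at high), which mirrors the trade-off the paper exploits between the three techniques.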
6G White Paper on Edge Intelligence
In this white paper we provide a vision for 6G Edge Intelligence. Moving towards 5G and beyond to the future 6G networks, intelligent solutions utilizing data-driven machine learning and artificial intelligence become crucial for several real-world applications, including but not limited to more efficient manufacturing, novel personal smart device environments and experiences, urban computing and autonomous traffic settings. We present edge computing, along with other 6G enablers, as a key component to establish the future 2030 intelligent Internet technologies, as shown in this series of 6G White Papers. In this white paper, we focus on the domains of edge computing infrastructure and platforms, data and edge network management, software development for the edge, and real-time and distributed training of ML/AI algorithms, along with security, privacy, pricing, and end-user aspects. We discuss the key enablers and challenges and identify the key research questions for the development of Intelligent Edge services. As a main outcome of this white paper, we envision a transition from the Internet of Things to the Intelligent Internet of Intelligent Things and provide a roadmap for the development of the 6G Intelligent Edge.
Adoption of microservices in industrial information systems: a systematic literature review
The internet, digitalization and globalization have transformed customer expectations and the way business is done. Product life cycles have shortened, products need to be customizable, and production needs to be scalable. These changes also affect industrial operations. Quick technological advancements have increased the role of software in industrial facilities. The software in use has to enable non-traditional flexibility, interoperability and scalability.
Microservices-based architecture has been seen as the state-of-the-art approach for developing flexible, interoperable and scalable software. Microservices have been applied to cloud-native applications for consumers with enormous success. The goal of this thesis is to analyze how to adopt microservices in industrial information systems. General information and characteristics of microservices are provided as background, and a systematic literature review is conducted to answer the research problem. Material for the systematic literature review was found in multiple digital libraries, and 17 scientific papers matched the set inclusion criteria. The material was then analyzed with an extensively documented method.
The thesis brought together the available publications on the topic. Guidelines for adopting microservices in industrial information systems were derived based on the analysis. Real-time applications need special attention when using a microservices architecture, the developers need to use proper tools for the tasks, and the developers and users need to be properly introduced to service-oriented systems. Based on this thesis, microservices seem like a suitable approach for developing flexible industrial information systems that satisfy the new business requirements.
Enhancing and integration of security testing in the development of a microservices environment
In the last decade, web application development has been moving toward the adoption of Service-Oriented Architecture (SOA). In line with this trend, Software as a Service (SaaS) and Serverless providers are embracing DevOps with the latest tools to facilitate the creation, maintenance and scalability of microservices system configuration.
Even within this trend, security remains an open point that is too often underestimated. Many companies still think of security as a set of controls that have to be checked before the software is used in production. In reality, security needs to be taken into account along the entire Software Development Lifecycle (SDL).
In this thesis, state-of-the-art security recommendations for microservice architecture are reviewed, and useful improvements are given. The main goal is for security to become better integrated into a company's workflow, increasing security awareness and simplifying the integration of security measures throughout the SDL.
With this background, best practices and recommendations are compared with what companies are currently doing to secure their service-oriented infrastructures. The assumption that there is still much ground to cover security-wise still stands. Lastly, a small case study is presented and used as proof of how small and dynamic startups can be the front runners of high cybersecurity standards. The results of the analysis show that it is easier to integrate up-to-date security measures in a small company.
Towards the Softwarization of Content Delivery Networks for Component and Service Provisioning
Content Delivery Networks (CDNs) are common systems nowadays to deliver content (e.g. Web pages, videos) to geographically distributed end-users over the Internet. Leveraging geographically distributed replica servers, CDNs can easily help to meet the required Quality of Service (QoS) in terms of content quality and delivery time. Recently, the dominating surge in demand for rich and premium content has encouraged CDN providers to provision value-added services (VAS) in addition to the basic services. While video streaming is an example of basic CDN services, VASs cover more advanced services such as media management.
Network softwarization relies on programmability properties to facilitate the deployment and management of network functionalities. It brings about several benefits such as scalability, adaptability, and flexibility in the provisioning of network components and services. Technologies, such as Network Functions Virtualization (NFV) and Software Defined Networking (SDN) are its key enablers.
There are several challenges related to the component and service provisioning in CDNs.
On the architectural front, a first challenge is the extension of CDN coverage through the on-the-fly deployment of components in new locations; another is the timely upgrade of CDN components, because traditionally they are deployed statically as physical building blocks. Yet another architectural challenge is the dynamic composition of the middle-boxes required for CDN VAS provisioning, because existing SDN frameworks lack features to support the dynamic chaining of the application-level middle-boxes that are essential building blocks of CDN VASs. On the algorithmic front, a challenge is the optimal placement of CDN VAS middle-boxes in a dynamic manner, as CDN VASs have an unknown end-point prior to placement.
This thesis relies on network softwarization to address key architectural and algorithmic challenges related to component and service provisioning in CDNs. To tackle the first challenge, we propose an architecture based on NFV and microservices for on-the-fly CDN component provisioning, including deployment and upgrading. To address the second challenge, we propose an architecture for on-the-fly provisioning of VASs in CDNs using NFV and SDN technologies. The proposed architecture reduces the content delivery time by introducing features for in-network caching. For the algorithmic challenge, we study and model the problem of dynamic placement and chaining of middle-boxes (implemented as Virtual Network Functions (VNFs)) for CDN VASs as an Integer Linear Programming (ILP) problem, with the objective of minimizing cost while respecting the QoS. To increase the problem's tractability, we propose and validate several heuristics.
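The flavor of the placement problem above (minimize cost subject to a QoS constraint) can be shown on a toy instance. This sketch is not the thesis's ILP model or its heuristics: the three nodes, their per-VNF hosting costs, and the additive per-hop latency model are invented, and the solver below is a brute-force baseline that only works because the instance is tiny; the heuristics the thesis proposes exist precisely because exhaustive search does not scale.

```python
import itertools

NODES = {
    # node: (hosting cost per VNF, latency contribution per hop in ms)
    "edge-a": (5.0, 2.0),
    "edge-b": (4.0, 3.0),
    "core":   (2.0, 10.0),
}

def place_chain(chain_len, latency_budget_ms):
    """Exhaustively place a chain of `chain_len` VNFs on NODES,
    minimizing total hosting cost while keeping the summed per-hop
    latency within the QoS budget."""
    best = None
    for combo in itertools.product(NODES, repeat=chain_len):
        cost = sum(NODES[n][0] for n in combo)
        latency = sum(NODES[n][1] for n in combo)
        if latency <= latency_budget_ms and (best is None or cost < best[0]):
            best = (cost, combo)
    return best  # (cost, placement) or None if infeasible

print(place_chain(2, latency_budget_ms=8.0))  # (8.0, ('edge-b', 'edge-b'))
```

With a tight 8 ms budget the cheap core node is excluded (its 10 ms hop already exceeds the budget), so the optimum pays more to stay at the edge; relaxing the budget lets the placement fall back to the cheaper core. That tension between cost and QoS is the objective/constraint structure of the ILP.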
DevOps in practice : A multiple case study of five companies
Context: DevOps is considered important for the ability to frequently and reliably update a system in an operational state. DevOps presumes cross-functional collaboration and automation between software development and operations. DevOps adoption and implementation in companies is non-trivial due to the required changes in technical, organisational and cultural aspects. Objectives: This exploratory study presents detailed descriptions of how DevOps is implemented in practice. The context of our empirical investigation is web application and service development in small and medium sized companies. Method: A multiple-case study was conducted in five different development contexts with successful DevOps implementations, in which its benefits, such as quick releases and minimal deployment errors, had been achieved. Data was mainly collected through interviews with 26 practitioners and observations made at the companies. Data was analysed by first coding each case individually using a set of predefined themes and thereafter performing a cross-case synthesis. Results: Our analysis yielded the following results: (i) the software development team attaining ownership of and responsibility for deploying software changes in production is crucial in DevOps; (ii) toolchain usage and support in deployment pipeline activities accelerate the delivery of software changes, bug fixes and the handling of production incidents; (iii) the delivery speed to production is affected by context factors, such as manual approvals by the product owner; (iv) a steep learning curve for new skills is experienced by both software developers and operations staff, who also have to cope with working under pressure. Conclusion: Our findings contribute to the overall understanding of the DevOps concept, its practices and its perceived impacts, particularly in small and medium sized companies. We discuss two practical implications of the results. Peer reviewed.
HPM-Frame: A Decision Framework for Executing Software on Heterogeneous Platforms
Heterogeneous computing is one of the most important computational solutions
to meet rapidly increasing demands on system performance. It typically allows
the main flow of applications to be executed on a CPU while the most
computationally intensive tasks are assigned to one or more accelerators, such
as GPUs and FPGAs. The refactoring of systems for execution on such platforms
is highly desired but also difficult to perform, mainly due to the inherent
increase in software complexity. After exploration, we have identified a
current need for a systematic approach that supports engineers in the
refactoring process -- from CPU-centric applications to software that is
executed on heterogeneous platforms. In this paper, we introduce a decision
framework that assists engineers in the task of refactoring software to
incorporate heterogeneous platforms. It covers the software engineering
lifecycle through five steps, consisting of questions to be answered in order
to successfully address aspects that are relevant for the refactoring
procedure. We evaluate the feasibility of the framework in two ways. First, we
capture the practitioner's impressions, concerns and suggestions through a
questionnaire. Then, we conduct a case study showing the step-by-step
application of the framework using a computer vision application in the
automotive domain. (Manuscript submitted to the Journal of Systems and Software.)