Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps
and performance engineering have been growing trends over the past one to two
years, in no small part due to the rising importance of identifying
performance anomalies in the operations (Ops) of cloud and big data systems and
feeding these back to development (Dev). However, so far the research
community has treated software engineering, performance engineering, and cloud
computing mostly as separate research areas. We aimed to identify
opportunities for cross-community collaboration and to set the path for
long-lasting collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as PostDocs or Junior
Professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.
Kuksa: Self-Adaptive Microservices in Automotive Systems
In pervasive dynamic environments, vehicles connect to other objects to send
operational data and receive updates so that vehicular applications can provide
services to users on demand. Automotive systems should be self-adaptive so
that they can make real-time decisions based on changing operating
conditions. Emerging solutions, such as microservices, could improve
self-adaptation capabilities and ensure higher levels of quality and performance in
many domains. We employed a real-world automotive platform called Eclipse Kuksa
to propose a framework based on microservices architecture to enhance the
self-adaptation capabilities of automotive systems for runtime data analysis.
To evaluate the designed solution, we conducted an experiment in an automotive
laboratory setting where our solution was implemented as a microservice-based
adaptation engine and integrated with other Eclipse Kuksa components. The
results of our study indicate the importance of design trade-offs for the
satisfaction levels of the quality requirements of each microservice and of the
whole system for the optimal performance of an adaptive system at runtime.
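An adaptation engine of the kind described above typically follows a MAPE-K-style loop (Monitor, Analyze, Plan, Execute). The following is a minimal sketch of such a loop; all names (`AdaptationEngine`, the latency threshold, the replica-scaling action) are hypothetical illustrations, not the actual Eclipse Kuksa implementation.

```python
# Minimal MAPE-K-style adaptation loop, sketched for illustration only.
# The real Eclipse Kuksa adaptation engine is not reproduced here.

class AdaptationEngine:
    def __init__(self, target_latency_ms=100, max_replicas=5):
        self.target = target_latency_ms
        self.max_replicas = max_replicas
        self.replicas = 1          # current number of service instances

    def monitor(self, latency_ms):
        """Collect one runtime observation (Monitor phase)."""
        return {"latency_ms": latency_ms}

    def analyze(self, observation):
        """Check whether the quality requirement is violated (Analyze phase)."""
        return observation["latency_ms"] > self.target

    def plan(self, violated):
        """Choose an adaptation action (Plan phase)."""
        if violated and self.replicas < self.max_replicas:
            return "scale_out"
        if not violated and self.replicas > 1:
            return "scale_in"
        return "no_op"

    def execute(self, action):
        """Apply the chosen action (Execute phase)."""
        if action == "scale_out":
            self.replicas += 1
        elif action == "scale_in":
            self.replicas -= 1
        return self.replicas
```

In this toy loop, the trade-off mentioned in the abstract shows up as the choice of threshold and replica bound per microservice versus the resource budget of the whole system.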
Orchestrating Service Migration for Low Power MEC-Enabled IoT Devices
Multi-Access Edge Computing (MEC) is a key enabling technology for Fifth
Generation (5G) mobile networks. MEC facilitates distributed cloud computing
capabilities and information technology service environment for applications
and services at the edges of mobile networks. This architectural modification
serves to reduce congestion and latency, and to improve the performance of
edge-colocated applications and devices. In this paper, we demonstrate how reactive
service migration can be orchestrated for low-power MEC-enabled Internet of
Things (IoT) devices. Here, we use open-source Kubernetes as the container
orchestration system. Our demo is based on a traditional client-server system
from user equipment (UE) over Long Term Evolution (LTE) to the MEC server. As
the use case scenario, we post-process live video received over web real-time
communication (WebRTC). Next, we integrate Kubernetes orchestration with S1
handovers, demonstrating a MEC-based software-defined network (SDN). Edge
applications may then reactively follow the UE within the radio access network
(RAN), enabling low latency. The collected data is used to analyze the
benefits of the low-power MEC-enabled IoT device scheme, in which end-to-end
(E2E) latency and power requirements of the UE are improved. We further discuss
the challenges of implementing such schemes and future research directions
therein.
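The core of reactive migration as described above is a simple policy: after an S1 handover, reschedule the service onto the edge cluster colocated with the UE's new base station. A minimal sketch of that decision logic follows; the site names and the `on_handover` function are hypothetical, and the actual Kubernetes rescheduling performed in the demo is not reproduced here.

```python
# Illustrative reactive-migration policy: follow the UE to the edge
# cluster attached to its new base station. All identifiers are assumed.

# Mapping from eNodeB identifiers to their colocated edge clusters (assumed).
EDGE_SITES = {"enb-1": "edge-a", "enb-2": "edge-b"}

def on_handover(current_site, new_enb):
    """Return the edge site that should host the UE's service after handover."""
    target = EDGE_SITES[new_enb]
    if target != current_site:
        # In the demo, this is where the orchestrator (Kubernetes) would
        # be asked to reschedule the serving pod onto the target cluster.
        return target
    return current_site
```

Keeping the policy reactive (triggered by the handover event) rather than predictive is what ties the migration latency directly to the handover signaling path.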
Monitoring Platform Evolution towards Serverless Computing for 5G and Beyond Systems
Fifth generation (5G) and beyond systems require
flexible and efficient monitoring platforms to guarantee optimal
key performance indicators (KPIs) in various scenarios. Their applicability
in Edge computing environments requires lightweight
monitoring solutions. This work evaluates different candidate
technologies to implement a monitoring platform for 5G and
beyond systems in these environments. For monitoring data plane
technologies, we evaluate different virtualization technologies,
including bare metal servers, virtual machines, and orchestrated
containers. We show that containers not only offer superior
flexibility and deployment agility, but also allow obtaining better
throughput and latency. In addition, we explore the suitability
of the Function-as-a-Service (FaaS) serverless paradigm for
deploying the functions used to manage the monitoring platform.
This is motivated by the event oriented nature of those functions,
designed to set up the monitoring infrastructure for newly
created services. When the FaaS warm start mode is used,
the platform gives users the perception of resources that are
always available. When a cold start mode is used, containers
running the application's modules are automatically destroyed
when the application is not in use. Our analysis compares both
of them with the standard deployment of microservices. The
experimental results show that the cold start mode produces
a significant latency increase, along with potential instabilities.
For this reason, its usage is not recommended despite the
potential savings of computing resources. Conversely, when the
warm start mode is used for executing configuration tasks
of monitoring infrastructure, it can provide similar execution
times to a microservice-based deployment. In addition, the FaaS
approach significantly simplifies the code logic in comparison
with microservices, reducing lines of code to less than 38%, thus
reducing development time. Thus, FaaS in warm start mode
represents the best candidate technology to implement such
management functions. This work has been supported by EC H2020 5GPPP projects 5G-EVE and 5GROWTH under grant agreements No. 815974 and 856709, respectively.
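The warm/cold distinction the abstract evaluates can be modeled very simply: a cold start pays the container and runtime initialization cost on every invocation, while a warm start reuses an already-initialized instance. The toy model below counts initializations as a stand-in for that latency overhead; all names are illustrative, not part of any FaaS platform's API.

```python
# Toy model of FaaS cold vs warm starts. Setup counts stand in for the
# startup latency measured in the paper; every name here is assumed.

setup_calls = {"count": 0}

def _initialize():
    """Stands in for container startup plus runtime bootstrap."""
    setup_calls["count"] += 1
    return {"ready": True}

_warm_instance = None

def invoke_cold(task):
    """Cold start: initialize, run, then the instance is destroyed."""
    instance = _initialize()
    return task, instance["ready"]

def invoke_warm(task):
    """Warm start: initialize once, reuse the instance afterwards."""
    global _warm_instance
    if _warm_instance is None:
        _warm_instance = _initialize()
    return task, _warm_instance["ready"]
```

Three cold invocations pay the setup cost three times, while three warm invocations pay it once, which mirrors why the paper finds warm-start execution times comparable to a long-running microservice deployment.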
When IoT Meets DevOps: Fostering Business Opportunities
The Internet of Things (IoT) is the new digital revolution for the near-future society, the second after the creation of the Internet itself. The software industry is converging towards the large-scale deployment of IoT devices and services, and there’s broad support from the business environment for this engineering vision. The Development and Operations (DevOps) project management methodology, with continuous delivery and integration, is the preferred approach for achieving and deploying applications to all levels of the IoT architecture. In this paper we also discuss the promising trend of associating devices with microservices, which are further encapsulated into functional packages called containers. Docker is considered the market leader in container-based service delivery, though other important software companies are promoting this concept as part of the technology solution for their IoT customers. In the experimental section we propose a three-layer IoT model, business-oriented, and distributed over multiple cloud environments, comprising the Physical, Fog/Edge, and Application layers.
Keywords: Internet-of-Things, software technologies, project management, business environment
Serving deep learning models in a serverless platform
Serverless computing has emerged as a compelling paradigm for the development
and deployment of a wide range of event-based cloud applications. At the same
time, cloud providers and enterprise companies are heavily adopting machine
learning and Artificial Intelligence to either differentiate themselves, or
provide their customers with value added services. In this work we evaluate the
suitability of a serverless computing environment for the inferencing of large
neural network models. Our experimental evaluations are executed on the AWS
Lambda environment using the MxNet deep learning framework. Our experimental
results show that while the inferencing latency can be within an acceptable
range, longer delays due to cold starts can skew the latency distribution and
hence risk violating more stringent SLAs.
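The skewing effect described here is easy to see in a toy latency distribution: most requests pay only inference time, while a small fraction also pay the model-load cost of a cold container, which inflates the tail percentiles far beyond the median. The numbers below are made up for illustration, not measurements from the paper.

```python
# Sketch of how occasional cold starts skew a latency distribution.
# All parameters are invented; they are not the paper's measurements.

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, int(round(p / 100 * len(ordered))) - 1))
    return ordered[k]

def simulate(requests=100, cold_every=50,
             inference_ms=30, cold_start_ms=2000):
    """Assume every `cold_every`-th request lands on a cold container."""
    return [inference_ms + (cold_start_ms if i % cold_every == 0 else 0)
            for i in range(requests)]

lat = simulate()
p50 = percentile(lat, 50)   # typical request: inference time only
p99 = percentile(lat, 99)   # tail dominated by cold-start overhead
```

With only 2% of requests hitting a cold container, the median stays at the bare inference latency while the 99th percentile absorbs the full cold-start penalty, which is exactly the SLA risk the abstract points out.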