Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps
and performance engineering have been growing trends over the past one to two
years, in no small part due to the rising importance of identifying
performance anomalies in the operations (Ops) of cloud and big data systems and
feeding these back to development (Dev). However, so far the research
community has treated software engineering, performance engineering, and cloud
computing mostly as individual research areas. We aimed to identify
opportunities for cross-community collaboration and to set the path for
long-lasting collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as postdocs or junior
professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.
Secure FaaS orchestration in the fog: how far are we?
Function-as-a-Service (FaaS) allows developers to define, orchestrate and run modular event-based pieces of code on virtualised resources, without the burden of managing the underlying infrastructure or the life-cycle of such pieces of code. Indeed, FaaS providers offer resource auto-provisioning, auto-scaling and pay-per-use billing at no cost for idle time. This makes it easy to scale running code, and it represents an effective and increasingly adopted way to deliver software. This article aims at offering an overview of the existing literature in the field of next-gen FaaS from three different perspectives: (i) the definition of FaaS orchestrations, (ii) the execution of FaaS orchestrations in Fog computing environments, and (iii) the security of FaaS orchestrations. Our analysis identifies trends and gaps in the literature, paving the way for further research on securing FaaS orchestrations in Fog computing landscapes.
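The notion of a FaaS orchestration described above can be illustrated with a minimal sketch: small stateless functions composed into a sequential workflow by an orchestrator that passes an event along the chain. All names here (`resize`, `classify`, `notify`, `run_workflow`) are hypothetical, not taken from any particular FaaS platform.

```python
# Minimal sketch of a sequential FaaS-style orchestration.
# Each "function" is a stateless handler that receives an event
# dict and returns an enriched copy, as serverless handlers do.

def resize(event):
    # Stand-in for a function that downscales an image payload.
    return {**event, "resized": True}

def classify(event):
    # Stand-in for a function that labels the (resized) payload.
    return {**event, "label": "cat"}

def notify(event):
    # Stand-in for a function that emits a notification.
    return {**event, "notified": True}

def run_workflow(steps, event):
    """Invoke each function in order, threading the event through,
    as an orchestrator would for a sequential composition."""
    for step in steps:
        event = step(event)
    return event

result = run_workflow([resize, classify, notify], {"image": "img.png"})
```

Real orchestrators (e.g. AWS Step Functions or OpenWhisk Composer) add branching, retries, and parallelism on top of this basic composition idea, which is where the execution and security questions surveyed in the article arise.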
mF2C: Towards a coordinated management of the IoT-fog-cloud continuum
Fog computing enables location-dependent resource allocation and low-latency services, while fostering novel market and business opportunities in the cloud sector. Aligned with this trend, we refer to a Fog-to-Cloud (F2C) computing system as a new pool of resources, set into a layered and hierarchical model, intended to ease the management and coordination of the entire set of fog and cloud resources. The H2020 project mF2C aims at designing, developing and testing a first attempt at a real F2C architecture.
This document outlines the architecture and main functionalities of the management framework designed in the mF2C project to coordinate the execution of services in the envisioned set of heterogeneous and distributed resources.
Latency-Sensitive Web Service Workflows: A Case for a Software-Defined Internet
The Internet, at large, remains under the control of service providers and
autonomous systems. The Internet of Things (IoT) and edge computing create an
increasing demand, and potential, for more user control over web service
workflows. Network softwarization is transforming the network landscape at
every stage, from building, through incremental deployment, to maintenance of
the environment. Software-Defined Networking (SDN) and Network Functions
Virtualization (NFV) are two core tenets of network softwarization. SDN offers
a logically centralized control plane by abstracting away the control of the
network devices in the data plane. NFV virtualizes dedicated hardware
middleboxes and deploys them on top of servers and data centers as network
functions. Thus, network softwarization enables efficient management of the
system by enhancing its control and improving the reusability of the network
services. In this work, we propose our vision for a Software-Defined Internet
(SDI) for latency-sensitive web service workflows. SDI extends network
softwarization to the Internet-scale, to enable a latency-aware user workflow
execution on the Internet.
Comment: Accepted for publication at The Seventh International Conference on Software Defined Systems (SDS-2020).
A Time-Sensitive IoT Data Analysis Framework
This paper proposes a Time-Sensitive IoT Data Analysis (TIDA) framework that meets the time-bound requirements of time-sensitive IoT applications. The proposed framework includes a novel task sizing and dynamic distribution technique that performs the following: 1) measures the computing and network resources required by the data analysis tasks of a time-sensitive IoT application when executed on available IoT devices, edge computers and cloud, and 2) distributes the data analysis tasks in a way that meets the time-bound requirement of the IoT application. The TIDA framework includes a TIDA platform that implements the above techniques using Microsoft's Orleans framework. The paper also presents an experimental evaluation that validates the TIDA framework's ability to meet the time-bound requirements of IoT applications in the smart cities domain. Evaluation results show that TIDA outperforms traditional cloud-based IoT data processing approaches in meeting IoT application time-bounds and reduces the total IoT data analysis execution time by 46.96%.
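The task sizing and distribution idea in this abstract can be sketched as a two-step process: estimate each task's cost per resource tier, then assign tasks so the whole analysis finishes within the application's time bound. The sketch below uses a simple greedy earliest-finish heuristic; all names and numbers (`Resource`, `Task`, `distribute`, the speeds and delays) are illustrative assumptions, not TIDA's actual algorithm or measurements.

```python
# Hedged sketch: size tasks per resource tier, then greedily place
# each task on the device/edge/cloud resource where it finishes
# earliest, and check the result against the application time bound.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    speed: float          # relative processing speed (work units/sec)
    network_delay: float  # seconds to ship a task's data to this tier
    load: float = 0.0     # seconds of work already assigned

@dataclass
class Task:
    name: str
    work: float  # measured work units (the "task sizing" step)

def finish_time(res, task):
    # When this task would complete if appended to this resource's queue.
    return res.load + res.network_delay + task.work / res.speed

def distribute(tasks, resources, time_bound):
    """Greedily place each task (largest first) on the resource where
    it finishes earliest; report whether the time bound is met."""
    placement = {}
    for task in sorted(tasks, key=lambda t: -t.work):
        best = min(resources, key=lambda r: finish_time(r, task))
        best.load = finish_time(best, task)
        placement[task.name] = best.name
    makespan = max(r.load for r in resources)
    return placement, makespan, makespan <= time_bound

resources = [
    Resource("device", speed=1.0, network_delay=0.0),
    Resource("edge", speed=4.0, network_delay=0.1),
    Resource("cloud", speed=16.0, network_delay=0.5),
]
tasks = [Task("filter", 2.0), Task("aggregate", 8.0), Task("train", 32.0)]
placement, makespan, ok = distribute(tasks, resources, time_bound=5.0)
```

With these illustrative numbers the heavy task lands in the cloud despite its network delay, while the light task stays on the device; this is the kind of placement trade-off the measured sizing step is meant to inform.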