
    Hybrid clouds for data-intensive, 5G-enabled IoT applications: an overview, key issues and relevant architecture

    Hybrid cloud multi-access edge computing (MEC) deployments have been proposed as an efficient means to support Internet of Things (IoT) applications that rely on a plethora of nodes and data. In this paper, an overview of the area of hybrid clouds is given with respect to the relevant research areas, covering technologies and mechanisms for the formation of such MEC deployments and emphasizing several key issues that should be tackled by novel approaches, especially under the 5G paradigm. Furthermore, a decentralized hybrid cloud MEC architecture, realized as a Platform-as-a-Service (PaaS), is proposed, and its main building blocks and layers are thoroughly described. Aiming to offer a broad perspective on the business potential of such a platform, the stakeholder ecosystem is also analyzed. Finally, two use cases in the context of smart cities and mobile health are presented, showing how the proposed PaaS enables the development of the respective IoT applications.
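
    The paper itself contains no code; the sketch below is a hypothetical illustration of the kind of placement decision the orchestration layer of such a hybrid cloud MEC PaaS has to make, choosing between an edge node and the central cloud under latency, capacity, and cost constraints. All names and numbers are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A candidate execution site in a hybrid cloud/MEC deployment (hypothetical model)."""
    name: str
    latency_ms: float      # round-trip latency from the IoT device
    free_cpu_cores: float  # currently available compute
    cost_per_hour: float   # monetary cost of running the service here

def place_service(nodes, required_cores, max_latency_ms):
    """Pick the cheapest node that satisfies the latency and capacity constraints.

    An illustrative greedy policy, not the placement algorithm of the paper's
    PaaS; it only shows where such a decision fits in the architecture.
    """
    feasible = [n for n in nodes
                if n.free_cpu_cores >= required_cores and n.latency_ms <= max_latency_ms]
    if not feasible:
        return None  # reject or queue the request instead
    return min(feasible, key=lambda n: n.cost_per_hour)

# Example: a latency-critical smart-city analytics function ends up on the MEC node.
nodes = [Node("edge-mec-1", latency_ms=5, free_cpu_cores=2, cost_per_hour=0.30),
         Node("central-cloud", latency_ms=60, free_cpu_cores=64, cost_per_hour=0.10)]
print(place_service(nodes, required_cores=1, max_latency_ms=20).name)  # edge-mec-1
```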

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). So far, however, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs and junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    RADON: Rational decomposition and orchestration for serverless computing

    Emerging serverless computing technologies, such as function as a service (FaaS), enable developers to virtualize the internal logic of an application, simplifying the management of cloud-native services and allowing cost savings through billing and scaling at the level of individual functions. Serverless computing is therefore rapidly shifting the attention of software vendors to the challenge of developing cloud applications deployable on FaaS platforms. In this vision paper, we present the research agenda of the RADON project (http://radon-h2020.eu), which aims to develop a model-driven DevOps framework for creating and managing applications based on serverless computing. RADON applications will consist of fine-grained and independent microservices that can efficiently and optimally exploit FaaS and container technologies. Our methodology strives to tackle the complexity of designing such applications, including optimal decomposition, the reuse of serverless functions, and the abstraction and actuation of event-processing chains, while avoiding cloud vendor lock-in through models.
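
    To make the function-level granularity concrete, the following is a minimal sketch of a single FaaS function written in the style of an AWS Lambda handler. The function name, event shape, and the pipeline it belongs to are assumptions for illustration and are not taken from RADON, which describes applications through models rather than hand-written handlers.

```python
import json

def resize_image_handler(event, context):
    """Hypothetical FaaS function in an image-processing pipeline.

    Each such fine-grained function is billed and scaled independently by the
    FaaS platform; an application is composed of many of them, wired together
    by events. The event fields used here are illustrative assumptions.
    """
    object_key = event["object_key"]           # e.g. set by an object-upload event
    target_width = int(event.get("width", 256))
    # ... fetch the object, resize it, store the thumbnail ...
    return {
        "statusCode": 200,
        "body": json.dumps({"resized": object_key, "width": target_width}),
    }
```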

    Integration of MLOps with IoT edge

    Edge computing and machine learning have become increasingly vital in today's digital landscape. Edge computing brings computational power closer to the data source, enabling reduced latency and bandwidth usage, increased privacy, and real-time decision-making. Running machine learning models on edge devices further enhances these advantages by reducing reliance on the cloud. This empowers industries such as transport, healthcare, and manufacturing to harness the full potential of machine learning. MLOps, or Machine Learning Operations, plays a major role in streamlining the deployment, monitoring, and management of machine learning models in production. With MLOps, organisations can achieve faster model iteration, reduced deployment time, improved collaboration with developers, optimised performance, and ultimately meaningful business outcomes. Integrating MLOps with edge devices, however, poses unique challenges; overcoming them requires careful planning, customised deployment strategies, and efficient model optimisation techniques. This thesis project introduces a set of tools that enable the integration of MLOps practices with edge devices. The solution consists of two sets of tools: one for setting up infrastructure within edge devices so that they can receive, monitor, and run inference on machine learning models, and another for MLOps pipelines to package models so that they are compatible with the inference and monitoring components of the respective edge devices. The platform was evaluated using a public dataset for predicting the breakdown of air pressure systems in trucks, an ideal use case for running ML inference on the edge and for connecting MLOps pipelines with edge devices. A simulation was created from the data in order to control the volume of data flowing into the edge devices, and the performance of the platform was then tested against the scenario created by the simulation script, measuring response time and CPU usage in the different components. Additionally, the platform was compared against a set of commercial and open-source tools and services that serve similar purposes. The overall performance of this solution matches that of existing tools and services, while giving end users setting up Edge-MLOps infrastructure complete freedom to build their system without relying on third-party licensed software.
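
    The abstract does not specify the inference runtime used on the devices; the sketch below assumes the packaged model is exported to ONNX and served with ONNX Runtime, and shows how an edge inference component could time each prediction for the response-time metric. The model path, input shape, and dataset reference are illustrative assumptions, not details from the thesis.

```python
import time
import numpy as np
import onnxruntime as ort  # assumption: the packaged model is exported to ONNX

def run_edge_inference(model_path, sample):
    """Run one prediction on the edge device and report its response time.

    A minimal sketch of the inference component described in the thesis; the
    actual runtime, model format, and monitoring hooks used there may differ.
    """
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name

    start = time.perf_counter()
    outputs = session.run(None, {input_name: sample.astype(np.float32)})
    elapsed_ms = (time.perf_counter() - start) * 1000.0

    # In the platform, metrics like this would be forwarded to the monitoring component.
    print(f"prediction={outputs[0]} response_time_ms={elapsed_ms:.1f}")
    return outputs

# Hypothetical usage: one sensor reading from the truck air-pressure-system dataset.
# run_edge_inference("aps_failure_model.onnx", np.zeros((1, 170)))
```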

    An artificial intelligence-based collaboration approach in industrial IoT manufacturing: key concepts, architectural extensions and potential applications

    The digitization of the manufacturing industry has led to leaner and more efficient production under the Industry 4.0 concept. Nowadays, datasets collected from shop-floor assets and information technology (IT) systems are used in data-driven analytics efforts to support more informed business intelligence decisions. However, these results are currently only used in isolated and dispersed parts of the production process, and full integration of artificial intelligence (AI) in all parts of manufacturing systems is still lacking. In this context, the goal of this manuscript is to present a more holistic integration of AI by promoting collaboration. To this end, collaboration is understood as a multi-dimensional conceptual term that covers all important enablers for AI adoption in manufacturing contexts and is promoted in terms of business intelligence optimization, human-in-the-loop, and secure federation across manufacturing sites. To address these challenges, the proposed architectural approach builds on three technical pillars: (1) components that extend the functionality of the existing layers of the Reference Architectural Model for Industry 4.0; (2) new layers for collaboration by means of human-in-the-loop and federation; (3) AI-powered mechanisms that address security concerns. In addition, system implementation aspects are discussed, and potential applications in industrial environments, as well as business impacts, are presented.
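
    The manuscript discusses federation across manufacturing sites at an architectural level only; the sketch below is a minimal federated-averaging illustration of the underlying idea, in which sites share model parameters instead of raw shop-floor data. It is not the mechanism proposed in the paper, and all numbers are made up for the example.

```python
import numpy as np

def federated_average(site_weights, site_sample_counts):
    """Combine model weights trained locally at each manufacturing site.

    A minimal FedAvg-style sketch: each site keeps its raw shop-floor data
    local and only shares model parameters, which are averaged in proportion
    to how much data each site contributed.
    """
    total = sum(site_sample_counts)
    stacked = np.stack(site_weights)                     # (num_sites, num_params)
    coefficients = np.array(site_sample_counts) / total  # per-site contribution
    return (coefficients[:, None] * stacked).sum(axis=0)

# Example: three sites with differently sized local datasets.
weights = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.2])]
counts = [1000, 4000, 5000]
print(federated_average(weights, counts))
```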