35 research outputs found

    Modeling Digital Twins of Kubernetes-Based Applications

    Kubernetes provides several functions that can help service providers manage complex container-based applications. However, most of these functions require a time-consuming and costly customization process to address service-specific requirements. The adoption of Digital Twin (DT) solutions can ease the configuration process by enabling the evaluation of multiple configurations and custom policies by means of simulation-based what-if scenario analysis. To facilitate this process, this paper proposes KubeTwin, a framework to enable the definition and evaluation of DTs of Kubernetes applications. Specifically, this work presents an innovative simulation-based inference approach to define accurate DT models for a Kubernetes environment. We experimentally validate the proposed solution by implementing a DT model of an image recognition application, which we tested under different conditions to verify the accuracy of the DT model. The soundness of these results demonstrates the validity of the KubeTwin approach and calls for further investigation.
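    As a rough illustration of the kind of what-if analysis the abstract describes, the sketch below simulates an image-recognition service under Poisson arrivals and compares the mean request latency for several replica counts. It is not the KubeTwin framework or its API; the arrival rate, service time, and replica counts are invented assumptions.

```python
# Hypothetical what-if sketch (not KubeTwin): estimate mean request latency of a
# container-based service for different replica counts via a small FCFS
# multi-server simulation. All rates and times are illustrative assumptions.
import random

def simulate(replicas, arrival_rate=40.0, mean_service_time=0.08,
             horizon=300.0, seed=1):
    """Return the mean request latency (seconds) over the simulated horizon."""
    rng = random.Random(seed)
    busy_until = [0.0] * replicas              # time at which each replica frees up
    t, latencies = 0.0, []
    while t < horizon:
        t += rng.expovariate(arrival_rate)                     # next Poisson arrival
        k = min(range(replicas), key=busy_until.__getitem__)   # earliest-free replica
        start = max(t, busy_until[k])                          # queue if it is busy
        busy_until[k] = start + rng.expovariate(1.0 / mean_service_time)
        latencies.append(busy_until[k] - t)                    # waiting + service time
    return sum(latencies) / len(latencies)

if __name__ == "__main__":
    for replicas in (2, 4, 6, 8):
        print(f"{replicas} replicas -> mean latency {simulate(replicas):.3f} s")
```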

    Metaheuristics “In the Large”

    Many people have generously given their time to the various activities of the MitL initiative. Particular gratitude is due to Adam Barwell, John A. Clark, Patrick De Causmaecker, Emma Hart, Zoltan A. Kocsis, Ben Kovitz, Krzysztof Krawiec, John McCall, Nelishia Pillay, Kevin Sim, Jim Smith, Thomas Stutzle, Eric Taillard and Stefan Wagner. J. Swan acknowledges the support of UK EPSRC grant EP/J017515/1 and the EU H2020 SAFIRE Factories project. P. Garcia-Sanchez and J. J. Merelo acknowledge the support of grant TIN201785727-C4-2-P from the Spanish Ministry of Economy and Competitiveness. M. Wagner acknowledges the support of the Australian Research Council grants DE160100850 and DP200102364.

    Following decades of sustained improvement, metaheuristics are one of the great success stories of optimization research. However, in order for research in metaheuristics to avoid fragmentation and a lack of reproducibility, there is a pressing need for stronger scientific and computational infrastructure to support the development, analysis and comparison of new approaches. To this end, we present the vision and progress of the Metaheuristics “In the Large” project. The conceptual underpinnings of the project are: truly extensible algorithm templates that support reuse without modification, white-box problem descriptions that provide generic support for the injection of domain-specific knowledge, and remotely accessible frameworks, components and problems that will enhance reproducibility and accelerate the field’s progress. We argue that, via such a principled choice of infrastructure support, the field can pursue a higher level of scientific enquiry. We describe our vision and report on progress, showing how the adoption of common protocols for all metaheuristics can help liberate the potential of the field, easing the exploration of the design space of metaheuristics.
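    To make the idea of "truly extensible algorithm templates" concrete, here is a minimal sketch of a local-search driver that is reused without modification: all problem knowledge is injected as callables. The toy bit-string problem and parameter values are assumptions for illustration, not artefacts of the MitL project.

```python
# Minimal sketch of an extensible metaheuristic template: the driver below never
# changes; domain knowledge (initial solution, neighbour move, objective,
# acceptance rule) is injected as callables. Toy problem invented for the example.
import random

def local_search(initial, neighbour, objective, accept, iterations=2000, seed=0):
    """Generic single-solution driver, reusable across problems unchanged."""
    rng = random.Random(seed)
    current = initial(rng)
    current_f = objective(current)
    best, best_f = current, current_f
    for _ in range(iterations):
        candidate = neighbour(current, rng)
        candidate_f = objective(candidate)
        if accept(candidate_f, current_f, rng):
            current, current_f = candidate, candidate_f
            if current_f > best_f:
                best, best_f = current, current_f
    return best, best_f

# Problem-specific components for a toy "maximise the number of ones" problem.
N = 50

def initial(rng):
    return [rng.randint(0, 1) for _ in range(N)]

def neighbour(solution, rng):
    i = rng.randrange(N)                         # flip a single random bit
    return solution[:i] + [solution[i] ^ 1] + solution[i + 1:]

def objective(solution):
    return sum(solution)

def hill_climbing_accept(candidate_f, current_f, rng):
    return candidate_f >= current_f              # greedy acceptance rule

if __name__ == "__main__":
    _, value = local_search(initial, neighbour, objective, hill_climbing_accept)
    print(f"best objective: {value} of {N}")
```

    Swapping in another acceptance rule or another problem only means passing different callables; the driver itself is reused without modification, which is the property the project's templates aim to generalise.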

    Optimization and Prediction Techniques for Self-Healing and Self-Learning Applications in a Trustworthy Cloud Continuum

    The current IT market is increasingly dominated by the “cloud continuum”. In the “traditional” cloud, computing resources are typically homogeneous in order to facilitate economies of scale. In contrast, in edge computing, computational resources are widely diverse, commonly have scarce capacities, and must be managed very efficiently due to battery constraints or other limitations. A combination of resources and services at the edge (edge computing), in the core (cloud computing), and along the data path (fog computing) is needed through a trusted cloud continuum. This requires novel solutions for the creation, optimization, management, and automatic operation of such infrastructure through new approaches such as infrastructure as code (IaC). In this paper, we analyze how artificial intelligence (AI)-based techniques and tools can enhance the operation of complex applications to support the broad and multi-stage heterogeneity of the infrastructural layer in the “computing continuum” through the enhancement of IaC optimization, IaC self-learning, and IaC self-healing. To this end, the presented work proposes a set of tools, methods, and techniques that allow application operators to seamlessly select, combine, configure, and adapt computation resources all along the data path and support the complete service lifecycle covering: (1) optimized distributed application deployment over heterogeneous computing resources; (2) monitoring of execution platforms in real time including continuous control and trust of the infrastructural services; (3) application deployment and adaptation while optimizing the execution; and (4) application self-recovery to avoid compromising situations that may lead to an unexpected failure. This research was funded by the European project PIACERE (Horizon 2020 research and innovation programme, under grant agreement no 101000162).
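    As a hedged sketch of what optimizing a deployment over the continuum can look like (not the PIACERE toolchain), the example below searches for the cheapest assignment of application components to edge/fog/cloud nodes that respects per-node capacity and per-component latency bounds. All node characteristics and component requirements are invented.

```python
# Illustrative deployment-optimization sketch (not the PIACERE tools): choose the
# cheapest placement of components onto edge/fog/cloud nodes subject to capacity
# and latency constraints. Node and component figures are invented assumptions.
from itertools import product

NODES = {  # capacity in CPU units, cost per CPU unit, access latency in ms
    "edge":  {"capacity": 4,  "cost": 3.0, "latency": 5},
    "fog":   {"capacity": 8,  "cost": 2.0, "latency": 20},
    "cloud": {"capacity": 32, "cost": 1.0, "latency": 80},
}
COMPONENTS = {  # CPU demand and maximum tolerated access latency in ms
    "sensor-gateway": {"cpu": 2, "max_latency": 10},
    "stream-filter":  {"cpu": 3, "max_latency": 40},
    "analytics":      {"cpu": 6, "max_latency": 200},
}

def feasible(assignment):
    used = {node: 0 for node in NODES}
    for component, node in assignment.items():
        if NODES[node]["latency"] > COMPONENTS[component]["max_latency"]:
            return False                         # latency bound violated
        used[node] += COMPONENTS[component]["cpu"]
    return all(used[node] <= NODES[node]["capacity"] for node in NODES)

def cost(assignment):
    return sum(COMPONENTS[c]["cpu"] * NODES[n]["cost"] for c, n in assignment.items())

# Exhaustive search is fine at this toy size; real deployments need heuristics.
candidates = (dict(zip(COMPONENTS, combo))
              for combo in product(NODES, repeat=len(COMPONENTS)))
best = min(candidates, key=lambda a: cost(a) if feasible(a) else float("inf"))
print(best, "-> cost", cost(best))
```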

    Microservice-based Reference Architecture for Semantics-aware Measurement Systems

    Cloud technologies have become more important than ever with the rising need for scalable and distributed software systems. A pattern that is used in many such systems is a microservice-based architecture (MSA). MSAs have become a blueprint for many large companies and big software systems. In many scientific fields like energy and environmental informatics, efficient and scalable software systems with a primary focus on measurement data are a core requirement. Nowadays, there are many ways to solve research questions using data-driven approaches. Most of them require large amounts of measurement data and the corresponding metadata. However, many measurement systems still follow outdated designs such as monolithic architectures and classic relational database principles, and lack semantic awareness and interpretation of data. These problems and the resulting requirements are tackled by the introduction of a reference architecture for measurement systems that utilizes the principles of microservices. The thesis first presents the systematic design of the reference architecture using the principles of Domain-driven Design (DDD). This process ensures that the reference architecture is defined in a modular and sustainable way, in contrast to complex monolithic software systems. An extensive scientific analysis leads to the core parts of the concept, consisting of data management and semantics for measurement systems. Different data services define a concept for managing measurement data, the corresponding metadata, and the master data describing the business objects of an application implemented with the reference architecture. Further concepts allow the reference architecture to define how the system understands and interprets its data using semantic information. Lastly, the introduction of a frontend framework for dashboard applications serves as an example of visualizing the data managed by the microservices.
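    The split between measurement data, metadata, and master data described above could, for instance, be expressed as a small DDD-style domain model. The entity names and fields below are assumptions made for illustration; they are not taken from the thesis.

```python
# Hypothetical DDD-flavoured domain model for a semantics-aware measurement
# system: master data (Sensor), thin immutable measurement records, and semantic
# annotations that let services interpret raw values. Names/fields are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Sensor:
    """Master data: the business object that produces measurements."""
    sensor_id: str
    quantity: str           # e.g. "active_power"
    unit: str               # e.g. "kW"
    location: str

@dataclass(frozen=True)
class Measurement:
    """Measurement data: deliberately thin and immutable."""
    sensor_id: str
    timestamp: datetime
    value: float

@dataclass
class SemanticAnnotation:
    """Metadata giving measurements machine-interpretable meaning."""
    sensor_id: str
    concept: str            # e.g. a CURIE such as "saref:Power" (placeholder)
    tags: dict = field(default_factory=dict)

if __name__ == "__main__":
    sensor = Sensor("pv-01", "active_power", "kW", "rooftop-A")
    sample = Measurement(sensor.sensor_id, datetime.now(timezone.utc), 4.2)
    meaning = SemanticAnnotation(sensor.sensor_id, "saref:Power")
    print(sample, meaning.concept)
```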

    Machine learning methods for service placement: a systematic review

    With the growth of real-time and latency-sensitive applications in the Internet of Everything (IoE), service placement cannot rely on cloud computing alone. In response to this need, several computing paradigms, such as Mobile Edge Computing (MEC), Ultra-dense Edge Computing (UDEC), and Fog Computing (FC), have emerged. These paradigms aim to bring computing resources closer to the end user, reducing delay and wasted backhaul bandwidth. One of the major challenges of these new paradigms is the limitation of edge resources and the dependencies between different service parts. Some solutions, such as microservice architecture, allow different parts of an application to be processed simultaneously. However, due to the ever-increasing number of devices and incoming tasks, the problem of service placement can no longer be solved by relying on rule-based deterministic solutions. In such a dynamic and complex environment, many factors can influence the solution. Optimization and Machine Learning (ML) are the two tools most widely used for service placement. Both revolve around a cost function: in optimization the cost function directly encodes the objective to be minimized, whereas in ML it typically measures the difference between predicted and actual values. In simpler terms, instead of relying on explicit rules, ML aims to minimize the gap between prediction and reality based on historical data. Due to the NP-hard nature of the service placement problem, classical optimization methods are not sufficient. Instead, metaheuristic and heuristic methods are widely used. In addition, the ever-changing big data in IoE environments requires the use of specific ML methods. In this systematic review, we present a taxonomy of ML methods for the service placement problem. Our findings show that 96% of applications use a distributed microservice architecture. Also, 51% of the studies are based on on-demand resource estimation methods and 81% are multi-objective. This article also outlines open questions and future research trends. Our literature review shows that one of the most important trends in ML is reinforcement learning, with a 56% share of research.
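    The reinforcement-learning trend highlighted in the review can be pictured with a deliberately tiny tabular Q-learning sketch that places a fixed sequence of services onto two nodes, rewarding low latency and penalising capacity violations. The environment, node figures, and hyperparameters are invented for the example and do not come from any surveyed paper.

```python
# Toy RL placement sketch: tabular Q-learning assigns services to nodes; reward
# is the negative cost (access latency plus a large overload penalty). The state
# is just the placement step, a deliberate simplification for brevity.
import random

SERVICES = [2, 1, 3, 2]   # CPU demand of each service, placed in this order
NODES = {"edge": {"capacity": 4, "latency": 5},
         "cloud": {"capacity": 100, "latency": 60}}

def reward(node, used):
    overload = max(0, used[node] - NODES[node]["capacity"])
    return -(NODES[node]["latency"] + 100 * overload)

def train(episodes=3000, alpha=0.1, gamma=0.95, epsilon=0.2, seed=0):
    rng, q = random.Random(seed), {}                 # q[(step, node)] -> value
    for _ in range(episodes):
        used = {node: 0 for node in NODES}
        for step, demand in enumerate(SERVICES):
            if rng.random() < epsilon:               # explore
                node = rng.choice(list(NODES))
            else:                                    # exploit current estimates
                node = max(NODES, key=lambda n: q.get((step, n), 0.0))
            used[node] += demand
            r = reward(node, used)
            future = 0.0 if step == len(SERVICES) - 1 else max(
                q.get((step + 1, n), 0.0) for n in NODES)
            old = q.get((step, node), 0.0)
            q[(step, node)] = old + alpha * (r + gamma * future - old)
    return q

if __name__ == "__main__":
    q = train()
    for step, demand in enumerate(SERVICES):
        choice = max(NODES, key=lambda n: q.get((step, n), 0.0))
        print(f"service {step} (cpu {demand}) -> {choice}")
```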

    Ein generisches und hoch skalierbares Framework zur Automatisierung und Ausführung wissenschaftlicher Datenverarbeitungs- und Simulationsworkflows (A generic and highly scalable framework for the automation and execution of scientific data processing and simulation workflows)

    Scientists and engineers designing and implementing complex system solutions use computational workflows for simulations, analysis, and evaluations. Along with growing system complexity, the complexity of these workflows also increases. Without integration tools, however, scientists and engineers spend more effort implementing additional interfaces to integrate software tools and model sets than on their original research or engineering aims. Efficient automation and parallel computation of complex workflows are therefore increasingly important for performing computational science in many fields like energy and environmental informatics. When coupling heterogeneous models and other executables, a wide variety of software infrastructure requirements must be considered to ensure the compatibility of workflow components. The consistent utilization of advanced computing capabilities and the implementation of sustainable software development concepts that guarantee maximum efficiency and reusability are further issues that scientists within research organizations must regularly address. This thesis addresses these challenges by presenting a new generic, modular, and highly scalable process operation framework for the efficient coupling and automated execution of computational scientific workflows. Based on a microservice architecture utilizing container virtualization and orchestration, the framework supports the flexible and efficient parallelization of computational tasks on distributed cluster nodes. Distributed message-oriented middleware and different I/O adapters provide a scalable and high-performance communication infrastructure for data exchange between executables, allowing workflows to be computed without adjusting the executables or implementing additional interfaces or adapters. A convenient user interface based on Apache NiFi technology enables the simplified specification, processing, control, and evaluation of computational scientific workflows. Due to the framework's high scalability and flexibility, use cases benefitting from parallel execution are parallelized, thereby significantly saving runtime and improving operational efficiency, especially during complex tasks like iterative grid optimization.
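    The message-oriented coupling described above can be pictured with a minimal sketch in which two workflow steps exchange records through a queue standing in for the distributed middleware, so neither step needs to know the other's interfaces. The step names and payloads are invented; this is not the framework's actual adapter code.

```python
# Sketch of message-oriented coupling under simplifying assumptions: two
# independent workflow steps exchange records through an in-process queue that
# stands in for a distributed message broker. Step names/payloads are invented.
import queue
import threading

bus = queue.Queue()          # stand-in for a message broker topic
DONE = object()              # sentinel marking the end of the stream

def simulation_step(n=5):
    """Producer: emits result records as messages instead of writing files."""
    for i in range(n):
        bus.put({"run": i, "value": i * i})
    bus.put(DONE)

def analysis_step():
    """Consumer: processes messages as they arrive, enabling pipelining."""
    total = 0
    while True:
        msg = bus.get()
        if msg is DONE:
            break
        total += msg["value"]
    print("sum of simulated values:", total)

producer = threading.Thread(target=simulation_step)
consumer = threading.Thread(target=analysis_step)
producer.start(); consumer.start()
producer.join(); consumer.join()
```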

    An interactive metaheuristic search framework for software service identification from business process models

    In recent years, the Service-Oriented Architecture (SOA) model of computing has become widely used and has provided efficient and agile business solutions in response to inevitable and rapid changes in business requirements. Software service identification is a crucial component in the production of a service-oriented architecture and subsequent successful software development, yet current service identification methods have limitations. For example, they are either not sufficiently comprehensive to handle the totality of service identification activities, or they lack computational support, or they pay insufficient attention to quality checks of the resulting services. To address these limitations, comprehensive computationally intelligent support for software engineers when deriving software services from an organisation's business process models shows great potential, especially when the impact of human preference on the quality of the resulting solutions can be incorporated. Accordingly, this research applies interactive metaheuristic search to effectively bridge the gap between business and SOA technology and so increase business agility.

    A novel, comprehensive framework is introduced that is driven by domain-independent, role-based business process models and uses an interactive metaheuristic search-based service identification approach built on a genetic algorithm, while adhering to SOA principles. Termed BPMiSearch, the framework is composed of three main layers. The first layer processes inputs from business process models into search space elements by modelling the input data and presenting it at an appropriate level of granularity. The second layer identifies software services from the specified search space. The third layer refines the resulting services by mapping the business elements in the candidate services to the corresponding service components. The proposed BPMiSearch framework has been evaluated by applying it to a healthcare domain case study, specifically the Cancer Care and Registration (CCR) business processes at the King Hussein Cancer Centre, Amman, Jordan.

    Experiments assess the impact of software engineer interaction on the quality of the outcomes in terms of search effectiveness, efficiency, and level of user satisfaction. Results show that BPMiSearch has rapid search performance and positively supports software engineers in the identification of services from role-based business process models while adhering to SOA principles. High-quality services are identified that might not have been arrived at manually by software engineers. Furthermore, BPMiSearch is found to be sensitive and responsive to software engineer interaction, resulting in a positive level of user trust, acceptance, and satisfaction with the candidate services.
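    As a hedged sketch of the search layer only (omitting the interactive, preference-driven part of BPMiSearch), the example below uses a small genetic algorithm to group business-process activities into candidate services, rewarding within-service dependencies (cohesion) and penalising cross-service ones (coupling). The toy process graph and GA settings are invented for illustration.

```python
# Toy GA sketch for grouping process activities into candidate services.
# Genome: one service index per activity. Fitness: cohesion minus coupling.
# The activities, dependencies, and GA parameters are invented assumptions.
import random

ACTIVITIES = ["register", "triage", "schedule", "treat", "bill", "archive"]
DEPENDS = {("register", "triage"), ("triage", "schedule"), ("schedule", "treat"),
           ("treat", "bill"), ("bill", "archive"), ("register", "bill")}
MAX_SERVICES = 3

def fitness(genome):
    """Cohesion (intra-service dependencies) minus coupling (cross-service ones)."""
    group = dict(zip(ACTIVITIES, genome))
    intra = sum(1 for a, b in DEPENDS if group[a] == group[b])
    return intra - (len(DEPENDS) - intra)

def evolve(pop_size=40, generations=200, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    population = [[rng.randrange(MAX_SERVICES) for _ in ACTIVITIES]
                  for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = (max(rng.sample(population, 3), key=fitness)   # tournament
                      for _ in range(2))
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [rng.randrange(MAX_SERVICES) if rng.random() < mutation_rate
                     else gene for gene in child]                    # mutation
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    services = {}
    for activity, service_id in zip(ACTIVITIES, best):
        services.setdefault(service_id, []).append(activity)
    print("fitness:", fitness(best), "candidate services:", list(services.values()))
```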

    Cybersecurity and Quantum Computing: friends or foes?

    The abstract is provided in the attachment.

    A Smart Charging Assistant for Electric Vehicles Considering Battery Degradation, Power Grid and User Constraints

    The growing share of intermittent electricity generation from renewable energy sources makes efficient and reliable operation of the supply grids increasingly difficult. At the same time, the number of electric vehicles, which require considerable amounts of electrical energy for charging, is rising rapidly. The energy and mobility sectors are therefore inevitably linked, with the consequence that reliable electromobility depends on a robust power supply. In addition, vehicle users perceive their individual mobility as restricted, since electric vehicles currently offer a shorter range and need more time to charge than vehicles with combustion engines. This thesis therefore presents a novel concept and a software application (charging assistant) that supports users in charging their electric vehicle while taking the interests of all involved stakeholders into account. To this end, design features of possible software architectures are first compared in order to define a suitable structure of modules and their interconnection. Subsequently, energy consumption and battery models that capture the driving and charging characteristics of electric vehicles are developed, refined, and validated using real-world data. The main contributions of this work result from the development and validation of the following three core components of the charging assistant. First, the individual mobility behaviour of users is modelled and evaluated using recorded and semi-synthetic driving data of electric vehicles. In particular, a novel two-stage clustering algorithm is developed to identify users' frequently visited locations. Ensembles of random forest models are then used to predict the next locations and the typical parking durations there. Second, mixed-integer stochastic optimization is applied to schedule charging stops over a future time horizon as conveniently and cost-effectively as possible. A graph-based algorithm is used to quantify the energy demand and occurrence probability of an electric vehicle user's mobility scenarios. For validation, two alternative charging strategies are defined and compared with the proposed system. Third, a nonlinear optimization scheme is developed to exploit the available time and energy flexibility in electric vehicle charging processes. The integration of a detailed battery model enables an accurate quantification of the cost savings resulting from reduced battery aging and dynamic electricity tariffs. Using data from real electric vehicle charging sessions, factors influencing the profitability of vehicle-to-grid applications are identified. Implementing the presented approach in a realistic environment yields an architecture design and a communication concept for optimization-based smart charging systems, and reveals further challenges related to standardized charging communication, interventions by energy suppliers, and user acceptance.
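    A strongly simplified sketch of the cost-aware scheduling idea (not the thesis' mixed-integer or nonlinear formulations, and ignoring battery aging and vehicle-to-grid): given an hourly dynamic tariff, the energy still needed, and the charger's power limit, charge in the cheapest hours of the parking window first. All numbers are invented.

```python
# Greedy charging-schedule sketch under illustrative assumptions: hourly slots,
# a dynamic tariff, a fixed energy demand, and a charger power limit. This is a
# stand-in for the thesis' richer optimization models, not a reimplementation.
def plan_charging(prices_eur_per_kwh, energy_needed_kwh, max_power_kw):
    """Fill the cheapest hourly slots first; returns (kWh per hour, total cost)."""
    schedule = [0.0] * len(prices_eur_per_kwh)
    remaining = energy_needed_kwh
    for hour in sorted(range(len(prices_eur_per_kwh)),
                       key=prices_eur_per_kwh.__getitem__):
        if remaining <= 0:
            break
        schedule[hour] = min(max_power_kw, remaining)   # 1 h slots: kW == kWh
        remaining -= schedule[hour]
    cost = sum(e * p for e, p in zip(schedule, prices_eur_per_kwh))
    return schedule, cost

if __name__ == "__main__":
    prices = [0.38, 0.35, 0.22, 0.18, 0.19, 0.30, 0.41, 0.44]  # EUR/kWh, parking window
    schedule, cost = plan_charging(prices, energy_needed_kwh=30, max_power_kw=11)
    print("kWh per hour:", schedule, "-> cost: %.2f EUR" % cost)
```

    A greedy fill of the cheapest slots suffices for this stripped-down linear setting; capturing battery degradation and vehicle-to-grid profitability requires the mixed-integer and nonlinear models the thesis describes.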