
    Multi-Criteria Decision-Making Approach for Container-based Cloud Applications: The SWITCH and ENTICE Workbenches

    Many emerging smart applications rely on the Internet of Things (IoT) to provide solutions to time-critical problems. When building such applications, a software engineer must address multiple Non-Functional Requirements (NFRs), including fast response time, low communication latency, high throughput, high energy efficiency, and low operational cost. Existing container-based software engineering approaches promise to improve the software lifecycle; however, they fall short of tools and mechanisms for NFR management and optimisation. Our work addresses this problem with a new decision-making approach based on Pareto multi-criteria optimisation. Using different instance configurations in various geo-locations, we demonstrate the suitability of our method, which narrows the search space to only the optimal instances for the deployment of the containerised microservice. This solution is included in two advanced software engineering environments: the SWITCH workbench, which includes an Interactive Development Environment (IDE), and the ENTICE Virtual Machine and container images portal. The developed approach is particularly useful when building, deploying and orchestrating IoT applications across multiple computing tiers, from Edge-Cloudlet to Fog-Cloud data centres.
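    To illustrate the Pareto-based narrowing of the search space described in this abstract, the sketch below keeps only the non-dominated instance configurations from a candidate set. The attribute names, candidate instances, and the assumption that every criterion is minimised are illustrative and not taken from the paper.

```python
# Minimal sketch of Pareto filtering over candidate instance configurations:
# keep only the non-dominated instances. Attribute names and the candidate
# list are hypothetical; all criteria are assumed to be "lower is better".
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    response_time_ms: float
    cost_per_hour: float
    energy_watts: float

    def metrics(self):
        return (self.response_time_ms, self.cost_per_hour, self.energy_watts)

def dominates(a: Instance, b: Instance) -> bool:
    """a dominates b if it is no worse on every criterion and better on at least one."""
    return (all(x <= y for x, y in zip(a.metrics(), b.metrics()))
            and any(x < y for x, y in zip(a.metrics(), b.metrics())))

def pareto_front(candidates: list[Instance]) -> list[Instance]:
    """Return the non-dominated subset -- the narrowed search space for deployment."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

candidates = [
    Instance("eu-west-small", 120.0, 0.05, 35.0),
    Instance("eu-west-large", 60.0, 0.20, 80.0),
    Instance("us-east-small", 200.0, 0.04, 30.0),
    Instance("us-east-medium", 130.0, 0.06, 40.0),  # dominated by eu-west-small
]
print([i.name for i in pareto_front(candidates)])
```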

    Synergizing domain expertise with self-awareness in software systems: a patternized architecture guideline

    To promote engineering self-aware and self-adaptive software systems in a reusable manner, architectural patterns and the related methodology provide a unified solution to handle recurring problems in the engineering process. However, in existing patterns and methods, domain knowledge and engineers' expertise that are built over time are not explicitly linked to the self-aware processes. This linkage is important, as such knowledge is a valuable asset for the related problems, and its absence can cause unnecessary overhead, potentially misleading results, and a waste of the considerable benefit that domain expertise could bring. This paper highlights the importance of synergizing domain expertise and self-awareness to enable better self-adaptation in software systems, relying on well-defined expertise representation, algorithms and techniques. In particular, we present a holistic framework of notions, enriched patterns and methodology, dubbed DBASES, which offers a principled guideline for engineers to perform difficulty and benefit analysis on possible synergies, in an attempt to keep "engineers-in-the-loop". Through three tutorial case studies, we demonstrate how DBASES can be applied in different domains, within which a carefully selected set of candidates with different synergies can be used for quantitative investigation, supporting more informed design decisions. Comment: Accepted manuscript for the Proceedings of the IEEE. Please use the following citation: Tao Chen, Rami Bahsoon, and Xin Yao. 2020. Synergizing Domain Expertise with Self-Awareness in Software Systems: A Patternized Architecture Guideline. Proc. IEEE, in press.

    Developing a European grid infrastructure for cancer research: vision, architecture and services

    Life sciences are currently at the centre of an information revolution. The nature and amount of information now available open up areas of research that were once in the realm of science fiction. During this information revolution, data-gathering capabilities have greatly surpassed data-analysis techniques. Data integration across heterogeneous data sources and data aggregation across different aspects of the biomedical spectrum are therefore at the centre of current biomedical and pharmaceutical R&D.

    Combinatorial Auction-based Mechanisms for Composite Web Service Selection

    Composite service selection presents the opportunity for the rapid development of complex applications using existing web services. It refers to the problem of selecting a set of web services from a large pool of available candidates and logically composing them to achieve value-added composite services. The aim of service selection is to choose the best set of services based on the functional and non-functional (quality-related) requirements of a composite service requester. Current service selection approaches mostly assume that web services are offered as single independent entities, with no possibility for bundling. Moreover, current research has mainly focused on solving the problem for a single composite service; there is limited research to date on how the presence of multiple requests for composite services affects the performance of service selection approaches. Addressing these two aspects can significantly enhance the applicability of composite service selection approaches in the real world. We develop new approaches for the composite web service selection problem by addressing both the bundling and multiple-request issues. In particular, we propose two mechanisms based on combinatorial auction models, where the provisioning of multiple services is auctioned simultaneously and service providers can bid to offer combinations of web services. We map these mechanisms to Integer Linear Programming models and conduct extensive simulations to evaluate them. The results of our experimentation show that bundling can lead to cost reductions compared to when services are offered independently. Moreover, the simultaneous consideration of a set of requests enhances the success rate of the mechanism in allocating services to requests. By considering all composite service requests at the same time, the mechanism achieves more homogeneous prices, which can be a determining factor for the service requester in choosing the best composite service selection mechanism to deploy.
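    The winner-determination step of such a combinatorial auction can be written as an Integer Linear Program, as the abstract notes. The sketch below uses the PuLP library to select a cost-minimal set of bids that covers each required abstract service exactly once; the bids, bundles, and prices are hypothetical, and the paper's actual ILP models may differ.

```python
# Minimal winner-determination sketch for a combinatorial service auction,
# expressed as an Integer Linear Program with PuLP. Bids and prices are
# hypothetical and only illustrate the set-partitioning structure.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

required_services = {"payment", "shipping", "inventory"}

# Each bid: (provider, bundle of services offered, price for the bundle).
bids = [
    ("p1", {"payment"}, 4.0),
    ("p1", {"payment", "shipping"}, 6.0),   # bundling discount
    ("p2", {"shipping"}, 3.5),
    ("p3", {"inventory"}, 2.0),
    ("p3", {"shipping", "inventory"}, 4.5),
]

prob = LpProblem("winner_determination", LpMinimize)
x = [LpVariable(f"bid_{i}", cat=LpBinary) for i in range(len(bids))]

# Objective: minimise the total price paid for the accepted bids.
prob += lpSum(price * x[i] for i, (_, _, price) in enumerate(bids))

# Each required abstract service must be supplied by exactly one accepted bid.
for svc in required_services:
    prob += lpSum(x[i] for i, (_, bundle, _) in enumerate(bids) if svc in bundle) == 1

prob.solve()
chosen = [bids[i] for i in range(len(bids)) if x[i].value() == 1]
print(chosen, "total cost:", value(prob.objective))
```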

    Runtime Adaptation of Scientific Service Workflows

    Software landscapes are subject to continual change rather than being complete once they have been built. Changes may be caused by modified customer behavior, a shift to new hardware resources, or otherwise changed requirements. In such situations, several challenges arise: new architectural models have to be designed and implemented, existing software has to be integrated, and, finally, the new software has to be deployed, monitored, and, where appropriate, optimized at runtime under realistic usage scenarios. All of these steps often demand manual intervention, which makes them error-prone. This thesis addresses these types of runtime adaptation. Based on service-oriented architectures, an environment is developed that enables the integration of existing software (i.e., the wrapping of legacy software as web services). A workflow modeling tool is presented that aims at ease of use by separating the role of the workflow expert from that of the domain expert. Once workflows have been developed, tools are presented that observe the executing infrastructure and perform automatic scale-in and scale-out operations. Infrastructure-as-a-Service providers are used to scale the infrastructure in a transparent and cost-efficient way, and the deployment of the necessary middleware is done automatically. The use of a distributed infrastructure can lead to communication problems, and in order to keep workflows robust, these exceptional cases need to be treated. Doing so, however, mixes the process logic of a workflow with infrastructural details and bloats it, increasing its complexity. In this work, a module is presented that deals with infrastructural faults automatically and thereby preserves the separation of these two layers. When services or their components are hosted in a distributed environment, some requirements need to be addressed at each service separately. Techniques such as object-oriented programming or design patterns like the interceptor pattern ease the adaptation of service behavior or structure, but these methods still require modifying the configuration or the implementation of each individual service. Aspect-oriented programming, on the other hand, allows functionality to be woven into existing code even without having its source. Since the functionality needs to be woven into the code, it depends on the specific implementation; in a service-oriented architecture, where the implementation of a service is unknown, this approach clearly has its limitations. The request/response aspects presented in this thesis overcome this obstacle and provide a new, SOA-compliant method to weave functionality into the communication layer of web services. The main contributions of this thesis are the following. Shifting towards a service-oriented architecture: the generic and extensible Legacy Code Description Language and the corresponding framework allow existing software to be wrapped, e.g., as web services, which can afterwards be composed into a workflow with SimpleBPEL without overburdening the domain expert with technical details, which are handled by a workflow expert. Runtime adaptation: based on the standardized Business Process Execution Language, an automatic scheduling approach is presented that monitors all used resources and can automatically provision new machines when a scale-out becomes necessary. If a resource's load drops, e.g., because of fewer workflow executions, a scale-in is performed automatically as well. The scheduling algorithm takes the data transfer between the services into account in order to prevent scheduling allocations that would increase the workflow's makespan through unnecessary or disadvantageous data transfers. Furthermore, a multi-objective scheduling algorithm based on a genetic algorithm can additionally consider cost, so that a user can define her own preferences balancing optimized execution times of a workflow against minimized costs. Possible communication errors are automatically detected and, according to certain constraints, corrected. Adaptation of communication: the presented request/response aspects allow functionality to be woven into the communication of web services. By defining a pointcut language that relies only on the exchanged documents, the implementation of services needs to be neither known nor available. The weaving process itself is modeled using web services. In this way, the concept of request/response aspects is naturally embedded into a service-oriented architecture.
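    As a rough illustration of the automatic scale-out/scale-in behaviour described in this abstract, the sketch below makes a threshold-based scaling decision from recent utilisation samples. The thresholds, sampling source, and provisioning policy are assumptions for illustration, not the thesis's actual mechanism.

```python
# Minimal sketch of a threshold-based scale-out/scale-in decision.
# Thresholds and the monitoring source are hypothetical; the thesis integrates
# such decisions with a BPEL engine and Infrastructure-as-a-Service providers.
from statistics import mean

SCALE_OUT_LOAD = 0.80   # average utilisation above which a machine is added
SCALE_IN_LOAD = 0.30    # average utilisation below which a machine is released
MIN_MACHINES = 1

def scaling_decision(utilisation_samples: list[float], current_machines: int) -> int:
    """Return the desired number of machines given recent utilisation samples."""
    load = mean(utilisation_samples)
    if load > SCALE_OUT_LOAD:
        return current_machines + 1          # provision a new VM at the IaaS provider
    if load < SCALE_IN_LOAD and current_machines > MIN_MACHINES:
        return current_machines - 1          # release an idle VM to save cost
    return current_machines

# Example: three monitoring samples of a heavily loaded two-machine setup.
print(scaling_decision([0.91, 0.88, 0.95], current_machines=2))  # -> 3
```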

    The quality-aware service selection problem: an adaptive evolutionary approach

    Quality of Service (QoS) is an important aspect of distributed, service-oriented systems. When several concrete services exist that implement the same functionality, the choice of a service instance among many can be made based on QoS considerations, objectives and constraints. Typically considered properties are performance, availability, and cost. In this thesis, aspects of the QoS-aware service selection problem are studied in the context of a distributed, service-oriented system from ATLAS, a high-energy physics experiment at CERN, the European Organization for Nuclear Research. In this so-called TAG system, data and modular services are distributed world-wide and need to be selected and composed on the fly when a user issues a request. There are two conflicting optimization viewpoints. The service selection is modeled as a dynamic multi-constrained optimal path problem, which allows QoS attributes of both the service instances (nodes) and the network (edges) to be considered. The dynamic aspects of the system are included in the problem definition, as they represent a specific challenge and requirement for solution algorithms. To address these issues regarding dynamics and conflicting viewpoints, this work proposes a service selection optimization framework based on a multi-objective genetic algorithm capable of efficiently dealing with changing conditions by using a persistent memory of good solutions and a stepwise adaptation of the mutation rate. An ontology of the system components and their QoS attributes, together with a description of the dynamics of distributed systems, forms the basis of the framework. The presented approach is evaluated in terms of optimization quality, adaptability to changes, runtime performance and scalability. Parts of the approach were finally integrated into the TAG system and evaluated there.
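    The sketch below illustrates, in a simplified single-objective form, the two adaptation mechanisms this abstract highlights: a persistent memory of good solutions reused across generations, and a stepwise adaptation of the mutation rate when progress stalls. The QoS values, fitness function, and all parameters are hypothetical; the thesis uses a multi-objective formulation over both service and network attributes.

```python
# Simplified single-objective sketch of an adaptive genetic algorithm for
# service selection: a memory of good solutions is reused as parents, and the
# mutation rate is raised on stagnation and lowered on progress. All values
# and parameters are hypothetical.
import random

# QoS (e.g. latency in ms) of each concrete candidate per abstract service.
qos = {"query": [80, 120, 95], "extract": [60, 55, 70], "merge": [40, 45, 30]}
services = list(qos)

def fitness(individual):                      # lower total latency is better
    return sum(qos[s][individual[i]] for i, s in enumerate(services))

def mutate(individual, rate):
    return [random.randrange(len(qos[s])) if random.random() < rate else g
            for g, s in zip(individual, services)]

def evolve(generations=50, pop_size=20):
    population = [[random.randrange(len(qos[s])) for s in services] for _ in range(pop_size)]
    memory, mutation_rate, last_best = [], 0.1, float("inf")
    for _ in range(generations):
        population.sort(key=fitness)
        best = fitness(population[0])
        memory = sorted(memory + [population[0]], key=fitness)[:5]   # persistent memory
        # Stepwise adaptation: raise the rate on stagnation, lower it on progress.
        mutation_rate = (min(0.5, mutation_rate * 1.5) if best >= last_best
                         else max(0.01, mutation_rate * 0.8))
        last_best = best
        parents = population[:pop_size // 2] + memory                # reuse good solutions
        population = [mutate(random.choice(parents), mutation_rate) for _ in range(pop_size)]
    best_solution = min(memory, key=fitness)
    return best_solution, fitness(best_solution)

print(evolve())
```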
