
    Using Microservices to Customize Multi-Tenant SaaS: From Intrusive to Non-Intrusive

    Customization is a widely adopted practice in enterprise software applications such as Enterprise Resource Planning (ERP) or Customer Relationship Management (CRM). Software vendors deploy their enterprise software product on the premises of a customer, where it is then often customized to the customer's specific needs. When enterprise applications move to the cloud as multi-tenant Software-as-a-Service (SaaS), the traditional way of on-premises customization faces new challenges, because a customer no longer has exclusive control of the application. To empower businesses with specific requirements on top of the shared standard SaaS, vendors need a novel approach to support customization of multi-tenant SaaS. In this paper, we summarize our two approaches for customizing multi-tenant SaaS using microservices: intrusive and non-intrusive. The paper clarifies the key concepts related to the problem of multi-tenant customization and describes a design with a reference architecture and high-level principles. We also discuss the key technical challenges and feasible solutions for implementing this architecture. Our microservice-based customization solution shows promise in meeting general customization requirements and achieves a balance between isolation, assimilation, and economy of scale.
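
    As a rough illustration of the non-intrusive direction described above, the sketch below shows how a shared SaaS flow could delegate a single extension point to a tenant-specific customization service, falling back to the standard behaviour for other tenants. It is a minimal, hypothetical Python example: the registry, the "pricing" extension point, and the handler names are assumptions for illustration, not the reference architecture from the paper.

```python
# Minimal sketch: non-intrusive, tenant-aware customization lookup.
# A shared SaaS endpoint consults a registry of tenant-specific
# customization handlers (standing in for customization microservices);
# tenants without a registered handler get the standard product logic.

from typing import Callable, Dict, Optional


class CustomizationRegistry:
    """Maps (tenant_id, extension_point) to a customization handler."""

    def __init__(self) -> None:
        self._endpoints: Dict[tuple, Callable[[dict], dict]] = {}

    def register(self, tenant_id: str, extension_point: str,
                 handler: Callable[[dict], dict]) -> None:
        self._endpoints[(tenant_id, extension_point)] = handler

    def lookup(self, tenant_id: str,
               extension_point: str) -> Optional[Callable[[dict], dict]]:
        return self._endpoints.get((tenant_id, extension_point))


def process_order(order: dict, tenant_id: str,
                  registry: CustomizationRegistry) -> dict:
    """Standard flow with one extension point ('pricing')."""
    custom_pricing = registry.lookup(tenant_id, "pricing")
    if custom_pricing is not None:
        return custom_pricing(order)  # delegate to tenant-specific service
    # Standard, shared product logic.
    order["total"] = sum(item["price"] for item in order["items"])
    return order


registry = CustomizationRegistry()
registry.register(
    "tenant-a", "pricing",
    lambda o: {**o, "total": 0.9 * sum(i["price"] for i in o["items"])})

print(process_order({"items": [{"price": 100.0}]}, "tenant-a", registry))  # total 90.0
print(process_order({"items": [{"price": 100.0}]}, "tenant-b", registry))  # total 100.0
```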

    Event-based Customization of Multi-tenant SaaS Using Microservices

    Popular enterprise software such as ERP and CRM is now being made available on the Cloud in the multi-tenant Software-as-a-Service (SaaS) model. The added value comes from the ability of vendors to enable customer-specific business advantages for every tenant who uses the same main enterprise software product. Software vendors therefore need novel customization solutions for Cloud-based multi-tenant SaaS. In this paper, we present an event-based approach within a non-intrusive customization framework that enables customization for multi-tenant SaaS and addresses the problem of too many API calls to the main software product. Experimental results on Microsoft's eShopOnContainers show that our approach can empower an event bus with the ability to customize the flow of processing events and to integrate with tenant-specific microservices for customization. We also show how our approach ensures tenant isolation, which is crucial in practice for SaaS vendors. This direction can also reduce the number of API calls to the main software product, even when every tenant has different customization services.
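
    The following minimal Python sketch illustrates the event-based idea in the abstract: an event bus that prefers a tenant-specific handler (standing in for a tenant's customization microservice) and otherwise falls back to the default processing flow of the main product. The class, event, and tenant names are hypothetical and not taken from the eShopOnContainers experiment.

```python
# Tiny in-process event bus with tenant-aware routing: each published
# event is dispatched either to a tenant-specific handler or to the
# default handler of the shared product flow.

from typing import Callable, Dict, Optional


class TenantAwareEventBus:
    def __init__(self) -> None:
        # (event_type, tenant_id) -> handler; tenant_id None is the default route
        self._handlers: Dict[tuple, Callable[[dict], None]] = {}

    def subscribe(self, event_type: str, handler: Callable[[dict], None],
                  tenant_id: Optional[str] = None) -> None:
        self._handlers[(event_type, tenant_id)] = handler

    def publish(self, event_type: str, payload: dict) -> None:
        tenant_id = payload.get("tenant_id")
        # Prefer the tenant-specific route; fall back to the shared one.
        handler = (self._handlers.get((event_type, tenant_id))
                   or self._handlers.get((event_type, None)))
        if handler is not None:
            handler(payload)


bus = TenantAwareEventBus()
bus.subscribe("OrderPlaced", lambda e: print("standard flow", e["order_id"]))
bus.subscribe("OrderPlaced",
              lambda e: print("tenant-a custom flow", e["order_id"]),
              tenant_id="tenant-a")

bus.publish("OrderPlaced", {"tenant_id": "tenant-a", "order_id": 1})  # custom route
bus.publish("OrderPlaced", {"tenant_id": "tenant-b", "order_id": 2})  # standard route
```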

    Migrating Monoliths to Microservices-based Customizable Multi-tenant Cloud-native Apps

    It has been common for software vendors to sell clients licenses for software products, such as Enterprise Resource Planning, that are deployed as a monolithic entity on the clients' premises. Moreover, many clients, especially big organizations, often require software products to be customized for their specific needs before deployment on premises. As software vendors try to migrate their monolithic software products to Cloud-native Software-as-a-Service (SaaS), they face two big challenges that this paper aims to address: 1) how to migrate their exclusive monoliths to multi-tenant Cloud-native SaaS; and 2) how to enable tenant-specific customization for multi-tenant Cloud-native SaaS. This paper suggests an approach for migrating monoliths to microservice-based Cloud-native SaaS, providing customers with a flexible customization opportunity while taking advantage of the economies of scale that the Cloud and multi-tenancy provide. Our approach covers not only the migration to microservices but also the introduction of the necessary infrastructure to support the new services and enable tenant-specific customization. We illustrate the application of our approach by migrating a Microsoft reference application called SportStore.
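
    As a loose illustration of incremental migration (not the paper's SportStore case), the Python sketch below shows strangler-style routing: requests for capabilities that have already been extracted into microservices are routed to the new services, while everything else still reaches the monolith. The paths and service URLs are made up for the example.

```python
# Strangler-style routing table: extracted capabilities are served by
# new microservices, all remaining paths still go to the monolith.

EXTRACTED = {
    "/catalog": "http://catalog-service",
    "/cart": "http://cart-service",
}
MONOLITH = "http://legacy-monolith"


def route(path: str) -> str:
    """Return the upstream base URL that should serve this request path."""
    for prefix, service in EXTRACTED.items():
        if path.startswith(prefix):
            return service
    return MONOLITH


assert route("/catalog/items/42") == "http://catalog-service"
assert route("/checkout") == "http://legacy-monolith"
```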

    Implementing a maintainable and secure tenancy model

    Software-as-a-Service is a popular software delivery model that provides subscription-based services for customers. In this thesis, we identify key aspects of implementing a maintainable and secure tenancy model by analyzing the research literature and focusing on a case study. We also study whether it is beneficial, in terms of maintainability and security, to change a single-tenant implementation into a multi-tenant one. We review common tenancy models and security issues in SaaS products. Based on these, we analyze a case study product, identifying potential problems in its single-tenant implementation. We then decide to change that model and show the process of implementing a new hybrid model. Finally, we present validation methods for measuring the effectiveness of the implementation. We identified data security and isolation, efficiency and performance, administrative manageability, scalability, and profitability as the most important quality aspects to consider when choosing a maintainable and secure tenancy model. We also conclude that changing from a single-tenant to a multi-tenant implementation is beneficial in terms of these aspects.
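
    To make the data-isolation aspect concrete, here is a minimal Python sketch of tenant scoping in a shared-database tenancy model: every query goes through a repository bound to one tenant, so callers can never read another tenant's rows. The table, columns, and class names are illustrative assumptions, not details of the case study product.

```python
# Shared-database tenant isolation: the repository appends the tenant_id
# filter itself, so application code cannot forget (or bypass) it.

import sqlite3


class TenantScopedRepository:
    def __init__(self, conn: sqlite3.Connection, tenant_id: str) -> None:
        self._conn = conn
        self._tenant_id = tenant_id

    def add_invoice(self, amount: float) -> None:
        self._conn.execute(
            "INSERT INTO invoices (tenant_id, amount) VALUES (?, ?)",
            (self._tenant_id, amount))

    def list_invoices(self):
        # tenant_id is always supplied by the repository, never by the caller.
        return self._conn.execute(
            "SELECT amount FROM invoices WHERE tenant_id = ?",
            (self._tenant_id,)).fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
TenantScopedRepository(conn, "tenant-a").add_invoice(100.0)
TenantScopedRepository(conn, "tenant-b").add_invoice(250.0)
print(TenantScopedRepository(conn, "tenant-a").list_invoices())  # [(100.0,)] only
```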

    Unified Management of Applications on Heterogeneous Clouds

    The diversity with which cloud providers offer their services, each defining its own interfaces and its own quality and usage agreements, hinders portability and interoperability between providers, leading to the problem known as vendor lock-in. Given the heterogeneity across the different cloud abstraction levels, such as IaaS and PaaS, developing agnostic applications that are independent of the providers and services on which they will be deployed remains a challenge. This also limits the possibility of migrating the components of running cloud applications to new providers. This lack of homogeneity further complicates the development of application-operation processes that are robust to the failures that can occur across different providers and abstraction levels. As a result, applications can become tied to the providers for which they were designed, limiting developers' ability to react to changes in the providers or in the applications themselves. This thesis defines trans-cloud as a new dimension that unifies the management of different providers and service levels, IaaS and PaaS, under a single API, and uses the TOSCA standard to describe agnostic, portable applications with automated processes, for example for deployment. Building on TOSCA's structured topologies, trans-cloud also proposes a generic algorithm for migrating the components of running applications. Furthermore, trans-cloud unifies failure handling, enabling robust, agnostic processes for managing the application life cycle regardless of the providers and service levels on which the applications run. Finally, we present the use cases and experimental results used to validate each of these proposals.
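
    The following toy Python sketch conveys the core idea of a provider-agnostic management layer: components are handled through one uniform interface regardless of whether they run on IaaS or PaaS, which also makes a generic migration step possible. The class and method names are hypothetical and do not correspond to the thesis' actual API or to TOSCA syntax.

```python
# Uniform provider interface hiding the IaaS/PaaS distinction, plus a
# generic migration step between two providers.

from abc import ABC, abstractmethod


class Provider(ABC):
    @abstractmethod
    def deploy(self, component: str) -> None: ...

    @abstractmethod
    def undeploy(self, component: str) -> None: ...


class IaaSProvider(Provider):
    def deploy(self, component):   print(f"provision VM and install {component}")
    def undeploy(self, component): print(f"tear down VM of {component}")


class PaaSProvider(Provider):
    def deploy(self, component):   print(f"push {component} to platform runtime")
    def undeploy(self, component): print(f"delete platform app {component}")


def migrate(component: str, source: Provider, target: Provider) -> None:
    """Generic migration of a running component between providers."""
    target.deploy(component)    # start the new replica first
    source.undeploy(component)  # then release the old one


migrate("web-frontend", IaaSProvider(), PaaSProvider())
```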

    Model-based fleet deployment in the IoT–edge–cloud continuum

    With their increasing computing and networking capabilities, IoT devices and edge gateways have become part of a larger IoT–edge–cloud computing continuum, where processing and storage tasks are distributed across the whole network hierarchy rather than concentrated only in the cloud. At the same time, this has introduced continuous delivery practices to the development of software components for network-connected gateways and sensing/actuating nodes. These devices are placed on end users' premises and are characterized by continuously changing cyber-physical contexts, forcing software developers to maintain multiple application versions and frequently redeploy them on a distributed fleet of devices according to their current contexts. Doing this correctly and efficiently goes beyond manual capabilities and requires an intelligent and reliable automated solution. This paper describes a model-based approach to automatically assigning multiple software deployment plans to hundreds of edge gateways and connected IoT devices, implemented in collaboration with a smart healthcare application provider. From a platform-specific model of an existing edge computing platform, we extract a platform-independent model that describes a list of target devices and a pool of available deployment plans. Next, we use constraint solving to automatically assign deployment plans to devices at once with respect to their specific contexts. The result is transformed back into the platform-specific model and includes a suitable deployment plan for each device, which is then consumed by our engine to deploy software components not only on edge gateways but also on their downstream IoT devices with constrained resources and connectivity. We validate the approach with a fleet deployment prototype integrated into a DevOps toolchain used by the partner application provider. Initial experiments demonstrate the viability of the approach and its usefulness in supporting DevOps for edge and IoT software development.
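
    To give a flavor of the assignment step, the simplified Python sketch below matches each device's context against the hard requirements of candidate deployment plans and picks the first compatible plan. The context keys, plan names, and the greedy strategy are assumptions for illustration; the paper uses constraint solving over a platform-independent model of the whole fleet.

```python
# Context-aware plan assignment: a plan is eligible for a device only
# if all of its hard requirements hold for that device's context.

DEVICES = [
    {"id": "gw-001", "arch": "arm64", "ram_mb": 2048, "has_camera": True},
    {"id": "gw-002", "arch": "x86_64", "ram_mb": 512, "has_camera": False},
]

PLANS = [
    {"name": "full-analytics", "arch": "arm64", "min_ram_mb": 1024, "needs_camera": True},
    {"name": "basic-telemetry", "arch": None, "min_ram_mb": 256, "needs_camera": False},
]


def compatible(device: dict, plan: dict) -> bool:
    """All hard constraints of the plan must hold for the device."""
    return ((plan["arch"] is None or plan["arch"] == device["arch"])
            and device["ram_mb"] >= plan["min_ram_mb"]
            and (not plan["needs_camera"] or device["has_camera"]))


def assign(devices, plans):
    # Greedy: first compatible plan wins; a real constraint solver would
    # optimize the assignment across the whole fleet at once.
    return {d["id"]: next((p["name"] for p in plans if compatible(d, p)), None)
            for d in devices}


print(assign(DEVICES, PLANS))
# {'gw-001': 'full-analytics', 'gw-002': 'basic-telemetry'}
```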

    Monitoring self-adaptive applications within edge computing frameworks: A state-of-the-art review

    Recently, a promising trend has evolved from centralized computation toward decentralized edge computing in the proximity of end users to provide cloud applications. To ensure the Quality of Service (QoS) of such applications and the Quality of Experience (QoE) of end users, it is necessary to employ a comprehensive monitoring approach. Requirements analysis is a key software engineering task throughout the lifecycle of applications; however, the requirements for monitoring systems in edge computing scenarios are not yet fully established. The goal of the present survey is therefore threefold: to identify the main challenges in monitoring edge computing applications that are not yet fully solved; to present a new taxonomy of monitoring requirements for adaptive applications orchestrated on edge computing frameworks; and to discuss and compare the use of widely used cloud monitoring technologies to assure the performance of these applications. Our analysis shows that none of the existing widely used cloud monitoring tools yet provides an integrated monitoring solution within edge computing frameworks. Moreover, some monitoring requirements have not been thoroughly met by any of them.

    Virtual Machine Flow Analysis Using Host Kernel Tracing

    Cloud computing has gained popularity because it offers services at lower cost with a pay-per-use model, virtually unlimited storage through distributed storage systems, and flexible computational power with direct hardware access. Virtualization technology allows a physical server to be shared between several isolated virtualized environments by deploying a hypervisor layer on top of the hardware. As a result, each isolated environment can run its own operating system and applications without mutual interference. With the growth of cloud usage and of virtualization, performance understanding and debugging are becoming a serious challenge for cloud providers. Offering good QoS and high availability are expected to be salient features of cloud computing; nonetheless, the possible reasons behind performance degradation in VMs are numerous: a) heavy load of an application inside the VM; b) contention with other applications inside the same VM; c) contention with other co-located VMs; d) cloud platform failures. The first two cases can be managed by the VM owner, while the other cases need to be solved by the infrastructure provider. One key requirement for such a complex environment, with different virtualization layers, is a precise, low-overhead analysis tool. In this thesis, we present a precise, host-based method to recover the execution flow of virtualized environments, regardless of the level of nested virtualization. To avoid security issues, ease deployment, and reduce execution overhead, our method limits its data collection to the hypervisor level. To analyse the behavior of each VM, we use a lightweight tracing tool called the Linux Trace Toolkit Next Generation (LTTng) [1]. LTTng is optimised for high-throughput, low-overhead tracing, thanks to the lock-free synchronization mechanisms used to update the trace buffer content.
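
    As a simplified illustration of the recovery idea (not the thesis' actual analysis), the Python sketch below pairs hypervisor-level kvm_entry/kvm_exit tracepoint events, as a kernel tracer such as LTTng would record them, to obtain the intervals during which a vCPU thread was executing guest code. The event tuples are mocked; a real analysis would read them from the trace, for example with Babeltrace.

```python
# Pair kvm_entry/kvm_exit events per host thread to reconstruct the
# intervals during which guest code was running on each vCPU thread.

def vcpu_running_intervals(events):
    """events: iterable of (timestamp_ns, event_name, host_tid), sorted by time."""
    entry_ts = {}    # host_tid -> timestamp of the last kvm_entry
    intervals = []   # (host_tid, start_ns, end_ns)
    for ts, name, tid in events:
        if name == "kvm_entry":
            entry_ts[tid] = ts
        elif name == "kvm_exit" and tid in entry_ts:
            intervals.append((tid, entry_ts.pop(tid), ts))
    return intervals


# Mocked host trace for a single vCPU thread (tid 4242).
trace = [
    (1000, "kvm_entry", 4242),
    (1800, "kvm_exit", 4242),   # guest ran for 800 ns
    (2500, "kvm_entry", 4242),
    (2900, "kvm_exit", 4242),   # guest ran for 400 ns
]
print(vcpu_running_intervals(trace))
# [(4242, 1000, 1800), (4242, 2500, 2900)]
```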

    The 6G Architecture Landscape: European Perspective
