1,289 research outputs found

    A multi-tenant database framework for software and cloud computing applications

    University of Technology, Sydney. Faculty of Engineering and Information Technology. Cloud computing is a computing paradigm that shifts access to computing resources from internal data centres to external service providers. This approach is rapidly becoming a standard for offering cost-effective and elastic computing services over the internet. Software as a Service (SaaS) is one of the cloud computing service models; it exploits economies of scale for SaaS providers by offering the same software and computing environment to multiple tenants. This multi-tenant service model requires a multi-tenant database design that can accommodate the data of multiple tenants in a single database schema. Because database resources are shared in this model, the multi-tenant schema must be highly secure, optimized, configurable, and extendable at runtime to satisfy the application requirements of different tenants. However, traditional Relational Database Management Systems (RDBMS) do not support such multi-tenant schema capabilities, and enabling them to do so is a significant challenge. One solution is an intermediate software layer that mediates between multi-tenant applications and the RDBMS, converting multi-tenant queries into regular database queries and executing them in the RDBMS. Developing such a multi-tenant software layer to manage and access tenants' data is a hard and complex problem that involves a long development lifecycle. This thesis makes two main contributions. First, it proposes a novel multi-tenant schema technique called Elastic Extension Tables (EET). Second, it proposes a multi-tenant database framework prototype that implements the EET schema in an RDBMS. This approach can be used to develop a software layer that mediates between software applications and an RDBMS, with the aim of facilitating the development of software applications, and of multi-tenant SaaS and Big Data applications, for both cloud service providers and their tenants. Extensive experiments were conducted to evaluate the feasibility and effectiveness of the EET multi-tenant database schema by comparing it with the commercially used Universal Table Schema Mapping (UTSM). The significant performance improvements obtained with EET compared to UTSM make the EET schema a good candidate for implementing multi-tenant databases and multi-tenant applications. Furthermore, a prototype of the EET framework was developed, and several experiments were performed to verify the practicability and effectiveness of this framework, which is based on the EET multi-tenant database schema. The results indicate that the EET framework is suitable for the development of software applications in general, and multi-tenant SaaS and Big Data applications in particular.
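
    The intermediate-layer idea can be pictured with a small example. The following is a minimal sketch, not the thesis' EET design: it stores tenant-specific extension columns in shared key-value style tables and rewrites a tenant-level "select these custom columns" request into ordinary SQL joins. All table, column, and function names are hypothetical.

```python
# Minimal sketch of a mediation layer that rewrites a tenant-level query into
# plain SQL over a shared schema. Table and column names are hypothetical and
# only illustrate the general idea of extension tables, not the EET design itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE base_row   (tenant_id INT, row_id INT, created TEXT);
CREATE TABLE ext_column (tenant_id INT, col_id INT, col_name TEXT);
CREATE TABLE ext_value  (tenant_id INT, row_id INT, col_id INT, value TEXT);
""")

def tenant_select(conn, tenant_id, columns):
    """Rewrite "select these custom columns for one tenant" into regular SQL:
    one LEFT JOIN on the shared value table per requested extension column.
    NOTE: column names are interpolated as SQL aliases; real code must validate them."""
    selects, joins, params = ["b.row_id"], [], []
    for i, name in enumerate(columns):
        alias = f"v{i}"
        selects.append(f"{alias}.value AS {name}")
        joins.append(
            f"LEFT JOIN ext_value {alias} "
            f"ON {alias}.tenant_id = b.tenant_id AND {alias}.row_id = b.row_id "
            f"AND {alias}.col_id = (SELECT col_id FROM ext_column "
            f"WHERE tenant_id = b.tenant_id AND col_name = ?)"
        )
        params.append(name)
    sql = (f"SELECT {', '.join(selects)} FROM base_row b "
           f"{' '.join(joins)} WHERE b.tenant_id = ?")
    params.append(tenant_id)
    return conn.execute(sql, params).fetchall()

# Usage with hypothetical data: tenant 1 has one custom column "plan".
conn.execute("INSERT INTO base_row VALUES (?,?,?)", (1, 10, "2024-01-01"))
conn.execute("INSERT INTO ext_column VALUES (?,?,?)", (1, 100, "plan"))
conn.execute("INSERT INTO ext_value VALUES (?,?,?,?)", (1, 10, 100, "premium"))
print(tenant_select(conn, 1, ["plan"]))   # -> [(10, 'premium')]
```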

    A native enhanced elastic extension tables multi-tenant database

    A fundamental step in digital image compression is the conversion process, whose purpose is to understand the shape of an image and to convert the digital image to a grayscale representation on which the encoding stage of the compression technique operates. This article investigates compression algorithms for images with artistic effects. A key issue in image compression is how to effectively preserve the original quality of images. Image compression condenses an image by reducing its redundant data so that it can be stored and transmitted cost-effectively. Common techniques include the discrete cosine transform (DCT), the fast Fourier transform (FFT), and the shifted FFT (SFFT). Experimental results report and compare the compression ratios between the original RGB images and their grayscale versions. The algorithm that best improves shape comprehension for images with graphic effects is the SFFT technique.
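
    The article's measurements are not reproduced here; the sketch below only illustrates the kind of pipeline described: RGB-to-grayscale conversion, a 2D FFT or shifted FFT, and a coefficient-based compression ratio. The keep-the-largest-coefficients scheme and the ratio definition are assumptions for illustration, not the article's method.

```python
# Minimal sketch: grayscale conversion, 2D FFT / shifted FFT, and a simple
# compression ratio based on how many transform coefficients are kept.
import numpy as np

def to_grayscale(rgb):
    """ITU-R BT.601 luma weights for an (H, W, 3) RGB image."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def fft_compression_ratio(gray, keep=0.05, shifted=False):
    """Transform the image, keep only the largest `keep` fraction of
    coefficients, and return (ratio, reconstruction)."""
    coeffs = np.fft.fft2(gray)
    if shifted:                       # "shifted FFT": centre the zero frequency
        coeffs = np.fft.fftshift(coeffs)
    mags = np.abs(coeffs).ravel()
    threshold = np.sort(mags)[int((1.0 - keep) * mags.size)]
    mask = np.abs(coeffs) >= threshold
    ratio = coeffs.size / max(int(mask.sum()), 1)  # total / retained coefficients
    kept = np.fft.ifftshift(coeffs * mask) if shifted else coeffs * mask
    reconstruction = np.real(np.fft.ifft2(kept))
    return ratio, reconstruction

rgb = np.random.randint(0, 256, size=(64, 64, 3)).astype(float)  # stand-in image
ratio, _ = fft_compression_ratio(to_grayscale(rgb), keep=0.05, shifted=True)
print(f"approximate compression ratio: {ratio:.1f}:1")
```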

    Using Microservices to Customize Multi-Tenant SaaS: From Intrusive to Non-Intrusive

    Customization is a widely adopted practice in enterprise software applications such as Enterprise Resource Planning (ERP) or Customer Relationship Management (CRM). Software vendors deploy their enterprise software product on the premises of a customer, where it is then often customized for the customer's specific needs. As enterprise applications move to the cloud as multi-tenant Software-as-a-Service (SaaS), the traditional way of on-premises customization faces new challenges, because a customer no longer has exclusive control over the application. To empower businesses with specific requirements on top of the shared standard SaaS, vendors need a novel approach to support customization of multi-tenant SaaS. In this paper, we summarize our two approaches for customizing multi-tenant SaaS using microservices: intrusive and non-intrusive. The paper clarifies the key concepts related to the problem of multi-tenant customization and describes a design with a reference architecture and high-level principles. We also discuss the key technical challenges and feasible solutions for implementing this architecture. Our microservice-based customization solution is promising for meeting general customization requirements and achieves a balance between isolation, assimilation, and economy of scale.
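
    The paper's reference architecture is not reproduced here; as a rough illustration of the non-intrusive idea, the sketch below routes a tenant's request to a registered customization microservice and falls back to the shared standard service otherwise. The service names and registry layout are hypothetical.

```python
# Minimal sketch of tenant-aware routing in front of a shared SaaS service:
# tenants with a registered customization microservice are forwarded there,
# everyone else gets the standard implementation. Endpoints are hypothetical.
from dataclasses import dataclass

STANDARD_SERVICE = "http://orders.standard.svc.cluster.local"

# Per-tenant customization registry: tenant id -> custom microservice endpoint.
CUSTOMIZATIONS = {
    "tenant-acme": "http://orders.acme-custom.svc.cluster.local",
}

@dataclass
class Request:
    tenant_id: str
    path: str

def resolve_backend(request: Request) -> str:
    """Pick the backend for this request: a tenant-specific customization
    microservice if one is registered, otherwise the shared standard service."""
    return CUSTOMIZATIONS.get(request.tenant_id, STANDARD_SERVICE)

print(resolve_backend(Request("tenant-acme", "/orders")))   # custom service
print(resolve_backend(Request("tenant-other", "/orders")))  # standard service
```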

    Implementing Azure Active Directory Integration with an Existing Cloud Service

    Training Simulator (TraSim) is an online, web-based platform for holding crisis management exercises. It simulates epidemics and other exceptional situations to test the functionality of an organization's operating instructions in the hour of need. The main objective of this thesis is to further develop the service by delegating its existing authentication and user provisioning mechanisms to a centralized, cloud-based Identity and Access Management (IAM) service. Using a centralized access control service is widely known as Single Sign-On (SSO), which brings multiple benefits such as increased security, reduced administrative overhead, and improved user experience. The objective originates from a customer organization's request to enable SSO for TraSim. The research focuses on implementing SSO by integrating TraSim with Azure Active Directory (AD), chosen from the wide range of IAM services because it is considered an industry standard and is already used by the customer. Nevertheless, the complexity of the integration is kept as low as possible to retain compatibility with services other than Azure AD. While every integration is a unique undertaking, with an endless number of software stacks that a service can be built on and multiple IAM services to choose from, this thesis aims to provide a general guideline for approaching a similar assignment. Conducting the study required an extensive search and evaluation of the available literature on topics such as IAM, client-server communication, SSO, cloud services, and AD. The literature review is combined with an introduction to the basic technologies that TraSim is built with, to justify the choice of OpenID Connect as the authentication protocol and its implementation using the mozilla-django-oidc library. The literature consists of multiple online articles, publications, and the official documentation of the utilized technologies. The research uses a constructive approach, as it focuses on developing and testing a new feature that is merged into the source code of an already existing piece of software.
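
    The thesis' actual configuration is not reproduced here; the excerpt below is a minimal sketch of what wiring mozilla-django-oidc to Azure AD's v2.0 endpoints typically looks like in a Django settings module. The tenant id, client credentials, and redirect URLs are placeholders.

```python
# Minimal sketch (Django settings.py excerpt): mozilla-django-oidc against
# Azure AD's v2.0 endpoints. All <...> values are placeholders.

AUTHENTICATION_BACKENDS = [
    "django.contrib.auth.backends.ModelBackend",           # keep local accounts working
    "mozilla_django_oidc.auth.OIDCAuthenticationBackend",  # add OIDC / Azure AD login
]

AZURE_TENANT_ID = "<tenant-id>"                # placeholder

OIDC_RP_CLIENT_ID = "<application-client-id>"  # placeholder
OIDC_RP_CLIENT_SECRET = "<client-secret>"      # placeholder
OIDC_RP_SIGN_ALGO = "RS256"

OIDC_OP_AUTHORIZATION_ENDPOINT = (
    f"https://login.microsoftonline.com/{AZURE_TENANT_ID}/oauth2/v2.0/authorize"
)
OIDC_OP_TOKEN_ENDPOINT = (
    f"https://login.microsoftonline.com/{AZURE_TENANT_ID}/oauth2/v2.0/token"
)
OIDC_OP_JWKS_ENDPOINT = (
    f"https://login.microsoftonline.com/{AZURE_TENANT_ID}/discovery/v2.0/keys"
)
OIDC_OP_USER_ENDPOINT = "https://graph.microsoft.com/oidc/userinfo"

LOGIN_REDIRECT_URL = "/"
LOGOUT_REDIRECT_URL = "/"

# urls.py: path("oidc/", include("mozilla_django_oidc.urls")) exposes the
# authentication request and callback views provided by the library.
```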

    Effective Management of Hybrid Workloads in Public and Private Cloud Platforms

    As organizations increasingly adopt hybrid cloud architectures to meet their diverse computing needs, managing workloads across on-premises and multiple cloud environments has become a critical challenge. This thesis explores the concept of hybrid workload management through the implementation of Azure Arc, a cutting-edge solution offered by Microsoft Azure. The primary objective of this study is to investigate how Azure Arc enables efficient resource utilization and scalability for hybrid workloads. The research methodology involves a comprehensive analysis of the key features and functionalities of Azure Arc, coupled with practical experimentation in a simulated hybrid environment. The thesis begins by examining the fundamental principles of hybrid cloud computing and the associated workload management challenges. It then introduces Azure Arc as a novel approach that extends Azure control to on-premises and multi-cloud systems. The architecture, components, and integration mechanisms of Azure Arc are presented in detail, highlighting its ability to centralize management, enforce governance policies, and streamline operational tasks. This thesis contributes to the understanding of hybrid workload management by exploring the capabilities of Azure Arc. It provides valuable insights into the benefits of adopting this technology for organizations seeking to optimize resource utilization, streamline operations, and scale their workloads efficiently across on-premises and multi-cloud environments. The research findings serve as a foundation for further advancements in hybrid cloud computing and workload management strategies.
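
    The thesis treats Azure Arc at the architectural level; as a small illustration of the centralized inventory it enables, the sketch below uses the Azure SDK for Python (assuming the azure-identity and azure-mgmt-resource packages and a placeholder subscription id) to list Arc-connected machines as ordinary Azure Resource Manager resources.

```python
# Minimal sketch: Arc-enabled servers appear as regular ARM resources of type
# "Microsoft.HybridCompute/machines", so they can be inventoried and governed
# alongside native Azure resources. The subscription id is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

arc_machines = client.resources.list(
    filter="resourceType eq 'Microsoft.HybridCompute/machines'"
)
for machine in arc_machines:
    # Each entry is an on-premises or multi-cloud machine projected into Azure.
    print(machine.name, machine.location, machine.tags)
```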

    Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing

    Thesis by compendium of publications. Scientific applications generally imply a variable and unpredictable computational workload that institutions must address by dynamically adjusting the allocation of resources to their different computational needs. Scientific applications may require high capacity, e.g. the concurrent use of computational resources to process many independent jobs (High Throughput Computing, HTC), or high capability, i.e. high-performance resources to solve a single complex problem (High Performance Computing, HPC). The computational resources required by this type of application usually carry a very high cost that may exceed the availability of the institution's resources, or those resources may not be well suited to the scientific applications, especially in the case of infrastructures prepared for the execution of HPC applications. Indeed, the different parts of an application may require different types of computational resources. Nowadays, cloud service platforms have become an efficient solution to meet the demand of HTC applications, as they provide a wide range of computing resources accessible on demand. For this reason, the number of hybrid clouds, which combine infrastructure hosted on cloud platforms with the institutions' own on-premise resources, has increased in recent years. As scientific applications can be processed on different infrastructures, application portability and delivery have become key issues. Containers are probably the most popular technology for application delivery, as they provide reproducibility, traceability, versioning, isolation, and portability. The objective of this thesis is to provide an architecture and a set of services to build elastic hybrid processing infrastructures that can respond to different workloads. To this end, the thesis considers vertical and horizontal elasticity: a proof of concept was developed to provide vertical elasticity, and an elastic cloud architecture for data analytics processing was designed. Subsequently, an elastic cloud architecture comprising heterogeneous computational resources was implemented for medical image processing, providing multiple processing queues for jobs with different requirements; this work was framed in a collaboration with the company QUIBIM. In the last part of the thesis, this architecture was evolved to design and implement an elastic, multi-site and multi-tenant cloud architecture for medical image processing within the framework of the European project PRIMAGE. This architecture uses distributed storage and integrates external services for authentication and authorization based on OpenID Connect (OIDC). The tool kube-authorizer was developed to provide access control to the resources of the processing infrastructure automatically, from the information obtained during the authentication process, by creating the corresponding policies and roles. Finally, another tool, hpc-connector, was developed to enable the integration of HPC processing infrastructures into cloud infrastructures without requiring modifications to either the HPC infrastructure or the cloud architecture. During this thesis, different open source container and job management technologies were used, open source tools and components were developed, and recipes were implemented for the automated configuration of the designed architectures from a DevOps perspective. The results obtained support the feasibility of combining vertical and horizontal elasticity to implement deadline-based QoS policies, as well as the feasibility of the federated authentication model to combine public and on-premise clouds. López Huguet, S. (2021). Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/172327
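
    kube-authorizer itself is released as open source by the thesis; the sketch below only illustrates the kind of task it automates, deriving a namespaced Kubernetes Role and RoleBinding from claims obtained at authentication time, using the official kubernetes Python client. The claim names, role rules, and naming scheme are assumptions, not the tool's actual behaviour.

```python
# Minimal sketch: grant a user, identified by OIDC claims, read access to
# pods and jobs in their own namespace by creating a Role and RoleBinding.
from kubernetes import client, config

def grant_namespace_access(claims: dict) -> None:
    config.load_kube_config()             # or config.load_incluster_config() in-cluster
    rbac = client.RbacAuthorizationV1Api()
    user = claims["preferred_username"]   # assumed OIDC claim
    namespace = claims["project"]         # assumed claim mapping the user to a namespace

    role = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": f"{user}-reader", "namespace": namespace},
        "rules": [{
            "apiGroups": ["", "batch"],
            "resources": ["pods", "jobs"],
            "verbs": ["get", "list", "watch"],
        }],
    }
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"{user}-reader-binding", "namespace": namespace},
        "subjects": [{"kind": "User", "name": user,
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"kind": "Role", "name": f"{user}-reader",
                    "apiGroup": "rbac.authorization.k8s.io"},
    }
    rbac.create_namespaced_role(namespace, role)
    rbac.create_namespaced_role_binding(namespace, binding)
```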