5,425 research outputs found

    Multitenancy - Security Risks and Countermeasures

    Security within the cloud is of paramount importance as interest in and utilization of cloud computing increase. Multitenancy in particular introduces unique security risks to cloud computing as a result of more than one tenant utilizing the same physical computer hardware and sharing the same software and data. The purpose of this paper is to explore the specific risks in cloud computing due to multitenancy and the measures that can be taken to mitigate those risks.
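
    As a concrete illustration of one commonly cited countermeasure (not necessarily one the paper itself evaluates), shared-schema deployments typically enforce tenant scoping on every data access so that one tenant cannot read another tenant's rows. A minimal sketch, assuming an invented invoices table with a tenant_id column:

```python
# Minimal sketch of tenant-scoped data access in a shared-schema database.
# Illustrative only; table and column names are assumptions, not from the paper.
import sqlite3

def fetch_invoices(conn: sqlite3.Connection, tenant_id: str):
    """Return invoices for one tenant only; the tenant filter is always applied."""
    cur = conn.execute(
        "SELECT id, amount FROM invoices WHERE tenant_id = ?",  # parameterized query
        (tenant_id,),
    )
    return cur.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (id INTEGER, tenant_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                     [(1, "tenant-a", 10.0), (2, "tenant-b", 99.0)])
    print(fetch_invoices(conn, "tenant-a"))  # only tenant-a rows are visible
```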

    User-centric Adaptation Analysis of Multi-tenant Services

    Multi-tenancy is a key pillar of cloud services. It allows different users to share computing and virtual resources transparently, meanwhile guaranteeing substantial cost savings. Due to the trade-off between scalability and customization, one of the major drawbacks of multi-tenancy is limited configurability. Since users may often have conflicting configuration preferences, offering the best user experience is an open challenge for service providers. In addition, the users, their preferences, and the operational environment may change during service operation, thus jeopardizing the satisfaction of user preferences. In this article, we present an approach to support user-centric adaptation of multi-tenant services. We describe how to engineer the activities of the Monitoring, Analysis, Planning, Execution (MAPE) loop to support user-centric adaptation, and we focus on adaptation analysis. Our analysis computes a service configuration that optimizes user satisfaction, complies with infrastructural constraints, and minimizes reconfiguration obtrusiveness when user- or service-related changes take place. To support our analysis, we model multi-tenant services and user preferences by using feature and preference models, respectively. We illustrate our approach by utilizing different cases of virtual desktops. Our results demonstrate the effectiveness of the analysis in improving user preference satisfaction in negligible time. Funding: Ministerio de Economía y Competitividad TIN2012-32273; Junta de Andalucía P12-TIC-1867; Junta de Andalucía TIC-590.
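
    A minimal sketch of the kind of adaptation analysis described above, assuming a toy feature model, per-tenant preference weights, a capacity constraint, and a penalty for deviating from the current configuration; all names and numbers here are invented, and the paper's feature and preference models are considerably richer:

```python
# Toy adaptation analysis: pick the feature configuration that maximizes
# aggregate tenant preference satisfaction minus a reconfiguration penalty,
# subject to an infrastructure capacity constraint.
from itertools import combinations

FEATURES = {"hd_video": 3, "gpu_accel": 4, "audio": 1, "file_sync": 2}  # resource cost
CAPACITY = 7
# tenant -> {feature: preference weight}
PREFS = {
    "t1": {"hd_video": 5, "audio": 2},
    "t2": {"gpu_accel": 4, "file_sync": 3},
    "t3": {"hd_video": 1, "file_sync": 4},
}
CURRENT = {"audio", "file_sync"}     # configuration currently deployed
CHANGE_PENALTY = 1.5                 # obtrusiveness cost per added/removed feature

def score(config: set) -> float:
    satisfaction = sum(w for prefs in PREFS.values()
                       for f, w in prefs.items() if f in config)
    obtrusiveness = CHANGE_PENALTY * len(config ^ CURRENT)
    return satisfaction - obtrusiveness

def best_configuration():
    best, best_score = CURRENT, score(CURRENT)
    names = list(FEATURES)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            config = set(combo)
            if sum(FEATURES[f] for f in config) > CAPACITY:
                continue  # violates the infrastructural constraint
            s = score(config)
            if s > best_score:
                best, best_score = config, s
    return best, best_score

if __name__ == "__main__":
    print(best_configuration())
```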

    Optimal deployment of components of cloud-hosted application for guaranteeing multitenancy isolation

    One of the challenges of deploying multitenant cloud-hosted services that are designed to use (or be integrated with) several components is how to implement the required degree of isolation between the components when there is a change in the workload. Achieving the highest degree of isolation implies deploying a component exclusively for one tenant, which leads to high resource consumption and running cost per component. A low degree of isolation allows sharing of resources, which could reduce cost, but with known risks of performance and security interference. This paper presents a model-based algorithm, together with four variants of a metaheuristic that can be used with it, to provide near-optimal solutions for deploying components of a cloud-hosted application in a way that guarantees multitenancy isolation. When the workload changes, the model-based algorithm solves an open multiclass queueing network (QN) model to determine the average number of requests that can access the components and then uses a metaheuristic to provide near-optimal solutions for deploying the components. Performance evaluation showed that the obtained solutions had low variability and low percent deviation when compared to the reference/optimal solution. We also provide recommendations and best-practice guidelines for deploying components in a way that guarantees the required degree of isolation.
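
    The queueing step can be illustrated with the standard product-form results for open multiclass networks: per-class utilization at a component is that class's arrival rate times its service demand, and the average number of requests of a class at a component is its per-class utilization divided by one minus the component's total utilization. A minimal sketch with invented arrival rates and service demands (the metaheuristic itself is omitted):

```python
# Open multiclass queueing network: per-class utilization and the average
# number of requests at each component (standard product-form results).
# Arrival rates and service demands below are illustrative assumptions.

ARRIVALS = {"read": 12.0, "write": 4.0}        # requests/second per class
# service demand (seconds) of each class at each component
DEMANDS = {
    "auth":    {"read": 0.010, "write": 0.015},
    "catalog": {"read": 0.030, "write": 0.020},
    "storage": {"read": 0.020, "write": 0.060},
}

def queue_metrics():
    metrics = {}
    for comp, demand in DEMANDS.items():
        util_per_class = {c: ARRIVALS[c] * d for c, d in demand.items()}
        total_util = sum(util_per_class.values())
        if total_util >= 1.0:
            raise ValueError(f"{comp} is saturated (utilization {total_util:.2f})")
        # average number of requests of each class queued or in service here
        avg_requests = {c: u / (1.0 - total_util) for c, u in util_per_class.items()}
        metrics[comp] = (total_util, avg_requests)
    return metrics

if __name__ == "__main__":
    for comp, (util, avg) in queue_metrics().items():
        rounded = {c: round(n, 2) for c, n in avg.items()}
        print(f"{comp}: utilization={util:.2f}, avg requests={rounded}")
```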

    Cloud enterprise resource planning development model based on software factory approach

    A literature review revealed that Cloud Enterprise Resource Planning (Cloud ERP) is growing significantly, yet from software developers' perspective it suffers from high management complexity, high workload, inconsistent software quality, and knowledge retention problems. Previous research lacks a solution that holistically addresses all of these problem components. The software factory approach was chosen and adapted, along with relevant theories, to develop a model referred to as the Cloud ERP Factory Model (CEF Model), which is intended to pave the way in solving the above-mentioned problems. There are three specific objectives: (i) to develop the model by identifying its components and elements and compiling them into the CEF Model, (ii) to verify the technical feasibility of deploying the model, and (iii) to validate the model's usability in real Cloud ERP production case studies. The research employed the Design Science methodology with a mixed-method evaluation approach. The developed CEF Model consists of five components: Product Lines, Platform, Workflow, Product Control, and Knowledge Management, which can be used to set up a CEF environment that simulates a process-oriented software production environment with capacity and resource planning features. The model was validated through expert reviews, and the finalized model was verified to be technically feasible through a successful deployment into a selected commercial Cloud ERP production facility. Three commercial Cloud ERP deployment case studies were conducted using the prototype environment. Using the survey instruments developed, the results yielded a mean Likert score of 6.3 out of 7, reaffirming that the model is usable and that the research has met its objective in addressing the problem components. The model, along with its deployment verification processes, is the main research contribution. Both items can also be used by software industry practitioners and academics as references in developing a robust Cloud ERP production facility.

    Proposing Optimus Scheduler Algorithm for Virtual Machine Placement Within a Data Center

    With the evolution of the Internet, we are witnessing the birth of an increasing number of applications that rely on the network; what was previously executed on users' computers as stand-alone programs has been redesigned to be executed on servers with permanent connections to the Internet, making the information available from any device that has network access. Instead of buying a copy of a program, users can now pay to obtain access to it through the network, which is one of the models of cloud computing, Software as a Service (SaaS). The continuous growth of Internet bandwidth has also given rise to new multimedia applications, such as social networks and video over the Internet; and to complete this new paradigm, mobile platforms provide the ubiquity of information that allows people to stay connected. Service providers may own servers and data centers or, alternatively, may contract infrastructure providers that use economies of scale to offer access to servers as a service in the cloud computing model, i.e., Infrastructure as a Service (IaaS). As users become more dependent on cloud services and mobile platforms increase the ubiquity of the cloud, quality of service becomes increasingly important. A fundamental metric that defines the quality of service is the delay of information as it travels between user computers and servers, and between the servers themselves. Along with quality of service and cost, energy consumption and CO2 emissions are fundamental considerations when planning cloud computing networks. In this research work, an Optimus Scheduler algorithm is proposed to add, remove, or resize an application within a data center by using the Tabu Search algorithm.
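
    A minimal sketch of a Tabu Search step for virtual machine placement, assuming a simplified model in which each host has a CPU capacity and the objective is to minimize the maximum host utilization; host sizes and VM demands are invented, and the actual Optimus Scheduler objective (delay, energy, CO2 emissions) is richer:

```python
# Toy Tabu Search for VM placement: move one VM per iteration to the host
# that minimizes the maximum CPU utilization, keeping a short tabu list of
# recently moved VMs. Capacities and VM sizes are illustrative assumptions.
import random
from collections import deque

HOSTS = {"h1": 16, "h2": 16, "h3": 8}                      # CPU cores per host
VMS = {"vm1": 4, "vm2": 6, "vm3": 2, "vm4": 8, "vm5": 3}   # cores required

def load(placement):
    usage = {h: 0 for h in HOSTS}
    for vm, host in placement.items():
        usage[host] += VMS[vm]
    return usage

def max_utilization(placement):
    usage = load(placement)
    return max(usage[h] / HOSTS[h] for h in HOSTS)

def feasible(placement):
    usage = load(placement)
    return all(usage[h] <= HOSTS[h] for h in HOSTS)

def tabu_search(iterations=50, tabu_len=3, seed=1):
    random.seed(seed)
    placement = {vm: random.choice(list(HOSTS)) for vm in VMS}
    while not feasible(placement):      # naive restart until a feasible start is found
        placement = {vm: random.choice(list(HOSTS)) for vm in VMS}
    best, best_cost = dict(placement), max_utilization(placement)
    tabu = deque(maxlen=tabu_len)       # recently moved VMs may not move again
    for _ in range(iterations):
        candidates = []
        for vm in VMS:
            if vm in tabu:
                continue
            for host in HOSTS:
                if host == placement[vm]:
                    continue
                trial = dict(placement, **{vm: host})
                if feasible(trial):
                    candidates.append((max_utilization(trial), vm, trial))
        if not candidates:
            break
        cost, vm, placement = min(candidates, key=lambda c: c[0])
        tabu.append(vm)
        if cost < best_cost:
            best, best_cost = dict(placement), cost
    return best, best_cost

if __name__ == "__main__":
    print(tabu_search())
```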

    Managing Distributed Cloud Applications and Infrastructure

    The emergence of the Internet of Things (IoT), combined with greater heterogeneity not only in cloud computing architectures but across the cloud-to-edge continuum, is introducing new challenges for managing applications and infrastructure across this continuum. The scale and complexity are simply too great for IT teams to manually foresee potential issues and manage the dynamism and dependencies across an increasingly interdependent chain of service provision. This Open Access Pivot explores these challenges and offers a solution for the intelligent and reliable management of physical infrastructure and the optimal placement of applications for the provision of services on distributed clouds. This book provides a conceptual reference model for reliable capacity provisioning for distributed clouds and discusses how data analytics and machine learning, application and infrastructure optimization, and simulation can deliver quality-of-service requirements cost-efficiently in this complex feature space. These are illustrated through a series of case studies in cloud computing, telecommunications, big data analytics, and smart cities.
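
    As a toy illustration of the placement problem the book addresses (not its actual method, which combines analytics, machine learning, optimization, and simulation), a greedy heuristic can assign each application to the lowest-latency site that still has capacity; the sites, latencies, and capacities below are invented:

```python
# Greedy sketch of placing applications across distributed cloud/edge sites:
# assign each application to the lowest-latency site that still has capacity.
# Site names, latencies, and capacities are illustrative assumptions.

SITES = {"edge-1": {"latency_ms": 5, "capacity": 2},
         "edge-2": {"latency_ms": 8, "capacity": 2},
         "core":   {"latency_ms": 40, "capacity": 10}}
APPS = ["video-analytics", "iot-ingest", "billing", "reporting", "alerts"]

def place(apps, sites):
    remaining = {name: s["capacity"] for name, s in sites.items()}
    ordered = sorted(sites, key=lambda name: sites[name]["latency_ms"])
    placement = {}
    for app in apps:
        for site in ordered:                 # prefer the lowest-latency site
            if remaining[site] > 0:
                placement[app] = site
                remaining[site] -= 1
                break
        else:
            placement[app] = None            # no capacity anywhere
    return placement

if __name__ == "__main__":
    print(place(APPS, SITES))
```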

    Effective Management of Hybrid Workloads in Public and Private Cloud Platforms.

    As organizations increasingly adopt hybrid cloud architectures to meet their diverse computing needs, managing workloads across on-premises and multiple cloud environments has become a critical challenge. This thesis explores the concept of hybrid workload management through the implementation of Azure Arc, a cutting-edge solution offered by Microsoft Azure. The primary objective of this study is to investigate how Azure Arc enables efficient resource utilization and scalability for hybrid workloads. The research methodology involves a comprehensive analysis of the key features and functionalities of Azure Arc, coupled with practical experimentation in a simulated hybrid environment. The thesis begins by examining the fundamental principles of hybrid cloud computing and the associated workload management challenges. It then introduces Azure Arc as a novel approach that extends Azure control to on-premises and multi-cloud systems. The architecture, components, and integration mechanisms of Azure Arc are presented in detail, highlighting its ability to centralize management, enforce governance policies, and streamline operational tasks. This thesis contributes to the understanding of hybrid workload management by exploring the capabilities of Azure Arc. It provides valuable insights into the benefits of adopting this technology for organizations seeking to optimize resource utilization, streamline operations, and scale their workloads efficiently across on-premises and multi-cloud environments. The research findings serve as a foundation for further advancements in hybrid cloud computing and workload management strategies.