398 research outputs found

    A model for optimising the deployment of cloud-hosted application components for guaranteeing multitenancy isolation.

    Tenants associated with a cloud-hosted application seek to reduce running costs and minimize resource consumption by sharing components and resources. However, despite these benefits, sharing resources can affect tenants’ access and overall performance if one tenant abruptly experiences a significant workload, particularly if the application fails to accommodate the sudden increase; the issue can become severe when there is a high or varying degree of isolation between components. This paper presents novel solutions for deploying components of a cloud-hosted application in a way that guarantees the required degree of multitenancy isolation, using a mathematical optimization model and a metaheuristic algorithm. The research demonstrates that the optimal solutions obtained from the model had low variability and low percent deviation when compared against a target solution. The paper also discusses areas of application of the optimization model, along with challenges and recommendations for deploying components associated with varying degrees of isolation.
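
    To make the idea concrete, below is a minimal sketch of this kind of deployment problem, assuming a toy formulation: each component carries a required degree of isolation (1 = shared, 2 = tenant-isolated, 3 = dedicated, following the multitenancy patterns this line of work builds on), and the objective is to minimise the number of nodes used. All names, constraints and the objective are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch (assumed formulation, not the paper's model): assign
# application components to cloud nodes while respecting each component's
# required degree of isolation.
from itertools import product

# component -> required degree: 1 = shared, 2 = tenant-isolated, 3 = dedicated
components = {"ui": 1, "ci-engine": 3, "db": 2}
nodes = ["n1", "n2"]
node_capacity = {"n1": 2, "n2": 2}          # max components per node

def feasible(assignment):
    """A dedicated component (degree 3) must not share its node; respect capacity."""
    for comp, node in assignment.items():
        if components[comp] == 3:
            if any(n == node for c, n in assignment.items() if c != comp):
                return False
    for n in nodes:
        if sum(1 for v in assignment.values() if v == n) > node_capacity[n]:
            return False
    return True

def cost(assignment):
    """Toy objective: number of nodes used, a proxy for resource consumption."""
    return len(set(assignment.values()))

best = min(
    (dict(zip(components, combo)) for combo in product(nodes, repeat=len(components))),
    key=lambda a: cost(a) if feasible(a) else float("inf"),
)
print(best, cost(best))
```

    The paper's optimization model and metaheuristic target instances far larger than this brute-force search can handle; the sketch only shows the shape of the decision being optimised.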

    Revenue maximization problems in commercial data centers

    PhD Thesis. As IT systems become more important every day, one of the main concerns is that users may face major problems, and eventually incur major costs, if computing systems do not meet the expected performance requirements: customers expect reliability and performance guarantees, while underperforming systems lose revenue. Even with the adoption of data centers as the hub of IT organizations and a provider of business efficiencies, the problems are not over, because it is extremely difficult for service providers to meet the promised performance guarantees in the face of unpredictable demand. One possible approach is the adoption of Service Level Agreements (SLAs): contracts that specify a level of performance that must be met, and compensations in case of failure. In this thesis I address some of the performance problems that arise when IT companies sell the service of running ‘jobs’ subject to Quality of Service (QoS) constraints. In particular, the aim is to improve the efficiency of service provisioning systems by allowing them to adapt to changing demand conditions. First, I define the problem in terms of a utility function to maximize. Two different models are analyzed: one for single jobs, and another suited to session-based traffic. Then, I introduce an autonomic model for service provision. The architecture consists of a set of hosted applications that share a certain number of servers. The system collects demand and performance statistics and estimates traffic parameters. These estimates are used by management policies which implement dynamic resource allocation and admission algorithms. Results from a number of experiments show that the performance of these heuristics is close to optimal. QoSP (Quality of Service Provisioning), British Telecom.
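
    As a hedged illustration of the utility-maximization idea, the sketch below admits a job only when its revenue outweighs the expected SLA penalty. The M/M/1 response-time approximation and every parameter are assumptions for illustration, not the thesis's actual models.

```python
# Hedged sketch (not the thesis's model): admit a job only if its marginal
# utility, revenue minus the expected SLA penalty, is positive.
def expected_response_time(load, capacity):
    """M/M/1 approximation of mean response time at the given offered load."""
    return float("inf") if load >= capacity else 1.0 / (capacity - load)

def admit(job_rate, current_load, capacity, revenue, penalty, sla_target):
    """Admit if the revenue outweighs the expected penalty for missing the SLA."""
    t = expected_response_time(current_load + job_rate, capacity)
    expected_penalty = penalty if t > sla_target else 0.0
    return revenue - expected_penalty > 0

# Example: a server processing 10 jobs/s, currently offered 9 jobs/s.
print(admit(job_rate=0.5, current_load=9.0, capacity=10.0,
            revenue=1.0, penalty=5.0, sla_target=1.5))   # False: penalty dominates
```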

    Active aging in place supported by caregiver-centered modular low-cost platform

    Aging in place happens when people age in the residence of their choice, usually their own homes, because that is where they prefer to live for as long as possible. This research work focuses on the conceptualization and implementation of a platform to support active aging in place, with a particular focus on caregivers and what they need to accomplish their tasks with comfort and supervision. The platform also provides an engagement dimension, supporting modules that challenge people and stimulate them to be naturally more active. The platform is supported by IoT, using low-cost technology so that it can be extended modularly. It is a modular platform capable of responding to the specific needs of seniors aging in place and of their caregivers: it obtains data about the person under supervision and provides the conditions for constant and more effective monitoring, through modules and tools that support decision-making and task completion for active living. Constant monitoring makes it possible to learn the senior's routine of daily activities. Machine learning techniques allow the platform to identify situations of potential risk in real time, triggering a triage process with the older adult and, consequently, the actions necessary for the caregiver to intervene in useful time.
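
    A toy sketch of how constant monitoring might feed real-time risk flagging follows; the z-score detector, sensor counts and threshold are assumptions for illustration, since the abstract does not specify which machine learning techniques the platform uses.

```python
# Assumed approach for illustration (not the platform's actual model): flag a
# day whose activity count deviates strongly from the senior's learned routine.
from statistics import mean, stdev

def flag_risk(history, today, threshold=3.0):
    """Return True if today's activity count is an outlier vs. the routine."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

daily_motion_events = [120, 131, 118, 125, 129, 122, 127]  # sensor counts per day
print(flag_risk(daily_motion_events, today=35))  # sharp drop -> potential risk
```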

    Architecting the deployment of cloud-hosted services for guaranteeing multitenancy isolation.

    In recent years, software tools used for Global Software Development (GSD) processes (e.g., continuous integration, version control and bug tracking) have increasingly been deployed in the cloud to serve multiple users. Multitenancy is an important architectural property in cloud computing in which a single instance of an application is used to serve multiple users. There are two key challenges in implementing multitenancy: (i) ensuring isolation either between multiple tenants accessing the service or between components designed (or integrated) with the service; and (ii) resolving trade-offs between varying degrees of isolation between tenants or components. The aim of this thesis is to investigate how to architect the deployment of cloud-hosted services while guaranteeing the required degree of multitenancy isolation. Existing approaches for architecting the deployment of cloud-hosted services to serve multiple users have paid little attention to evaluating the effect of varying degrees of multitenancy isolation on the required performance, resource consumption and access privileges of tenants (or components). Approaches for isolating tenants (or components) are usually implemented at the lower layers of the cloud stack and often apply to the entire system rather than to individual tenants (or components). This thesis adopts a multimethod research strategy to provide a set of novel approaches for addressing these problems. Firstly, a taxonomy of deployment patterns and a general process, CLIP (CLoud-based Identification process for deployment Patterns), were developed to guide architects in using the taxonomy to select applicable cloud deployment patterns (together with the supporting technologies) for deploying services to the cloud. Secondly, an approach named COMITRE (COmponent-based approach to Multitenancy Isolation Through request RE-routing) was developed together with supporting algorithms, and then applied to three case studies to empirically evaluate the varying degrees of isolation between tenants enabled by multitenancy patterns for three different cloud-hosted GSD processes, namely continuous integration, version control, and bug tracking. After that, a synthesis of findings from the three case studies was carried out to provide an explanatory framework and new insights into varying degrees of multitenancy isolation. Thirdly, a model-based decision support system, together with four variants of a metaheuristic solution for solving the model, was developed to provide an optimal solution for deploying components of a cloud-hosted application with guarantees for multitenancy isolation. By creating and applying the taxonomy, it was learnt that most deployment patterns are related and can be implemented by combining them with others, for example in hybrid deployment scenarios that integrate data residing in multiple clouds. It has been argued that a shared component is better for reducing resource consumption while a dedicated component is better at avoiding performance interference. However, as the experimental results show, there are certain GSD processes where that is not necessarily so; for example, in version control, additional copies of files are created in the repository, consuming more disk space, and over time performance begins to degrade as more time is spent searching across the many files on disk. Extensive performance evaluation of the model-based decision support system showed that the optimal solutions obtained had low variability and percent deviation, and were produced with low computational effort when compared to a given target solution.
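
    For illustration, the sketch below shows a generic simulated-annealing loop of the kind such metaheuristic variants might build on. The move operator, objective (minimise the number of nodes used; isolation constraints omitted for brevity) and parameters are assumptions; the thesis's four variants are not reproduced here.

```python
# Hedged sketch: a generic simulated-annealing loop for component deployment.
import math
import random

def simulated_annealing(initial, neighbour, cost, t0=1.0, cooling=0.995, steps=5000):
    current = best = initial
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling
    return best

random.seed(42)
# Toy instance: place 6 components on 3 nodes, minimising the number of nodes used.
init = [random.randrange(3) for _ in range(6)]

def neighbour(a):
    b = list(a)
    b[random.randrange(len(b))] = random.randrange(3)  # move one component
    return b

print(simulated_annealing(init, neighbour, cost=lambda a: len(set(a))))
```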

    Stochastic Dynamic Programming and Stochastic Fluid-Flow Models in the Design and Analysis of Web-Server Farms

    A Web-server farm is a specialized facility designed specifically for housing Web servers that cater to one or more Internet-facing Web sites. In this dissertation, a stochastic dynamic programming technique is used to obtain the optimal admission control policy with different classes of customers, and stochastic fluid-flow models are used to compute the performance measures in the network. The two types of network traffic considered in this research are streaming (guaranteed bandwidth per connection) and elastic (shares available bandwidth equally among connections). We first obtain the optimal admission control policy using stochastic dynamic programming, in which, based on the number of requests of each type being served, a decision is made whether to allow or deny service to an incoming request. In this subproblem, we consider a server with fixed bandwidth capacity, which allocates the requested bandwidth to the streaming requests and divides all of the remaining bandwidth equally among the elastic requests. The performance metric of interest in this case is the blocking probability of streaming traffic, which is computed in order to provide Quality of Service (QoS) guarantees. Next, we obtain bounds on the expected waiting time in the system for elastic requests that enter the system. This is done at the server level in such a way that the total available bandwidth for the requests is constant. Trace data are converted to an ON-OFF source and fluid-flow models are used for this analysis. The results are compared with both the mean waiting time obtained by simulating real data and the expected waiting time obtained using traditional queueing models. Finally, we consider the network of servers and routers within the Web farm where data from servers flows and merges before being transmitted to the requesting users via the Internet. We compute the waiting time of the elastic requests at intermediate and edge nodes by obtaining the distribution of the outflow of the upstream node. This outflow distribution is obtained using a methodology based on minimizing the deviations from the constituent inflows. This analysis also helps us compute waiting times at different bandwidth capacities, and hence obtain a suitable bandwidth to promise or satisfy the QoS guarantees. This research helps in obtaining performance measures for different traffic classes at a Web-server farm so as to be able to promise or provide QoS guarantees, while at the same time helping to utilize the resources of the server farm efficiently, thereby reducing operational costs and increasing energy savings.
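
    A hedged sketch of the stochastic-dynamic-programming step follows: value iteration for a discrete-time admission-control model in which a server with C bandwidth units admits or rejects streaming requests (one unit each), while unreserved bandwidth earns value from elastic traffic. All dynamics and parameters are assumptions for illustration, not the dissertation's model.

```python
# Illustrative sketch (assumed dynamics, not the dissertation's model).
C = 5                # bandwidth units
p_arrive = 0.4       # P(a streaming request arrives in a slot)
p_depart = 0.3       # P(one admitted stream departs in a slot), simplified
r_admit = 1.0        # reward for admitting a stream
r_elastic = 0.2      # per-slot value of each free unit serving elastic traffic
gamma = 0.95         # discount factor

def step_value(V, n):
    """Expected discounted continuation value of occupying n units this slot."""
    drift = p_depart * V[max(n - 1, 0)] + (1 - p_depart) * V[n]
    return r_elastic * (C - n) + gamma * drift

V = [0.0] * (C + 1)
for _ in range(500):                       # value iteration
    V = [
        p_arrive * max(
            step_value(V, n),                                       # reject
            (r_admit + step_value(V, n + 1)) if n < C else float("-inf"),  # admit
        )
        + (1 - p_arrive) * step_value(V, n)
        for n in range(C + 1)
    ]

policy = ["admit" if n < C and r_admit + step_value(V, n + 1) > step_value(V, n)
          else "reject" for n in range(C + 1)]
print(policy)   # one decision per occupancy level n = 0..C
```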

    Mining Behavior of Citizen Sensor Communities to Improve Cooperation with Organizational Actors

    Web 2.0 (social media) provides a natural platform for the dynamic emergence of citizen (as) sensor communities, where citizens generate content for sharing information and engaging in discussions. Such a citizen sensor community (CSC) has stated or implied goals that are helpful in the work of formal organizations, such as an emergency management unit, for prioritizing their response needs. This research addresses questions related to the design of a cooperative system of organizations and citizens in a CSC. Prior research by social scientists in limited offline and online environments has provided a foundation for research on cooperative behavior challenges, including 'articulation' and 'awareness', but Web 2.0-supported CSCs offer new challenges as well as opportunities. A CSC presents information overload for organizational actors, especially in finding reliable information providers (for awareness) and finding actionable information in the data generated by citizens (for articulation). We also note three data-level challenges: ambiguity in interpreting unconstrained natural language text, sparsity of user behaviors, and diversity of user demographics. Interdisciplinary research involving the social and computer sciences is essential to address these socio-technical issues. I present a novel web information-processing framework, called the Identify-Match-Engage (IME) framework. IME allows operationalizing computation in the design problems of awareness and articulation in the cooperative system between citizens and organizations, by addressing the data problems of group engagement modeling and intent mining. The IME framework includes: (a) identification of cooperation-assistive intent (seeking-offering) from short, unstructured messages using a classification model with declarative, social and contrast pattern knowledge; (b) facilitation of coordination modeling using bipartite matching of complementary intent (seeking-offering); and (c) identification of user groups to prioritize for engagement, by defining a content-driven measure of 'group discussion divergence'. The use of prior knowledge and the interplay of features of users, content, and network structures efficiently captures context for computing cooperation-assistive behavior (intent and engagement) from unstructured social data in online socio-technical systems. Our evaluation on a use case from the crisis response domain shows improved performance for both intent classification and group engagement prioritization. Real-world applications of this work include the use of the engagement interface tool during several recent crises, including the 2014 Jammu and Kashmir floods, and intent classification as a service integrated by the crisis-mapping pioneer Ushahidi's CrisisNET project for broader impact.
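
    As an illustration of the coordination-modeling step (b), the sketch below matches 'seeking' messages to complementary 'offering' messages with a maximum bipartite matching; the keyword-overlap compatibility test is a toy stand-in for the framework's much richer features.

```python
# Hedged sketch of step (b): maximum bipartite matching of complementary
# intents using Kuhn's augmenting-path algorithm. Data and the compatibility
# test are illustrative.
seeking = {"s1": {"water", "shelter"}, "s2": {"medicine"}, "s3": {"water"}}
offering = {"o1": {"water"}, "o2": {"medicine", "food"}}

def compatible(need, offer):
    return bool(need & offer)                # any shared resource keyword

edges = {s: [o for o, off in offering.items() if compatible(need, off)]
         for s, need in seeking.items()}

def bipartite_match(edges):
    """Simple augmenting-path maximum matching (Kuhn's algorithm)."""
    match = {}                               # offer -> seeker
    def try_assign(s, seen):
        for o in edges[s]:
            if o in seen:
                continue
            seen.add(o)
            if o not in match or try_assign(match[o], seen):
                match[o] = s
                return True
        return False
    for s in edges:
        try_assign(s, set())
    return {s: o for o, s in match.items()}

print(bipartite_match(edges))   # {'s1': 'o1', 's2': 'o2'}; s3 stays unmatched
```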

    Predictive dynamic resource allocation for web hosting environments

    E-Business applications are subject to significant variations in workload, and this can cause exceptionally long response times for users, the timing out of client requests and/or the dropping of connections. One solution is to host these applications in virtualised server pools, and to dynamically reassign compute servers between pools to meet the demands on the hosted applications. This work is concerned with dynamic resource allocation for multi-tiered, cluster-based web hosting environments. Dynamic resource allocation is reactive: when overloading occurs in one resource pool, servers are moved from another (quieter) pool to meet this demand. Switching servers between pools is not without cost, however, and this overhead must be weighed against the possible system gain. In this thesis we combine the reactive behaviour of two server switching policies – the Proportional Switching Policy (PSP) and the Bottleneck Aware Switching Policy (BSP) – with the proactive properties of several workload forecasting models. We evaluate the behaviour of the two switching policies and compare them against static resource allocation under a range of reallocation intervals (the time it takes to switch a server from one resource pool to another), and observe that larger reallocation intervals have a negative impact on revenue. We also construct model- and simulation-based environments in which the combination of workload prediction and dynamic server switching can be explored. Several different (but common) predictors – Last Observation (LO), Simple Average (SA), Sample Moving Average (SMA), Exponential Moving Average (EMA), Low Pass Filter (LPF), and an AutoRegressive Integrated Moving Average (ARIMA) – have been applied alongside the switching policies. As each of the forecasting schemes has its own bias, we also develop a number of meta-forecasting algorithms – the Active Window Model (AWM), the Voting Model (VM), the Selective Model (SM), the Dynamic Active Window Model (DAWM), and a method based on Workload Pattern Analysis (WPA). The schemes are tested with real-world workload traces from several sources to ensure consistent and improved results. We also investigate the effectiveness of these schemes on workloads containing extreme events (e.g. flash crowds). The results show that workload forecasting can be very effective when applied alongside dynamic resource allocation strategies.
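
    For concreteness, minimal sketches of three of the predictors named above (LO, SMA and EMA) are given below; the window and smoothing parameters are illustrative, not those used in the thesis.

```python
# Minimal sketches of three common workload predictors; parameters are assumed.
def last_observation(history):
    return history[-1]

def simple_moving_average(history, window=4):
    recent = history[-window:]
    return sum(recent) / len(recent)

def exponential_moving_average(history, alpha=0.5):
    forecast = history[0]
    for x in history[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

workload = [100, 120, 90, 300, 310, 305]   # requests per interval, with a burst
for f in (last_observation, simple_moving_average, exponential_moving_average):
    print(f.__name__, round(f(workload), 1))
```

    Each predictor weights history differently, which is exactly the bias the meta-forecasting algorithms above are designed to arbitrate between.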
