8 research outputs found

    StreamCloud: An elastic and scalable data streaming system

    Many applications in domains such as telecommunications, network security, and large-scale sensor networks require online processing of continuous data flows. They produce very high loads that require aggregating the processing capacity of many nodes. Current Stream Processing Engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either under- or over-provisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation, and a thorough evaluation of the scalability and elasticity of the fully implemented system.
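    The parallelization idea is concrete enough to sketch. Below is a minimal illustration of hash-partitioned routing between two subquery stages, in the spirit of the technique described in the abstract; all names (Subcluster, route) are hypothetical and not StreamCloud's actual API.
```python
# Illustrative sketch of hash-partitioned tuple routing between two
# subquery stages, each running on its own independent set of nodes.
# All names here are hypothetical, not StreamCloud's API.
from typing import Any

class Subcluster:
    """A set of nodes that runs one subquery of the split query."""
    def __init__(self, nodes: list[str]):
        self.nodes = nodes

    def route(self, tuple_key: Any) -> str:
        # Hash on the aggregation key so all tuples for the same key
        # reach the same node, avoiding cross-node merges downstream.
        return self.nodes[hash(tuple_key) % len(self.nodes)]

# A query split into two subqueries, each on its own node set:
filter_stage = Subcluster(["node-1", "node-2"])
aggregate_stage = Subcluster(["node-3", "node-4", "node-5"])

def process(event: dict) -> None:
    if event["bytes"] > 1000:                       # subquery 1: filter
        target = aggregate_stage.route(event["src_ip"])
        print(f"send {event} to {target}")          # subquery 2: aggregate

process({"src_ip": "10.0.0.1", "bytes": 4096})
```
    Routing on the key at the stage boundary is what keeps the distribution overhead low: downstream nodes never need to exchange partial aggregates for the same key.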

    Elasca: Workload-Aware Elastic Scalability for Partition Based Database Systems

    Providing the ability to increase or decrease allocated resources on demand as the transactional load varies is essential for database management systems (DBMS) deployed on today's computing platforms, such as the cloud. The need to maintain consistency of the database, at very large scales, while providing high performance and reliability makes elasticity particularly challenging. In this thesis, we exploit data partitioning as a way to provide elastic DBMS scalability. We assert that the flexibility provided by a partitioned, shared-nothing parallel DBMS can be used to implement elasticity. Our idea is to start with a small number of servers that manage all the partitions, and to elastically scale out by dynamically adding new servers and redistributing database partitions among these servers as the load varies. Implementing this approach requires (a) efficient mechanisms for addition/removal of servers and migration of partitions, and (b) policies to efficiently determine the optimal placement of partitions on the given servers as well as plans for partition migration. This thesis presents Elasca, a system that implements both these features in an existing shared-nothing DBMS (namely, VoltDB) to provide automatic elastic scalability. Elasca consists of a mechanism for enabling elastic scalability, and a workload-aware optimizer for determining optimal partition placement and migration plans. Our optimizer minimizes the computing resources required and balances load effectively without compromising system performance, even in the presence of variations in the intensity and skew of the load. The results of our experiments show that Elasca is able to achieve performance close to a fully provisioned system while saving 35% of resources on average. Furthermore, Elasca's workload-aware optimizer performs up to 79% less data movement than a greedy approach to resource minimization, and also balances load much more effectively.
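    To make the placement/migration trade-off concrete, here is a toy rebalancing step that evens out per-server load while preferring cheap moves; this heuristic is illustrative only and is not Elasca's actual optimizer.
```python
# Toy workload-aware placement step: balance per-server load while
# moving as few (and as small) partitions as possible.
# Illustrative heuristic only, not Elasca's algorithm.

def rebalance(placement: dict[str, str], load: dict[str, float],
              servers: list[str]) -> list[tuple[str, str, str]]:
    """Return migrations (partition, src, dst) that even out server load."""
    target = sum(load.values()) / len(servers)
    server_load = {s: 0.0 for s in servers}
    for p, s in placement.items():
        server_load[s] += load[p]
    moves = []
    # Drain overloaded servers first, moving their lightest partitions,
    # so each unit of load balanced costs the least data movement.
    for src in sorted(servers, key=lambda s: -server_load[s]):
        parts = sorted((p for p, s in placement.items() if s == src),
                       key=lambda p: load[p])
        for p in parts:
            if server_load[src] <= target:
                break
            dst = min(servers, key=lambda s: server_load[s])
            if server_load[dst] + load[p] > target:
                continue  # move would just overload the destination
            placement[p] = dst
            server_load[src] -= load[p]
            server_load[dst] += load[p]
            moves.append((p, src, dst))
    return moves

print(rebalance({"p1": "A", "p2": "A", "p3": "A", "p4": "B"},
                {"p1": 5.0, "p2": 1.0, "p3": 1.0, "p4": 2.0},
                ["A", "B", "C"]))
```
    A purely greedy resource minimizer would instead pick whichever moves shrink the server count fastest, which is exactly the behavior the abstract reports causes up to 79% more data movement.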

    EdgeX: Edge Replication for Web Applications

    Global web applications face the problem of high network latency due to their need to communicate with distant data centers. Many applications use edge networks for caching images, CSS, JavaScript, and other static content in order to avoid some of this network latency. However, for updates and for anything other than static content, communication with the data center is still required, and can dominate application request latencies. One way to address this problem is to push more of the web application, as well as the database on which it depends, from the remote data center towards the edge of the network. This thesis presents preliminary work in this direction. Specifically, it presents an edge-aware dynamic data replication architecture for relational database systems supporting web applications. The objective is to allow dynamic content to be served from the edge of the network, with low latency.
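    A minimal sketch of the read/write split that edge replication implies, assuming a single edge point of presence and an application-chosen staleness bound; the names and constants below are hypothetical, not EdgeX's design.
```python
# Sketch of edge-aware query routing: serve reads from a nearby edge
# replica while its copy is fresh enough, forward writes to the origin.
# Hosts and the staleness bound are illustrative assumptions.
import time

ORIGIN = "datacenter.example.com"
EDGE = "edge-pop-42.example.com"
MAX_STALENESS_S = 2.0          # assumed application tolerance
last_sync = time.time()        # when the edge replica last caught up

def route_query(sql: str) -> str:
    is_read = sql.lstrip().upper().startswith("SELECT")
    fresh = (time.time() - last_sync) <= MAX_STALENESS_S
    # Reads stay at the edge only within the staleness bound; writes
    # (and stale reads) pay the round trip to the data center.
    return EDGE if is_read and fresh else ORIGIN

print(route_query("SELECT * FROM posts WHERE id = 7"))    # edge
print(route_query("UPDATE posts SET views = views + 1"))  # origin
```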

    A mashup language for the autonomic integration of data services in dynamic environments

    The integration of information coming from data services or data sources in virtual communities is user-centred, i.e., associated with visualization issues determined by user needs. A virtual community can be seen as a cyberspace that can be customized by every user by specifying the information he/she is interested in and the way it should be retrieved and presented, respecting specific security and QoS properties. Such requirements must be defined or inferred and then interpreted for building customized visualisations. Nowadays, there is no simple declarative mashup language for retrieving, integrating, and visualizing data produced by data services according to spatio-temporal specifications. The purpose of this thesis is to develop such a language. This work is done within the framework of the REDSHINE project (red-shine.imag.fr), supported by a Grenoble INP "Bonus Qualité Recherche" grant.

    Workload Management for Data-Intensive Services

    Data-intensive web services are typically composed of three tiers: i) a display tier that interacts with users and serves rich content to them, ii) a storage tier that stores the user-generated or machine-generated data used to create this content, and iii) an analytics tier that runs data analysis tasks in order to create and optimize new content. Each tier has different workloads and requirements that result in a diverse set of systems being used in modern data-intensive web services.
    Servers are provisioned dynamically in the display tier to ensure that interactive client requests are served as per the latency and throughput requirements. The challenge is not only deciding automatically how many servers to provision but also when to provision them, while ensuring stable system performance and high resource utilization. To address these challenges, we have developed a new control policy for provisioning resources dynamically in coarse-grained units (e.g., adding or removing servers or virtual machines in cloud platforms). Our new policy, called proportional thresholding, converts a user-specified performance target value into a target range in order to account for the relative effect of provisioning a server on the overall workload performance.
    The storage tier is similar to the display tier in some respects, but poses the additional challenge of needing redistribution of stored data when new storage nodes are added or removed. Thus, there will be some delay before the effects of changing a resource allocation appear. Moreover, redistributing data can cause some interference to the current workload because it uses resources that could otherwise be used for processing requests. We have developed a system, called Elastore, that addresses the new challenges found in the storage tier. Elastore not only coordinates resource allocation and data redistribution to preserve stability during dynamic resource provisioning, but it also finds the best tradeoff between workload interference and data redistribution time.
    The workload in the analytics tier consists of data-parallel workflows that can either be run in a batch fashion or continuously as new data becomes available. Each workflow is composed of smaller units that have producer-consumer relationships based on data. These workflows are often generated from declarative specifications in languages like SQL, so there is a need for a cost-based optimizer that can generate an efficient execution plan for a given workflow. Building such an optimizer poses a number of challenges, including characterizing the large execution plan space, developing cost models to estimate execution costs, and efficiently searching for the best execution plan. We have built two cost-based optimizers: Stubby for batch data-parallel workflows running on MapReduce systems, and Cyclops for continuous data-parallel workflows where the choice of execution system is made a part of the execution plan space.
    We have conducted a comprehensive evaluation that shows the effectiveness of each tier's automated workload management solution.
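    The proportional-thresholding policy described above can be sketched directly: instead of a single target latency, the controller maintains a target range whose lower bound reflects how much removing one server would shift performance. The sketch assumes a simple 1/n capacity model; constants and names are illustrative, not the thesis's actual controller.
```python
# Sketch of proportional thresholding: widen the acceptable band when
# the cluster is small (one server matters a lot) and narrow it as the
# cluster grows. Constants are illustrative assumptions.

def target_range(target_ms: float, n_servers: int) -> tuple[float, float]:
    # With n servers, removing one changes capacity by roughly 1/(n+1),
    # so the band below the target narrows as the cluster grows.
    low = target_ms * (1.0 - 1.0 / (n_servers + 1))
    return (low, target_ms)

def control_step(latency_ms: float, target_ms: float, n: int) -> int:
    low, high = target_range(target_ms, n)
    if latency_ms > high:        # violating the target: scale out
        return n + 1
    if latency_ms < low:         # comfortably under it: scale in
        return max(1, n - 1)
    return n                     # inside the band: hold steady

n = 4
for observed in (130.0, 95.0, 60.0):
    n = control_step(observed, target_ms=100.0, n=n)
    print(observed, "->", n, "servers")
```
    The range, rather than a single value, is what prevents the oscillation a naive controller exhibits when one server's worth of capacity overshoots the target.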

    Scalable hosting of web applications

    Modern Web sites have evolved from simple monolithic systems to complex multi-tiered systems. In contrast to traditional Web sites, these sites do not simply deliver pre-written content but dynamically generate content using (one or more) multi-tiered Web applications. In this thesis, we addressed the question: how can multi-tiered Web applications be hosted in a scalable manner? Scaling up a Web application requires scaling its individual tiers. To this end, various research works have proposed techniques that employ replication or caching solutions at different tiers. However, most of these techniques aim to optimize the performance of individual tiers and not the entire application. A key observation made in our research is that no single technique performs best for all Web applications. Effective hosting of a Web application requires careful selection and deployment of several techniques at different tiers. To this end, we present several caching and replication strategies, such as GlobeCBC, GlobeDB and GlobeTP, to improve the scalability of different tiers of a Web application. While these techniques and systems improve the performance of the individual tiers (and eventually the application), an application's administrator is interested not only in the performance of its individual tiers but also in its end-to-end performance. To this end, we propose a resource provisioning approach that allows us to choose the best resource configuration for hosting a Web application such that its end-to-end response time can be optimized with minimum usage of resources. The proposed approach is based on an analytical model for multi-tier systems, which allows us to derive expressions for estimating the mean end-to-end response time and its variance.
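    As a hedged illustration of the kind of analytical model the abstract refers to, the sketch below treats each tier as an M/M/1 queue in series, so the mean end-to-end response time is the sum of per-tier terms 1/(mu_i - lambda_i); the thesis's actual model may differ.
```python
# Mean end-to-end response time of tiers in series, each modeled as an
# M/M/1 queue. Illustrative assumption, not the thesis's exact model.

def end_to_end_response(arrival_rate: float,
                        service_rates: list[float]) -> float:
    """Mean end-to-end response time (s) across tiers in series."""
    total = 0.0
    for mu in service_rates:
        if arrival_rate >= mu:
            raise ValueError("tier saturated: add capacity")
        total += 1.0 / (mu - arrival_rate)  # per-tier mean response time
    return total

# Web, application, and database tiers serving 80 req/s:
print(end_to_end_response(80.0, [200.0, 150.0, 100.0]))   # ~0.073 s
# Doubling database capacity shrinks the dominant term:
print(end_to_end_response(80.0, [200.0, 150.0, 200.0]))   # ~0.031 s
```
    A model of this shape makes the provisioning question tractable: given a response-time target, one can search over per-tier server counts for the cheapest configuration that meets it.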

    Database replication policies for dynamic content applications

    The database tier of dynamic content servers at large Internet sites is typically hosted on centralized and expensive hardware. Recently, research prototypes have proposed using database replication on commodity clusters as a more economical scaling solution. In this paper, we propose using database replication to support multiple applications on a shared cluster. Our system dynamically allocates replicas to applications in order to maintain application-level performance in response to either peak loads or failure conditions. This approach allows unifying load and fault management functionality. The main challenge in the design of our system is the time taken to add database replicas. We present replica allocation policies that take this time delay into account and also design an efficient replica addition method that has minimal impact on other applications. We evaluate our dynamic replication system on a commodity cluster with two standard benchmarks: the TPC-W e-commerce benchmark and the RUBiS auction benchmark. Our evaluation shows that dynamic replication requires fewer resources than static partitioning or full-overlap replication policies and provides over 90% latency compliance to each application under a range of load and failure scenarios.
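    The time-delay challenge can be made concrete with a small sketch: a delay-aware policy sizes the replica set against the load projected to the moment a new replica would actually come online, since a replica requested at the SLO violation arrives too late. All constants and names below are illustrative assumptions, not the paper's policy.
```python
# Sketch of a delay-aware replica allocation policy: because adding a
# database replica takes minutes (data must be copied to it), decide
# based on projected load, not current load. Values are illustrative.
ADD_DELAY_S = 300.0           # assumed time to bring a replica online
CAPACITY_PER_REPLICA = 100.0  # requests/s one replica can serve

def replicas_needed(current_load: float, load_slope: float,
                    current_replicas: int) -> int:
    # Project load forward by the replica-addition delay; otherwise the
    # new replica arrives only after the SLO is already violated.
    projected = current_load + load_slope * ADD_DELAY_S
    needed = int(projected // CAPACITY_PER_REPLICA) + 1
    return max(needed, current_replicas)  # scale-in handled separately

print(replicas_needed(current_load=250.0, load_slope=0.5,
                      current_replicas=3))  # 250 + 150 = 400 -> 5 replicas
```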